30 March 2016
Audio visualization, obstacle velocity
Here's a quick update that includes a couple of new features: passing velocity in with the obstacle data, and rendering 3D geometry to a 2D image to use as input (although in this video, actual obstacles and density fields aren't present). I've got a simple particle simulation running on the CPU in TouchDesigner whose initial velocity and direction are controlled by an audio waveform. I followed along with Derivative's 2016 Norway Workshop to get the system set up. Particle velocities are rendered as colors and piped into the CUDA sim. Adding these features went fairly smoothly.
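For the curious, here's roughly how a velocity-as-color texture gets stamped into the sim. This is a simplified sketch rather than the actual kernel; it assumes velocity is packed into the red and green channels centered at 0.5, with alpha marking where the particles drew:

__global__ void injectVelocity(float2* vel,             // simulation velocity field
                               const float4* inputTex,  // RGBA input from Touch
                               float velScale,
                               int width, int height)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= width || y >= height) return;

    int idx = y * width + x;
    float4 c = inputTex[idx];

    // Decode the color back into a signed 2D velocity.
    float2 v = make_float2((c.x - 0.5f) * 2.0f * velScale,
                           (c.y - 0.5f) * 2.0f * velScale);

    // Blend the decoded velocity in wherever the input has coverage.
    vel[idx].x = vel[idx].x * (1.0f - c.w) + v.x * c.w;
    vel[idx].y = vel[idx].y * (1.0f - c.w) + v.y * c.w;
}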
Here it's very apparent that the simulation container doesn't have any interesting border conditions, which can be both a good and a bad thing. It's another feature to add. In the coming months, I'll be working on the UI, trying different composition techniques, experimenting with looks, and improving stability to prepare for MAT's End of Year Show. I've also become very interested in Ted's Wavelet Turbulence paper, which would really send the visuals through the roof. No timeline on that yet, but it's going on the list.
16 March 2016
Boundaries and color
This update includes a lot of changes to the simulation algorithm as well as some new features. I ended up restructuring the velocity solve to mirror the one found in GPU Gems, which is still based on Stam's solution but is clearer about the pressure and temperature fields and simplifies the gradient subtraction a bit, which improves performance.
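For reference, here's a stripped-down sketch of the two kernels at the heart of that solve, in the GPU Gems style. It's not the exact project code: grid spacing is assumed to be 1, and edges are simply clamped.

// One Jacobi iteration of the pressure solve:
// p' = (p_left + p_right + p_bottom + p_top - divergence) / 4
__global__ void jacobiPressure(float* pOut, const float* pIn,
                               const float* divergence, int w, int h)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= w || y >= h) return;

    // Clamp neighbor lookups at the edges (crude boundary handling).
    int xl = max(x - 1, 0), xr = min(x + 1, w - 1);
    int yb = max(y - 1, 0), yt = min(y + 1, h - 1);

    pOut[y * w + x] = (pIn[y * w + xl] + pIn[y * w + xr] +
                       pIn[yb * w + x] + pIn[yt * w + x] -
                       divergence[y * w + x]) * 0.25f;
}

// After enough Jacobi iterations, subtract the pressure gradient to
// make the velocity field (approximately) divergence-free.
__global__ void subtractGradient(float2* vel, const float* p, int w, int h)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= w || y >= h) return;

    int xl = max(x - 1, 0), xr = min(x + 1, w - 1);
    int yb = max(y - 1, 0), yt = min(y + 1, h - 1);
    int i  = y * w + x;

    vel[i].x -= 0.5f * (p[y * w + xr] - p[y * w + xl]);
    vel[i].y -= 0.5f * (p[yt * w + x] - p[yb * w + x]);
}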
For the reaction coupling, I diffuse two density fields with a Laplacian kernel and apply the Gray-Scott equation, then advect them. Since there is a slight amount of diffusion from the bilinear interpolation in the advection step, I suspect the Gray-Scott feed and kill values might be a little off. Nonetheless, it appears stable. I'm passing a texture from Touch into CUDA to define obstacles. The color is coming from the velocity, with hue mapped to its polar angle.
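A simplified version of that reaction step looks something like this. It's a sketch, not the exact code: the two fields live in separate float buffers, the Laplacian is a 5-point stencil, and F, k, Du, and Dv are the usual Gray-Scott feed, kill, and diffusion parameters.

__global__ void grayScottStep(float* uOut, float* vOut,
                              const float* u, const float* v,
                              float Du, float Dv, float F, float k,
                              float dt, int w, int h)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= w || y >= h) return;

    int xl = max(x - 1, 0), xr = min(x + 1, w - 1);
    int yb = max(y - 1, 0), yt = min(y + 1, h - 1);
    int i  = y * w + x;

    // 5-point Laplacian of each field.
    float lapU = u[y*w+xl] + u[y*w+xr] + u[yb*w+x] + u[yt*w+x] - 4.0f * u[i];
    float lapV = v[y*w+xl] + v[y*w+xr] + v[yb*w+x] + v[yt*w+x] - 4.0f * v[i];

    float uvv = u[i] * v[i] * v[i];   // the U*V^2 reaction term

    uOut[i] = u[i] + dt * (Du * lapU - uvv + F * (1.0f - u[i]));
    vOut[i] = v[i] + dt * (Dv * lapV + uvv - (F + k) * v[i]);
}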
There are a lot of fields and variables to play with now. I'll have to decide if I want to start working on robustness and composition techniques, or to move forward with adding a particle system and pushing the resolution as far as possible.
28 February 2016
Tiling
I returned to the aperiodic tiling work I did two years ago. This time, I'm overlaying different iterations of the same prototile and rendering them as polygonal tubes with Mantra. Everything is done in Houdini, with a little color correction in Photoshop. I'm trying to decide which ones to submit to Bridges this year.
25 February 2016
TouchDesigner + CUDA
I moved over to Windows and got reaction diffusion working in TouchDesigner using a CUDA .dll I adapted from my original experiments in Linux. This is a big checkpoint, and it hopefully marks the beginning of the environment and toolkit I'll commit to for my thesis. Thinking of TouchDesigner as a modular OpenGL app authoring tool is extremely liberating.
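The .dll boundary itself is pretty thin. To be clear, this isn't TouchDesigner's actual plugin interface, just an illustration (with made-up names) of the kind of exported entry point involved: Touch hands over the buffers each frame, and the function launches the kernels.

extern "C" __declspec(dllexport)
void stepSimulation(float* uField, float* vField,   // reaction-diffusion densities
                    float2* velField,               // fluid velocity
                    int width, int height, float dt)
{
    dim3 block(16, 16);
    dim3 grid((width  + block.x - 1) / block.x,
              (height + block.y - 1) / block.y);

    // One frame of the sim: advect, react, project.
    // (Kernel launches elided; see the sketches in the other posts.)

    cudaDeviceSynchronize();  // make sure results are ready before Touch reads them
}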
I'm going to shoot for adding user-defined boundary conditions next, as that seems to be coming up a lot lately. Color shouldn't be too far behind.
30 January 2016
Particle advection
I got particle advection working with advection-reaction-diffusion and vorticity confinement. The particle system is based on Memo's MSAFluid example: 100,000 particles being pushed around by an advection-reaction-diffusion velocity field.
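The per-particle advection step is tiny. Here's a sketch of it, assuming the velocity field is bound to a CUDA texture object for hardware bilinear filtering; the wrap-around at the edges is just one choice of boundary handling, not necessarily what MSAFluid does.

__global__ void advectParticles(float2* pos, int numParticles,
                                cudaTextureObject_t velTex,
                                int w, int h, float dt)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= numParticles) return;

    float2 p = pos[i];

    // Bilinear sample of the velocity at the particle position
    // (+0.5 to hit texel centers).
    float2 v = tex2D<float2>(velTex, p.x + 0.5f, p.y + 0.5f);

    // Forward-Euler step.
    p.x += v.x * dt;
    p.y += v.y * dt;

    // Wrap particles around the container edges.
    if (p.x < 0.0f) p.x += w;
    if (p.x >= w)   p.x -= w;
    if (p.y < 0.0f) p.y += h;
    if (p.y >= h)   p.y -= h;

    pos[i] = p;
}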
Moved to Blogger
After putting up with it for two weeks, I couldn't stand the extra compression Tumblr adds to everything; it handles this type of content particularly poorly, and it doesn't support the <video> tag. I'm now hosting HTML5 videos on Dropbox and embedding them here, with a bash script to do the encoding for me.
Here are the guts, which take a PNG sequence and convert it to a YUV file, which is used to make a WebM, which in turn is used to make an MP4 for the Safari people.
# Convert the numbered PNG frames (progressive, 60 fps, starting at frame 1) to a raw YUV stream.
png2yuv -I p -f 60 -b 1 -n $numFrames -j $input > $outputYuv
# Two-pass VBR encode of the YUV stream to WebM, targeting 3000 kbps.
vpxenc --good --cpu-used=0 --auto-alt-ref=1 --lag-in-frames=16 --end-usage=vbr --passes=2 --threads=2 --target-bitrate=3000 -o $outputWebm $outputYuv
# Transcode the WebM to an H.264/AAC MP4 for browsers without WebM support.
ffmpeg -i $outputWebm -r $_frameRate -codec:v libx264 -preset slow -b:v 2500k -threads 0 -codec:a libfdk_aac -b:a 128k $outputMp4