“In search for peace and serenity, one must embrace the chaos within”
Sun Tzu: “I did not say that”
Following Last Week
After the lengthy discussion with Professor Nimrah last week, we expanded on the idea of introducing a playful water element to our capstone. Essentially, the experience plays out like this: users enter the space and are greeted by water (somewhere in the ground), which they can interact with. The visuals projected into the water then react, rippling throughout the entire room. This reaction reads as “chaos” to the audience. As they move less, they are drawn to a cushion in the center where they can sit down and contemplate their actions, letting the room bounce back to its equilibrium state.
equi·lib·ri·um - a state of balance between opposing forces or actions that is either static (as in a body acted on by forces whose resultant is zero) or dynamic (as in a reversible chemical reaction when the velocities in both directions are equal)
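The interaction loop above is conceptually a simple relaxation system: user motion injects “chaos” energy, and with no input the room decays back toward equilibrium. Here's a minimal sketch of that idea; the function name, decay constant, and gain are all my own illustrative choices, not our actual implementation:

```python
# Minimal sketch of the chaos/equilibrium loop described above.
# All names and constants here are illustrative, not our real patch.

def step(chaos: float, motion: float, decay: float = 0.9, gain: float = 1.0) -> float:
    """Advance the room's 'chaos' level by one frame.

    `motion` >= 0 is how much the user moved this frame; with no
    motion, chaos decays exponentially back toward 0 (equilibrium).
    """
    return chaos * decay + gain * motion

chaos = 0.0
# A user splashes for three frames, then sits down on the cushion.
for motion in [1.0, 1.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0]:
    chaos = step(chaos, motion)
# chaos peaks while the user moves, then decays frame by frame
```

The nice property is that stillness is the only way to reach calm: the decay term guarantees the room always drifts back to its equilibrium state.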
Wandering Hands
Depth Cameras
For this prototype, our group decided to focus on “scaling up” the visuals and trying out the hardest component: the lights and projections. We split the work into two: a » I’m handling the projections, as agreed from the beginning, and b » my colleague is trying his hand at tinkering with NeoPixel RGB strips for the lighting. My goal for the projection is simple to state, yet challenging, as will be explained later.
“Create motion-reactive visuals that are not visually overwhelming for the water.”
I borrowed two depth cameras from the IM Equipment: the Intel RealSense and the Azure Kinect, each offering tempting advantages and disadvantages; for now, I am not sure which to pick:
| Intel RealSense | Microsoft Azure Kinect |
|---|---|
| (+) Very good in low-light situations | (+) Better body tracking with the SDK |
| (+) Faster response rate | (+) Wide FOV allows a bigger tracking range |
| (+) Small and compact | (+) Can detect low-texture surfaces |
| (-) Needs a lot of tuning | (-) Chonky boi, and beeg |
| (-) Only great at close range, within ~3×3 m | (-) Lower response rate |
Visuals
Following this tutorial, I quickly (by this I mean spending 1–2 hours figuring out why the RealSense did not connect, kekw) installed and played around with the depth cameras in TouchDesigner.
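Under the hood, the motion reactivity boils down to frame differencing on the depth image: compare the current depth frame against the previous one and measure how much changed. A standalone numpy sketch of that idea (my own approximation of what the TouchDesigner network does, not the actual patch; the threshold and frame sizes are made up):

```python
import numpy as np

def motion_energy(prev: np.ndarray, curr: np.ndarray, thresh: float = 10.0) -> float:
    """Fraction of depth pixels that moved more than `thresh` mm
    between two frames: 0.0 = perfectly still, 1.0 = everything moved."""
    diff = np.abs(curr.astype(np.float32) - prev.astype(np.float32))
    return float((diff > thresh).mean())

# Two fake 4x4 depth frames (in millimetres): a small region moves closer.
prev = np.full((4, 4), 1500, dtype=np.uint16)
curr = prev.copy()
curr[1:3, 1:3] = 1200  # a 2x2 "hand" moves 300 mm toward the camera
energy = motion_energy(prev, curr)  # 4 of 16 pixels changed -> 0.25
```

A single scalar like this is handy because it can drive everything at once: particle speed, ripple strength, even the NeoPixel brightness.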
| Particles Bouncing | Star Wars Warp Drive thingy |
|---|---|
| Water Color Attempt | 3D Grid Blocks |
There’s a lot to unpack behind these visuals and the philosophy of why I’m doing them. It is 11:30 PM and there’s too much to write, so here’s the simplified version:
Particles vs. Blocks: Basically, I wanted to create two distinct pieces that fall within generative art. Both the particles and the blocks stay static until movement is detected by the camera. Since we are going to project this onto the water surface, I want to make sure it’s not super chaotic when users see it for the first time; rather, their actions dictate the chaos. Hence, doing nothing eventually returns the room to a calming state.
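The “static until movement is detected” behavior can be sketched as gating the particles’ jitter by a motion amount: zero motion means zero displacement, so the projection settles into stillness. A toy version (function names and the jitter scale are hypothetical, not our TouchDesigner setup):

```python
import random

def update_particles(particles, motion, jitter=5.0):
    """Displace each (x, y) particle by noise scaled by `motion` in [0, 1].

    With motion == 0 the particles stay exactly where they are,
    so the projection rests in a calm, static state.
    """
    if motion <= 0:
        return list(particles)  # no movement detected: stay perfectly calm
    return [
        (x + random.uniform(-jitter, jitter) * motion,
         y + random.uniform(-jitter, jitter) * motion)
        for x, y in particles
    ]

pts = [(0.0, 0.0), (10.0, 10.0)]
calm = update_particles(pts, motion=0.0)  # identical to pts
wild = update_particles(pts, motion=1.0)  # each point jittered up to ±5
```

The same scalar could scale the blocks’ rotation or extrusion instead, which is what makes the two visuals feel like siblings.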
Watercolor: Initially, I thought this would be epic, as watching watercolor dissolve and tint the entire pool would be cool. But having extravagant things in our installation defeats the purpose of serenity and calmness, so I quickly scrapped the idea and stopped pushing the crazy colors. Yet, who knows? It might make a comeback sometime in the future.
Electric Boogaloo: Part Six
Then on Monday evening, my colleague and I got together to assemble things. He improved his computer vision to track human emotions, which is cool, but the visuals are still laggy imo, which makes the depth cameras feel like a magic trick with their almost instantaneous response times. Unfortunately, the light strips were not working. Then disaster struck when my tripod snapped under the projector’s weight, at which point we realized my $50 Temu projector is crappy for short-distance projection. But alas, even in misfortune there are lessons to be learned. Below are some of the montages and snaps we got:


