Warning (About this post)
This post covers a long, reflective journey from the previous post all the way to spring break, during which many chaotic events happened. While I could have followed the IM Capstone weekly schedule, a lot of things became fluid.
This post therefore fills the gap before everything moved to remote learning.
Websocket Integration - ☑️
One of the toughest hurdles of the past two semesters is finally done! We finally figured out how to send loads of JSON packets into TouchDesigner and parse them. It works by attaching a JSON parser to the Websocket DAT, which then writes the parsed values into a Table DAT. From there, we can feed the values into a CHOP and freely select, modify, or manipulate them however we want.
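As a concrete illustration, here is a hypothetical payload of the shape our parser reads. The field names match the parsing code below; the values themselves are invented for the example:

```python
import json

# Hypothetical example message; field names mirror what the parser reads,
# values are made up for illustration
message = json.dumps({
    "person_count": 2,
    "group_jitter": 0.42,
    "active_layers": 3,
    "persons": [
        {"id": 1, "jitter": 0.3, "stillness": 0.7, "depth_mm": 1500},
        {"id": 2, "jitter": 0.5, "stillness": 0.2, "depth_mm": 2100},
    ],
})

data = json.loads(message)
```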

import json

def onReceiveText(dat, rowIndex, message, *args):
    # 1. Debugging: mirror the raw JSON into text1
    if op('text1'):
        op('text1').text = message

    try:
        # 2. Parse the JSON
        data = json.loads(message)
        persons = data.get('persons', [])

        # 3. Define the target table
        target = op('table1')
        if not target:
            return
        target.clear()

        # 4. Set headers: global data plus individual person data
        headers = [
            'person_count', 'group_jitter', 'active_layers',
            'p_id', 'p_jitter', 'p_stillness', 'p_depth'
        ]
        target.appendRow(headers)

        # 5. Global data values
        p_count = data.get('person_count', 0)
        g_jitter = data.get('group_jitter', 0)
        a_layers = data.get('active_layers', 0)

        # 6. If persons exist, create a row for each; if not, one row of zeros
        if persons:
            for p in persons:
                row = [
                    p_count, g_jitter, a_layers,
                    p.get('id', 0), p.get('jitter', 0),
                    p.get('stillness', 0), p.get('depth_mm', 0)
                ]
                target.appendRow(row)
        else:
            # Placeholder row when no one is detected
            target.appendRow([p_count, g_jitter, a_layers, 0, 0, 0, 0])
    except Exception as e:
        print(f"WebSocket Parsing Error: {e}")

Fluid, Waves, and Colors
Following the discussion after Post 3, we decided to double down on fluid-like waves as the bread and butter of Senspace moving forward. We made this possible by converting the solar flare “simulation/effects” base visual to use the same technique as the first one: displace and feedbackEdge loop the visual input through itself multiple times, and with a few extra spices on top, the result looks like this:

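The displace-and-feedback idea can be sketched outside TouchDesigner too. This is only a plain-Python 1D analogue of the loop, not our actual network: each pass samples the previous frame at displaced positions, fades it, and adds the input back in, so detail smears outward over successive frames. All names here are illustrative.

```python
def displace(frame, offsets):
    # Sample each pixel from a displaced position (wrapping at the edges),
    # like a Displace TOP reading the previous frame
    n = len(frame)
    return [frame[(i + offsets[i]) % n] for i in range(n)]

def feedback_pass(source, frame, offsets, decay=0.9):
    # One iteration of the feedback loop: displace the last frame,
    # fade it, then add the fresh input back in
    moved = displace(frame, offsets)
    return [decay * m + s for m, s in zip(moved, source)]

# A tiny 4-"pixel" input with a single bright spot, shifted by 1 each pass
source = [1.0, 0.0, 0.0, 0.0]
offsets = [1, 1, 1, 1]
frame = source[:]
for _ in range(3):
    frame = feedback_pass(source, frame, offsets)
```

Each pass leaves a fading trail of the bright spot behind it, which is the same mechanism that turns a static base visual into flowing, wave-like smears.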
The states work by observing the current activeLayers value from the websocket, then applying lag and caching so the visuals don’t suddenly “jump” between states. A Logic network then converts the value into a switch index, which drives the transitions between states. In short, the Controller in our TouchDesigner interface acts as the brain that detects and controls the visual states.
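The lag-then-switch behavior can be sketched in plain Python. This is an analogue of what the Lag CHOP and Logic network do for us, not the network itself; the function names and the rate are assumptions for the sketch:

```python
def lag(current, target, rate=0.1):
    # Ease the current value toward the target each frame,
    # so state changes never jump abruptly
    return current + (target - current) * rate

def to_switch_index(smoothed, n_states=3):
    # Quantize the smoothed activeLayers value into a switch input index
    return max(0, min(n_states - 1, round(smoothed)))

# Simulate ~30 frames after the websocket starts reporting activeLayers == 2
value = 0.0
for _ in range(30):
    value = lag(value, 2.0)
state = to_switch_index(value)
```

The smoothed value creeps toward the target instead of snapping, and only once it settles does the switch index land on the new state, which is exactly the non-jumpy transition we wanted.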
Electric Boogaloo: Physical Installations
A key challenge in our project was installing three projectors facing three walls and extending them into one large canvas. Fortunately, the IM Lab has a Mac Mini M4, which can drive this setup through its three Thunderbolt ports with USB-C to media adapters (HDMI, USB-C, etc.) attached. In other words, you can plug the three projectors into the adapters, attach them to the Mac, and get one extended “canvas”.

Because we are projecting onto a curtain with curves, it helps to approach the projection mapping in two steps: a) use Stoner to adjust the keystone and mapping points, and then b) within perform mode, render to a very wide canvas spanning the resolution of all three projectors (1080 x 3).
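The wide-canvas arithmetic is simple but easy to get backwards, so here is a small sketch of it, assuming three 1920 × 1080 projectors side by side (the per-projector resolution is an assumption; adjust to the actual hardware):

```python
# Assumed per-projector resolution and projector count
PROJ_W, PROJ_H, N = 1920, 1080, 3

# The perform-mode canvas spans all projectors horizontally
canvas_w, canvas_h = PROJ_W * N, PROJ_H

def projector_region(i):
    # Horizontal slice of the wide canvas that projector i displays
    return {'x': i * PROJ_W, 'y': 0, 'w': PROJ_W, 'h': PROJ_H}

regions = [projector_region(i) for i in range(N)]
```

Each projector then crops its own slice out of the single wide render, which is what makes the three walls behave like one continuous canvas.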
Beyond physical: into the realms of the Internet
Like all things, nothing is truly in our control, and all things come and go. Unfortunately, the war transitioned our campus to remote learning, which means all the effort we put into the physical installation went out the window. Fortunately, half of our project is digital, which means it is possible, in theory, to convert this project into a web-based experience.
