Craneo Chromatica

by Chiefest

Craneo Chromatica

TouchDesigner Interactive Design

I created an interactive, audio-reactive 3D skull using TouchDesigner for the Colors of the Rainbow Contest. The skull shakes its head to music, while its eyes and teeth glow and shift color based on live hand gestures captured by a webcam. The reflective gold surface animates with color noise, and rising particles in a dark gradient background complete the scene.

Previously, I made and submitted a similar project where a butterfly glowed, morphed, and changed color in response to sound. With this new piece, I wanted to push that idea further — using gesture control, more complex materials, and dynamic movement to bring a 3D model of a skull to life in a more expressive, surreal way.

Supplies


Here's What I Used:

  1. Laptop with webcam
  2. TouchDesigner (free non-commercial version)
  3. Blender 2.9 (this version works fast on my laptop)
  4. Audio source (Pixabay music, this and that)
  5. 3D skull model (mine was downloaded from Sketchfab)
  6. Hand-tracking library like MediaPipe (TouchDesigner compatible)
  7. PBR material
  8. Photoshop/Gimp
  9. A source for color codes

🎭Inspiration


The inspiration behind my design comes from the vibrant and symbolic Day of the Dead skulls found in Mexican culture. I was especially drawn to their bold use of color and meaning, which I combined with the full spectrum of the rainbow to create a dynamic and expressive visual palette.

Additionally, I want to credit Torin's MediaPipe hand-tracking video tutorials (particularly this one), which were instrumental in helping me build an interactive color-changing feature based on hand gestures.

🔨Prepare the Skull Model in Blender


To control the skull’s eyes and teeth separately in TouchDesigner, I needed them as distinct mesh objects. I downloaded a 3D skull model in FBX format from Sketchfab and imported it into Blender.

Some of the teeth were already separated in the original model, which was helpful. For the eyes, I entered Edit Mode, selected the inner faces of the eye sockets, and used P → Separate by Selection to split them into a new mesh.
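
For reference, the same split can be scripted rather than done by hand. Here is a minimal Blender-Python sketch, assuming the skull object is active, you are in Edit Mode, and the eye-socket faces are already selected:

```python
# Minimal Blender scripting sketch: equivalent of pressing P > Selection.
# Assumes Edit Mode with the eye-socket faces already selected.
import bpy

bpy.ops.mesh.separate(type='SELECTED')  # moves the selected faces into a new mesh object
```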

After confirming that the eyes and teeth were now individual objects, I exported the model as an FBX. I skipped including original textures in the export since I’ll be using custom materials and shaders in TouchDesigner.

💀Skull Model Setup and Materials in TouchDesigner, Part One

TouchDesigner FBX Import
PBR Materials, Gold

🔧 Opening a New Project

  1. Open TouchDesigner.
  2. When you launch a new project, you’ll see a few default nodes already in the network.
  3. Clear the workspace: right-click and drag to select all the existing nodes, then press Backspace to delete them.

This gives you a clean canvas to begin building your audio-reactive skull setup from scratch.

🎥 Importing a 3D Model

There are two main ways to bring in your 3D model:

✅ Option 1: Use the OP Create Dialog

  1. Double-click anywhere on the network canvas to open the OP Create Dialog.
  2. In the search bar, type movie file in (or just “movie”).
  3. Note the operator-family tabs across the top of the dialog:
     - CHOPs – Channel Operators
     - TOPs – Texture Operators (you will find it here)
     - SOPs – Surface Operators
     - MATs – Materials
     - DATs – Data Operators
  4. Select the Movie File In operator and place it into your network.

🚀 Option 2: Drag & Drop (Recommended for Models)

  1. Simply drag and drop your .FBX 3D skull model directly onto the canvas.
  2. TouchDesigner will automatically generate the component it needs to handle the geometry (an FBX COMP containing the skull meshes).

🪙 Skull Material Setup: Scratched Gold PBR

For the skull’s surface, I applied a scratched gold PBR material downloaded from AmbientCG.

  1. I dragged and dropped the following texture maps into TouchDesigner:
     - Color Map
     - Normal Map
     - Roughness Map
     - Metalness Map
  2. These maps were connected to a PBR MAT node, creating a physically based gold material with realistic surface shading and reflections.

This material was then assigned to the skull mesh, giving it a metallic, weathered look — a great visual base for the later color and emission effects.

📺 Bonus Help

Because TouchDesigner can be a bit complex, I’ll also share some video tutorials to guide you through key parts of the setup.

Skull Model Setup and Materials in TouchDesigner, Part Two

Using image sequence as color source

🎯 Goal

The objective in this phase was to use image sequences as dynamic color sources for parts of the 3D skull — specifically the eye sockets and selected teeth. The long-term plan is to enable hand gesture control to cycle through these images, creating an interactive visual experience.

🖼️ Creating the Color Image Set

To build the color source:

  1. I generated solid color images using hex codes from the W3Schools color reference.
  2. The color set included rainbow shades and three additional custom colors.
  3. These images were saved in one folder.

🧩 Loading and Preparing the Image Sequence in TouchDesigner

  1. Import with a Movie File In TOP:
     - I dragged in a Movie File In TOP node.
     - In the parameter panel, I entered the path to the folder containing the saved color images.
     - The node treats the folder as a sequence if the files are named in order.
  2. Configure playback for interaction:
     - Under Play Mode, I selected “Specify Index”.
     - This disables autoplay and prepares the setup for user-input-based control, which will later be handled through gesture recognition.
  3. Extract color data with a TOP to CHOP:
     - I added a TOP to CHOP node to convert image data into usable channel values.
     - From this node, I extracted the RGB values of the loaded color images.
  4. Feed the colors into the PBR material:
     - The extracted RGB values were connected to the Emissive Color inputs of the PBR MAT (Physically Based Rendering material).
     - This allows the color images to drive the emission shading for selected parts of the skull model — making the eyes and teeth glow with the chosen color (see the short sketch after this list).
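
Conceptually, that whole chain just turns the currently selected swatch into normalized RGB values for the emissive color. Here is a plain-Python sketch of the mapping (not TouchDesigner code), using a few of the hex codes listed just below; the index variable stands in for the value the pinch counter will supply later.

```python
# Plain-Python sketch of what the image-sequence -> emissive-colour chain does
# conceptually: pick a swatch by index and convert its hex code into the
# normalized 0-1 RGB values an emissive colour expects. (Illustration only --
# in the actual network, the TOP to CHOP reads these values from the image.)
SWATCHES = ["#FF0000", "#0000FF", "#FFFF00", "#008000", "#FFA500", "#EE82EE"]

def hex_to_rgb01(hex_code):
    """Convert '#RRGGBB' into normalized (r, g, b) floats in the 0-1 range."""
    h = hex_code.lstrip("#")
    return tuple(int(h[i:i + 2], 16) / 255.0 for i in (0, 2, 4))

index = 3                                  # later driven by the pinch counter
r, g, b = hex_to_rgb01(SWATCHES[index % len(SWATCHES)])
print(f"emissive colour for index {index}: ({r:.3f}, {g:.3f}, {b:.3f})")
```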

I used the following hex codes to create my solid color images:

  - SaddleBrown #8B4513
  - NavajoWhite #FFDEAD
  - Aqua #00FFFF
  - Indigo #4B0082
  - Violet #EE82EE
  - Orange #FFA500
  - Green #008000
  - Yellow #FFFF00
  - Blue #0000FF
  - Red #FF0000
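
If you would rather script the swatches than make them one by one in Photoshop or GIMP, here is a minimal sketch assuming Pillow is installed (pip install pillow). The 256x256 size, the folder name, and the file names are arbitrary choices; only the zero-padded numbering matters, so the Movie File In TOP can read the folder as an ordered sequence.

```python
# Generate one solid-colour PNG per hex code, numbered so they load as a sequence.
from pathlib import Path
from PIL import Image

COLORS = {
    "SaddleBrown": "#8B4513", "NavajoWhite": "#FFDEAD", "Aqua": "#00FFFF",
    "Indigo": "#4B0082", "Violet": "#EE82EE", "Orange": "#FFA500",
    "Green": "#008000", "Yellow": "#FFFF00", "Blue": "#0000FF", "Red": "#FF0000",
}

out_dir = Path("color_swatches")
out_dir.mkdir(exist_ok=True)

for i, (name, hex_code) in enumerate(COLORS.items()):
    # Pillow accepts '#RRGGBB' strings directly as fill colours.
    Image.new("RGB", (256, 256), hex_code).save(out_dir / f"swatch_{i:02d}_{name}.png")
    print(f"wrote swatch_{i:02d}_{name}.png")
```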

🔜 What’s Next

This setup creates a flexible, reactive material system. The next step will focus on using hand gestures (webcam-based tracking) to control the image index, allowing real-time color changes.

👌Control Image Sequence With Pinch Gesture

Hand tracking network diagram

This step sets up a gesture-based counter to control an image sequence using the distance between your thumb and index finger.

🧠 What’s Happening:

We’re using hand tracking to measure the distance between the thumb and index finger (pinch). When the fingers move apart and a defined threshold is reached (which you can set), the counter increases by a specified increment. This counter value is then used to control the image index being displayed.

🔁 How It Works (see diagram above or download the PDF version below):

  1. Pinch detection:
     - The webcam tracks your hand using MediaPipe. The distance is typically measured between the thumb and index fingertips, but other finger combinations can be used as well.
  2. Trigger & counter logic:
     - When the pinch distance crosses a set threshold, the Count CHOP increments.
     - If the counter reaches its maximum value, it loops back to the minimum.
     - When the hand leaves the frame, the counter holds the current value (it doesn't reset).
  3. Driving the image sequence:
     - Select the Count CHOP node.
     - Drag the channel (e.g., pinch_midpoint_distance) and drop it onto the Index parameter of the Movie File In TOP.
     - This directly links the count to the image sequence index.

Now, every pinch changes the image. You can cycle forward through your images one pinch at a time, and it pauses when your hand is gone.
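
To make the logic concrete, here is a plain-Python sketch of the pinch-to-counter behavior, assuming MediaPipe-style normalized landmarks (thumb tip and index fingertip positions in 0-1 coordinates). The threshold and image count are illustrative; in the actual project this is wired up with CHOPs rather than a script.

```python
# Models the Count CHOP behaviour: increment on a rising edge of the pinch
# distance, wrap at the number of images, and hold when the hand disappears.
import math

NUM_IMAGES = 10          # size of the colour image sequence
PINCH_THRESHOLD = 0.12   # illustrative value; tune to your hand-tracking data

class PinchCounter:
    def __init__(self):
        self.count = 0
        self.was_open = False   # True while the fingers are apart (above threshold)

    def update(self, thumb_tip, index_tip):
        """thumb_tip / index_tip: (x, y) in normalized coordinates, or None
        when no hand is detected (the counter then holds its value)."""
        if thumb_tip is None or index_tip is None:
            return self.count                              # hand left the frame: hold
        dist = math.dist(thumb_tip, index_tip)
        is_open = dist > PINCH_THRESHOLD
        if is_open and not self.was_open:                  # rising edge: fingers just opened
            self.count = (self.count + 1) % NUM_IMAGES     # wrap back to the minimum
        self.was_open = is_open
        return self.count

# Fingers open, pinch, open again, then the hand leaves the frame.
pc = PinchCounter()
frames = [
    ((0.50, 0.50), (0.50, 0.70)),   # open  -> count becomes 1
    ((0.50, 0.50), (0.52, 0.52)),   # pinch -> holds 1
    ((0.50, 0.50), (0.50, 0.75)),   # open  -> count becomes 2
    (None, None),                   # hand gone -> holds 2
]
for thumb, index in frames:
    print(pc.update(thumb, index))
```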

👉 What’s Next:

In the next step, we’ll incorporate audio input to drive dynamic movement of the 3D skull — adding another layer of interactivity to the experience.

🎧 Audio Input & Reactive Motion


In this step, we’ll bring the visuals to life by feeding them audio data and making them react in real time. Whether it's pulsing lights, motion, or bloom effects — it all starts with analyzing the sound.

Note: Some smaller details in the image may be difficult to see. For better clarity, consider downloading the PDF version attached below.

🛠 Node Breakdown:

  1. Audio Source
  2. Use the audiofilein1 node to read audio files like .mp3 or .wav. This is the raw input for our setup. You can preview the waveform directly in the node's viewer.
  3. Playback Output
  4. The audiodevin1 node sends the sound to your system's speakers.
  5. Filtering Audio
  6. Add an audio filter to isolate specific frequency ranges. A Low Pass filter helps clean up noisy high frequencies — useful for working with bass lines or kick drums, which are typically more rhythmic and less chaotic.
  7. Visualizing Spectrum
  8. The audio spectrum node shows the frequency spectrum. This can be helpful for debugging or visual effects but isn’t strictly necessary for motion control.
  9. Audio Analysis
  10. Use the analyze CHOP to extract usable numerical data from the audio. The RMS Power setting works well for getting consistent values that reflect overall loudness.
  11. Signal Processing
  12. filter: Smooths the values from the analyzer to avoid jittery motion.
  13. math: Remap or scale the values to usable ranges. For example, you might want to go from a small 0–0.1 range to something more dramatic like 0–100.
  14. Output Control (Null CHOPs)
  15. Rename your null CHOPs clearly. For instance:
  16. bloomIntensity: Controls visual bloom/glow strength.
  17. nod: Controls 3D object motion or transformation.

💡 Pro Tip:

Keep your math and filter CHOPs nearby to tweak responsiveness and scale. Audio can be unpredictable — smoothing and remapping are your friends!
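
For anyone who wants to see the analyze, filter, and math stages as code, here is a rough NumPy sketch of what that chain does conceptually: RMS loudness per block, one-pole smoothing, then a remap from a small 0-0.1 range up to 0-100. The numbers are illustrative, not the project's actual settings.

```python
import numpy as np

def rms(block):
    """RMS power of one block of samples, roughly what the Analyze CHOP outputs."""
    return float(np.sqrt(np.mean(block ** 2)))

def smooth(value, previous, amount=0.9):
    """One-pole low-pass, similar in spirit to the Filter CHOP."""
    return amount * previous + (1.0 - amount) * value

def remap(value, in_min=0.0, in_max=0.1, out_min=0.0, out_max=100.0):
    """Linear range remap, as on the Math CHOP's Range page."""
    value = min(max(value, in_min), in_max)
    return (value - in_min) / (in_max - in_min) * (out_max - out_min) + out_min

# Fake audio: a quiet passage followed by a louder one.
audio = np.concatenate([0.01 * np.random.randn(2048), 0.08 * np.random.randn(2048)])
smoothed = 0.0
for block in audio.reshape(-1, 512):
    smoothed = smooth(rms(block), smoothed)
    print(f"bloom intensity: {remap(smoothed):6.2f}")
```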

🔗 What’s Next?

These re-mapped values will be used to drive parameters in your scene — whether that's lighting, object movement, or shader intensity. In the next steps, we’ll connect them to actual visuals!

Downloads

💡🎥Environment Lights and Camera

Environment Lights and Camera

It's time to create a vibrant lighting environment that brings the skull to life—literally reflecting a shimmering spectrum of RGB noise across its 3D surface.

🧭 Setting Up the Scene

Start by adding the following components to your network:

  1. Camera COMP – This defines the viewpoint for your render. Place it in your scene and position it to frame the skull clearly.
  2. Environment Light COMP – This type of light simulates ambient lighting from all directions and is perfect for reflective surfaces like our skull.
  3. Render TOP – This will output the final visual based on your camera, geometry, and lighting setup.

At this point, you’ll notice the Environment Light shows a warning icon. That’s because it needs an environment map to function.

🌈 Creating Dynamic RGB Noise

Instead of using a static image, we’ll generate a live, colorful texture using the Noise TOP:

  1. Add a Noise TOP
  2. Drag the Noise TOP directly into the Environment Map parameter of the Environment Light.

The result: your reflective skull is now lit by multicolored noise, though for the moment the pattern is static.

To make the lighting even more lively and constantly shifting, let’s animate the noise:

  1. In the Transform page of the Noise TOP, enter the expression: absTime.seconds
  2. This uses the absolute time in seconds to continuously shift the noise pattern through translation/rotation, creating a dynamic, ever-changing light source that casts moving RGB reflections across the skull’s surface.
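
The same expression can also be set by script. Below is a small TouchDesigner-Python sketch you could run from the Textport; the operator name 'noise1' and the Translate Z parameter 'tz' are assumptions, so point it at your own Noise TOP and whichever Transform parameter you want to animate.

```python
# Hedged sketch: drive a Noise TOP transform parameter from absolute time.
noise = op('noise1')                      # assumed name of the Noise TOP feeding the Environment Light
noise.par.tz.expr = 'absTime.seconds'     # drift the noise pattern over time

# A scaled variant slows the drift if the reflections move too fast:
# noise.par.tz.expr = 'absTime.seconds * 0.1'
```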

🎬 Previewing

  1. Connect everything into a Render TOP. This will be the output that gets passed along to any post-processing or compositing steps you plan to add later.

You should now see your skull shimmering with constantly changing RGB reflections, giving it a futuristic, surreal vibe.

🏞Background Scene Setup

TouchDesigner Background Scene Setup

This step introduces a dynamic background using GPU particles. Admittedly, this pushed my laptop’s limits—despite having 16GB of RAM, the particle simulation nearly maxed it out. I even considered skipping this part altogether… but hey, here we go anyway!

🎯 The Idea

I wanted a more atmospheric and lively background behind the reflective skull. To do this, I created a particle system with random upward motion, set against a subtle layered gradient. The goal was to give the scene a sense of depth and motion—like the skull is suspended in a lively environment.

🧪 Steps to Build It:

  1. Create a Sphere Geometry (any 3D geometry will do)
     Start with a Sphere SOP to define the shape where particles will originate.
  2. Generate Surface Points
     Use the Sprinkle SOP to scatter points across the surface. These will act as spawn points for the particles.
  3. Extract Point Positions
     Connect a SOP to CHOP to get the XYZ positions of these points. These coordinates feed into the GPU particle system.
  4. Drive the GPU Particle Component
     Add a Particles GPU component from the TouchDesigner Palette (under Tools) and link the position data. Customize your particles by:
     - applying forces (e.g. wind, turbulence)
     - adjusting particle color, lifespan, and initial velocity
  5. Create a Gradient Background
     Use a Ramp TOP to design a smooth gradient (vertical or radial). This adds ambient depth and color behind the particles.
  6. Composite the Skull Over the Background
     Use the Over TOP to layer your scene:
     - Input 1: the rendered reflective skull
     - Input 2: the background (particles + gradient ramp)
  7. Enhance with Blur and Bloom
     After compositing, add:
     - a Blur TOP to soften the appearance of the scene
     - a Bloom TOP to exaggerate bright reflections, especially from the emissive eyes and teeth of the skull
     These effects give the whole scene a polished, cinematic glow.

With particles moving, gradients glowing, and post-effects added, your scene transforms into a surreal and immersive environment. It’s a GPU-heavy step, but the visual payoff makes the whole scene feel far less static.

🛠Integration

Integrating networks

Now that all the key sub-networks are ready—geometry, audio, gesture, lighting, camera, and background scene—it’s time to bring everything together into a fully functional system.

🔊 Audio to Visual Links

  1. Start by using the audio network to control the bloom intensity.
  2. Simply drag the null node (previously renamed bloomIntensity) and drop it onto the bloom intensity parameter of the Bloom TOP.
  3. Then, take the null node renamed nod (also from the audio network) and connect it to the rotation inputs of the geometries. This will cause the objects to rotate dynamically in sync with the beat of the music.

Note: If you use a different song, you may need to tweak the response. That’s where the Math CHOP comes in handy—you can remap or scale the values to better suit the new audio levels.

🧠 Gesture Control to Visual Playback

  1. Move to the end of the gesture network, activate the Count CHOP, and link the pinch midpoint distance channel to the Index parameter of the Movie File In TOP (the image sequence).
  2. This lets your hand gestures control the image sequence playback, making it a responsive and interactive feature.

And that’s it—your system is now connected, reactive, and ready to glow to the beat!

🎥 How to Export the Project As a Video

How to export animation in TouchDesigner

To turn your live TouchDesigner project into a movie file, follow these steps:

  1. Go to the top menu bar, click File > Export Movie...
  2. This will open the Export Movie Dialog window.
  3. Drag the final visual output node—in our case, the bloom TOP—and drop it into the TOP Video input field of the dialog.
  4. For audio, drag your audio file or final audio CHOP into the CHOP Audio input field.
  5. Choose your desired resolution, frame rate, and format if needed.
  6. Click Start to begin the export.
📝 Note: While the export is running, your gestures will still be active—meaning you can interact in real-time and change visuals (like colors via image sequences) during the recording.

This makes it possible to record both pre-programmed reactivity and live interaction in one pass.

🎞Video Playback

Video Playback: Craneo Chromatica


Here’s the final exported video, complete with the hand gesture effects that were performed live during export. The combination of real-time interaction and audio-reactive visuals gives the piece a dynamic and personal touch—every playback reflects a unique performance.

Play it back and experience how motion, music, and gestures come together to bring the glowing skull to life.

I've uploaded my TouchDesigner file along with the supporting project files to Google Drive for reference. Feel free to drop a comment if you have any questions—I’ll be happy to help!

🧘🏼‍♂️Reflection... & the Editor's Cut

The Editor's Cut Floral Red

Building this project was a rewarding deep dive into interactive media, blending audio, motion, and visuals into a single reactive artwork. It pushed me to explore multiple aspects of TouchDesigner, from geometry handling and real-time audio analysis to gesture recognition and visual effects.

Along the way, I learned how each element—like math CHOPs, null outputs, and simple visual cues—can come together to create something immersive and dynamic. The process involved plenty of trial and error, especially in fine-tuning responsiveness and syncing visuals to sound, but that experimentation was part of the fun.

In the end, seeing the glowing skull come to life—shaking, pulsing, and reacting to music and hand gestures—was incredibly satisfying. This project not only sharpened my technical skills but also opened up new creative ideas for future interactive mini-projects. As you may have noticed in "The Editor’s Cut Floral Red" video above, the possibilities for creating something unique are endless—truly, the sky’s the limit.

🏃🏼‍♂️Cheers