
Mindful Healing & Consciousness Exploration

Interactive Engine Mechanism of "Collective Resonance"



In the live performance of Collective Resonance, a real-time interactive audiovisual system links sound, touch, and light. The entire architecture is orchestrated by a custom algorithm designed by the EchoForge team, enabling deep sensory resonance between the audience, the performers, and the space.




Performer Module — QingH
Input Behavior:
QingH uses the Ableton Push pads to trigger audio samples and MIDI signals in real time.
System Response:
MIDI signals → Control the DMX lighting system within Unreal Engine 5.
MIDI signals → Simultaneously transmitted to TouchDesigner, generating dynamic visual imagery (see the routing sketch below).
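The exact routing logic belongs to EchoForge's custom system, but the fan-out from a single Push hit to both outputs can be sketched in Python. This is a minimal illustration, assuming UE5 runs its OSC plugin and TouchDesigner listens with an OSC In operator; the MIDI port name, OSC addresses, and port numbers are hypothetical, not the production configuration.

import mido
from pythonosc.udp_client import SimpleUDPClient

# Hypothetical OSC endpoints: UE5 (via its OSC plugin) and a
# TouchDesigner OSC In operator on another port.
ue5 = SimpleUDPClient("127.0.0.1", 8000)   # assumed UE5 OSC port
td = SimpleUDPClient("127.0.0.1", 9000)    # assumed TouchDesigner OSC port

# "Ableton Push 2 User Port" is an assumed MIDI port name.
with mido.open_input("Ableton Push 2 User Port") as pads:
    for msg in pads:
        if msg.type == "note_on" and msg.velocity > 0:
            # One pad hit fans out to lighting and visuals at the same time.
            ue5.send_message("/dmx/trigger", [msg.note, msg.velocity])
            td.send_message("/visual/trigger", [msg.note, msg.velocity])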
Audience Voice Interaction Module
Input Process:
Audience members vocalize into a microphone.
The audio signal is processed by Ableton Live for pitch detection.
The detected pitch is streamed in real time to Max for Live / MaxMSP, which converts it into MIDI notes.
Output Process:
MIDI notes → Control DMX lighting changes within Unreal Engine 5.
MIDI notes → Simultaneously transmitted to TouchDesigner, generating responsive visual effects (a pitch-to-note sketch follows this list).
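The Max for Live patch itself is not published, so the sketch below only shows the standard pitch-to-note conversion such a patch would perform: the detected frequency is mapped onto the equal-tempered MIDI scale (A4 = 440 Hz = note 69) and clamped to the valid range.

import math

def freq_to_midi(freq_hz: float) -> int:
    # Standard conversion: 12 semitones per octave, anchored at A4 = 440 Hz.
    note = 69 + 12 * math.log2(freq_hz / 440.0)
    return max(0, min(127, round(note)))

# An audience hum near middle C (~261.63 Hz) becomes MIDI note 60,
# which then drives both the DMX change and the TouchDesigner response.
print(freq_to_midi(261.63))  # -> 60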
Core System Coordination
The real-time signal dispatch, synchronization, and interaction logic of all modules are driven by a custom algorithm developed by the EchoForge team, ensuring precise correspondence among light, sound, and touch and a cohesive, immersive experience. A simplified dispatcher sketch follows below.
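EchoForge has not published the coordination algorithm, so the following is only a structural sketch of the kind of dispatcher the description implies: both input modules feed one shared queue, and each event is delivered to every output in the same pass so lighting and visuals react together. All names here are hypothetical.

import queue
import threading
import time

events = queue.Queue()  # shared inbox for the performer and audience modules

def dispatch_loop(outputs):
    # Deliver each event to every output in the same pass,
    # keeping lighting and visuals in lockstep.
    while True:
        source, note, velocity = events.get()
        stamp = time.monotonic()
        for send in outputs:
            send(source, note, velocity, stamp)

def to_dmx(source, note, velocity, stamp):
    print(f"[DMX @ {stamp:.3f}] {source}: note {note}, velocity {velocity}")

def to_visuals(source, note, velocity, stamp):
    print(f"[TouchDesigner @ {stamp:.3f}] {source}: note {note}, velocity {velocity}")

threading.Thread(target=dispatch_loop, args=([to_dmx, to_visuals],), daemon=True).start()

events.put(("performer", 60, 100))  # Push pad hit
events.put(("audience", 64, 80))    # detected audience pitch
time.sleep(0.1)                     # allow the dispatcher to drain the queue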

First Live Presentation:
Date: May 1, 2025
Location: Beijing Times Art Museum
Event: Collective Resonance: An Immersive Ritual of Sensory Awakening
