The Mind We Share

Date: 2025

Project type: Collaborative work

Location: Cambridge, MA

The Mind We Share is structured as a guided journey that turns a participant’s preferences, expressions, and relaxation patterns into a personal audiovisual trace.

The experience begins with a brief conversation with an AI, including a short self-introduction and a few questions about personal preferences. Those responses enter the system as data inputs, shaping an initial layer of generative video and sound. The participant then wears an EEG headset and begins a meditation session guided by a somatic meditation expert. Guidance alternates between the human expert and an AI voice model trained on the expert’s voice, creating a shared rhythm between human presence and computational translation.
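As a rough illustration of the intake step, the sketch below shows one way conversational answers could be reduced to a reproducible seed and a few normalized control values for the generative layer. The parameter names (hue, density, tempo) and the mapping itself are hypothetical; the project description does not specify how responses are encoded.

```python
# Minimal sketch: turn free-text preference answers into a deterministic
# seed plus a few 0..1 control values for the generative video/sound layer.
# Parameter names and ranges are illustrative, not the project's actual scheme.
import hashlib

def intake_to_params(answers: list[str]) -> dict:
    digest = hashlib.sha256("\n".join(answers).encode("utf-8")).digest()
    # Reuse digest bytes as reproducible 0..1 control values.
    u = [b / 255.0 for b in digest[:3]]
    return {
        "seed": int.from_bytes(digest[:8], "big"),  # seeds the generators
        "hue": u[0],
        "density": u[1],
        "tempo": 0.5 + 0.5 * u[2],  # keep tempo in a musical 0.5..1.0 range
    }

params = intake_to_params(["I'm Sunny.", "I prefer dusk light and low drones."])
```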

During meditation, EEG signals and AI-based facial detection capture shifts in relaxation and expression. This biometric stream continuously modulates the real-time visuals and soundscape rendered in TouchDesigner, allowing the media to respond dynamically as the participant settles, drifts, or refocuses. After the session, the participant revisits a personalized video and audio output generated from their own conversational, physical, and mental data.
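A minimal sketch of how such a biometric stream might reach TouchDesigner: relative alpha-band power (a common relaxation proxy) is computed from a raw EEG window and sent over OSC. The sampling rate, OSC address, port, and normalization are assumptions, not details from the project.

```python
# Minimal sketch: stream a relaxation proxy (relative alpha-band power)
# from a raw EEG window into TouchDesigner over OSC. The headset SDK,
# OSC address, and scaling are assumed for illustration.
import numpy as np
from scipy.signal import welch
from pythonosc.udp_client import SimpleUDPClient

FS = 256  # assumed EEG sampling rate (Hz)
client = SimpleUDPClient("127.0.0.1", 7000)  # assumed TouchDesigner OSC port

def alpha_power(samples: np.ndarray) -> float:
    """Relative alpha (8-12 Hz) power of a 1-D EEG window."""
    freqs, psd = welch(samples, fs=FS, nperseg=FS * 2)
    band = (freqs >= 8) & (freqs <= 12)
    total = (freqs >= 1) & (freqs <= 40)
    return float(psd[band].sum() / psd[total].sum())

def on_new_window(samples: np.ndarray) -> None:
    # Clamp to 0..1 and send; the scale factor is an assumed calibration.
    client.send_message("/relaxation", min(1.0, alpha_power(samples) * 4.0))
```

Inside TouchDesigner, an OSC In CHOP listening on the same port would then expose /relaxation as a channel that visual and audio parameters can reference directly.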

This concept was submitted as a proposal for the MIT Museum’s Mind Control exhibition (2025).

Team: Sunny Seon Young Kim, Myeongkyu Kim

#HumanAICollaboration #BiometricArt
