
sonic systems via hybrid human+AI explorations
Designed object, interactive installation
2024

Project website

The idea for this project initially came from learning about Dream Fields, a text-to-3D model developed by UC Berkeley and Google Research researchers Ajay Jain, Ben Mildenhall, Jonathan T. Barron, Pieter Abbeel, and Ben Poole. With the AI field developing so fast, quite a few new models have come out since then. We ended up using stable-dreamfusion, an open-source model based on Google’s DreamFusion that integrates Stable Diffusion.

We have since generated physical objects representing sound artists’ and musicians’ sounds. We developed a hybrid human/machine process employing a chain of tools: ChatGPT generates descriptive prompts for physical representations of an artist’s sound, as described by the artist in their own words; these prompts are fed into Stable-Dreamfusion to generate a series of virtual 3D objects, which are then selected, curated, and refined for conversion into physical form using a combination of Virtual Reality and 3D-modeling environments, and ultimately translated into hybrid 3D-printed, tech-connected, and hand-crafted interactive objects.
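As a rough illustration of this chain, the sketch below passes a ChatGPT-generated description to stable-dreamfusion from a Python script. The model name, interview excerpt, workspace name, and exact CLI flags are assumptions based on the publicly documented interfaces, not a record of our actual setup.

```python
# Sketch of the prompt-to-3D step, assuming the OpenAI Python client (v1+) and
# the stable-dreamfusion command-line interface (main.py --text ... --workspace ...).
import subprocess
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Hypothetical excerpt from an artist interview describing their own sound.
artist_words = "warm, slow-building shoegaze with a metallic shimmer"

# Ask ChatGPT for a concise physical description of that sound.
response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": "Describe, in one short sentence, a physical object that embodies the sound the musician describes."},
        {"role": "user", "content": artist_words},
    ],
)
prompt = response.choices[0].message.content.strip()

# Feed the description to stable-dreamfusion to generate a virtual 3D object.
subprocess.run(
    ["python", "main.py", "-O", "--text", prompt, "--workspace", "trial_object"],
    check=True,
)
```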


Our process of creating these musical objects can be broken down into the following steps: interviewing the musicians, generating a text description of the instrument, generating a 3D model of the musical object, manually editing the model, 3D printing prototypes, creating an interactive audio-visual experience, and making the musical object at full scale.
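For the manual-editing and 3D-printing steps, a mesh exported from the text-to-3D stage typically needs cleanup and scaling before slicing. Below is a minimal sketch using the trimesh library; the filename, target height, and OBJ export format are illustrative assumptions rather than our exact workflow.

```python
# Minimal mesh-cleanup sketch with trimesh, assuming an OBJ export from the
# text-to-3D stage. Filenames and target height are placeholders.
import trimesh

mesh = trimesh.load("generated_object.obj", force="mesh")

# Patch small holes so the mesh is watertight enough for slicing.
if not mesh.is_watertight:
    mesh.fill_holes()

# Scale the model to a prototype height of roughly 150 mm.
target_height = 150.0
current_height = mesh.extents[2]
mesh.apply_scale(target_height / current_height)

# Export an STL for the 3D printer's slicing software.
mesh.export("prototype.stl")
```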

First phase of the project is to generate physical and virtual representations of sound artists’ sound and create audios for these objects referencing chatGPT interpretations and make this into a growing series of physical manifestations, with which viewers are able to interact. One aspect in which we are particularly interested is in seeking out musicians in the local, underground music scene and engaging them in the process, affording our hybrid making process inputs with greater richness (e.g., motion capture data from live performances, lengthier descriptions of the musical quality of bands’ music, bands aspirations for future sound). We have since been working with 2 local bands from Columbus, Ohio, Catchwords and Abel.
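As one hedged sketch of how a tech-connected object might let viewers trigger its audio, the snippet below assumes a Raspberry Pi with a touch or contact sensor wired to GPIO pin 17 (read via gpiozero) and a pre-rendered clip for the object (played via pygame.mixer). The hardware, pin number, and filename are assumptions for illustration only.

```python
# Touch-triggered playback loop for one interactive object.
from signal import pause

import pygame
from gpiozero import Button

pygame.mixer.init()
object_sound = pygame.mixer.Sound("object_clip.wav")  # hypothetical pre-rendered clip

sensor = Button(17)  # treat the touch/contact sensor as a simple button input

def on_touch():
    # Play the object's sound whenever a viewer touches it.
    object_sound.play()

sensor.when_pressed = on_touch
pause()  # keep the script alive, waiting for sensor events
```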

Following that, we want to expand and refine the output of our hybrid making process from scaled instrument prototypes to functioning musical instruments that musicians, novice and professional alike, will be able to use during live, interactive performances. This will be our main focus in the next step of our research.

Parallel to the above, we also aspire to develop new 3D-based AI models of our own. Available 3D generation models at present still rely heavily on text-based input, and linguistic representation has its limits, just like any other medium. We want to further explore a more diverse range of inputs that can be used to generate 3D assets, including motion capture data, sound…
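This direction is still exploratory. As one illustration of what a non-text input could look like numerically, the sketch below turns a short audio clip into a fixed-size feature vector (a time-averaged log-mel spectrogram via librosa) that could, in principle, accompany or replace a text embedding as a conditioning signal. The feature choice, sample rate, and dimensions are assumptions, not part of any existing model of ours.

```python
# Exploratory sketch: derive a fixed-size audio feature vector that could serve
# as a non-text conditioning signal. All parameter choices are assumptions.
import librosa
import numpy as np

def audio_embedding(path: str, sr: int = 22050, n_mels: int = 128) -> np.ndarray:
    y, _ = librosa.load(path, sr=sr, mono=True)
    mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=n_mels)
    log_mel = librosa.power_to_db(mel)
    # Average over time to get a single n_mels-dimensional vector per clip.
    return log_mel.mean(axis=1)

# Hypothetical usage with a band's recording:
# embedding = audio_embedding("live_excerpt.wav")  # shape: (128,)
```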

On top of that, we imagine further possibilities for integrating the generated objects into performances, hybridizing creativity between AI and human makers and performers. There is also the potential of AI-assisted content creation, including AI music scoring, text-based AI script generation, etc. The ultimate goal is to use these technologies to create a coherent project: a performative experience empowered by AI.
