EchoCavern
EchoCavern is a demo of an immersive experience that integrates AI-generated 3D assets into the Unity immersive-experience development pipeline. It is both a demo and a tool for exploring near-real-time delivery of generated 3D models into the CAVERN (Unity) development pipeline. Although it currently offers only a basic working demo of the asset-generation and import pipeline, it showcases the potential of combining AI generation with CAVERN immersive experiences.
Here is the Demo Video:
Goal
This independent study examines the pipeline for importing AI-generated 3D content into an active Unity environment running on the CAVERN setup.
While we already know that AI-generated models and meshes can be imported into Unity (and thus into the CAVERN Toolkit setup), this exploration looks at ways a user (or designer) can import AI-generated content in close to real time while the Unity environment is running.
The promo video produced by NVIDIA demonstrates what this would look like in theory -
USER FLOW
The experience begins with the user speaking into a microphone. Their voice is captured, confirmed, and then processed through an AI speech-to-text system (Whisper), which accurately converts the spoken phrase into clean text. This ensures the system fully understands what the user described, whether it's an object, creature, or scene element.
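The speech-to-text step above can be sketched as follows. This is a minimal Python sketch assuming the open-source `openai-whisper` package; the demo may wire Whisper up differently, and the `clean_prompt` normalization is my own illustrative helper, not necessarily part of the project.

```python
# Sketch of the speech-to-text step, assuming the `openai-whisper` package.
import re

def clean_prompt(raw: str) -> str:
    """Normalize a transcript into a prompt: collapse whitespace and
    drop the trailing period Whisper often appends."""
    text = re.sub(r"\s+", " ", raw).strip()
    return text.rstrip(".")

def transcribe_to_prompt(audio_path: str) -> str:
    """Captured audio -> prompt text. Requires `pip install openai-whisper`
    and ffmpeg on the PATH."""
    import whisper  # heavy dependency, so imported locally
    model = whisper.load_model("base")     # small, CPU-friendly model
    result = model.transcribe(audio_path)  # returns a dict with a "text" key
    return clean_prompt(result["text"])
```

In the demo loop, `transcribe_to_prompt` would run once per confirmed voice capture, and its return value becomes the generation prompt.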
Once the text is ready, it is sent to the Meshy API, which generates a 3D model based on the user’s description. The model is created in real time, translating the user’s imagination directly into a digital asset. This step transforms simple spoken language into a tangible, visual object through generative AI.
After the model is generated, Unity brings it into the immersive environment instantly. The asset is imported, placed, and displayed on the large screen, allowing users to watch their words become interactive 3D elements in the scene. This loop continues seamlessly, allowing users to keep speaking, creating, and shaping the space through voice alone.
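The generate-then-import loop above can be sketched like this. The endpoint path, payload fields, and response shape follow Meshy's REST API as I understand it; treat them as assumptions and check the official Meshy docs before relying on them. The status poller takes an injected fetch function so the loop can be exercised without network access.

```python
# Sketch of the text -> 3D step against the Meshy API (endpoint assumed).
import time
from typing import Callable

MESHY_URL = "https://api.meshy.ai/v2/text-to-3d"  # assumed endpoint

def start_task(prompt: str, api_key: str) -> str:
    """Submit a text-to-3D task and return its task id."""
    import requests
    resp = requests.post(
        MESHY_URL,
        headers={"Authorization": f"Bearer {api_key}"},
        json={"mode": "preview", "prompt": prompt},
    )
    resp.raise_for_status()
    return resp.json()["result"]  # assumed response field

def wait_for_model(fetch_status: Callable[[], dict],
                   poll_seconds: float = 0.0,
                   max_polls: int = 100) -> str:
    """Poll until the task succeeds; return the GLB download URL.

    `fetch_status` is injected for testability; in production it would
    GET f"{MESHY_URL}/{task_id}" and return the parsed JSON.
    """
    for _ in range(max_polls):
        status = fetch_status()
        if status["status"] == "SUCCEEDED":
            return status["model_urls"]["glb"]
        if status["status"] == "FAILED":
            raise RuntimeError("Meshy task failed")
        time.sleep(poll_seconds)
    raise TimeoutError("Meshy task did not finish in time")
```

Once `wait_for_model` returns a URL, the file is downloaded into the Unity project and spawned into the scene, closing the voice-to-asset loop.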
The Plugin
While developing this experience, I also created a tool for testing and experimenting with the Meshy API in Unity. (Meshy has its own official plugin; this is an in-Unity-Editor version that gives more control during development.)
DOWNLOAD LINK
HOW TO USE
- Download the package and import it into your Unity project
- In the top menu bar, open Meshy to CAVERN
- Enter your Meshy API key
- Type a prompt and generate
DEVELOPMENT & ITERATION
Step 1: GUI Model Importer
Manually downloaded models from the web app and automatically imported them into the scene
Basic functions: import, randomize, and delete
Installed the official Meshy plugin
Step 2: Meshy API
Used the API key to download models directly into the Unity project folder and scene
Analyzed the differences between each asset pipeline
Added prompt input to the GUI
Debugged the threading control
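The "download directly into the Unity folder" part of Step 2 can be sketched as below. Saving under `Assets/` lets Unity's asset database pick the file up (the editor plugin can trigger `AssetDatabase.Refresh()` afterwards). The `Assets/GeneratedModels` layout and filename scheme are my assumptions, not necessarily what the plugin uses.

```python
# Sketch of downloading a generated model into a Unity project's Assets folder.
import os
import re

def asset_path(project_root: str, prompt: str, fmt: str = "glb") -> str:
    """Build a filesystem-safe path under Assets/ from the user's prompt."""
    safe = re.sub(r"[^A-Za-z0-9_-]+", "_", prompt.strip()).strip("_") or "model"
    return os.path.join(project_root, "Assets", "GeneratedModels", f"{safe}.{fmt}")

def download_model(url: str, dest: str) -> str:
    """Stream the model file to disk; Unity imports it on the next refresh."""
    import requests
    os.makedirs(os.path.dirname(dest), exist_ok=True)
    with requests.get(url, stream=True) as resp:
        resp.raise_for_status()
        with open(dest, "wb") as f:
            for chunk in resp.iter_content(chunk_size=1 << 16):
                f.write(chunk)
    return dest
```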
Step 3: Realtime Generation
Refined the GUI importer
Added download-format selection and task-status display
Refined the code structure
Implemented the in-game UI system (WIP)
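The threading control behind realtime generation follows a common pattern: Unity only allows scene changes on the main thread, so the slow API work runs on a worker thread and finished results are handed back through a queue that the main loop drains each frame. Here is a sketch of that pattern in Python for brevity (the plugin itself is C#); the function names are illustrative, not the plugin's actual API.

```python
# Sketch of worker-thread generation with main-thread spawning.
import queue
import threading

results: "queue.Queue[str]" = queue.Queue()

def generate_async(prompt: str, generate) -> threading.Thread:
    """Run the slow generate(prompt) call off the main thread."""
    def worker():
        results.put(generate(prompt))  # e.g. path to the downloaded .glb
    t = threading.Thread(target=worker, daemon=True)
    t.start()
    return t

def drain_results(spawn) -> int:
    """Called once per frame on the main thread; spawns each finished model."""
    n = 0
    while True:
        try:
            path = results.get_nowait()
        except queue.Empty:
            return n
        spawn(path)
        n += 1
```

In Unity terms, `drain_results` corresponds to work done in `Update()`, where instantiating GameObjects is safe.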
Step 4: Final Refinement
Added textures and materials for generated models
Finalized documentation