A tangible interface for 3D co-creation with AI.
role
Concept, design, DIY e-textile circuit, silicone casting, text-to-mesh generation, web development, GLSL shaders
authors
Marianne Canu, Sophien Chen, Adrien Chuttarsing, Ninon Lizé Masclef
exhibitions
ENS Paris Saclay, Joseph Gallery, Extended Senses Symposium + Stephen Lawrence Gallery
conference
Ninon Lizé Masclef. (2023). Symbiotic Co-Creation with AI. In AI & HCI Workshop at the 40th International Conference on Machine Learning (ICML), Honolulu, Hawaii, USA.
Ninon Lizé Masclef and Adrien Chuttarsing. (2023). Latent Organism: Embodied Co-Creation with AI. In Creativity and Cognition (C&C '23). ACM, New York, NY, USA.
Ninon Lizé Masclef and Kaitlin M. Reed. (2023). Bio Deepfakes, Mimicry and Memes. In ACM CHI '23 Workshop on Living Bits and Radical Aminos, Hamburg, Germany.
Ninon Lizé Masclef and Adrien Chuttarsing. (2023). Latent Organism: A Tangible Interface for 3D Co-Creation with AI. In Proceedings of the Electronic Visualisation and the Arts London Conference (EVA London).
prize
creARTathon Inria
keywords
tangible interface, e-textile, interactive installation, 3D graphics, embodiment, AI art
...and all around you the dance of biz, information interacting, data made flesh in the mazes of the black market.
W. Gibson, Neuromancer (1984)
Latent Organism proposes a novel technique for designing 3D objects through a tangible interface. Our artefact harnesses generative algorithms to let anyone produce unique and complex 3D shapes through natural, playful interactions with a sensitive tactile surface. This process of balanced co-creation between human and machine is a way of appropriating an artificial imagination. The spectator is in control, using the imagination of the machine as clay, as a mouldable material.
Recent advances in generative models make it possible to synthesise visuals from a text prompt, opening doors for new artistic expression and populating our imaginaries. However, AI-generated art systems are neither embodied nor graspable. While Gibson foresaw the possibility of incarnating information as "data made flesh", today we still wonder where the flesh is in the machine. Our art installation Latent Organism aims to overcome this lack of embodiment in human-AI collaboration. The interface is tangible, has a biological – flesh-like – appearance, and bridges the digital imagination of the machine and the knowledge of the human. It thus explores embodied artificial creativity through the relationship between the senses of touch and sight.

[W]e intend to create a new expertise – a craftsmanship – of artificial imagination modelling.


Gradually, individuals develop the ability to use the machine's imagination as a sculptable material. They begin to understand and control the latent organism through physical exploration, which is difficult with most AI systems, opaque as they are to the user. Through the sensitivity of touch, we intend to create a new expertise – a craftsmanship – of artificial imagination modelling. We claim that the development of a culture of technics (Simondon, 2017), namely an understanding of the technicity of AI-generated objects, is necessary for an artistic re-appropriation of artificial imagination. The novel technicity acquired through embodied interaction, where AI itself is palpable, would then augment human creativity.



Latent Organism is composed of an e-textile-covered controller for AI-generated 3D visuals. A handcrafted embroidery, made of silicone and fabric, is embedded with ten piezoresistive sensors. The sensors, made from fabric, conductive paint, and Velostat, are arranged in a 2x2 pressure matrix and integrated into the structure of a bean bag chair.
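As a minimal sketch of how the sensor side could feed the visuals, the snippet below assumes the ten sensors are digitised by a microcontroller that streams one comma-separated frame of raw ADC readings per line over USB serial; the port name, baud rate, and message format are illustrative assumptions, not the installation's actual firmware.

# Minimal sketch: reading normalised pressure values from the e-textile matrix.
# Assumes (hypothetically) a microcontroller streaming one comma-separated
# frame of raw ADC readings per line over USB serial.
import serial  # pyserial

N_SENSORS = 10    # ten piezoresistive sensors in the interface
ADC_MAX = 1023.0  # 10-bit ADC assumed

def read_pressures(port: serial.Serial) -> list[float]:
    """Read one frame and normalise each sensor to [0, 1]."""
    line = port.readline().decode("ascii", errors="ignore").strip()
    raw = [int(v) for v in line.split(",") if v]
    if len(raw) != N_SENSORS:
        return [0.0] * N_SENSORS  # drop malformed frames
    return [min(max(v / ADC_MAX, 0.0), 1.0) for v in raw]

if __name__ == "__main__":
    with serial.Serial("/dev/ttyUSB0", 115200, timeout=1) as port:
        while True:
            weights = read_pressures(port)
            # `weights` can drive the blendshape cross-fade described below.
            print(weights)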



The generative aspect of our installation is built on Dream Fields, a model that learns a continuous representation of a 3D object with guidance from a deep neural network conditioned on a text prompt, together with simple regularisers. Given a text input, the model generates a corresponding 3D shape and texture. The process begins with several textual prompts produced by an embedding model trained on the Encyclopedia of Life (eol.org). These prompts are fed to Dream Fields, which generates a 3D mesh for each of them. For each generated mesh, a blendshape is computed, allowing a seamless transition between the initial shape and the new one. The blendshapes are driven by the sensors of the artwork's physical interface: each sensor corresponds to a text-mesh pair, updated in real time as the user interacts with the piece. Multiple blendshapes can be cross-faded simultaneously, adding an extra dimension of complexity to the mesh deformation, as sketched below.
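To illustrate the cross-fade, here is a minimal numpy sketch, assuming every generated mesh shares the base mesh's vertex count and ordering (as blendshapes require); the placeholder geometry and weights stand in for the generated meshes and the live sensor readings.

# Minimal sketch: cross-fading several blendshapes driven by sensor weights.
# Assumes every generated mesh shares the base mesh's vertex count and order,
# so each blendshape is simply a per-vertex offset from the base.
import numpy as np

def make_blendshape(base: np.ndarray, target: np.ndarray) -> np.ndarray:
    """Per-vertex displacement taking the base mesh to the target mesh."""
    return target - base  # both are (n_vertices, 3) arrays

def cross_fade(base: np.ndarray, blendshapes: list[np.ndarray],
               weights: list[float]) -> np.ndarray:
    """Deform the base mesh by a weighted sum of blendshape offsets.

    Each weight in [0, 1] comes from one pressure sensor, so several
    shapes can be blended into the mesh simultaneously.
    """
    deformed = base.copy()
    for offsets, w in zip(blendshapes, weights):
        deformed += w * offsets
    return deformed

# Usage: base and targets would come from the text-to-mesh generation step.
base = np.random.rand(1000, 3).astype(np.float32)    # placeholder geometry
targets = [np.random.rand(1000, 3).astype(np.float32) for _ in range(10)]
shapes = [make_blendshape(base, t) for t in targets]
mesh = cross_fade(base, shapes, weights=[0.2] * 10)  # sensor-driven weights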
The mechanism for updating prompts works as follows: we average the embeddings of the prompts attached to the most solicited sensors, then take the nearest neighbour of this average in the embedding space to obtain a new prompt, which replaces that of the least solicited sensor (see the sketch below).
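A minimal sketch of this update rule, assuming a precomputed bank of candidate prompts and their embeddings (for instance derived from the Encyclopedia of Life vocabulary); prompt_bank, bank_embeddings, and top_k are illustrative names, not part of the installation's actual code.

# Minimal sketch: replacing the least-used prompt with the nearest neighbour
# of the average embedding of the most-used sensors' prompts.
# `prompt_bank` / `bank_embeddings` stand in for a precomputed vocabulary
# of candidate prompts and their embeddings; both are illustrative.
import numpy as np

def update_prompts(prompts: list[str], prompt_embeddings: np.ndarray,
                   usage: np.ndarray, prompt_bank: list[str],
                   bank_embeddings: np.ndarray, top_k: int = 3) -> list[str]:
    """Swap the least solicited sensor's prompt for a fresh nearest neighbour."""
    most_used = np.argsort(usage)[-top_k:]  # most solicited sensors
    least_used = int(np.argmin(usage))      # least solicited sensor
    centroid = prompt_embeddings[most_used].mean(axis=0)

    # Cosine similarity between the centroid and every candidate prompt.
    sims = bank_embeddings @ centroid
    sims /= np.linalg.norm(bank_embeddings, axis=1) * np.linalg.norm(centroid)
    nearest = int(np.argmax(sims))

    prompts[least_used] = prompt_bank[nearest]
    prompt_embeddings[least_used] = bank_embeddings[nearest]
    return prompts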