NO FUN STUDIO X LA CASA ARTHOUSE X BOBBLEHAUS
The Secret Garden is the first project publicly displayed under my company, No Fun Studio, utilizing dual projection mapping & body-tracking depth sensors to bring this immersive space in the heart of Williamsburg, Brooklyn to life.
La Casa Arthouse Bobblehaus
The Secret Garden
DEC. 2020 — JAN. 2021
The Secret Garden explores a form of augmented reality in which users interact with their reflection in a flowing particle simulation, building Delaunay triangulations that resemble the structure of plants from the positions of their hands. The floor projection drives interaction through aerial blob tracking and a physics engine that displaces particles based on momentum & velocity, dispersing the interactive meadow at the user's feet. I developed this realtime project in TouchDesigner under the art direction of Gonzo Gelso.
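The triangulation idea can be illustrated outside TouchDesigner with a minimal sketch, assuming the depth sensor yields 2D points; here SciPy's `Delaunay` stands in for the native operators, and the particle positions around a tracked hand are hypothetical:

```python
# Sketch: build a Delaunay triangulation from particles seeded near a
# tracked hand position, yielding the plant-like lattice described above.
import numpy as np
from scipy.spatial import Delaunay

rng = np.random.default_rng(0)

# Hypothetical hand position (normalized screen coords) and particles
# scattered around it by the physics simulation.
hand = np.array([0.5, 0.5])
particles = hand + 0.1 * rng.standard_normal((30, 2))

tri = Delaunay(particles)

# Each row of tri.simplices indexes three particles forming one triangle;
# rendering the triangle edges produces the branching structure.
print(tri.simplices.shape[1])  # 3 vertices per triangle
```

In the installation this would run per frame, re-triangulating as the blob tracker updates the hand positions.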
IMMERSIVE VR AUDIO EXPERIENCE
This VR experience was developed to render in realtime in TouchDesigner.
Hardware used: Oculus Rift, Leap Motion.
vr soundscape
OCTOBER 2020
By analyzing the frequency content of the audio signal, this VR experience comes to life with music. Different parameters of the generative elements in the 3D virtual environment are mapped to specific aspects of the sound. Using this method instead of traditional MIDI or OSC input allows it to work with all live & recorded music. To further immerse the user, a Leap Motion was integrated to track the user's hands & give a more natural way of interacting with the virtual soundscape.
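The frequency-to-parameter mapping can be sketched as follows, with assumed band boundaries and parameter names; NumPy's FFT stands in for TouchDesigner's audio analysis operators:

```python
# Sketch: split one audio frame's spectrum into low/mid/high bands and
# map each band's energy to a hypothetical visual parameter.
import numpy as np

SAMPLE_RATE = 44100

def band_energies(frame, sample_rate=SAMPLE_RATE):
    """Return (low, mid, high) spectral energy for one audio frame."""
    spectrum = np.abs(np.fft.rfft(frame))
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / sample_rate)
    low = spectrum[freqs < 250].sum()                      # bass -> geometry scale
    mid = spectrum[(freqs >= 250) & (freqs < 4000)].sum()  # mids -> bloom intensity
    high = spectrum[freqs >= 4000].sum()                   # highs -> particle speed
    return low, mid, high

# A 2048-sample frame of a 110 Hz tone: the energy lands in the low band.
t = np.arange(2048) / SAMPLE_RATE
low, mid, high = band_energies(np.sin(2 * np.pi * 110 * t))
print(low > mid and low > high)  # True
```

Because the analysis works on the raw signal rather than MIDI/OSC events, the same mapping applies unchanged to live or recorded audio.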

AI | Machine Learning

Machine Learning is a process where a computer uses pattern recognition & algorithms to perform a specific task without following explicit instructions.

ML has the power to understand, predict & define future trends from the information it processes.

It will be used as a tool to expand the boundaries of creativity, rather than a tool to replace it.

Calculated Mirage - ML Paintings

This series is an exploration of generative art, utilizing Machine Learning as a tool to create artwork that has never existed before. By training an ML model on a dataset of 2,000+ images of abstract paintings, image synthesis is performed by analyzing the existing paintings, learning their colors, features & characteristics, and finally creating endless variations that are entirely unique. The final output (below) is an animation flowing between generated images, an effect many artists refer to as a “machine hallucination”.

View Hi-Fi Video

Generative Painting Flats

Although these animations are captivating, I do not believe they are the best use of this ML model. The static images can be used as a source of infinite inspiration for visual artists. Each output is complete with the machine's combination of composition, color, texture & contrast derived from the original source dataset, making a collaboration between human & machine a reality.

The key is not only embracing the visually familiar cues but also sparking new ideas with unfamiliar surprises. A hand selection from the limitless pool of "paintings" is featured below:

Process behind the work

It all begins with faces

To produce a high-resolution output, I started with a 1024px x 1024px pre-trained model, "Faces", provided by RunwayML using NVIDIA StyleGAN 2. Below is a Latent Walk from the model:
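A latent walk is just a path through the generator's input space. A minimal sketch of one segment, assuming StyleGAN 2's 512-dimensional latent space (the generator call itself is omitted; in the real pipeline each latent in `z_path` is rendered to a frame):

```python
# Sketch: linearly interpolate between two random latent vectors to
# produce the sequence of latents a latent walk renders frame by frame.
import numpy as np

rng = np.random.default_rng(42)
Z_DIM = 512   # StyleGAN 2 latent dimensionality
STEPS = 60    # frames between two keypoints

z_a = rng.standard_normal(Z_DIM)  # start latent
z_b = rng.standard_normal(Z_DIM)  # end latent

# STEPS evenly spaced latents from z_a to z_b.
alphas = np.linspace(0.0, 1.0, STEPS)[:, None]
z_path = (1 - alphas) * z_a + alphas * z_b

print(z_path.shape)  # (60, 512)
```

In practice spherical interpolation (slerp) is often preferred over this linear version, since it keeps intermediate latents closer to the distribution the generator was trained on.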

From there I used a process called Transfer Learning, where I re-trained the existing model with a dataset of 2,000+ abstract paintings web-scraped from the hashtag #abstractpaintings on IG.

After pre-processing the images to a 1:1 square aspect ratio, I began training the model. Below is a snapshot of the training process from steps 1-2000. Final output = 10,000 steps.
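The pre-processing step can be sketched with Pillow, assuming hypothetical `scraped/` and `dataset/` folders; each scraped image is center-cropped to a square, then resized to the model's 1024x1024 input resolution:

```python
# Sketch: center-crop each image to 1:1, then resize to 1024x1024.
from pathlib import Path
from PIL import Image

def to_square(img: Image.Image, size: int = 1024) -> Image.Image:
    """Center-crop to a 1:1 aspect ratio, then resize to size x size."""
    w, h = img.size
    side = min(w, h)
    left, top = (w - side) // 2, (h - side) // 2
    img = img.crop((left, top, left + side, top + side))
    return img.resize((size, size), Image.LANCZOS)

if __name__ == "__main__":
    src, dst = Path("scraped"), Path("dataset")  # hypothetical folder layout
    dst.mkdir(exist_ok=True)
    for path in src.glob("*.jpg"):
        to_square(Image.open(path).convert("RGB")).save(dst / path.name)
```

Center-cropping keeps the middle of each painting rather than distorting it, which preserves the brushwork the model is meant to learn.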

After over 8 hours of training, I was left with a pool of generated ML paintings. I then exported a Latent Walk Video, a continuous video sequence of hand-selected images generated by the model.

Finally, we've reached the final checkpoint, where the images & videos from the beginning of the article were created. Below are some more explorations & iterations from this model that I imported into TouchDesigner, where I post-processed the content and added some creative effects:

the end?

The beautiful part about an ML model is that, although it reaches a point where training on the existing dataset becomes inefficient and it no longer learns new features and characteristics, it doesn't have to stop there. Transfer learning the existing model with a new dataset lets you build on top of your previous creations.

Face hallucinations - ML Paintings

I know you are probably tired of reading, or have already skipped past my documentation notes altogether to take in the imagery. That's okay too; I'll keep it short and sweet from here onward :)

new dataset of abstract portraits

Below is a snapshot of the training process from steps 1-2000. Final output = 10,000 steps.

Latent walk Video

View Hi-Fi Video

Back to reality

If you've made it this far, thank you for reading along! I hope this breakdown gave you some insight into how Machine Learning can be applied to your creative process. I will leave you with a less abstract application of a similar process, featuring my personal obsession: Basketball Courts.

Imaginary courts Latent Walk

Imaginary courts Latent Walk + Post-processing

EXPLORE MORE:


Check us out on Instagram