Facebook Mosaic is a platform I developed to let people create art from their social data. My current work focuses on capturing and visualizing social data to provide utility to the masses. We upload an extensive amount of data to our social networking sites every day, yet for the most part we can only view that data in the prescribed context of our virtual social networks: Facebook, Instagram, Twitter, etc. My goal is to find ways to capture this data and visualize it in ways that can actually improve our day-to-day lives.
Although Facebook Mosaic may not achieve that goal, the development process was crucial to my understanding of what it takes to query various social networks and make use of the information returned. It also gave me a chance to exercise my creative side and build something fun and interactive. You can use Facebook Mosaic to generate images with your social data by visiting the website.
Here is a statement I wrote for the piece:
As an Electronic Artist I am always looking for ways to re-contextualize the role technology plays in our lives. Facebook Mosaic is a program that takes three profile pictures from a user’s Facebook news feed, and blends them together dynamically using one color channel from each photo.
Many of us use Facebook daily to communicate and share with friends and family, both locally and around the world. This forum has become a global “water cooler,” with a reach bound by neither time nor space. As a result, we are forced to think about our interactions in an entirely different way.
Although there is a distinct level of separation between our “real” selves and our profile, Facebook provides a melting pot for our ideas and identities to blend together like a large mosaic with many facets coming together to create a dynamic collaborative whole. My goal with this piece is to frame this abstract concept in a concise, playful fashion so as to depict our social interactions as works of art.
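The one-channel-per-photo blend described in the statement can be sketched in a few lines. This is an illustrative Python version, not the actual Facebook Mosaic code; real images would come through a library like Pillow, but here each "photo" is just a list of (r, g, b) tuples.

```python
# Sketch of the blend: take the red channel from photo_a, the green
# channel from photo_b, and the blue channel from photo_c.

def blend_channels(photo_a, photo_b, photo_c):
    """Combine one color channel from each of three same-sized photos."""
    return [
        (a[0], b[1], c[2])
        for a, b, c in zip(photo_a, photo_b, photo_c)
    ]

# Three tiny 2-pixel "photos" for demonstration.
p1 = [(255, 0, 0), (10, 20, 30)]
p2 = [(0, 255, 0), (40, 50, 60)]
p3 = [(0, 0, 255), (70, 80, 90)]
print(blend_channels(p1, p2, p3))  # [(255, 255, 255), (10, 50, 90)]
```

Because each source photo contributes only one channel, no single profile picture dominates the result, which is the "many facets coming together" idea in miniature.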
I created this Max patch as a test for sound-reactive visuals. I use Jitter physics to give the balls mass in the virtual world, then map ghost objects at the bottom of the world to impulses based on the sound coming in or out of the computer. I use the fffb~ object to separate the left and right audio channels into frequency bands that correspond to the ghost objects sending out the impulses that move the balls. That way, instead of just flying all over the place, the balls move in directions that directly correspond to how much bass or treble is in the music and which channel it’s coming from. The song is by my friend Ula from Poland; it was an ideal choice for the test because of its wide dynamic range.
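The band-to-impulse mapping lives in the Max patch, but the idea translates to a short sketch. This Python version is an illustration of the concept, not the patch itself; the band names, the 0–1 amplitude range, and the gain factor are my assumptions for the example.

```python
# Map per-band, per-channel amplitudes (0..1) to impulse magnitudes
# for ghost objects along the floor of the virtual world.

BANDS = ["bass", "mid", "treble"]  # assumed band split for the example

def impulses(left_amps, right_amps, gain=10.0):
    """Return {(channel, band): impulse magnitude} from band amplitudes."""
    forces = {}
    for chan, amps in (("L", left_amps), ("R", right_amps)):
        for band, amp in zip(BANDS, amps):
            forces[(chan, band)] = round(amp * gain, 3)
    return forces

# A bass-heavy left channel pushes hardest from the left-side ghost object.
print(impulses([0.9, 0.3, 0.1], [0.2, 0.4, 0.6]))
```

Each (channel, band) pair corresponds to one ghost object, so the direction and strength of the push on the balls tracks where the energy sits in the stereo field and the spectrum.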
Here is the patch being used to do visuals at a show.
Today I debuted my new interactive dome piece, Mind Chimes, at ARTS Lab, UNM. The piece generates visuals and music from a live brainwave feed captured by a NeuroSky MindWave Mobile headset. I coded the entire piece in MaxMSP and used vDome, an open-source Max-based dome player, to skin it to the dome. The audio is generated by sending MIDI notes from my brainwave synth to Camel Audio’s Alchemy synth instruments. The visuals are generated from the notes being played and change colors based on your state of mind. This is a great first iteration and I look forward to building it out further.
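One way to turn a brainwave value into a MIDI note is to quantize it onto a scale. The sketch below is a hypothetical Python version of that idea, not the Max patch from the piece; the pentatonic scale, root note, and two-octave range are all my assumptions.

```python
# Quantize a NeuroSky-style "attention" value (0-100) onto a
# pentatonic scale so every state of mind produces a consonant note.

PENTATONIC = [0, 2, 4, 7, 9]  # semitone offsets within one octave

def attention_to_midi(attention, root=60, octaves=2):
    """Map attention (0-100) to a MIDI note number on a pentatonic scale."""
    steps = len(PENTATONIC) * octaves
    idx = min(int(attention / 100 * steps), steps - 1)
    octave, degree = divmod(idx, len(PENTATONIC))
    return root + 12 * octave + PENTATONIC[degree]

print(attention_to_midi(0))    # 60 (middle C)
print(attention_to_midi(100))  # 81
```

Snapping to a scale is what keeps a noisy biosignal musical: small fluctuations in the feed move between neighboring scale degrees rather than producing arbitrary pitches.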
There’s no good way to capture a dome piece with standard video but here’s a little clip I shot of my friend going to town with his mind.
After a whole semester of on and off work I finally finished the Moai head that I’ve been working on. The “Moai, or mo‘ai, are monolithic human figures carved by the Rapa Nui people from rock on the Chilean Polynesian island of Easter Island between the years 1250 and 1500.” Courtesy of Wikipedia.
The sculpture is built out of carved styrofoam, metal lath, and concrete. I’m not sure how heavy it is, but I know it’s not too bad. It measures just under 4′ tall. It will eventually live in my backyard. Below are process photos. The final piece is the last one.
For this project I helped my friend Ben Ortega, a MARC student at UNM, build a model he developed with Grasshopper and Millipede. I used Pepakura to design the unfolded parts and lay them out. He and I then built the model using the paper parts and some tacky glue.
Here is the model next to the unfolded parts in Pepakura.
Here are some photos of us building the model. In some of the photos you can see a smaller 3D printed model we were using for reference.
This video shows a brief clip of the projection mapping on the giant 3D moustache for the Moustachio Bashio 2013 at Sister Bar in Albuquerque, New Mexico. I used Millumin for the mapping and Resolume for the VJing.
For the final build I unfolded the full-size paper model and traced it on the foam core. I used push-pins to mark the vertices and then drew lines from point to point.
I used a 45° foam cutter and tried to get as close to the paper on the other side as I could, so that I would be able to make the folds easily without too much resistance. I cut the outline with a regular X-Acto knife after I made all the 45° cuts.
At this point I stopped taking photos because I was stressing to get the model finished. The Moustachio Bashio was that night. The folding worked pretty well with the 45s taken out, though it could have been better. I did meet some resistance, which meant I had to get creative to make the final model stay together without deforming. Using spray adhesive and the brown wrapping paper that the foam came in, I bonded the open faces together.
This was harder than I expected, and because I was rushing, it turned out a little sloppier than I would have liked. The form was still pretty fragile and would not hold its shape well, so I threaded a needle and sewed supporting strings into the back. This worked really well; I only needed five reinforcing tethers. At this point I was backstage at the venue and the first DJ was warming up, so I just used white gaff tape to attach the model to the back face. Luckily you couldn’t tell once it was hung. We had already done a placement test and calibrated the projector earlier that week. Once I finished the model, I hung it right up with two chains that hooked into eyelets bolted into the plywood backing. Boy, did being finished feel amazing. I came super close to having nothing to show for all my efforts.
This is from the placement test the week before. Our host Danger is obviously having fun.
For this project I used Millumin to map the geometry, Resolume for the VJing, and Max/MSP to route OSC data between Synapse, which does skeletal tracking and gesture triggering with the Kinect, and Millumin. The video content is a mixture of video loops, a Lucius music video (Turn It Around), and my Light Dreams video, which can be found on my channel.
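The Max patch's job here is essentially address-based routing: a gesture message comes in from Synapse, and a trigger message goes out to Millumin. Here is a hypothetical pure-Python sketch of that idea; the OSC address strings below are placeholders, not the real Synapse or Millumin protocols.

```python
# Translate incoming gesture addresses (Kinect skeleton events) into
# outgoing trigger messages for the video player. Addresses are
# illustrative placeholders, not the actual protocol strings.

ROUTES = {
    "/righthand/up": ("/millumin/action/launchColumn", 1),
    "/lefthand/up":  ("/millumin/action/launchColumn", 2),
}

def route(address):
    """Return the (outgoing address, argument) for a gesture, or None."""
    return ROUTES.get(address)  # unmapped gestures are simply dropped

print(route("/righthand/up"))  # ('/millumin/action/launchColumn', 1)
print(route("/head/tilt"))     # None
```

Keeping the gesture-to-trigger mapping in one table like this is what makes it easy to remap gestures to different clips between sets.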
You can see me triggering different videos with gestures in the PIP on the bottom right.
This proposal was a collaboration between Chris Clavio (myself) and Ruben Olguin, submitted to the Prix Ars Electronica [The Next Idea] competition for judging by a jury. If selected, the proposal will be realized and installed at the 2013 Ars Electronica Festival.
This electronic arts project incorporates engineering, computer science, and creativity with the intention of creating a practical survival solution in tandem with a social dialogue about the way we generate, access, and transport electricity. The technology, at its root, integrates piezoelectric circuits into the sole of a shoe to generate electricity, which can then be used to charge mobile devices.
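The practicality question comes down to arithmetic: energy per step times steps per day versus the capacity of a phone battery. The sketch below is a back-of-envelope calculation only; every number in it is an assumed placeholder for illustration, not a measured figure from the proposal.

```python
# Back-of-envelope feasibility check for the piezoelectric shoe.
# All values are assumed placeholders, not measurements.

energy_per_step_j = 0.002      # assumed joules harvested per heel strike
steps_per_day = 10_000         # assumed daily step count
phone_battery_j = 5.0 * 3600   # assumed ~5 Wh phone battery, in joules

harvested_j = energy_per_step_j * steps_per_day
print(f"{harvested_j:.0f} J/day, "
      f"{harvested_j / phone_battery_j:.1%} of a phone battery")
```

Under these assumptions the daily harvest is modest, which is why a design like this pairs the circuit with a storage element and targets top-up charging rather than a full recharge.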