Category Archives: Interactive

Arcade Fire – Just a Reflector: Explorations in Web 2.0

A friend of mine just shared this with me and I had to write about it. It’s an interactive music video built on a handful of web 2.0 technologies that combine into a really impressive interactive experience. First and foremost, you’ll have to check it out for yourself at www.justareflektor.com.


Second, after you’ve checked the video out, here is a list of the tech they used and a short explanation of what it does.

Web Technologies

Three.js
A JavaScript library that uses WebGL to create fast 2D and 3D graphics.

WebGL
A JavaScript API that gives access to the user’s GPU for hardware-accelerated graphics and image processing, exposed through the HTML5 canvas element.

Tailbone
An open source project that makes it simple for JavaScript developers to deploy on Google App Engine. Includes a mesh network for connecting multiple devices through WebRTC and WebSockets.

WebSockets
A web technology that lets a browser hold a persistent, two-way connection to a server, enabling near-real-time communication; with a relay server, browsers can effectively talk to each other.

getUserMedia();
A web technology that gives JavaScript developers access to the webcam and microphone. Using computer vision on the webcam feed, the site can then track your phone’s position in front of the camera.

WebAudio
A web technology that lets developers analyze and manipulate audio files.

Device Orientation
Your phone’s orientation is tracked using accelerometer and gyroscope data, which is passed to your computer via WebSockets.
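As a rough sketch of that last pattern: the phone packs its deviceorientation angles into a JSON message, and the desktop smooths the stream as it arrives so tracking doesn’t jitter. This Python analogue stands in for the JavaScript/WebSocket pair; the function names and smoothing factor are my own, not the project’s.

```python
import json

def pack_orientation(alpha, beta, gamma):
    """Phone side: bundle deviceorientation angles (degrees) into a JSON message."""
    return json.dumps({"type": "orientation",
                       "alpha": alpha, "beta": beta, "gamma": gamma})

def unpack_orientation(message):
    """Desktop side: pull the angles back out of the relayed message."""
    data = json.loads(message)
    return {k: data[k] for k in ("alpha", "beta", "gamma")}

def smooth(prev, new, factor=0.8):
    """Simple low-pass filter: keep most of the previous reading."""
    return {k: factor * prev[k] + (1 - factor) * new[k]
            for k in ("alpha", "beta", "gamma")}

# One round trip, as a WebSocket relay would carry it:
msg = pack_orientation(10.0, 20.0, 30.0)
state = {"alpha": 0.0, "beta": 0.0, "gamma": 0.0}
state = smooth(state, unpack_orientation(msg))
```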

Google Technologies

Chrome
Google Chrome’s advanced features, such as WebSockets, WebGL and getUserMedia(), help create an immersive, interactive experience.

App Engine
App Engine lets web developers build and deploy instantly scalable web applications on Google’s infrastructure.

Compute Engine
Compute Engine is used to run the project’s mesh network, keeping both phone and desktop browsers communicating at all times.

Cloud Storage
Video files are stored in Google Cloud Storage for cost-effective file serving online.

Information sourced from www.justareflektor.com/tech?home

Facebook Mosaic 2.0: Painting with social data

Facebook Mosaic is a platform I developed to let people create art from their social data. My current work focuses on capturing and visualizing social data in ways that provide utility to the masses. We upload an extensive amount of data to our social networking sites every day; however, for the most part, we can only view that data in the prescribed context of the networks themselves (Facebook, Instagram, Twitter, etc.). My goal is to find ways to capture this data and visualize it in ways that can actually improve our day-to-day lives.

Although Facebook Mosaic may not achieve that goal, the development process was crucial to my understanding of what it takes to query various social networks and make use of the information returned. It also gave me a chance to use my creative side to develop something fun and interactive. You can use Facebook Mosaic to generate images with your social data by visiting the website.

Here is a statement I wrote for the piece:

As an Electronic Artist I am always looking for ways to re-contextualize the role technology plays in our lives. Facebook Mosaic is a program that takes three profile pictures from a user’s Facebook news feed, and blends them together dynamically using one color channel from each photo.

Many of us use Facebook daily to communicate and share with friends and family, both locally and around the world. This forum has become a global “water cooler,” with a reach not bound by time or space. As a result, we are forced to think about our interactions in an entirely different way.

Although there is a distinct level of separation between our “real” selves and our profile, Facebook provides a melting pot for our ideas and identities to blend together like a large mosaic with many facets coming together to create a dynamic collaborative whole. My goal with this piece is to frame this abstract concept in a concise, playful fashion so as to depict our social interactions as works of art.
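The one-channel-per-photo blend the statement describes can be sketched in a few lines. This is Python for illustration only; the nested-list pixel representation is my own, not the piece’s actual implementation.

```python
def blend_channels(photo_r, photo_g, photo_b):
    """Blend three same-sized images by taking the red channel from the
    first, green from the second, and blue from the third.
    Each image is a list of rows of (r, g, b) tuples."""
    return [
        [(a[0], b[1], c[2]) for a, b, c in zip(row_a, row_b, row_c)]
        for row_a, row_b, row_c in zip(photo_r, photo_g, photo_b)
    ]

# Three tiny 1x2 "profile photos": solid red, solid green, solid blue.
red   = [[(255, 0, 0), (255, 0, 0)]]
green = [[(0, 255, 0), (0, 255, 0)]]
blue  = [[(0, 0, 255), (0, 0, 255)]]

# Each output pixel combines one channel from each source photo,
# so three saturated primaries blend to white.
mosaic = blend_channels(red, green, blue)
```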

Sound reactive visuals with MaxMSP

I created this Max patch as a test for sound reactive visuals. I use Jitter’s physics engine to give the balls mass in the virtual world, and I place ghost objects at the bottom of the world that apply impulses based on the sound coming in or out of the computer. The fffb~ object separates the left and right audio channels into frequency bands, and each band drives one of the ghost objects. That way, instead of just flying all over the place, the direction the balls move directly corresponds to how much bass or treble is in the music and which channel it’s coming from. The song is by my friend Ula from Poland. It was an ideal choice for the test as it has a wide dynamic range.
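The patch itself isn’t reproduced here, but the core idea (split each channel into frequency bands, then turn per-band energy into impulse strengths) can be sketched in Python. A naive DFT stands in for the fffb~ filter bank, and the gain constant is arbitrary, not a value from the patch.

```python
import cmath
import math

def band_energies(samples, n_bands):
    """Split a mono signal into n_bands frequency bands via a naive DFT,
    returning summed spectral magnitude per band (a stand-in for fffb~)."""
    n = len(samples)
    spectrum = [abs(sum(samples[t] * cmath.exp(-2j * math.pi * k * t / n)
                        for t in range(n)))
                for k in range(n // 2)]          # keep positive frequencies
    band_size = len(spectrum) // n_bands
    return [sum(spectrum[b * band_size:(b + 1) * band_size])
            for b in range(n_bands)]

def impulses(left, right, n_bands, gain=0.1):
    """Map each band's energy in each stereo channel to an impulse strength,
    one per ghost object: bass on the left pushes balls on the left side."""
    return {"left":  [gain * e for e in band_energies(left, n_bands)],
            "right": [gain * e for e in band_energies(right, n_bands)]}

# A low-frequency tone should land almost entirely in the lowest band.
sig = [math.sin(2 * math.pi * t / 16) for t in range(16)]
forces = impulses(sig, sig, n_bands=4)
```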

Here is the patch being used to do visuals at a show.

Mind Chimes

Today I debuted my new interactive dome piece, Mind Chimes, at the ARTS Lab at UNM. The piece generates visuals and music from a live brainwave feed captured by a NeuroSky MindWave Mobile headset. I coded the entire piece in MaxMSP and used vDome, an open source Max-based dome player, to skin it to the dome. The audio is generated by sending MIDI notes from my brainwave synth to Camel Audio’s Alchemy MIDI synth instruments. The visuals are driven by the same notes and change color based on your state of mind. This is a great first iteration, and I look forward to building it out further.
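The piece’s actual brainwave-to-MIDI mapping isn’t documented here, so as a purely hypothetical sketch: a NeuroSky-style attention value (0–100) could be quantized onto a pentatonic scale so that higher attention plays higher notes. The scale choice and mapping below are my own assumptions.

```python
# C major pentatonic, as MIDI note numbers (an assumed scale, not the piece's).
PENTATONIC = [60, 62, 64, 67, 69]

def brainwave_to_note(attention, octaves=2):
    """Map an attention value in [0, 100] onto a pentatonic scale spanning
    a few octaves: higher attention -> higher note."""
    attention = max(0, min(100, attention))
    steps = len(PENTATONIC) * octaves
    index = min(steps - 1, attention * steps // 101)
    octave, degree = divmod(index, len(PENTATONIC))
    return PENTATONIC[degree] + 12 * octave
```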

There’s no good way to capture a dome piece with standard video but here’s a little clip I shot of my friend going to town with his mind.

Virtual Traffic

Virtual Traffic is an interactive art installation that composites pedestrian foot traffic at five high-density areas of the UNM campus into a comprehensive shared experience.

Cameras simultaneously capture pedestrians at five high traffic locations, and custom software composites the videos together. Different blending effects activate based on traffic density, direction, and position of the pedestrians in the space. This facilitates a virtual interaction between the people in each space, and helps us begin to understand our daily commute in a new way.
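As an illustrative analogue of the density-driven blending (the thresholds, mode names, and formulas below are my own, not the installation’s), the software might pick a compositing mode per frame from the pedestrian count:

```python
def pick_blend(density):
    """Choose a blend mode from pedestrian density (people per frame).
    Thresholds are illustrative guesses."""
    if density < 3:
        return "normal"
    elif density < 8:
        return "screen"
    return "difference"

def blend_pixel(mode, a, b):
    """Apply the chosen blend to two 0-255 channel values
    (standard screen/difference compositing formulas)."""
    if mode == "screen":
        return 255 - (255 - a) * (255 - b) // 255
    if mode == "difference":
        return abs(a - b)
    return b  # "normal": the top layer wins
```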

Facebook Mosaic

As an Electronic Artist I am always looking for ways to recontextualize the role technology plays in our lives. Facebook Mosaic is a program I wrote that takes three profile pictures from a Facebook news feed, and blends them together dynamically using one color channel from each photo.

Many of us use Facebook daily to communicate and share with friends and family around the world. This forum has become the new “watercooler,” but with a reach not bound by time or space. This new reach and mode of interaction presents us with new ways to think about identity.

Although there is a level of separation between our “real” selves and our profile, Facebook provides a melting pot for our identities to blend together like a large mosaic with many pieces from different people coming together to create a dynamic collaborative whole. My goal with this piece was to represent this abstract concept in a simple, playful, and interactive way.

Move the cursor from right to left over the image to change the mosaic pixel size. You can also use the letters ‘a’, ‘d’, and ‘f’ to isolate the three individual images. Pressing the space bar loads a new mosaic. Currently, the pool of photos is a preset lot of 276. The next iteration will have a Facebook login that will allow users to pull photos directly from their live feed. Have fun seeing what mosaics you can create.
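The cursor-driven pixel-size control boils down to mapping cursor x to a cell size and block-averaging the image. Here is a minimal Python sketch; the ranges and the grayscale list-of-rows representation are illustrative, not the piece’s actual code.

```python
def cursor_to_cell(x, width, min_cell=2, max_cell=40):
    """Map cursor x (moving right to left) to a mosaic cell size:
    far right = small cells (fine detail), far left = big cells."""
    t = 1 - x / width
    return int(min_cell + t * (max_cell - min_cell))

def pixelate(image, cell):
    """Average each cell x cell block of a grayscale image (list of rows)."""
    h, w = len(image), len(image[0])
    out = [row[:] for row in image]
    for y0 in range(0, h, cell):
        for x0 in range(0, w, cell):
            block = [image[y][x]
                     for y in range(y0, min(y0 + cell, h))
                     for x in range(x0, min(x0 + cell, w))]
            avg = sum(block) // len(block)
            for y in range(y0, min(y0 + cell, h)):
                for x in range(x0, min(x0 + cell, w)):
                    out[y][x] = avg
    return out
```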

Create a Facebook Mosaic Now!


Pedestrian

Pedestrian is a video game I created in Processing with Carissa Simmons and Isaiah Griego. The game is meant to mimic the old 8-bit “style,” visually as well as in gameplay; we even heisted some clouds from Super Mario Bros. as an homage to Cory Arcangel. The game is Frogger-like, but with a twist: the pedestrians you are trying to avoid move faster if you “shout” at them. We tapped into the computer’s microphone to do this, so if there’s no built-in mic, that mechanic won’t work.
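The shout mechanic comes down to measuring mic loudness and scaling pedestrian speed with it. A minimal sketch in Python (the threshold and boost values are my own guesses, not the game’s):

```python
import math

def rms(samples):
    """Root-mean-square loudness of a buffer of mic samples in [-1, 1]."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def pedestrian_speed(samples, base=1.0, boost=4.0, threshold=0.3):
    """Pedestrians move at base speed until the player shouts;
    loud input above the threshold ramps speed up toward base + boost."""
    level = rms(samples)
    if level < threshold:
        return base
    return base + boost * min(1.0, (level - threshold) / (1.0 - threshold))
```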

The game is actually impossible to win unless you discover the “cheat” we inadvertently built in. It was a fun project and gave us a chance to explore and incorporate alternative modes of HCI (Human Computer Interaction) in the context of a video game.

You can play the game online here.
