Category Archives: Physical Computing

Project Fringe: Pioneering Social Landscapes – Prologue

This project is focused on developing new means of visualizing and interacting with social data parsed from multiple social platforms (e.g., Facebook, Instagram, Twitter, and LinkedIn). Over the next six months I will focus on integrating many pre-existing technologies, both hardware (computers, projectors, wireless headsets and controllers) and software (Max/MSP, Grasshopper 3D, vDome, Google Earth), to develop an interactive, immersive software platform that will provide users with new insights about the data on both local and global scales.

My three-month goal is to complete a software platform that can gather, parse, and store data from several social platforms. The six-month goal is to complete the interface so a user can engage with the social data in a meaningful way. This will include visualizing the data on a research dome and interacting with it using a gesture-based control system.
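One way to think about the gather/parse/store step is that every platform returns a differently shaped payload, so the first job is flattening them into one shared record before anything gets stored or visualized. Here's a minimal sketch of that idea; the field names and payload shapes are hypothetical, not the project's actual schema:

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

@dataclass
class SocialPost:
    """One normalized record, regardless of which platform it came from."""
    platform: str
    author: str
    text: str
    timestamp: datetime
    lat: Optional[float] = None  # geotag, if the platform provides one
    lon: Optional[float] = None

def normalize(platform: str, raw: dict) -> SocialPost:
    """Flatten one raw platform payload into the shared SocialPost schema."""
    if platform == "twitter":
        return SocialPost("twitter", raw["user"]["screen_name"], raw["text"],
                          datetime.fromtimestamp(raw["ts"], tz=timezone.utc),
                          raw.get("lat"), raw.get("lon"))
    if platform == "instagram":
        loc = raw.get("location") or {}
        return SocialPost("instagram", raw["username"], raw.get("caption", ""),
                          datetime.fromtimestamp(raw["created_time"], tz=timezone.utc),
                          loc.get("lat"), loc.get("lng"))
    raise ValueError(f"unsupported platform: {platform}")
```

With everything in one schema, the storage layer and the dome visualization only ever have to deal with one record type.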

In preparing for this project, my preliminary research has led me to some exciting finds. Phototrails is a cutting-edge project in the same vein as my own. They are doing some pretty cool stuff with aggregates of photos from Instagram to visualize cultural patterns around the world. Here are some examples from their Instagram Cities page.

San Francisco

New York





Mind Chimes

Today I debuted my new interactive dome piece, Mind Chimes, at ARTS Lab, UNM. The piece generates visuals and music from a live brainwave feed captured by a NeuroSky MindWave Mobile headset. I coded the entire piece in Max/MSP and used vDome, an open-source Max-based dome player, to skin it to the dome. The audio is generated by sending MIDI notes from my brainwave synth to Camel Audio's Alchemy MIDI synth instruments. The visuals are driven by the notes as they play and change colors based on your state of mind. This is a great first iteration and I look forward to building it out further.
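The core trick in a piece like this is turning a noisy 0–100 brainwave value into notes that always sound musical. One common approach is quantizing onto a pentatonic scale, sketched below; the scale choice, note range, and mapping curve here are illustrative, not the actual patch logic:

```python
# Semitone offsets of a major pentatonic scale within one octave.
PENTATONIC = [0, 2, 4, 7, 9]

def attention_to_midi(attention: int, base_note: int = 48) -> int:
    """Map a 0-100 eSense attention value onto a two-octave pentatonic range.

    base_note 48 is C3; higher attention climbs the scale.
    """
    attention = max(0, min(100, attention))          # clamp noisy input
    step = attention * (len(PENTATONIC) * 2) // 101  # 0..9 scale steps
    octave, degree = divmod(step, len(PENTATONIC))
    return base_note + 12 * octave + PENTATONIC[degree]
```

Because every output lands on the scale, even an erratic brainwave feed produces consonant chimes rather than random chromatic notes.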

There’s no good way to capture a dome piece with standard video but here’s a little clip I shot of my friend going to town with his mind.

SolePower: Autonomous Energy Generation

This proposal was a collaboration between Chris Clavio (myself) and Ruben Olguin and was submitted to the Prix Ars Electronica [The Next Idea] competition to be judged by a jury. If selected, the proposal will be realized and installed at the 2013 Ars Electronica Festival.

This electronic arts project incorporates engineering, computer science, and creativity with the intention of creating a practical survival solution in tandem with a social dialogue about the way we generate, access, and transport electricity. The technology, at its root, integrates piezoelectric circuits into the sole of a shoe to generate electricity, which can then be used to charge mobile devices.

Hacking Necomimi – Part 1

For this project I am hacking a pair of Necomimi Brainwave Cat Ears so I can control some visualizations in an immersive dome environment.

I finally got a Processing sketch up that allows me to control a servo over Bluetooth by moving my mouse on the screen. It's a bit laggy, but it works. This is the first step to hacking the Necomimi: if I can control a servo over Bluetooth, I can also monitor a servo's angle and get that data back over Bluetooth. Essentially, I reverse-engineered what I need to get data from the Necomimi to control my visualizations.
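The Necomimi is built around a NeuroSky chip, which streams data over serial in NeuroSky's ThinkGear packet format: two 0xAA sync bytes, a payload length, the payload, and a one-byte checksum. A minimal sketch of a packet parser for the values I care about (value codes follow NeuroSky's published protocol; error handling is kept minimal here):

```python
def parse_thinkgear(packet: bytes) -> dict:
    """Parse one ThinkGear serial packet into a dict of named values."""
    if packet[0] != 0xAA or packet[1] != 0xAA:
        raise ValueError("bad sync bytes")
    plen = packet[2]
    payload = packet[3:3 + plen]
    checksum = packet[3 + plen]
    if (~sum(payload)) & 0xFF != checksum:   # checksum = inverted payload sum
        raise ValueError("checksum mismatch")
    values, i = {}, 0
    while i < len(payload):
        code = payload[i]
        if code == 0x02:            # poor-signal quality (0 = good contact)
            values["signal"] = payload[i + 1]; i += 2
        elif code == 0x04:          # eSense attention, 0-100
            values["attention"] = payload[i + 1]; i += 2
        elif code == 0x05:          # eSense meditation, 0-100
            values["meditation"] = payload[i + 1]; i += 2
        elif code >= 0x80:          # multi-byte value: next byte is its length
            i += 2 + payload[i + 1]
        else:                       # other single-byte values
            i += 2
    return values
```

Once packets parse cleanly, the attention/meditation values can drive the dome visuals the same way mouse position drives the servo in the test sketch.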

Virtual Traffic

Virtual Traffic is an interactive art installation that composites pedestrian foot traffic at five high-density areas of the UNM campus into a comprehensive shared experience.

Cameras simultaneously capture pedestrians at five high-traffic locations, and custom software composites the videos together. Different blending effects activate based on traffic density, direction, and position of the pedestrians in the space. This facilitates a virtual interaction between the people in each space, and helps us begin to understand our daily commute in a new way.
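The density-driven part of the compositing can be sketched as a simple weighting scheme: each feed gets an opacity proportional to how much motion it currently shows, so busier locations dominate the blend. The floor value and linear weighting below are illustrative, not the installation's actual parameters:

```python
def blend_weights(motion_levels: list) -> list:
    """Normalize per-feed motion levels (0-1) into compositing opacities.

    Every feed keeps a small floor weight so quiet locations stay
    faintly visible instead of vanishing from the composite.
    """
    floor = 0.05
    raised = [max(m, floor) for m in motion_levels]
    total = sum(raised)
    return [m / total for m in raised]
```

The weights always sum to 1, so the composite never over- or under-exposes as crowds come and go.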

Talking Heads

Talking Heads was a sound art installation located in the atrium of the art building at the University of New Mexico. Each mannequin head was hung facing the entrance of the building and was fitted with a speaker. All the heads were connected to a small amplifier with a motion sensor and a sound circuit. Whenever someone walked in front of the piece, the heads talked.

The sound bite was changed every few hours for several days. The clips ranged from whispers to yelling accusations. Unfortunately I don't have very good documentation of this piece. I still have all the heads, though, which means I'll probably install it again.