Tag Archives: UNM

Visualizing Social Data using Grasshopper and Google Earth

Below is a case study on using Grasshopper and several other plugins to generate visual representations of (social) data on a map. With some additions, such as a way to query and pull social data automatically and possibly the ability to tie directly back into Google Earth to update the imagery, this method could prove very useful for showing how social systems shift at local and macro levels.

From Metaball Diagrams with Google Earth and gHowl

“Google Earth presents an intuitive, dynamic platform for understanding spatial context. Combined with a parametric modeler like Grasshopper, Google Earth presents complex datasets relative to geo-positioning in a way that is understandable. Facilitated by GH plugin gHowl, GH meshes and lines can be exported in Google Earth’s .kml format to be viewed by Google Earth or an enabled web browser.

Creating legible geometry for Google Earth is challenging, but one type of geometry I’ve experimented with is GH’s metaballs, which are about as old school as it gets for 3D curvature. Metaballs, as described by Yoda (Greg Lynn), are “defined as a single surface whose contours result from the intersection and assemblage of the multiple internal fields that define it” (Lynn, Blobs, Journal of Philosophy and the Visual Arts, 1995). This aggregation of internal fields can provide an intuitive understanding of various contextual forces relative to the spatial context of a site. While GH metaballs are only curves and not meshes/surfaces, you can easily use a Delaunay mesh to begin to create a mesh.

This tutorial will walk through the process of creating metaballs from geo coordinates. I’m using a map I created with Elk that is based on OpenStreetMap data; if you’re interested in doing something similar, look here.

Just click on the images below if you’d like to see them in more detail.

Start by positioning your Geo coordinates in GH space through gHowl’s Geo To XYZ module.”

Read More…
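To give a sense of what that export step actually produces, a .kml file is just XML that Google Earth can open. Here is a rough, hypothetical sketch in Processing (not the gHowl implementation) that writes a handful of longitude/latitude points out as KML placemarks; the coordinates are made-up sample values:

  // Hypothetical sketch: writes a few lon/lat points out as KML placemarks.
  // This is not gHowl's code, just an illustration of the .kml format itself.

  float[][] pts = {              // {longitude, latitude} pairs (made-up sample values)
    {-106.6198, 35.0844},
    {-106.6175, 35.0861},
    {-106.6223, 35.0830}
  };

  void setup() {
    PrintWriter kml = createWriter("points.kml");
    kml.println("<?xml version=\"1.0\" encoding=\"UTF-8\"?>");
    kml.println("<kml xmlns=\"http://www.opengis.net/kml/2.2\"><Document>");
    for (int i = 0; i < pts.length; i++) {
      kml.println("  <Placemark><name>Point " + i + "</name><Point>");
      // KML coordinate order is longitude,latitude,altitude
      kml.println("    <coordinates>" + pts[i][0] + "," + pts[i][1] + ",0</coordinates>");
      kml.println("  </Point></Placemark>");
    }
    kml.println("</Document></kml>");
    kml.flush();
    kml.close();
    exit();
  }

Opening the resulting points.kml in Google Earth should drop three placemarks at the sample coordinates; gHowl does the equivalent for full GH meshes and lines.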

Mind Chimes

Today I debuted my new interactive dome piece, Mind Chimes, at ARTS Lab, UNM. The piece generates visuals and music from a live brainwave feed captured by a NeuroSky MindWave Mobile headset. I coded the entire piece in MaxMSP and used vDome, an open-source Max-based dome player, to map it onto the dome. The audio is generated by sending MIDI notes from my brainwave synth to Camel Audio’s Alchemy synth instruments. The visuals are generated from the notes being played and change color based on your state of mind. This is a great first iteration, and I look forward to building it out further.
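The piece itself is patched in Max, so there is no text code to show, but the core mapping is simple to sketch. Below is a hypothetical Processing version of the idea only: a stand-in “attention” value (the MindWave reports roughly 0 to 100) picks a note from a pentatonic scale and drives the background color.

  // Hypothetical sketch of the mapping idea only; the real piece is built in MaxMSP
  // and reads a live value from the MindWave headset instead of the sine wave below.

  int[] pentatonic = {0, 2, 4, 7, 9};   // C major pentatonic intervals

  void setup() {
    size(400, 200);
    colorMode(HSB, 100);
  }

  void draw() {
    // Stand-in for the live headset reading, swinging between 0 and 100
    float attention = 50 + 50 * sin(frameCount * 0.02);

    int degree = pentatonic[int(map(attention, 0, 100, 0, pentatonic.length - 0.01))];
    int midiNote = 60 + degree;                        // notes around middle C
    background(map(attention, 0, 100, 60, 0), 70, 90); // calm reads cool, focused reads warm

    fill(0, 0, 0);
    text("attention: " + nf(attention, 1, 1) + "   MIDI note: " + midiNote, 20, height/2);
  }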

There’s no good way to capture a dome piece with standard video, but here’s a little clip I shot of my friend going to town with his mind.

Moai Sculpture

After a whole semester of on-and-off work, I finally finished the Moai head I’ve been working on. According to Wikipedia, “Moai, or mo‘ai, are monolithic human figures carved by the Rapa Nui people from rock on the Chilean Polynesian island of Easter Island between the years 1250 and 1500.”

The sculpture is built from carved styrofoam, metal lath, and concrete. I’m not sure exactly how heavy it is, but it’s not too bad. It measures just under 4′ tall and will eventually live in my backyard. Below are process photos; the final piece is the last one.

[Gallery: process photos, ending with the finished sculpture]

Fabricating a Parametric Model with Pepakura

For this project I helped my friend Ben Ortega, a MARC student at UNM, build a model he developed with Grasshopper and Millipede. I used Pepakura to design the unfolded parts and lay them out. He and I then built the model using the paper parts and some tacky glue.

Here is the model next to the unfolded parts in Pepakura.

Parametric Pepakura unfold

Here are some photos of us building the model. In some of the photos you can see a smaller 3D printed model we were using for reference.

[Gallery: photos of the build in progress]

Here is a flyover of the finished model.

Building a Giant 3D Moustache – Part 3

For the final build I unfolded the full-size paper model and traced it on the foam core. I used push-pins to mark the vertices and then drew lines from point to point.


I used a 45° foam cutter and tried to get as close to the paper on the other side as I could, so that I could make the folds easily without too much resistance. I cut the outline with a regular X-Acto knife after I made all the 45° cuts.
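As a side note on where the fold angles come from: the fold needed at each crease is set by the dihedral angle between the two faces sharing that edge in the 3D model, which is what Pepakura's fold lines encode. A small hypothetical Processing sketch of that calculation, using made-up vertices:

  // Hypothetical helper: the dihedral angle between two triangles that share edge AB.
  // The amount you bend a crease up from flat is 180° minus this angle.

  float dihedralAngle(PVector a, PVector b, PVector c1, PVector c2) {
    // Normals of triangle (a, b, c1) and triangle (a, b, c2), built from the shared edge
    PVector n1 = PVector.sub(b, a).cross(PVector.sub(c1, a));
    PVector n2 = PVector.sub(b, a).cross(PVector.sub(c2, a));
    return degrees(PVector.angleBetween(n1, n2));
  }

  void setup() {
    // Made-up vertices: two triangles meeting at a right-angle crease
    PVector a  = new PVector(0, 0, 0);
    PVector b  = new PVector(1, 0, 0);
    PVector c1 = new PVector(0, 1, 0);
    PVector c2 = new PVector(0, 0, 1);
    println("dihedral angle: " + dihedralAngle(a, b, c1, c2) + "°");  // prints 90.0°
  }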


At this point I stopped taking photos because I was stressing to get the model finished; the Moustachio Bashio was that night. The folding worked pretty well with the 45s taken out, though it could have been better. I did meet some resistance, which meant I had to get creative to make the final model stay together without deforming. Using spray adhesive and the brown wrapping paper the foam came in, I bonded the open faces together.

This was harder than I expected, and because I was rushing it turned out a little sloppier than I would have liked. The form was still pretty fragile and would not hold its shape well, so I threaded a needle and sewed supporting strings into the back. This worked really well; I only needed five reinforcing tethers. At that point I was backstage at the venue and the first DJ was warming up, so I just used white gaff tape to attach the model to the back face. Luckily you couldn’t tell once it was hung. We had already done a placement test and calibrated the projector earlier that week. Once the model was finished, I hung it right up with two chains that hooked into eyelets bolted to the plywood backing. Boy did being finished feel amazing. I came super close to having nothing to show for all my efforts.

Finally done!


This is from the placement test the week before. Our host Danger is obviously having fun.

Virtual Traffic

Virtual Traffic is an interactive art installation that composites pedestrian foot traffic at five high-density areas of the UNM campus into a comprehensive shared experience.

Cameras simultaneously capture pedestrians at five high-traffic locations, and custom software composites the videos together. Different blending effects activate based on the traffic density, direction, and position of the pedestrians in each space. This facilitates a virtual interaction between the people in each space and helps us begin to understand our daily commute in a new way.
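The installation software itself isn’t posted here, but the density-driven idea is easy to sketch. Below is a hypothetical Processing version with a single camera, using frame differencing as a rough stand-in for traffic density and fading an additive overlay in as motion increases (it assumes the Processing video library and an attached webcam):

  // Hypothetical sketch of the density-driven blending idea, not the installation code.
  // Motion between frames stands in for "traffic density" and controls how strongly
  // an additive overlay is blended onto the live feed.

  import processing.video.*;

  Capture cam;
  PImage prev;

  void setup() {
    size(640, 480);
    cam = new Capture(this, width, height);
    cam.start();
  }

  void draw() {
    if (cam.available()) cam.read();
    image(cam, 0, 0);

    // Rough density estimate: average brightness change since the last frame
    float density = 0;
    if (prev != null) {
      cam.loadPixels();
      prev.loadPixels();
      int n = min(cam.pixels.length, prev.pixels.length);
      for (int i = 0; i < n; i += 50) {                 // sample every 50th pixel
        density += abs(brightness(cam.pixels[i]) - brightness(prev.pixels[i]));
      }
      if (n > 0) density /= (n / 50.0) * 255.0;         // roughly 0 to 1
    }
    prev = cam.get();

    // More motion means a stronger overlay; the real piece blends five feeds instead
    blendMode(ADD);
    fill(255, 60, 0, constrain(map(density, 0, 0.2, 0, 180), 0, 180));
    rect(0, 0, width, height);
    blendMode(BLEND);
  }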

Talking Heads

Talking Heads was a sound art installation located in the atrium of the art building at the University of New Mexico. Each mannequin head was hung facing the entrance of the building and was fitted with a speaker. All the heads were connected to a small amplifier with a motion sensor and a sound circuit. Whenever someone walked in front of the piece, the heads talked.

The sound bite was changed every few hours over several days. The clips ranged from whispers to yelled accusations. Unfortunately I don’t have very good documentation of this piece. I still have all the heads, though, which means I’ll probably install it again.


Pedestrian

Pedestrian is a video game I created in Processing with Carissa Simmons and Isaiah Griego. The game is meant to mimic the old 8-bit style visually as well as in gameplay; we even heisted some clouds from Super Mario Bros. as an homage to Cory Arcangel. The game is Frogger-like, but with a twist: the pedestrians you are trying to avoid move faster if you “shout” at them. We tapped into the computer’s microphone to do this, so if there’s no built-in mic it won’t work.
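As a rough illustration of that mechanic (a hypothetical stripped-down sketch, not the actual game code), Processing’s Minim library exposes the microphone level, which can scale a pedestrian’s speed:

  // Hypothetical sketch of the "shout" mechanic only; the real game adds sprites,
  // lanes, collisions, and scoring on top of this. Uses the bundled Minim library.

  import ddf.minim.*;

  Minim minim;
  AudioInput mic;
  float pedestrianX = 0;

  void setup() {
    size(600, 200);
    minim = new Minim(this);
    mic = minim.getLineIn();          // needs a built-in or attached microphone
  }

  void draw() {
    background(32);

    // mic.mix.level() is roughly 0 for silence and rises sharply when you shout
    float loudness = mic.mix.level();
    float speed = 1 + loudness * 20;  // shouting makes the pedestrian sprint

    pedestrianX = (pedestrianX + speed) % width;
    fill(255);
    rect(pedestrianX, height/2 - 20, 20, 40);
    text("mic level: " + nf(loudness, 1, 3), 10, 20);
  }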

The game is actually impossible to win unless you discover the “cheat” we inadvertently built in. It was a fun project and gave us a chance to explore and incorporate alternative modes of HCI (Human Computer Interaction) in the context of a video game.

You can play the game online here.

[Screenshots: splash page and explain page]