Introduction to Complexity

I recently learned about a free online course in Complex Systems. The concepts covered in this course are very pertinent to visualizing and making sense of complex social data sets. The course was created and is hosted by the Santa Fe Institute.

You can find more information about this course at www.complexityexplorer.org

In this course you’ll learn about the tools used by scientists to understand complex systems. The topics you’ll learn about include dynamics, chaos, fractals, information theory, self-organization, agent-based modeling, and networks. You’ll also get a sense of how these topics fit together to help explain how complexity arises and evolves in nature, society, and technology. There are no prerequisites. You don’t need a science or math background to take this introductory course; it simply requires an interest in the field and the willingness to participate in a hands-on approach to the subject.

About the Instructor:

Melanie Mitchell is Professor of Computer Science at Portland State University, and External Professor and Member of the Science Board at the Santa Fe Institute. She is the author or editor of five books and over 70 scholarly papers in the fields of artificial intelligence, cognitive science, and complex systems. Her most recent book, Complexity: A Guided Tour, published in 2009 by Oxford University Press, won the 2010 Phi Beta Kappa Science Book Award. It was also named by Amazon.com as one of the ten best science books of 2009, and was longlisted for the Royal Society’s 2010 book prize.

Course Team:

John Balwit (Teaching Assistant) is a Ph.D. student in the Systems Science program at Portland State University. He has a background in biology education and current research interests in theoretical biology, evolvability, and natural selection. John is also interested in the use of agent-based modeling and machine learning techniques to explore questions in the evolution of cooperation, the nature of social dilemmas, and the patterns in human decision-making under extreme conditions. His current emphasis is on the use of computer models and computational exercises to effectively teach general audiences about the constellation of topics called Complexity Science.

John Driscoll (Teaching Assistant) has a background in architecture and is a Ph.D. student in Systems Science at Portland State University. He has worked with, and credits as mentors, Dean Bryant Vollendorf, Professor Emeritus, UNCC, and George Hascup, AAP, Cornell University. John is primarily interested in the rationalization of city planning and the emerging field of the science of cities, the goal being to apply theory and methods from complex systems science to the research, analysis and design of urban environments.
Erin Kenzie (Program Assistant) is a Ph.D. student in Systems Science at Portland State University. Her interests are in the fields of urban sustainability and behavioral and social science research methods.

Visualizing Social Data using Grasshopper and Google Earth

Below is a case study on using Grasshopper and several other plugins to generate visual representations of (social) data on a map. This method, along with additions to query and pull social data automatically, and possibly the ability to tie directly back into Google Earth to update the imagery, will prove very useful for understanding how social systems shift at local and macro levels.

From Metaball Diagrams with Google Earth and gHowl

“Google Earth presents an intuitive, dynamic platform for understanding spatial context. Combined with a parametric modeler like Grasshopper, Google Earth presents complex datasets relative to geo-positioning in a way that is understandable. Facilitated by the GH plugin gHowl, GH meshes and lines can be exported in Google Earth’s .kml format to be viewed by Google Earth or an enabled web browser.

Creating legible geometry for Google Earth is challenging, but one type of geometry I’ve experimented with is GH’s metaballs, which are about as old school as it gets for 3D curvature. Metaballs, as described by Yoda (Greg Lynn), are “defined as a single surface whose contours result from the intersection and assemblage of the multiple internal fields that define it” (Lynn, Blobs, Journal of Philosophy and the Visual Arts, 1995). This aggregation of internal fields can provide an intuitive understanding of various contextual forces relative to the spatial context of a site. While GH metaballs are only curves and not meshes/surfaces, you can easily use a Delaunay mesh to begin to create a mesh.

This tutorial will walk through the process of creating metaballs from geo coordinates. I’m using a map I created with Elk that is based on OpenStreetMap data; if you’re interested in doing something similar, look here.

Just click on the images below if you’d like to see them in more detail.

Start by positioning your Geo coordinates in GH space through gHowl’s Geo To XYZ module.”

Read More…
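
The tutorial itself lives in Grasshopper, but the two ideas in the excerpt are easy to sketch in plain code: projecting geo coordinates onto a flat plane (roughly what gHowl’s Geo To XYZ component does, here approximated with a simple equirectangular projection) and evaluating a metaball-style field as the sum of internal fields around each point. The math below is my own illustrative approximation, not gHowl’s or GH’s actual implementation, and the sample coordinates are placeholders.

```python
import math

def geo_to_xy(lat, lon, lat0, lon0, radius=6_371_000.0):
    """Project (lat, lon) to planar meters around an origin (lat0, lon0).

    A simple equirectangular approximation -- fine at city scale, but not
    necessarily what gHowl does internally.
    """
    x = math.radians(lon - lon0) * radius * math.cos(math.radians(lat0))
    y = math.radians(lat - lat0) * radius
    return x, y

def metaball_field(px, py, centers, strength=1.0):
    """Sum of inverse-square 'internal fields' around each center point.

    Contouring this field at a fixed threshold produces the blobby
    metaball curves described in the excerpt.
    """
    total = 0.0
    for cx, cy in centers:
        d2 = (px - cx) ** 2 + (py - cy) ** 2
        total += strength / max(d2, 1e-9)  # guard against division by zero
    return total

# Placeholder geo-tagged data points around an arbitrary origin
origin = (45.512, -122.658)
points = [(45.515, -122.655), (45.510, -122.660), (45.513, -122.662)]
centers = [geo_to_xy(lat, lon, *origin) for lat, lon in points]

# Sample the field at the origin; locations where the value exceeds a
# chosen threshold fall inside a metaball contour.
print(metaball_field(0.0, 0.0, centers))
```

From there, meshing the region above the threshold (a Delaunay mesh works) gives geometry that gHowl can write out as .kml for Google Earth.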

Arcade Fire – Just a Reflector: Explorations in Web 2.0

A friend of mine just shared this with me and I had to write about it. It’s an interactive music video built with a handful of Web 2.0 technologies that make for a really impressive experience. First and foremost, you’ll have to check it out for yourself at www.justareflektor.com.

Second, after you’ve checked out the video, here is a list of the technologies they used, with a short explanation of what each does.

Web Technologies

Three.js
A JavaScript library that uses WebGL to create fast 2D and 3D graphics.

WebGL
A JavaScript API that gives access to the user’s GPU for fast graphics and image processing, rendered through the HTML5 canvas element.

Tailbone
An open source project that makes it simple for JavaScript developers to deploy on Google App Engine. Includes a mesh network for connecting multiple devices through WebRTC and WebSockets.

WebSockets
A web technology that keeps a persistent, two-way connection open so browsers can exchange messages, enabling near-real-time communication.

getUserMedia();
A web technology that gives JavaScript developers access to the webcam and microphone. Using computer vision, the site can then track your phone’s position in front of the webcam.

WebAudio
A web technology that lets developers analyze and manipulate audio files in the browser.

Device Orientation
Your phone’s orientation is tracked using accelerometer and gyroscope data, which is passed to your computer via WebSockets.
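
To make the relationship between those last few pieces concrete, here is a rough sketch of the relay pattern: the phone pushes its orientation readings over a WebSocket connection and a server forwards them to the desktop browser. The actual site does this through Tailbone’s mesh network on App Engine; the sketch below just uses Python’s websockets package to illustrate the idea, and the port number is arbitrary.

```python
# Minimal WebSocket relay sketch (illustrative only -- the project itself
# used Tailbone's mesh network running on Google App Engine).
import asyncio
import websockets

connected = set()

async def relay(websocket, path=None):  # older websockets versions also pass a path
    """Forward every message (e.g. phone orientation JSON) to the other peers."""
    connected.add(websocket)
    try:
        async for message in websocket:
            for peer in connected:
                if peer is not websocket:
                    await peer.send(message)
    finally:
        connected.discard(websocket)

async def main():
    async with websockets.serve(relay, "0.0.0.0", 8765):
        await asyncio.Future()  # run forever

if __name__ == "__main__":
    asyncio.run(main())
```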

Google Technologies

Chrome
Google Chrome’s advanced features, such as WebSockets, WebGL and getUserMedia(), help create an immersive, interactive experience.

App Engine
App Engine lets web developers build and deploy instantly scalable web applications on Google’s infrastructure.

Compute Engine
Compute Engine is used to run the project’s mesh network, keeping both phone and desktop browsers communicating at all times.

Cloud Storage
Video files are stored via Google’s Cloud Storage for cost-effective file serving online.

Information sourced from www.justareflektor.com/tech?home

‘Augmented Sculpture’ made for the Four Seasons Hotel Beirut.

This is a beautiful and impressive example of projection mapping.

26th Floor | Augmented Sculpture from urbanscreen on Vimeo.

This permanent light sculpture was exclusively developed for the rooftop of the Four Seasons Hotel in Beirut. The sculpture and its projected content were developed together as an interdependent design. By assimilating specific aspects of its environment, the work was created from scratch as a site-specific piece of art.

Premiered on the 23rd of May 2013
Duration (loop): 30 min
Dimension: 12 x 5m

Art Director Content: Max Goergen, Julian Hoelscher
Motion graphics: Julian Hoelscher, Jonas Wiese, Max Negrelli, Moritz Horn, Till Botterweck

Art Director Sculpture: Till Botterweck
3D Designer: Peter Pflug, Moritz Horn, Lorenz Potthast
Mock-up: Lorenz Potthast
Production Manager: Majo Ussat
Creative Director: Thorsten Bauer

Media Engineer: Tobias Wursthorn (www.im-en.com)
Technical assist: Lorenz Potthast

Documentation Director: Jonas Wiese
On-Site Camera: Jonas Wiese
Edit: Jonas Wiese
Music: Jonas Wiese (www.jonas-wiese.de)

Thank you very much for supporting us on-site: Omar Alkheshen and the entire congenial Four Seasons Team Beirut!

Media-Engine support: WINGS VIOSO (www.avstumpfl.com)
Mock-Up projector support: EPSON (www.epson.de)
An www.URBANSCREEN.com production
www.facebook.com/urbanscreen

Facebook Mosaic 2.0: Painting with social data

Facebook Mosaic is a platform I developed to allow people to create art using their social data. My current work is highly focused on capturing and visualizing social data to provide utility to the masses. We upload an extensive amount of data to our social networking sites every day; however, for the most part, we can only view that data in the prescriptive context of our virtual social networks, i.e. Facebook, Instagram, Twitter, etc. My goal is to find ways to capture this data and visualize it in ways that can actually improve our day-to-day lives.

Although Facebook Mosaic may not achieve that goal, the development process was crucial to my understanding of what it takes to query various social networks and make use of the information returned. It also gave me a chance to use my creative side to develop something fun and interactive. You can use Facebook Mosaic to generate images with your social data by visiting the website.

Here is a statement I wrote for the piece:

As an Electronic Artist I am always looking for ways to re-contextualize the role technology plays in our lives. Facebook Mosaic is a program that takes three profile pictures from a user’s Facebook news feed, and blends them together dynamically using one color channel from each photo.

Many of us use Facebook daily to communicate and share with friends and family, locally and around the world. This forum has become a global “water cooler,” with a reach not bound by time or space. As a result, we are forced to think about our interactions in an entirely different way.

Although there is a distinct level of separation between our “real” selves and our profile, Facebook provides a melting pot for our ideas and identities to blend together like a large mosaic with many facets coming together to create a dynamic collaborative whole. My goal with this piece is to frame this abstract concept in a concise, playful fashion so as to depict our social interactions as works of art.
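
The statement describes the core of the blend: one color channel taken from each of three photos. Here is a minimal sketch of that idea using Pillow; the real piece pulls profile pictures from the Facebook API and blends them dynamically in the browser, so the file names and output size here are just placeholders.

```python
# Sketch of the channel-blend idea behind Facebook Mosaic: merge the red,
# green, and blue channels taken from three different photos.
from PIL import Image

def mosaic_blend(path_a, path_b, path_c, size=(512, 512)):
    a, b, c = (Image.open(p).convert("RGB").resize(size)
               for p in (path_a, path_b, path_c))
    red, _, _ = a.split()    # red channel from the first photo
    _, green, _ = b.split()  # green channel from the second
    _, _, blue = c.split()   # blue channel from the third
    return Image.merge("RGB", (red, green, blue))

if __name__ == "__main__":
    # Placeholder file names standing in for the three profile pictures
    mosaic_blend("friend1.jpg", "friend2.jpg", "friend3.jpg").save("mosaic.png")
```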

Box – Projection-mapping onto moving surfaces

Box from Bot & Dolly on Vimeo.

Box explores the synthesis of real and digital space through projection-mapping onto moving surfaces. The short film documents a live performance, captured entirely in camera. http://www.botndolly.com/box

CREDITS
Production Company: BOT & DOLLY
Executive Producers: Bill Galusha, Nick Read
Creative & Technical Director: Tarik Abdel-Gawad
Design Director: Bradley G Munkowitz
Lead Graphic Designers: Bradley G Munkowitz, Jason English Kerr
3D Artists: Scott Pagano, Bradley G Munkowitz, Jason English Kerr, Conor Grebel
2D Animators: Conor Grebel, Ben Hawkins, Pedro Figueira
Director of Photography: Joe Picard
Lighting Designers: Joe Picard, Phil Reyneri
Projection / TouchDesigner: Phil Reyneri
Robotics Animation: Tarik Abdel-Gawad, Brandon Kruysman, George Banks, Michael Beardsworth
Robotics Operator: Michael Beardsworth, Brandon Kruysman
Prop Fabrication: Matt Bitterman, Ethan Dale
Script Supervisor: Ian Colon
Sound Engineers: Joe Picard, Michael Beardsworth
PAs: Sean Servis, Dakota Smith, Nico Mizono, Eric Wendel, Patrick Walsh
Editors: Ashley Rodholm, Ian Colon
Music / Sound Design: Keith Ruggiero
Sound Mix: Joel Raabe
Performers: Tarik Abdel-Gawad, Iris, Scout

Sound reactive visuals with MaxMSP

I created this Max patch as a test for sound-reactive visuals. I use Jitter physics to give the balls mass in the virtual world, and I map ghost objects at the bottom of the world to impulses driven by the sound coming into or out of the computer. I use the fffb~ object to separate the left and right audio channels into frequency bands, which correspond to the ghost objects that send out impulses to move the balls. That way, instead of just flying all over the place, the balls move in directions that directly correspond to how much bass or treble is in the music and which channel it’s coming from. The song is by my friend Ula from Poland. It was an ideal choice for the test because it has a wide dynamic range.
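
The patch itself is Max/MSP, but the mapping logic is simple enough to sketch in code: split each stereo channel into frequency bands, measure the energy in each band, and scale an impulse for the ghost object assigned to that band and channel. The band edges and gain below are illustrative assumptions, not values from the patch.

```python
# Illustrative sketch of the patch's mapping logic (the real thing is a
# Max/MSP patch built with fffb~ and Jitter physics, not Python).
import numpy as np

BAND_EDGES_HZ = [60, 250, 1000, 4000, 12000]  # assumed band splits

def band_energies(samples, sample_rate):
    """Return the energy in each band of one mono channel block."""
    spectrum = np.abs(np.fft.rfft(samples)) ** 2
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / sample_rate)
    return [spectrum[(freqs >= lo) & (freqs < hi)].sum()
            for lo, hi in zip(BAND_EDGES_HZ[:-1], BAND_EDGES_HZ[1:])]

def impulses(left, right, sample_rate, gain=1e-6):
    """One impulse per (channel, band) ghost object.

    Left-channel bands push from one side of the world and right-channel
    bands from the other, so the balls' motion follows the mix.
    """
    out = []
    for side, channel in (("L", left), ("R", right)):
        for band, energy in enumerate(band_energies(channel, sample_rate)):
            out.append((side, band, gain * energy))
    return out
```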

Here is the patch being used to do visuals at a show.

Project Fringe: Pioneering Social Landscapes – Prologue

This project is focused on developing new means of visualizing and interacting with social data parsed from multiple social platforms (e.g. Facebook, Instagram, Twitter, LinkedIn). Over the next six months I will focus on integrating many pre-existing technologies, both hardware (computers, projectors, wireless headsets and controllers) and software (Max/MSP, Grasshopper 3D, vDome, Google Earth), to develop an interactive, immersive software platform that will provide users with new insights about that data on both local and global scales.

My three-month goal is to complete a software platform that can gather, parse, and store data from several social platforms. The six-month goal is to complete the interface so a user can engage with the social data in a meaningful way. This will include visualizing the data on a research dome and interacting with it using a gesture-based control system.
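
As a starting point for that pipeline, the sketch below shows one way the gather-parse-store step could be shaped: normalize posts from different platforms into a single record type before storing them for the visualization layer. Everything here (field names, the SQLite storage choice) is a hypothetical design sketch, not the finished platform.

```python
# Hypothetical sketch of the gather/parse/store step -- the schema and the
# SQLite storage choice are assumptions, not the finished platform.
import sqlite3
from dataclasses import dataclass
from typing import Optional

@dataclass
class SocialPost:
    platform: str              # "facebook", "instagram", "twitter", ...
    post_id: str
    author: str
    text: str
    latitude: Optional[float]  # geo-tag, when the platform provides one
    longitude: Optional[float]
    timestamp: str             # ISO 8601

def store(posts, db_path="fringe.db"):
    """Persist normalized posts so the visualization layer can query them."""
    con = sqlite3.connect(db_path)
    con.execute("""CREATE TABLE IF NOT EXISTS posts
                   (platform TEXT, post_id TEXT, author TEXT, text TEXT,
                    latitude REAL, longitude REAL, timestamp TEXT)""")
    con.executemany(
        "INSERT INTO posts VALUES (?, ?, ?, ?, ?, ?, ?)",
        [(p.platform, p.post_id, p.author, p.text,
          p.latitude, p.longitude, p.timestamp) for p in posts])
    con.commit()
    con.close()
```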

In preparing for this project, my preliminary research has led me to some exciting finds. Phototrails is a cutting-edge project in the same vein as my own. They are doing some pretty cool stuff with aggregates of photos from Instagram to visualize cultural patterns around the world. Here are some examples from their Instagram Cities page.

[Instagram Cities visualizations from Phototrails: San Francisco, New York, Bangkok, and Tokyo]

Mind Chimes

Today I debuted my new interactive dome piece, Mind Chimes, at ARTS Lab, UNM. The piece generates visuals and music from a live brainwave feed captured by a NeuroSky MindWave Mobile headset. I coded the entire piece with MaxMSP and used vDome, an open-source, Max-based dome player, to skin it to the dome. The audio is generated by sending MIDI notes from my brainwave synth to Camel Audio’s Alchemy MIDI synth instruments. The visuals are generated from the notes being played and change color based on your state of mind. This is a great first iteration and I look forward to building it out further.
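
The piece itself is a Max patch, but the brain-to-MIDI mapping is easy to sketch. The example below assumes the headset’s attention and meditation values (0-100) have already been read from the MindWave’s stream, and uses mido to emit a note; the scale and the mapping are my own illustrative choices, not what the patch actually does.

```python
# Sketch of a brainwave-to-MIDI mapping (illustrative; the actual piece is
# a Max/MSP patch driving Camel Audio's Alchemy). Assumes attention and
# meditation values (0-100) are already being read from the headset.
import mido

PENTATONIC = [0, 2, 4, 7, 9]  # a calm-sounding scale; my own choice

def note_from_brainwaves(attention, meditation, root=60):
    """Pick a pitch from attention and a velocity from meditation."""
    degree = PENTATONIC[attention * len(PENTATONIC) // 101]
    octave = attention // 34                 # 0, 1, or 2 octaves up
    velocity = 40 + meditation * 87 // 100   # louder when more relaxed
    return root + degree + 12 * octave, velocity

with mido.open_output() as synth:            # default MIDI output port
    note, velocity = note_from_brainwaves(attention=72, meditation=55)
    synth.send(mido.Message("note_on", note=note, velocity=velocity))
```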

There’s no good way to capture a dome piece with standard video, but here’s a little clip I shot of my friend going to town with his mind.