Tuesday, November 21, 2006

The Networked Mash-Up: Gridded Resources in Real-Time

The following is Jonathan's write-up of the notes we made for our Networked Mash-Up presentation; it originally appeared on Jonathan Green's blog. We worked on this project together and will both be expanding on the notes written up here.

An overview of a demonstration by a VJ (Keir Williams), electronic musician (Jonathan Green) and a dancer, of a computer network designed especially for the performing arts.

Identification of Possibilities of E-Science

Gridded technologies allow for collaborative performances. The traditional separation, and resulting lack of synchronisation, between disciplines during performance is abandoned in favor of a modular system that promotes learning, exploration and experimentation, and often leads to happy accidents.

The gridded distribution of artists and technical know-how does not, as might be expected, weaken the relationship between disciplines; rather, it makes possible a more integrated performance through interdisciplinary communication. The notion of content fusion is a result of this inclusive methodology.

Since usability has been a priority in the conception of the system, performers (dancers, VJs and electronic musicians in this case) are liberated from the burden of poor software interfaces and are instead free to concentrate on interpersonal communications and the reactions of other performers.

We wanted novel, user-friendly hardware interfaces to the complex computer software we needed to use. We chose gamepads, fader boxes, microphones (for audio analysis) and motion sensors. We took the idea of interfacing a little further: thanks to an iSight and motion-capture software designed especially for the system, we were able to use a dancer literally as a bidirectional computer input device. The dancer provided real-time motion data and, at the same time, was free to react to the sense of space and the resulting projections. This interactive performance ‘loop’ became evident on several levels.
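As a rough illustration of the motion-capture half of that loop, here is a minimal sketch in Python (not the Max patches or the purpose-built motion-capture software used in the demonstration), assuming the opencv-python and mido packages are available: it frame-differences the camera feed and maps the amount of movement onto a single MIDI controller value. The controller number is arbitrary.

```python
# Minimal sketch: camera motion -> MIDI control data.
# Library choices (opencv-python, mido) and CC number 1 are illustrative.
import cv2
import mido
import numpy as np

midi_out = mido.open_output()          # default MIDI output port
cap = cv2.VideoCapture(0)              # iSight / webcam

ok, prev = cap.read()
prev = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # mean absolute frame difference ~ how much the dancer moved
    motion = np.mean(cv2.absdiff(gray, prev)) / 255.0
    prev = gray
    # scale to a 0-127 MIDI control-change value
    midi_out.send(mido.Message('control_change', control=1,
                               value=int(motion * 127)))
```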

Conclusion of Possibilities

The demonstration showed that our concept of the networked mash-up has promise. Our system allows for synchronisation between different performance disciplines. We proved the system not only on a local network, but also over the internet (see the Remote-Controlled Max Patches post). The system could be scaled up in terms of the number of nodes, collaborators and geographical locations (see Future Possibilities below).

Practicalities of a Distributed Performance System

The system is based on a number of nodes. In our demonstration we had three computer nodes (network devices, in other words). It could be argued that the dancer herself is a node, since she feeds and reacts to other nodes. A node, therefore, does not have to be a computer or a traditional network device.

Some examples of possible nodes:

* musical instrument (audio analysis, done at source, provides the control data)
* sensor networks (motion capture and environmental data)
* RSS feeds (the internet as a node)

Our gridded system is open to data generated externally. A real-world example of why this might be absolutely necessary: random numbers. Computers cannot generate entirely random numbers. During our demonstration, we accessed a website which publishes random numbers generated by the beta decay of Krypton-85.
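As an illustration of the internet acting as a node, here is a minimal sketch of pulling externally generated random numbers into the system. The URL is a placeholder rather than the service used in the demonstration, and it is assumed to return raw random bytes over HTTP.

```python
# Minimal sketch: fetch externally generated random bytes and fold them
# into the 0-127 range so they can travel through the network as MIDI data.
import urllib.request

RANDOM_SOURCE = "https://example.org/random-bytes?count=16"  # hypothetical URL

with urllib.request.urlopen(RANDOM_SOURCE) as response:
    raw = response.read()

control_values = [b % 128 for b in raw]
print(control_values)
```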


What is Distributed?

* computing power – we could not have done video analysis, audio synthesis and real-time video processing on a single computer
* data – any computer on the network could contribute data or pull down any data and then transform it in a specialised manner

In contrast, the following is not distributed:

* specialist performer skills specific to each discipline
* function – each node carries out its own specialist function

An important concept here is that each node does not need to know how other nodes function, only what data they produce; even then, a data packet does not carry a sender's address.
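A minimal sketch of this anonymous-broadcast idea, using plain UDP rather than the MIDI transport used in the demonstration; the port number is arbitrary. Any node can send, and a listening node sees only the payload: nothing in it identifies the sender.

```python
# Minimal sketch: anonymous broadcast of bare control values over UDP.
import socket

PORT = 9000  # arbitrary shared port

def send(value: int) -> None:
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
    s.sendto(bytes([value % 128]), ("255.255.255.255", PORT))
    s.close()

def listen() -> None:
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.bind(("", PORT))
    while True:
        data, _addr = s.recvfrom(1024)   # the sender's address is ignored
        print("received control value:", data[0])
```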

At the moment all data exists in a relatively flat hierarchy (like MIDI). There is no means of filtering out unwanted data according to its characteristics, type, source or any other metadata. This would certainly be a limitation if the system were expanded.

Data is broadcast in a sterile format that does not favor any particular discipline. It is up to each node to filter the data in a way appropriate to its own function.
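As a sketch of that node-side interpretation, the same bare 0-127 value can be mapped onto whatever a given discipline needs; the mappings below are illustrative, not the ones used on stage.

```python
# Minimal sketch: each node interprets the same sterile value in its own way.
def as_video_opacity(value: int) -> float:
    return value / 127.0                      # VJ node: 0.0-1.0 layer opacity

def as_filter_cutoff(value: int) -> float:
    return 100.0 + (value / 127.0) * 10000.0  # audio node: 100 Hz to ~10 kHz

incoming = 96
print(as_video_opacity(incoming), as_filter_cutoff(incoming))
```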

Future Possibilities

As explained above, we are limited by the data protocol. At the moment we are using MIDI, which is non-descriptive. This not only makes it difficult to use, requiring a separate document describing each stream of data, but also limits scalability. By tagging data with metadata, we solve not only issues of scale but also usability. By using a more human-readable protocol (such as Open Sound Control) we can draw a parallel with the existing, proven tagging and filtering systems employed by Flickr, YouTube and del.icio.us.
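A minimal sketch of what such tagged, human-readable data could look like, assuming the python-osc package (not part of the demonstration system); the addresses and values are illustrative. Unlike a bare MIDI controller number, an OSC address says what the data means, so a receiving node can filter on it much like a tag.

```python
# Minimal sketch: self-describing OSC messages instead of anonymous MIDI CCs.
from pythonosc.udp_client import SimpleUDPClient

client = SimpleUDPClient("127.0.0.1", 9001)   # hypothetical receiving node

client.send_message("/dancer/torso/motion", 0.42)
client.send_message("/audio/kick/onset", 1)
client.send_message("/video/layer1/opacity", 0.8)
```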

Conclusion (Social Impact)

The notion of distribution, whether of computing power, data, knowledge or responsibility, is suggestive of the Open Source paradigm. Open Source projects often have a wide user base and a community of enthusiasts; collaboration is taken for granted.

If scaled up, the Networked Mash-Up idea could bring together artists from diverse backgrounds and working practices in a non-location-specific manner (although the location of nodes could be conceptually exploited).

Resources

* Network schematic screenshot
* E-Science Max Patches (used in demonstration)
* Photos from the E-Science workshops on Flickr
* Remote-Controlled Max Patches – notes on expanding the system
* Video of performance on YouTube
