Friday 29 July 2011

Updates

I'm going to start rebuilding the gloves in the next week or so, but at the moment I'm working on finishing up an EP and some tracks for compilation albums, so please excuse the lack of updates. There is lots more to come when I finish these projects!

Thursday 14 July 2011

Gestural Controller Software Design and Conclusion

First, another proof-of-concept video from my supporting work for my Masters:


Construction 4 Edit from TheAudientVoid on Vimeo.
The drums at the start were fed into the Markov chain drum recorder earlier. The patch takes whatever you put into it, builds a Markov grid, and outputs permutations of that input according to whichever method you use to retrieve the data. In this case, the recorded MIDI sequence being played back creates note on/offs, which send a bang to the pitch-generation patch; the results are quantised to 16th notes and output.

You can see pretty clearly at the start how the gestural control works with the gloves, as hand position is used to control a filter over a drum section.

Around the three-minute mark I start playing some percussion live instead of using the Markov-chain-recorded section.




And now, the final sections of my dissertation. Please look at the annotated patch pictures that accompany the text, as they are meant to be read in conjunction with this section. There are in fact many more annotated patches in the actual Max for Live device, but I will post about those another day in a more detailed breakdown of the software design.

 

Software Design

Once the hardware has been designed and built, software must be created to produce useful values for controlling Ableton Live. As Max for Live offers an almost infinite range of possible functions, it is important to decide what you want the software to do before you start building it. “By itself, a computer is a tabula rasa, full of potential, but without specific inherent orientation. It is with such a machine that we seek to create instruments with which we can establish a profound musical rapport.” (Tanaka 2)


It is important to create a system whereby the software plays to the strengths of the performer and the hardware design; these elements must work in tandem to create an innovative and usable ‘playing’ experience. First, the Arduino data has to be understood by Max/MSP. I chose Firmata, which (via the accompanying pre-made Max/MSP objects) presents sensor data as OSC-style messages; this code proved very stable and fast at passing messages. Once it was uploaded to the board and the XBees were configured correctly, it became simple to receive values that the software can use. As we are using a range of analogue sensors, it is important to include a calibration stage in the software so that minimum and maximum values can be set and inputs can be smoothed before being assigned to a function. To this end I used the “Sensor-Tamer” Max patch as the basis for a calibration system covering all the inputs. The values are then scaled and sent to a Max patch which allows us to choose an effect from the current Ableton Live set.
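As an illustration of that calibration stage, here is a minimal C++ sketch of the same idea, not the Sensor-Tamer patch itself: track minimum and maximum during a calibration pass, smooth the raw reading, then scale to a 0 to 127 control range. The smoothing factor is an assumed value:

```cpp
#include <algorithm>
#include <cstdio>

// Hypothetical sketch of the calibration stage described above.
struct CalibratedInput {
    float minV = 1023.0f, maxV = 0.0f;   // 10-bit Arduino analogue range
    float smoothed = 0.0f;
    float alpha = 0.2f;                  // smoothing factor (assumed value)

    void calibrate(float raw) {          // call while exercising the sensor
        minV = std::min(minV, raw);
        maxV = std::max(maxV, raw);
    }
    int process(float raw) {             // call during performance
        smoothed += alpha * (raw - smoothed);      // one-pole smoothing
        float span = std::max(maxV - minV, 1.0f);  // avoid divide-by-zero
        float norm = (smoothed - minV) / span;
        return static_cast<int>(std::clamp(norm, 0.0f, 1.0f) * 127.0f);
    }
};

int main() {
    CalibratedInput ci;
    for (float r : {200.0f, 900.0f, 550.0f}) ci.calibrate(r);  // calibration pass
    std::printf("CC value: %d\n", ci.process(700.0f));         // 0-127 output
}
```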

Left Hand Maxuino Input and other modules, annotated

Right Hand Maxuino Input and other modules, annotated

The analogue inputs can be made to produce MIDI messages as well as directly controlling effect parameters from Live's menus. The advantage of this is that you can operate in two distinct modes: one for controlling effect parameters and another for passing MIDI messages to synths. Because Ableton Live merges MIDI from all input channels into one channel of output, you have to use Max/MSP's internal routing (the send and receive objects) to send MIDI to a number of tracks. Obviously, as this is a control system for live performance, you need a way to control more than one synth or plugin, and you want to be able to control various parameters for each. Creating small plugin objects for the channels you wish to control makes this easy: they simply pipe the MIDI from the input channel to the selected receive object, and because of this it is possible to assign the same physical controller to a different MIDI assignment on every channel. This again comes back to the watchword of customizability and allows the user to create dynamic performances where many elements can be changed without touching the computer. It also works neatly around the problem of only being able to send information to the active channel in a sequencer: the MIDI is routed ‘behind the scenes’ and effectively selects the channel you wish to use without any physical selection (i.e. no mouse click is required).
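The routing logic itself can be sketched in ordinary code, though in the real device it is built from Max/MSP send and receive objects. In this hypothetical C++ illustration the per-track assignments are invented for the example; the point is that one physical sensor lands on a different parameter depending on which track is selected:

```cpp
#include <array>
#include <cstdio>

// One physical sensor, a different assignment on every track; events are
// delivered only to the currently selected track, 'behind the scenes'.
struct TrackMapping { int ccNumber; const char* target; };

constexpr int kTracks = 3;
std::array<TrackMapping, kTracks> mappings = {{
    {74, "synth filter cutoff"},     // per-track assignments are made up
    {10, "drum bus panning"},
    { 7, "sampler volume"},
}};

int selectedTrack = 0;               // stepped through via the foot pedal

void onSensorValue(int value) {      // value 0-127 from the calibrated sensor
    const TrackMapping& m = mappings[selectedTrack];
    // In the Max patch this would be a send to the track's receive object.
    std::printf("track %d: CC%d (%s) = %d\n",
                selectedTrack, m.ccNumber, m.target, value);
}

int main() {
    onSensorValue(64);               // same gesture...
    selectedTrack = 2;
    onSensorValue(64);               // ...lands on a different parameter
}
```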
The foot pedal, which currently implements the record and step-through-patch features
As the system is to be used to perform live, a number of utility functions also need to be created, such as freezing and recording loops and stepping through channels, octaves and patches. These are best implemented away from the gloves themselves, as the gloves are most intuitive to play when using both hands (eight notes over two hands). Since you can only have a fixed number of switches that are easily playable, it makes sense to assign these to notes, with sharps achieved through a foot pedal. Having playing switches on both hands also means that you can create polyphony by pressing down finger switches on both hands simultaneously. There is also the practical consideration that you do not want to have to stop playing a pattern to push a button to record a loop or to freeze things; by moving these functions to your feet you can continue playing whilst accessing control functions. For ease of use, the recording and freezing functions are assigned to all looping plugins from a single switch. As you are only sending MIDI data to one channel at a time, there is no chance of creating a ‘false positive’ and recording unwanted sounds in the wrong channel, and having one switch to operate freeze or record greatly simplifies control for the end user.
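A hypothetical Arduino fragment for the pedal logic might look like this; the pins and the choice of MIDI bytes (a CC for record/freeze, a program change for patch stepping) are assumptions for illustration, not the actual firmware:

```cpp
// One switch toggles record/freeze on whichever looper is armed, the other
// steps through patches, so both hands stay free for playing.
const int recordPin = 6, stepPin = 7;
bool lastRecord = false, lastStep = false;
int patch = 0;

void setup() {
  Serial.begin(31250);                // MIDI baud rate
  pinMode(recordPin, INPUT_PULLUP);   // switches short the pin to ground
  pinMode(stepPin, INPUT_PULLUP);
}

void loop() {
  bool rec = digitalRead(recordPin) == LOW;
  bool stp = digitalRead(stepPin) == LOW;
  if (rec && !lastRecord) {           // rising edge: toggle record/freeze
    Serial.write(0xB0); Serial.write(64); Serial.write(127);  // CC as record
  }
  if (stp && !lastStep) {             // rising edge: next patch
    patch = (patch + 1) % 8;
    Serial.write(0xC0); Serial.write(patch);                  // program change
  }
  lastRecord = rec; lastStep = stp;
  delay(10);                          // crude debounce
}
```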

I also decided to use a phone mounted on my arm running touchOSC to control some functions of Ableton Live, as it is useful in some cases to have visual feedback, and again this frees the gloves up for musical functions. Some of these functions echo the footswitch controls, allowing the performer to move away from the laptop and into the audience; since touchOSC has two-way MIDI control, the status of a switch or setting is updated to correspond with the footswitch being pressed, so there are no crossed signals. With touchOSC it is easy to design your own interface and assign buttons to Ableton Live functions. As it essentially operates as a MIDI controller, you only need to put the software into MIDI-learn mode, click the function you wish to assign, and touch the button on the phone. This again allows a high level of customizability for the end user: interfaces can be made and set up according to the type of performance you wish to create. It is, for example, particularly suited to triggering sounds or pre-recorded loops, as many buttons are required for this (one button per clip), which would not be sensibly achievable using the gloves. Although I am currently using a pre-designed interface due to hardware constraints, my aim is to implement a touchOSC system that, as well as providing controls for loops and other parameters, provides a full set of feedback from the gloves and foot pedal, making it possible to see which instrument, bank and so forth you have chosen in the software. This will become vital to the project's aim of being able to move completely away from the computer when performing.

At the time of writing I did not have an Apple device to create a custom layout, so this HUD was used to show data from Max on the laptop.



Algorithmic Variation

“Each artwork becomes a sort of behavioral Tarot pack, presenting coordinates which can be endlessly reshuffled by the spectator, always to produce meaning” (Ascott 1966 3)

The Markov Chain Recorder/Player, Annotated


I decided that I wanted to be able to manipulate MIDI data within my performance to produce a number of variations on the input. These variations had to sound human and make intelligent choices from the data presented. To this end I have used Markov chains to analyze MIDI data, creating a system in which a circular causal relationship develops between the user and the patch. The patch takes MIDI input and creates a probability table of which note will be played next; after each note is generated, it is fed back into the system and used to look up the next note in the probability grid. This means that whatever MIDI data is fed to the patch will be transformed in a way that preserves the most important intervals and melodic structures of the original while allowing permutation. The performer must react to what the patch outputs, and there is the possibility of inputting more data to change the Markov chain in use and thus alter the performance further.

In essence I wished to create a system of patches that functions very much like an improvising live band: a certain set of melodic parameters is agreed upon, by MIDI input, and then used as a basis for improvisation. The data from these Markov chains can be output in two ways: the computer can be set to automate the output itself, or you may use the gloves to push data from the Markov chain into a synth. Both methods yield different but equally valid musical results and allow the performer to create very different kinds of outcome.

The idea of using Markov chains to create predictable but mutating data has much in common with cybernetic and conversation theory, where the interaction of two agents, and the interpretation of that interaction, leads to the creation of a third which in turn influences the original agents. If we consider the original MIDI data in the patch to be the first agent and the person using the controller to be the second, the interpretation of data from the computer influences the playing of the person using the controller, and this can in turn be fed back into the computer to create another set of data which is again interpreted, permuted and responded to by the performer. This application of disturbing influences to the state of a variable in the environment can be related to Perceptual Control Theory.
“Perceptual control theory currently proposes a hierarchy of 11 levels of perceptions controlled by systems in the human mind and neural architecture. These are: intensity, sensation, configuration, transition, event, relationship, category, sequence, program, principle, and system concept. Diverse perceptual signals at a lower level (e.g. visual perceptions of intensities) are combined in an input function to construct a single perception at the higher level (e.g. visual perception of a color sensation). The perceptions that are constructed and controlled at the lower levels are passed along as the perceptual inputs at the higher levels. The higher levels in turn control by telling the lower levels what to perceive: that is, they adjust the reference levels (goals) of the lower levels.” (Powers 1995)
Despite being a description of control systems in the human mind, it is easy to see how this is also applicable to computer control systems. The higher-level systems accessible to the user tell the software what to perceive, and this happens in two stages: first, the input of MIDI data allows the software to create a lower-level abstraction, the probability table; second, that table is called upon to trigger notes.
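The probability-table mechanism is simple enough to sketch outside of Max. The following minimal C++ illustration is not the Max for Live patch itself: repeated entries in the successor lists encode the probabilities, and each generated note is fed back in as the lookup key for the next, exactly the feedback loop described above.

```cpp
#include <cstdio>
#include <map>
#include <random>
#include <vector>

// note -> list of observed successors; duplicates encode higher probability.
std::map<int, std::vector<int>> table;

void record(const std::vector<int>& notes) {
    for (size_t i = 0; i + 1 < notes.size(); i++)
        table[notes[i]].push_back(notes[i + 1]);
}

int nextNote(int current, std::mt19937& rng) {
    std::vector<int>& options = table[current];
    if (options.empty()) return current;          // dead end: hold the note
    std::uniform_int_distribution<size_t> pick(0, options.size() - 1);
    return options[pick(rng)];
}

int main() {
    record({60, 62, 64, 62, 60, 67, 64, 62, 60}); // 'agreed' melodic material
    std::mt19937 rng(std::random_device{}());
    int note = 60;
    for (int step = 0; step < 16; step++) {       // one bar of 16th notes
        note = nextNote(note, rng);               // output fed back as input
        std::printf("%d ", note);
    }
    std::printf("\n");
}
```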
The changes must be subtle and controlled enough that the performer is reacting and responding to them rather than fighting the computer to maintain control of the system. The process used to determine note probabilities is a closed system to the performer (all one needs to do is feed in a MIDI file); after this the performer has access to an open system which can be used to alter key characteristics of the process, and they can also play along with it through a separate control system linked to an instrument, hence the feel of improvising with a band. In Behaviourist Art and the Cybernetic Vision, Roy Ascott states: “We can say that in the past the artist played to win, and so set the conditions that he always dominated the play” (Ascott 1966 2), but that the introduction of cybernetic theory has allowed us to move towards a model whereby “we are moving towards a situation in which the game is never won but remains perpetually in a state of play” (Ascott 1966 2). Although Ascott is concerned with the artist and audience interaction, we can easily apply this to the artist/computer/audience interaction, whereby the artist has a chance to respond to the computer's output and to the audience, and to use this response to shape future outcomes from the computer, creating an ever-changing cyclical system that, rather than being dependent on the “total involvement of the spectator”, is dependent on the total involvement of the performer.

Improvements

Having worked on developing this system for two years, there are still improvements to be made. Although the idea of using conductive thread was very good from a design and comfort point of view, as it allowed components to be mounted on the glove without bulky additional wiring, the technology proved too unstable to withstand normal usage, and something built for live performance needs to be robust. With this design, something could be working in one session and not the next, and if a mission-critical thread came unraveled it had the potential to take power from the whole system rather than just causing a single element to fail. Also, the thread is essentially an uninsulated wire: if not stitched carefully it created the possibility of short circuits when the gloves were bent in a particular way. In addition, the switches, even when used with resistors (also made of thread), produced a voltage drop in the circuit that changed the values of the analogue sensors. A change in those values changes whatever parameter the sensor controls, and can therefore produce very undesirable effects within the music you are making.
Although the accelerometers produce usable results for creating gestural presets and manipulating parameters, the method used to work out the position of the hands could be improved by adding gyroscopes. An accelerometer infers tilt from the direction of gravity, so it reads most reliably when the hand is fairly still; during fast movement, linear acceleration corrupts the reading. A gyroscope measures the rate of rotation directly, so it can track orientation during motion, although it drifts over time, which is why the two sensors are usually combined. With a gyroscope we would be able to introduce an additional value into our gestural system, the amount of rotation from the starting position, and this would allow us to use very complicated gestures to control parameters within Ableton.
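For reference, the standard way of combining the two sensors is a complementary filter: the gyroscope is trusted over short timescales and the accelerometer over long ones. This is a textbook technique rather than code from the gloves, and the 0.98 weighting is a typical assumed value:

```cpp
#include <cmath>
#include <cstdio>

float angle = 0.0f;   // estimated tilt in degrees

// gyroRateDps: rotation rate in degrees/second; ax, az: accelerometer axes.
float update(float gyroRateDps, float ax, float az, float dt) {
    float accelAngle = std::atan2(ax, az) * 57.29578f;  // radians -> degrees
    // Integrate the gyro for responsiveness, lean on gravity to cancel drift.
    angle = 0.98f * (angle + gyroRateDps * dt) + 0.02f * accelAngle;
    return angle;
}

int main() {
    // One simulated update: hand rotating at 10 deg/s, gravity mostly on z.
    std::printf("tilt estimate: %.2f deg\n", update(10.0f, 0.1f, 0.98f, 0.01f));
}
```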
The current on-the-glove mounting of the components works, but is in my opinion not robust enough to withstand repeated usage, so it will be important to rebuild the gloves using a more modular design. Currently the weak point is the stress placed on soldered connections when the gloves twist or bend; using longer-than-necessary wiring helps to alleviate this but does not totally solve the problem. It is therefore necessary to create a more modular design which keeps all soldered components contained and does not subject them to any stress. The best way to achieve this would be to mount the XBee, Arduino and power within a wearable box housing, with all soldered connections housed inside it as well. To ensure there is no cable stress, screw-down cable connectors can be mounted in the box for two-wire input sensors, and three-pin ribbon-cable connectors for analogue sensors; this way no stress is put on the internal circuitry and the cabling is easily replaceable, as none of it is hard-soldered. These cables would run between the box and a small circuit board mounted on the glove near the sensor, where the other end would plug in. This also increases the durability of the project: it can be disassembled before transport, so no cables risk getting caught or pulled, and every component is easily replaceable, without soldering, in the event of a failure.
I would like to introduce a live ‘gesture recording’ system to the software, so that it is possible to record a gesture during a live performance and assign it to a specific control. This would allow the user to define controls on the fly in response to whatever movements are appropriate at the time. However, this will take considerable work to design and implement effectively, as value changes must be recorded and assigned in a way that does not break the flow of the performance. Although it is relatively simple to record a gesture from the gloves by measuring a change in the values of certain sensors, assigning these to a parameter introduces the need to use dropdown boxes within the software to choose a channel, effect and parameter, and how to achieve this away from the computer is not immediately apparent. It may become possible to do it in touchOSC when an editor is released for the Android version of the software, but as yet this is not possible.
Further to this, the touchOSC element of the controller must be improved with a custom interface which collects suitable controls on the same interface page and receives additional feedback from Ableton, such as lists of the parameters controlled by each sensor, each sensor's current value, and the names of the clips which can be triggered. Using the LiveControl API it should be possible to pass this information to a touch-screen device, but again, without an editor for the Android version of touchOSC this is not yet possible. I have investigated other Android-based OSC software such as OSCdroid and Kontrolleur, but as yet these also do not allow custom interfaces. OSCdroid looks promising, however, and having been in touch with the developer, the next software revision should include a complex interface-design tool that allows these features to be implemented. I will be working with the developer to see if suitable Ableton control and feedback can be achieved once this has been released.

Conclusion

In essence, the ideas and implementations I have discussed mean that we can create an entire musical world for ourselves, informed by both practical considerations and theoretical analysis of the environment in which we wish to perform. We can use technology to collect complex sets of data and map them to any software function we feel is appropriate; we can use generative computer processes to add a controlled level of deviation and permutation to our input data; and we can use algorithms to create a situation in which we must improvise and react to decisions made by the computer during the performance of a piece. We can have total control of a musical structure and yet allow a situation whereby we must respond to changes made without our explicit instruction. It is my hope that through this it is possible to create a huge number of different musical outcomes, even from similar musical data as input. The toolset I have created hopefully allows performers to shape their work to the demands of the immediate situation and the audience they are playing to, and opens up live computer composition in a way that allows for ‘happy mistakes’ and moments of inspiration.
As previously stated, it is my hope that these new technologies can be used to start breaking down the divide between performer and audience. It is possible to realize performances where performer and audience enter into a true feedback loop and both influence the outcome of the work. In the future there is the potential to use camera sensing and other technologies (when they are more fully matured and suitable for use in ‘less than ideal’ situations) to capture data from the crowd as well as the performer. The performer can remain in control of the overall structure but could conduct the audience in a truly interactive performance. This technology potentially allows us to reach much further from the stage than traditional instruments do, and to create immersive experiences for both performer and audience. It is this level of connection and interactivity that should move electronic musicians away from traditional instrument-modelling or hardware-modelling controllers and towards more exciting ways of using technology.

“All in all, it feels like being directly interfaced with sound. An appendage that is simply a voice that speaks a language you didn't know that you knew.” - Onyx Ashanti

Updates and more dissertation

Apologies for not updating this for a while; I've been moving house, to Berlin! I've also been finishing an EP to be released on Planet Terror.

So without further ado, the next section of my dissertation...


Hardware Design

“Any sufficiently advanced technology is indistinguishable from magic.” - Arthur C. Clarke

“One way one can attempt their adepthood in magic is to try weaving a spell without using any of the prescribed tools. Just quiet the mind and slip off into a space within your mind that belongs only to you. Cast your will forward into the universe and see if you get the desired results.” (Nicht 2001 47)

When designing my controller I looked to the idea of open-handed magic as a source of inspiration. Rather than being directly related to card tricks and illusion, open-handed magic is a form of magic in modern occult systems whereby the practitioner does not use traditional ritual props, but uses the focus of their will in the moment to achieve the intended results. The performer must achieve some sense of gnosis and ‘at-one-ness’ for this to succeed, and as we have previously explored, dancing is one route to this state. As explained by Joshua Wetzel:

“Dancing This method could also be termed “exhaustion gnosis.” The magician engages in continuous movement until a trance-like state of gnosis occurs. Dance gnosis is particularly good for visions and divinatory sorts of workings, or at least that is the history of its use. However, it is apparent how it could be used in any type of magical activity. The effort to maintain continuous motion eventually forces the mind to a single point of concentration, the motions themselves become automatic and there is a feeling of disassociation from the mind. It is at this point that the magician performs rituals, fire sigils and various other magical acts. This is also a great form of “open handed magic.” You can do it in a club full of people, with dozens watching, and no one has a clue.” (Wetzel 2006 21)

As discussed earlier, I feel that the dance floor has a strong ritual and tribal element associated with it, and I believe that these ideas can be incorporated into the design and usage of an adaptive controller system. If the ultimate aim of the design is to interact with the audience and the “blurring of once clear demarcations between himself and the crowd, between herself and the rave”, then it is possible to incorporate the ideas of ritual and ritual magick into the creation of your controller. Although the idea of creating something ‘magic’ is certainly, in one sense, that it should ‘wow’ the audience and create something novel and exciting to draw them into the performance, I believe that for the performer/programmer the idea must become more abstracted. If we refer back to the earlier idea of having the space within a performance for moments of inspiration and the room to experiment, take risks and possibly fail, and couple this with the intended purpose of the music we are focusing on (to make people dance), then surely the optimal state for creation is the trance-like state of the dancer.

In the previous section I asked, “Would they (the performer) not be more fully immersed in their own sonic landscapes if unshackled from the computer screen and became free to roam the space their sound occupies, interacting with the audience and using their whole body to feel their performance in the way the audience does?” I believe the answer is to give the performer a control system that allows them to largely forget the mechanism of creation and to ‘feel’ what they are making by being in the same state as the dancers themselves. When looking at how to design my controller I have tried to keep this question in mind throughout, using it as a reference when trying to ascertain the best way to incorporate a feature into the hardware and software design. The controller must be simple to use, requiring natural hand gestures, and notes must be easy to trigger and record so that the flow of the performer is not interrupted by the technology. It has taken a great amount of trial and error to reach a stage where this is possible, and indeed the design of a controller allowing such interaction with audience and music is, by necessity, in a constant state of flux, where new ideas can always be incorporated and refined to move towards the optimal playing experience. As I have previously stated, this idea of a continually evolving, demand-responsive controller system is the optimum state for these projects: although temporary goals can be established, the performer/designer should always be looking for ways to improve and advance their work, and as such it can never be described as truly ‘finished’.

It is relatively easy to build your own controller system and use it to interact with a computer, and there are a number of advantages to creating your own system over co-opting existing computer interface devices. With a basic knowledge of electronics it is possible to create anything from a simple input device to a whole new instrument. Using an interface such as the Arduino you can simply, and with minimal processor load, send analog and digital signals to your software, and there are a huge number of sensors on the market that you cannot find in a pre-made solution; making your own controller allows a novel approach to the capture of data. The traditional computer-controller model of interface relies on pushing buttons to input data, and thus even a modern controller such as the Wii-mote is still tied to the idea of physical buttons as the main input device. Other devices such as the Kinect, although allowing gestural input, only work under specific lighting and placement conditions, which makes them largely unsuitable for a live performance environment. If we build our own system we can use a vast number of different devices, such as bend and pressure sensors or accelerometers, to receive input. This approach allows us to fully incorporate the idea of gestures to manipulate music, as it does not rely on tapping a key but rather invites you to use your whole body. As previously stated, with this controller I did not wish to copy or model traditional instruments, but rather to create a unique interface with a distinct playing experience, taking advantage of the many controls available for us to manipulate. To get the most from the custom-controller experience we must develop our own language for interacting with computers and the music being made.

In designing a physical controller it is important to think about what you intend to use it for and what controls you need. Do you just need switches, or do you need analog control values that you can use to, for example, turn a knob or move a fader? Do you want your controller to play like a traditional instrument, or to have a totally non-traditional input method? For my project it was important to have a number of analog controllers as well as digital switches, and some kind of control for moving through the Live interface was also required. This is why I added a touchOSC component for feedback and control of Ableton's MIDI-mapped features, which allows you to trigger clips and manipulate controls without having to look at the computer. In my project only the hands contain sensors; the feet perform basic software control functions, which are also replicated on the touch-screen device, allowing the performer total freedom of movement. Being free from the computer allows the performer to enter more fully into the flow of the music and to, for example, dance whilst creating. In this respect my controller attempts to move away from a more traditional model of playing music, where you would have to think about the placement of an instrument, your hands on the keys and so on. As my project is particularly focused on creating electronic ‘dance’ music, which has little link to traditional instruments, it seems counterproductive to produce something which models itself on a traditional instrument; in the setting of a live performance this would look misplaced.

Rather than creating a system where the user has to hold a controller, my system is built entirely into a set of gloves, so one simply has to move a hand to effect change in the music. The hardware has gone through a number of revisions to find the setup that best complements my workflow. Initially I used ready-made sensors to create my gloves, and whilst these made for relatively simple construction, they presented a serious set of problems regarding connections to the gloves, keeping the sensors in place, and avoiding stress on the weak points of their construction. Many commercially available sensors are designed for static setups where, once mounted, they are not moved; when making something such as a pair of gloves it must be recognized that there will be a large amount of movement, and that actions as simple as putting on or removing the gloves may stress connections and break or impair the functionality of the system.
Over the development of my project, technology has become available that allows you to make bend sensors, pressure sensors and switches out of conductive material. This offers a distinct advantage over traditional sensors: they are more durable, easier to wear, and very simple to fix and replace. Conductive thread has, in theory, made it possible to create a controller with less physical wiring; the ‘wires’ can be sewn into the controller, are flexible, do not restrict movement, and are more comfortable for the user. I initially remade my project using this technology; however, it has drawbacks that only become apparent after a period of usage, and these meant it was unsuitable for this project. A prototype version of the gloves was made using conductive thread rather than wiring, and although this initially worked, it was found that stretching and compressing the thread in a vertical direction led to it unraveling. The thread functions in the same way as a multicore wire: when it is not tightly wound together you get a loss of signal. I initially sought to counter this problem by covering the conductive thread in latex, but as this seeped between the strands of the thread it also led to a loss of signal. Conductive thread is certainly useful in some situations, but when used on a pair of gloves the amount of stretching required to get them on and off means that it breaks very quickly. It is still used in the project, however, to connect circuit boards to the conductive-fabric fingertips of the gloves, and circuit boards to the analog sensors, in places where not much stress is placed on them.

The analogue sensors are also made from conductive material, which has the advantage of making them easily replaceable if broken and easy to fine-tune for output values and sensitivity. The bend sensors on the fingers are made using conductive thread, conductive fabric, Velostat and neoprene. By sewing conductive thread into two pieces of material and sandwiching layers of Velostat between them, you can easily create a sensor whose sensitivity is simple to adjust, as it is determined by the number of Velostat layers between the threads; a sensor made this way also has the advantage that it can be mounted on the gloves by stitching. These sensors can be made to look almost any way you desire (in the case of my project, simple black circles) and as such they are in keeping with the idea of open-handed magic, where the actual method is partially obscured from the audience but easy to use and understand for the performer. The switches in the gloves are also made in a way that removes the need for any wiring, electronics or unwieldy physical switches. Using conductive thread it is possible to create a switch that is closed by applying a voltage across it, and this greatly simplifies the construction of the gloves, as only one positive terminal is needed, in this case placed on the thumb. The switches are thus constructed by wiring a ground and input wire to each finger, and are closed by touching finger and thumb together. This natural gesture requires no learning on the part of the user, and we can, for example, use each switch to trigger a drum hit or play a key on a synthesizer, as well as performing more command-based functions if required. I have taken the approach of making the switches on both hands produce MIDI notes (one for each whole tone in an octave, with an extra C of the octave above on the last finger of the right hand, and a foot pedal to sharpen/flatten the notes) as this yields the most natural playing experience, but it is possible to program these switches to provide other controls if required.
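A hypothetical Arduino sketch of the finger-switch scanning might look like the following; pin numbers, note choices and the serial-MIDI format are assumptions for illustration, and external pull-down resistors (the gloves used thread resistors) are assumed on the inputs:

```cpp
// Four finger contacts per hand, each closing against the thumb's V+ line.
const int fingerPins[4] = {2, 3, 4, 5};     // one digital input per finger
const int notes[4]      = {60, 62, 64, 65}; // whole tones (assumed choices)
bool lastState[4]       = {false, false, false, false};

void setup() {
  Serial.begin(31250);                      // MIDI baud rate
  for (int i = 0; i < 4; i++) pinMode(fingerPins[i], INPUT);
}

void loop() {
  for (int i = 0; i < 4; i++) {
    bool pressed = digitalRead(fingerPins[i]) == HIGH;  // finger touches thumb
    if (pressed != lastState[i]) {
      Serial.write(pressed ? 0x90 : 0x80);  // note on / note off, channel 1
      Serial.write(notes[i]);
      Serial.write(pressed ? 100 : 0);      // fixed velocity
      lastState[i] = pressed;
    }
  }
  delay(5);                                 // crude debounce
}
```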
My controllers use an accelerometer in each hand to work out the position of the hands, which allows us to seamlessly change which parameters are being controlled. For example, if your right hand is held at a 45-degree angle, the accelerometer can control a cut-off filter within your music software; if you tilt the right hand further, to 90 degrees, the functionality of the left hand can change and could instead control the volume of a part or the length of a sample. As these sensors produce accurate results, we are able to build a huge amount of multi-functionality into a very simple control system. The positioning of the hands is very easy for the performer to feel without constant visual reassurance, which contributes to the ease of use of the system.
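The mode-switching idea reduces to a threshold on the estimated tilt. This short sketch is an illustration rather than the actual patch; the 90-degree threshold matches the example above, but the parameter names are assumed:

```cpp
#include <cmath>
#include <cstdio>

// Which parameter the left hand controls depends on the right hand's tilt.
enum class LeftHandRole { FilterCutoff, PartVolume };

float tiltDegrees(float ax, float az) {
    return std::atan2(ax, az) * 57.29578f;   // gravity components -> degrees
}

LeftHandRole roleFor(float rightHandTilt) {
    return (rightHandTilt < 90.0f) ? LeftHandRole::FilterCutoff
                                   : LeftHandRole::PartVolume;
}

int main() {
    float tilt = tiltDegrees(0.7f, 0.1f);    // hand tilted well past 45 degrees
    const char* role = (roleFor(tilt) == LeftHandRole::FilterCutoff)
                           ? "filter cutoff" : "part volume";
    std::printf("right hand at %.0f deg -> left hand controls %s\n", tilt, role);
}
```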
I have also incorporated multi-colored LEDs into the gloves for visual feedback. Using three-color LEDs gives us a huge variety of potential colors to indicate function, while cutting down on the wiring needed and the space used on the glove. Three of these LEDs are mounted on the gloves: two give feedback on the notes played, changing color to correspond with the instrument chosen, and the third is used as a metronome, making it easy to record sections in time with the computer's tempo setting and giving the performer visual feedback on their timing.
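As an illustration of the metronome LED, here is a hypothetical Arduino fragment that pulses one color of an RGB LED on each beat. The pins are assumed, and the tempo is hard-coded here, whereas in the real system it follows the computer's tempo setting:

```cpp
const int redPin = 9, greenPin = 10, bluePin = 11;   // PWM pins (assumed)
const unsigned long beatMs = 60000UL / 120;          // 120 BPM -> 500 ms/beat
unsigned long lastBeat = 0;

void setup() {
  pinMode(redPin, OUTPUT);
  pinMode(greenPin, OUTPUT);
  pinMode(bluePin, OUTPUT);
}

void loop() {
  unsigned long now = millis();
  if (now - lastBeat >= beatMs) lastBeat += beatMs;  // advance on each beat
  bool flash = (now - lastBeat) < 50;                // 50 ms flash per beat
  analogWrite(greenPin, flash ? 255 : 0);            // green pulse as metronome
  analogWrite(redPin, 0);
  analogWrite(bluePin, 0);
}
```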
By using XBee radios in conjunction with the Arduino and sensors, we are able to unwire ourselves from the computer completely. This simplifies the use of the controllers, as it no longer matters where the performer is in relation to the computer; for my project this is vitally important to the core idea of ‘open-handed magic’ and audience interaction. The most obvious disadvantage of wireless communication is the increased complexity of setup. Getting the XBees to talk to one another over a meshed wireless network is not a simple task, and Arduino code that works when the unit is plugged in via USB does not necessarily work when passed over a serial radio connection. For example, the Arduino2Max code, available online, is a simple piece of programming that allows the Arduino to pass readings from each of its inputs to Max/MSP. However, it does not work once XBees are introduced: the data reported by the serial print calls floods the XBees' buffers, so that data arrives only once every ten seconds or so. As we are aiming for a system with as low a latency as possible, this is unacceptable, and another means of passing the data had to be found. In my project this meant the Firmata system, which is uploaded to the Arduino and communicates with the computer via OSC-style messages. Although this code is much more complex than Arduino2Max, the results are far more accurate and introduce no appreciable latency. However, getting it to work in the way I required demands a greater level of coding knowledge for both the Arduino and Max/MSP: messages are passed to and from the serial port as more complicated OSC messages and must, for some functions, be translated into a format Max understands before they become usable data.

Using Series 2 XBees creates an additional problem: they are designed for more complex tasks than serial-cable replacement, and part of their standard behavior is to continually seek nearby nodes to connect and pass information to. Through extensive testing and research I found that in this mode the stream of data between the gloves and the computer was often delayed by seconds at a time, as the XBees seem to prioritize data integrity over timing. It is possible to bypass this by setting the XBees to look only for a specifically addressed endpoint, which solved the inconsistent timing issues. A further advantage of the Firmata/OSC-based communication is that if there is a dropout from the controller, the flow of data resumes when the connection is restored: if the battery runs out and wireless communication is lost, replacing the battery restores the connection and the data reappears in Max/MSP. This does not happen with the simpler code, so the more complex system provides a level of redundancy that allows us to continue performing without rebooting the computer or software.
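The flooding problem, and the general shape of the fix, can be shown in miniature. This hypothetical Arduino fragment is not Firmata, but it illustrates the principle that made the difference: rather than printing every reading as fast as the loop runs, readings are rate-limited and sent only when they change meaningfully. The pin, rate and message format are assumptions:

```cpp
const unsigned long sendIntervalMs = 20;   // 50 Hz is plenty for control data
unsigned long lastSend = 0;
int lastValue = -1;

void setup() {
  Serial.begin(57600);
}

void loop() {
  if (millis() - lastSend < sendIntervalMs) return;  // rate-limit the radio
  lastSend = millis();
  int value = analogRead(A0);
  if (abs(value - lastValue) > 2) {        // small deadband kills jitter
    Serial.print("a0 ");                   // simple tagged-value format
    Serial.println(value);
    lastValue = value;
  }
}
```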

When powering an Arduino over USB you do not need an additional power source, as the USB bus can provide what is needed to run your sensors. When going wireless, however, you must include an external power source; it must supply the correct voltage for the Arduino, wireless module and sensors, and have a long enough battery life not to run out mid-performance. This obviously increases the size and weight of the controller, and if you are using conductive thread it is important to place the power source in close proximity to the most power-hungry, mission-critical elements of the project. This is because conductive thread has a resistance of 10 ohms per foot (i.e. one foot of conductive thread is equal to a 10-ohm resistor), so the more thread between the source and a component, the more voltage you lose: for example, two feet of thread carrying 50 mA drops a full volt (0.05 A × 20 Ω = 1 V). With traditional wiring this is much less of an issue. Li-Po batteries were chosen for this project for their high power output and quick recharge time. One must be aware, though, that they must not be discharged below 3 volts, and that if the packaging is damaged the batteries are liable to expand and potentially become unstable, so care must be taken to look after them properly when used. These batteries clearly offer the most potential for a system like this, as capacities in the range of 1000 to 3000 mAh are readily available, more than enough to power the LilyPad, XBee, sensors and lights for a long duration. Originally I had looked at using AAA batteries, and although these powered the system on, they ran down very quickly and, with some sensors, produced a voltage drop that would reset the Arduino and cause unreliable operation.