Thursday 14 July 2011

Gestural Controller Software Design and Conclusion

First, another proof-of-concept video from the supporting work for my Masters:


Construction 4 Edit from TheAudientVoid on Vimeo.
The drums at the start were fed into the Markov chain drum recorder earlier. The patch takes whatever you play into it, builds a Markov grid and spits out permutations of that input according to whichever method you use to retrieve the data. In this case the recorded MIDI sequence is played back, its note on/offs send a bang to the pitch generation patch, and the resulting notes are quantised to 16th notes before being output.

You can see pretty clearly at the start how the gestural control works with the gloves: hand position is used to control a filter over a drum section.

Around the three-minute mark I start playing some percussion live instead of using the Markov chain recorded section.




And now the final sections of my dissertation. Please look at the annotated patch pictures that accompany the text, as they are meant to be seen in conjunction with this section. There are in fact many more annotated patches in the actual Max for Live device, but I will post about those another day in a more detailed breakdown of the software design.

 

Software Design

Once the hardware has been designed and built, the software must be made to produce useful values for controlling Ableton Live. As Max for Live offers an almost infinite range of possible functions, it is important to decide what you wish the software to do before you start building it. “By itself, a computer is a tabula rasa, full of potential, but without specific inherent orientation. It is with such a machine that we seek to create instruments with which we can establish a profound musical rapport.” (Tanaka 2)


It is important that we create a system whereby the software plays to the strengths of the performer and the hardware design; these elements must work in tandem to create an innovative and usable ‘playing’ experience. Firstly, the Arduino data has to be understood by Max/MSP. I chose Firmata (with the Maxuino objects) because it delivers the sensor data to Max/MSP as robust OSC-style messages and comes with pre-made Max/MSP objects for receiving them; this code proved to be very stable and fast at passing messages. Once the firmware was uploaded to the board and the XBees were configured correctly, it became simple to receive values that could be made usable in the software. As we are using a range of analogue sensors, it is important to include a calibration stage so that minimum and maximum values can be set and inputs can be smoothed before being assigned to a function. To this end I used the “Sensor-Tamer” Max patch as the basis for a calibration system covering all the inputs. The values are then scaled and sent to a Max patch which allows us to choose an effect from the current Ableton Live set.
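As a rough illustration of that calibration stage (not the actual Sensor-Tamer patch, which lives inside Max), here is a minimal Python sketch of the same idea: track each sensor's observed minimum and maximum, smooth the raw readings, and scale the result to a 0–127 range ready to be mapped onto a Live parameter. The class name and example readings are hypothetical.

    # Minimal sketch of the calibration stage: per-sensor min/max,
    # smoothing of raw readings, and scaling to a 0-127 control range.
    class CalibratedSensor:
        def __init__(self, smoothing=0.2):
            self.lo = None          # observed minimum during calibration
            self.hi = None          # observed maximum during calibration
            self.smoothed = None    # exponentially smoothed reading
            self.alpha = smoothing  # 1.0 = no smoothing, smaller = heavier smoothing

        def calibrate(self, raw):
            # Feed raw readings while the performer moves the sensor through its full range.
            self.lo = raw if self.lo is None else min(self.lo, raw)
            self.hi = raw if self.hi is None else max(self.hi, raw)

        def read(self, raw):
            # Smooth the incoming value, then scale it between the calibrated limits.
            if self.smoothed is None:
                self.smoothed = raw
            self.smoothed += self.alpha * (raw - self.smoothed)
            if self.lo is None or self.hi is None or self.hi == self.lo:
                return 0
            norm = (self.smoothed - self.lo) / (self.hi - self.lo)
            return int(round(max(0.0, min(1.0, norm)) * 127))

    bend = CalibratedSensor()
    for raw in (210, 580, 760):    # hypothetical Arduino analogue readings (0-1023)
        bend.calibrate(raw)
    print(bend.read(500))          # a mid-range reading scaled towards 0-127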

Left Hand Maxuino Input and other modules, annotated

Right Hand Maxuino Input and other modules, annotated

The analogue inputs can produce MIDI messages as well as directly controlling effect parameters chosen from Live's menus. The advantage of this is that you can operate in two distinct modes: one for controlling FX parameters and one for passing MIDI messages to synths. Because Ableton Live merges MIDI from all input channels onto one output channel, you have to use Max/MSP's internal routing (S and R objects) to send MIDI to a number of tracks. As this is a control system for live performance you obviously need a way to control more than one synth/plugin, and you want to be able to control various parameters on each. Creating a small plugin object for each channel you wish to control makes this easy: each one simply pipes MIDI from the input channel to the selected receive object, so the same physical controller can be given a different MIDI assignment on every channel. This again comes back to the watchword of customizability and allows the user to create dynamic performances where many elements can be changed without touching the computer. It also works neatly around the problem of only being able to send information to the active channel in a sequencer, as the MIDI is routed ‘behind the scenes’ and effectively selects the channel you wish to use without any physical selection of the channel (i.e. no mouse click is required).
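The routing itself is done with Max's send and receive objects, but the logic can be sketched in Python as below: one receiver per track, a router that only ever pipes notes to the currently selected track, and a per-track assignment so the same physical control can mean something different on every channel. The track names and the transpose mapping are purely illustrative, not taken from the actual device.

    # Hedged sketch of the 'behind the scenes' routing: notes go only to the
    # receiver for the currently selected track, stepped without a mouse click.
    class TrackReceiver:
        def __init__(self, name, transpose=0):
            self.name = name
            self.transpose = transpose   # stand-in for a per-track MIDI assignment

        def note_on(self, pitch, velocity):
            print(f"{self.name}: note {pitch + self.transpose} vel {velocity}")

    class MidiRouter:
        def __init__(self):
            self.receivers = {}
            self.active = None

        def add(self, receiver):
            self.receivers[receiver.name] = receiver

        def step(self):
            # Step to the next track (what the foot pedal does), no mouse needed.
            names = list(self.receivers)
            idx = (names.index(self.active) + 1) % len(names) if self.active else 0
            self.active = names[idx]

        def note_on(self, pitch, velocity):
            if self.active:
                self.receivers[self.active].note_on(pitch, velocity)

    router = MidiRouter()
    router.add(TrackReceiver("bass synth", transpose=-12))
    router.add(TrackReceiver("lead synth"))
    router.step()                 # selects "bass synth"
    router.note_on(60, 100)       # routed behind the scenes to the active track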
The footpedal which currently implements record and step through patch features
As the system is to be used live, there are a number of utility functions that also need to be created, such as freezing and recording loops and stepping through channels, octaves and patches. These are best implemented away from the gloves themselves, as the gloves are most intuitive to play when using both hands (eight notes over two hands). Since you can only have a fixed number of switches that are easily playable, it makes sense to assign these to notes (with sharps achieved through a foot pedal). Having playing switches on both hands also means you can create polyphony by pressing finger switches on both hands simultaneously. There is also the practical consideration that you do not want to have to stop playing a pattern to push a button to record a loop or freeze something; by moving these functions to your feet you can continue playing whilst accessing control functions. For ease of use, the record and freeze functions are assigned to all looping plugins from a single switch. As you are only sending MIDI data to one channel at a time, there is no chance of a ‘false positive’ recording unwanted sounds in the wrong channel, and having one switch to operate freeze or record greatly simplifies control for the end user.
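A small sketch of the note layout and the single record/freeze switch described above, assuming a C major mapping for the eight finger switches; the actual scale and switch ordering in the device may differ.

    # Eight finger switches across two hands give the naturals of a scale,
    # the foot pedal sharpens by a semitone, and one switch toggles record/freeze.
    NATURALS = [60, 62, 64, 65, 67, 69, 71, 72]   # C major over one octave (assumed)

    def switch_to_note(switch_index, sharp_pedal_down, octave_shift=0):
        # Map a finger switch (0-7) to a MIDI note, sharpened by the foot pedal.
        note = NATURALS[switch_index] + (1 if sharp_pedal_down else 0)
        return note + 12 * octave_shift

    class LoopPedal:
        def __init__(self):
            self.recording = False

        def press(self):
            # One global toggle: only the active track receives MIDI,
            # so a single switch cannot record into the wrong channel.
            self.recording = not self.recording
            return "record" if self.recording else "freeze"

    print(switch_to_note(3, sharp_pedal_down=True))   # F# (MIDI note 66)
    pedal = LoopPedal()
    print(pedal.press())   # "record"
    print(pedal.press())   # "freeze"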

I also decided to use a phone mounted on my arm running touchOSC to control some functions of Ableton Live, as it is useful in some cases to have visual feedback, and this again frees the gloves up for musical functions. Some of these functions echo the footswitch controls, allowing the performer to move away from the laptop and into the audience; because touchOSC offers two-way MIDI communication, the status of a switch or setting is updated to correspond with the footswitch being pressed, so there are no crossed signals. With touchOSC it is easy to design your own interface and assign buttons to Ableton Live functions. As it essentially operates as a MIDI controller, it is only necessary to put the software into MIDI learn mode, click the function you wish to assign and touch the button on the phone. This again allows a high level of customizability for the end user, with interfaces made and set up according to the type of performance you wish to create. It is, for example, particularly suited to triggering sounds or prerecorded loops, as many buttons are required for this (one button per clip) and this would not be sensibly achievable using the gloves. Although I am currently using a predesigned interface due to hardware constraints, my aim is to implement a touchOSC system that, as well as providing controls for loops and other parameters, provides a full set of feedback from the gloves and foot pedal, so it will be possible to see which instrument, bank and so forth you have chosen in the software. This will become vital to the project's aim of being able to move completely away from the computer when performing.
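touchOSC also accepts plain OSC messages, so the two-way feedback can be sketched roughly as below using the python-osc library; the phone's IP address, the port and the /1/toggle1 address are assumptions that would have to match whatever layout is loaded on the device.

    # Hedged sketch: mirror a footswitch state to a touchOSC toggle so the
    # phone display stays in sync with the pedal.
    from pythonosc.udp_client import SimpleUDPClient

    PHONE_IP = "192.168.1.50"   # hypothetical address of the phone on the LAN
    PHONE_PORT = 9000           # touchOSC's default incoming OSC port

    client = SimpleUDPClient(PHONE_IP, PHONE_PORT)

    def footswitch_changed(is_down):
        # Send the new footswitch state so the on-screen toggle matches it.
        client.send_message("/1/toggle1", 1.0 if is_down else 0.0)

    footswitch_changed(True)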

At the time of writing I did not have an Apple device to create a custom layout, so this HUD was used to show data from Max on the laptop.



Algorithmic Variation

“Each artwork becomes a sort of behavioral Tarot pack, presenting coordinates which can be endlessly reshuffled by the spectator, always to produce meaning” (Ascott 1966 3)

The Markov Chain Recorder/Player, Annotated


I decided that I wanted to be able to manipulate MIDI data within my performance to produce a number of variations on the input. These variations had to sound human and make intelligent choices from the data presented. To this end I have used Markov chains to analyze MIDI data and create a system whereby a circular causal relationship develops between the user and the patch. The patch takes MIDI input and creates a probability table for which note will be played next; after each note is generated it is fed back into the system and used to look up the next note from the probability grid. This means that whatever MIDI data is fed to the patch will be transformed in a way that preserves the most important intervals and melodic structures of the original material while allowing for permutation. The performer must then react to what the patch outputs, and there is always the possibility of inputting more data to change the Markov chain in use and so alter the performance further. In essence I wished to create a system of patches that functions very much like an improvising live band: a certain set of melodic parameters is agreed upon, by MIDI input, and then used as a basis for improvisation.

The data from these Markov chains can be output in two ways: either the computer can be set to automate the output itself, or you can use the gloves to push data from the Markov chain into a synth. Both methods yield different but equally valid musical results and allow the performer to create very different types of outcome. The idea of using Markov chains to create predictable but mutating data has much in common with cybernetic and conversation theory, where the interaction of two agents and the interpretation of that interaction leads to the creation of a third, which in turn influences the original agents. If we consider the original MIDI data in the patch to be the first agent and the person using the controller to be the second, the interpretation of data from the computer influences the playing of the person using the controller, and this in turn can be fed back into the computer to create another set of data which is again interpreted, permuted and responded to by the performer. This application of disturbing influences to the state of a variable in the environment can be related to Perceptual Control Theory.
“Perceptual control theory currently proposes a hierarchy of 11 levels of perceptions controlled by systems in the human mind and neural architecture. These are: intensity, sensation, configuration, transition, event, relationship, category, sequence, program, principle, and system concept. Diverse perceptual signals at a lower level (e.g. visual perceptions of intensities) are combined in an input function to construct a single perception at the higher level (e.g. visual perception of a color sensation). The perceptions that are constructed and controlled at the lower levels are passed along as the perceptual inputs at the higher levels. The higher levels in turn control by telling the lower levels what to perceive: that is, they adjust the reference levels (goals) of the lower levels.” (Powers 1995)
Despite this being a description of control systems in the human mind, it is easy to see how it also applies to computer control systems: the higher-level systems that are accessible to the user tell the software what to perceive. The input of MIDI data allows the software to create a lower-level abstraction, the probability table, which is then called upon to trigger notes.
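To make that lower-level abstraction concrete, here is a minimal first-order Markov sketch in Python of the recorder/player idea: count pitch-to-pitch transitions from the input, then walk the table to generate permutations. The real patch works on a live MIDI stream inside Max; the input phrase below is hypothetical.

    # Build a first-order transition table from recorded pitches, then walk it
    # to emit permutations that preserve the input's characteristic intervals.
    import random
    from collections import defaultdict

    def build_table(pitches):
        # Count how often each pitch follows each other pitch.
        table = defaultdict(lambda: defaultdict(int))
        for current, nxt in zip(pitches, pitches[1:]):
            table[current][nxt] += 1
        return table

    def next_pitch(table, current):
        # Choose the next pitch weighted by the observed transition counts.
        options = table.get(current)
        if not options:
            return current                      # dead end: hold the note
        pitches = list(options)
        weights = [options[p] for p in pitches]
        return random.choices(pitches, weights=weights)[0]

    recorded = [60, 62, 64, 62, 60, 67, 64, 62, 60]   # hypothetical input phrase
    table = build_table(recorded)

    note = recorded[0]
    phrase = []
    for _ in range(8):              # each step would be triggered by a 16th-note bang
        note = next_pitch(table, note)
        phrase.append(note)
    print(phrase)                   # a permutation of the recorded material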
The changes must be subtle and controlled enough that the performer is reacting and responding to them rather than fighting the computer to maintain control of the system. The process used to determine the probability of notes is a closed system to the performer (all one needs to do is feed in a MIDI file); after this the performer has access to an open system which can be used to alter key characteristics of the process, and they can also play along with it through a separate control system linked to an instrument, hence the feel of improvising along with a band. In Behaviourist Art and the Cybernetic Vision Roy Ascott states: “We can say that in the past the artist played to win, and so set the conditions that he always dominated the play” (Ascott 1966 2), but that the introduction of cybernetic theory has allowed us to move towards a model whereby “we are moving towards a situation in which the game is never won but remains perpetually in a state of play” (Ascott 1966 2). Although Ascott is concerned with the artist and audience interaction, we can easily apply this to the artist/computer/audience interaction, whereby the artist has the chance to respond to the computer and the audience and to use this response to shape future outcomes from the computer, thus creating an ever-changing cyclical system that, rather than being dependent on the “total involvement of the spectator”, is dependent on the total involvement of the performer.

Improvements

Having worked on developing this system for two years, there are still improvements to be made. Although the idea of using conductive thread was very good from a design and comfort point of view, as it allowed components to be mounted on the glove without bulky additional wiring, the technology proved too unstable to withstand normal usage, and anything built for live performance needs to be robust. With this design something could be working in one session and not the next, and if a mission-critical thread came unraveled it had the potential to take power from the whole system rather than simply causing a single element to stop working. The thread, being essentially an uninsulated wire, also created the possibility of short circuits if it was not stitched carefully and the gloves were bent in a particular way. In addition, the switches, even when used with resistors (also made of thread), produced a voltage drop in the circuit that changed the values of the analogue sensors. This change in values obviously alters whatever parameter the sensor controls and can therefore produce very undesirable effects within the music you are making.
Although the accelerometers produce usable results for creating gestural presets and manipulating parameters, the method used to work out the position of the hands could be further improved by adding gyroscopes. An accelerometer only gives a reliable sense of orientation when the hand is relatively still, because it relies on sensing the gravity vector, and during fast movement the acceleration of the gesture itself disturbs the reading; a gyroscope measures rotation directly. Combining the two would let us introduce an additional value into our gestural system, the amount of rotation from the starting position, and this would allow us to use very complicated gestures to control parameters within Ableton.
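As a hedged sketch of how an added gyroscope could be fused with the existing accelerometer data, the complementary filter below integrates the gyro's rotation rate and pulls the estimate towards the accelerometer's tilt reading. The sensor values and the 0.98 blend factor are assumptions, not figures from the actual gloves.

    # Complementary filter: gyro gives smooth but drifting rotation, the
    # accelerometer gives an absolute (but motion-sensitive) tilt; blend them.
    import math

    def accel_tilt(ax, ay, az):
        # Pitch angle in degrees from the gravity vector measured by the accelerometer.
        return math.degrees(math.atan2(ax, math.sqrt(ay * ay + az * az)))

    def complementary_filter(angle, gyro_rate, ax, ay, az, dt, blend=0.98):
        # Integrate the gyro rate, then pull the estimate towards the accel tilt.
        gyro_estimate = angle + gyro_rate * dt
        return blend * gyro_estimate + (1.0 - blend) * accel_tilt(ax, ay, az)

    angle = 0.0
    for _ in range(100):                       # e.g. 100 readings at 100 Hz
        angle = complementary_filter(angle, gyro_rate=15.0,     # deg/s, hypothetical
                                     ax=0.26, ay=0.0, az=0.97,  # g, hypothetical
                                     dt=0.01)
    print(round(angle, 1))                     # a rotation value usable for mapping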
The current ‘on the glove’ mounting of the components works but is, in my opinion, not robust enough to withstand repeated usage, so it will be important to rebuild the gloves using a more modular design. Currently the weak point is the stress placed on soldered connections when the gloves twist or bend; using longer-than-necessary wiring helps to alleviate this but does not totally solve the problem. It is therefore necessary to create a more modular design which keeps all soldered components contained and does not subject them to any stress. The best way to achieve this would be to mount the XBee, Arduino and power within a wearable box housing, with all soldered connections housed inside it as well. To make sure there is no cable stress, screw-down cable connectors can be mounted in the box for two-wire input sensors, with three-pin ribbon cable connectors for the analogue sensors; this way no stress is put on the internal circuitry and the cabling is easily replaceable, as none of it is hard-soldered. These cables would run between the box and a small circuit board mounted on the glove near each sensor, where the other end would plug in. This also increases the durability of the project, as it can be disassembled before transport, does not risk any cables getting caught or pulled, and makes every component easily replaceable, without soldering, in the event of a failure.
I would like to introduce a live ‘gesture recording’ system to the software so that it is possible to record a gesture during a performance and assign it to a specific control; this would allow the user to define controls on the fly in response to whatever movements are appropriate at the time. However, this will take considerable work to design and implement effectively, as value changes must be recorded and assigned in a way that does not break the flow of the performance. Although it is relatively simple to record a gesture from the gloves by measuring a change in the values of certain sensors, assigning it to a parameter introduces the need to use dropdown boxes within the software to choose a channel, effect and parameter, and how to achieve this away from the computer is not immediately apparent. It may be possible to do this with touchOSC when an editor becomes available for the Android version of the software, but as yet this is not possible.
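Purely as a speculative sketch of the recording half of that idea (the assignment problem remains the unsolved part), the snippet below captures the range each sensor moves through while a gesture is recorded and later reports how far the current pose is through that gesture, giving a value that could be mapped onto a parameter. All names and readings are hypothetical.

    # Record the range of each sensor during a gesture, then report the
    # current pose's 0-1 position within that recorded range.
    class GestureRecorder:
        def __init__(self):
            self.ranges = {}        # sensor name -> (min, max) seen while recording

        def record(self, readings):
            # Call repeatedly with {sensor: value} while the gesture is performed.
            for name, value in readings.items():
                lo, hi = self.ranges.get(name, (value, value))
                self.ranges[name] = (min(lo, value), max(hi, value))

        def progress(self, readings):
            # Average 0-1 position of the current pose within the recorded gesture.
            scores = []
            for name, (lo, hi) in self.ranges.items():
                if hi > lo and name in readings:
                    pos = (readings[name] - lo) / (hi - lo)
                    scores.append(max(0.0, min(1.0, pos)))
            return sum(scores) / len(scores) if scores else 0.0

    rec = GestureRecorder()
    rec.record({"wrist_bend": 120, "tilt": 300})
    rec.record({"wrist_bend": 480, "tilt": 720})
    print(rec.progress({"wrist_bend": 300, "tilt": 510}))   # roughly 0.5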
Further to this, the touchOSC element of the controller must be improved with a custom interface which collects suitable controls on the same interface page and receives additional feedback from Ableton, such as the list of parameters controlled by each sensor, each sensor's current value and the names of the clips which can be triggered. Using the Livecontrol API it should be possible to pass this information to a touch-screen device, but again, without an editor being available for the Android version of touchOSC this is not yet possible. I have investigated other Android-based OSC software such as OSCdroid and Kontrolleur, but as yet these also do not allow for custom interfaces. OSCdroid looks promising, however, and having been in touch with the developer I understand that the next software revision will include a complex interface design tool that should allow these features to be implemented. I will be working with the developer to see whether suitable Ableton control and feedback can be achieved once this has been released.

Conclusion

In essence, the ideas and implementations I have discussed mean that we can create an entire musical world for ourselves, informed both by practical considerations and by theoretical analysis of the environment in which we wish to perform. We can use technology to collect complex sets of data and map them to any software function we feel is appropriate; we can use generative computer processes to add a controlled level of deviation and permutation to our input data; and we can use algorithms to create situations in which we must improvise and react to decisions made by the computer during the performance of a piece. We can have total control of a musical structure and yet allow a situation whereby we must respond to changes made without our explicit instruction. It is my hope that through this it is possible to create a huge number of different musical outcomes, even when using similar musical data as input. The toolset I have created hopefully allows performers to shape their work to the demands of the immediate situation and the audience they are playing to, and opens up live computer composition in a way that allows for ‘happy mistakes’ and moments of inspiration.
As previously stated, it is my hope that these new technologies can be used to start breaking down the divide between performer and audience. It is possible to realize performances where the performer and audience enter into a true feedback loop and both influence the outcome of the work. In the future there is the potential to use camera sensing and other technologies (when they are more fully matured and suitable for use in ‘less than ideal’ situations) to capture data from the crowd as well as the performer. The performer can remain in control of the overall structure but could conduct the audience in a truly interactive performance. This technology potentially allows us to reach much further from the stage than traditional instruments do and to create immersive experiences for both performer and audience. It is this level of connection and interactivity that should move electronic musicians away from traditional instrument or hardware-modeling controllers and towards more exciting ways of using technology.

“All in all, it feels like being directly interfaced with sound. An appendage that is simply a voice that speaks a language you didn't know that you knew” (Onyx Ashanti)
