Tuesday 17 May 2011

A video at last

Here's a video of me playing around with the final 'prototype' version of the gloves. They really need making again to be perfect, but I'm pretty pleased with progress so far. Basically they are controlling a load of sounds and effects which are looped via a MIDI footpedal.
It also uses a little bit of Max code which takes an input, records it, creates a Markov table of probabilities for which note you will hit next, and then outputs permutations on this data, so it keeps the flavour of the original input whilst mutating it. It's pretty damn cool if I do say so myself, and probably the best bit of Max programming I have done.
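For the curious, the idea behind that patch is simple enough to sketch in ordinary code. Here is a rough C++ version of the same first-order Markov approach; all the names are mine and nothing here is lifted from the actual Max patch:

```cpp
// Rough sketch of the first-order Markov idea described above:
// record a note sequence, tally which note follows which, then
// generate new material by weighted random choice.
#include <cstdlib>
#include <map>
#include <vector>

class MarkovNotes {
    // counts[a][b] = how often note b followed note a in the input
    std::map<int, std::map<int, int>> counts;
public:
    void record(const std::vector<int>& notes) {
        for (size_t i = 0; i + 1 < notes.size(); ++i)
            counts[notes[i]][notes[i + 1]]++;
    }
    // Pick the next note after 'current', weighted by observed frequency.
    int next(int current) {
        auto it = counts.find(current);
        if (it == counts.end() || it->second.empty()) return current;
        int total = 0;
        for (auto& kv : it->second) total += kv.second;
        int r = std::rand() % total;
        for (auto& kv : it->second) {
            if (r < kv.second) return kv.first;
            r -= kv.second;
        }
        return it->second.begin()->first; // not reached in practice
    }
};
```

Feed next() its own output in a loop and you get lines that wander off but keep the interval flavour of whatever you played in.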
Also, there are no pre-existing loops used in this: some arps, yes, but loops, no.
Anyway, enjoy a video of me messing about with the gloves.


Construction6 from TheAudientVoid on Vimeo.

Thursday 12 May 2011

Adaptive Physical Controllers - Part 3 - The Dance Music Ritual

The Dance Music Ritual

As the name suggests, dance music has the specific aim of producing movement in the audience and creating a sense of community amongst them: “Your kinaesthetic sense is externalised by being transferred from your own body into the body of the crowd… The room ceases to be occupied by strangers, instead it is filled with the party folk all satisfying their need to be” (Jackson 2004 19). Slogans such as “Peace, Love, Unity and Respect” (the PLUR ethos) exemplify the community ideals of dance music audiences, and Turner's idea of spontaneous communitas, “the transient personal experience of togetherness”, has been taken on by many dance music scholars to explain the feeling of community and connection that the audience may experience in a rave setting.

“If the ecstatic raver is indeed an anonymous body of textless flesh, one that has shed its identity, ideology and language, one that has either divested or radically altered its culturally inscribed body image, then the thematic boundaries that normally delineate our edges are destabilized and perhaps dissolved. Dancing amidst a crowd of ecstatic bodies, the raver is consumed not only by an immediate ‘experience’ of the phenomenal world, but also by his or her body’s subconscious knowledges of unity and alterity (not to mention genderless sexual specificity)—knowledges that are quite different from those of self- reflective thought. Lost in the reflexivity and natural transgressivity of the flesh, in its indeterminacy and interwovenness, the raver is a mute witness to the blurring of once clear demarcations between himself and the crowd, between herself and the rave.” - Landau in (St. John 2004 p.121)

In my past work I have looked at the use of rituals within music in both modern and tribal cultures. In modern society this is seen most clearly on the dancefloor of clubs: there is a tribal and ritual element to dancing together and to the musical style that accompanies it, “repetitive, minimalistic, seamless cyclings of sonic patterns accompanied by a relentless driving or metronomic rhythm” (Fatone 2001), which creates not only community between the dancers but often ecstatic experience. James Landau states, “in the psychoanalytical view, ecstacy’s transgressive relationship to binary thought stems from the rave-assemblage propelling its participants into the Real, a cognitive space ‘beyond’ the ego and its organizational structures” (St. John 2004). This idea is supported by St John, who states, “The party makes possible a kind of collective ego-loss, a sense of communal singularity - a sensation of at-one-ness - is potentiated” (St. John 2004). However, the performer often cannot take part in this ecstatic experience due to their physical disconnection from the dancefloor and the movement of the dancers. Is it not strange that a musician can produce music that makes their audience dance but must remain rooted in place behind their computer? Would it not be more beneficial for the performer to be able to join the dance and become part of the community they are creating music for? Would they not be more fully immersed in their own sonic landscapes if unshackled from the computer screen, free to roam the space their sound occupies, interacting with the audience and using their whole body to feel their performance in the way the audience does?

One of the huge benefits of electronic instruments is that they have no element which needs a microphone, and as such they are not subject to feedback in the way a traditional musician would be. This simple fact has seemingly been overlooked in the majority of live performances, where the traditional room setup of placing the performer in a position of separation from their audience is adopted; by creating wireless wearable controllers it is possible to move beyond this traditional staging. I would assert that the ritual and community aspect that dance music embraces would be furthered immeasurably by breaking down the audience/performer divide and creating a situation where no one is placed 'on a pedestal' but instead all are intertwined with each other.

Within the electronic music scene there are far fewer 'superstar' performers than in other musical genres, and although some performers break into the mainstream and achieve widespread acclaim, many are much less willing to fulfill the traditional hero archetype. Indeed, the community often quickly derides those who are seen to have 'risen above their station' or developed an overbearing ego. Talking about this, the musician Shackleton says, “in rock music you have a projection of the individual, and it’s almost like the extension of a performance art where you have an individual being / doing a very egotistical thing and in that context it’s wonderful… because of course that person is venting something and the crowd can enjoy that, in that context. But I think I’ve never really seen it like that, the artist isn’t so important” (Brignoli 2011). “I don't need lots and lots of money, I don't need a lot of fame or this sort of thing, I just like doing what I'm doing. That's good for me.” (Keeling 2010) Even very well known DJs such as Paul van Dyk are known for their grounded attitude towards their work: “He is so sincere and is one of the nicest people I've ever met. You don't expect someone that's so well known to be so humble” (Jessica Sutta 2001). Some artists and groups take this idea even further; Scot Gresham-Lancaster, in his article about 'the Hub' (an 'interactive computer network music group'), states, “The intent to detach ego from the process of music making we inherited directly from Cage. To refine that impulse and make a living machine that both incorporates our participation and lets the breath of these new processes out into the moment” (Gresham-Lancaster 1998).

If, as St John states, “electronic dance music would be a conduit for experimentation, transgression and liberation, with rave becoming the manifestation of counter-culture continuity” (St. John 2008 156), then this freedom should logically be extended to break down traditional audience/performer divides. Onyx Ashanti describes the sensation of using a wireless controller whilst being amongst the audience: “I "thought" I would do what I usually do, which was to stand in front of the DJ booth and "perform". Not the case, at all! Before I realized it, I had eased into the crowd and was dancing with a couple of very attractive women, BUT WAIT...I was still creating and playing as well!” (Ashanti 2011) We can see clearly from this quote the excitement of the performer in this situation: he can interact with the audience whilst creating, and feed the energy of the audience back into his system. Within this context it is clear that the gestures one must ascribe to the controller are those of dancing; the performer must be able to dance with the audience and use their gestures both to manipulate the music and to interact with others. I believe it is this situation, facilitated by the movement of the performer away from the computer, that will truly revolutionize the performance of dance music.

Wednesday 11 May 2011

Adaptive Physical Controllers - Part 2


If we look at the way traditional instruments are played, we can see that there is a great deal of body involvement, and it is often easy to see the haptic and sonic link between the gesture of the performer and the sound that is produced; for example, as we see a guitarist bend a string we hear the corresponding rise in pitch from the amplifier. This produces a clear semiotic link understandable to the audience and performer: a specific defined action produces a consistent result.[1] This is less true when we look at computer controllers, which largely rely on the language of synthesizers and studios, such as patch cables, knobs and faders. Axel Mulder states:

“The analog synthesizer, which is usually controlled with about a dozen knobs and even more patch cables, is an example of a musical instrument with relatively high control intimacy in terms of information processing, but virtually no control intimacy in terms of semiotics, as access to the structuring elements of the sounds is hidden behind a frustratingly indirect process of wiring, rewiring, tuning sound parameters by adjusting knobs, etc.”(Mulder 1996 4)

This lack of a clear semiotic language for the uninitiated (i.e. those without direct experience of using a synthesizer or being in a studio) means that much of the data that informs the audience of changes being made is lost. Indeed, even those who understand patching an analogue synthesizer would not be able to tell, from the position of an audience member, which patch cable corresponds to which function. Fernando Iazzetta states, “gesture is an expressive movement which becomes actual through temporal and spatial changes. Actions such as turning knobs or pushing levers, are current in today's technology, but they cannot be considered as gestures” (Iazzetta 2000). This interaction and language of expression becomes even less clear when the performer is simply behind a laptop moving a mouse or pushing buttons on a controller. Therefore we need to move towards a system that is responsive to the user's demands and which has a clear semiotic language, whilst taking into account playability and ease of use. One instrument that attempts to reinforce this semiotic link is the Theremin; however, the degree of physical discipline required to become a virtuoso on this instrument is beyond the capabilities of most players:

“You’re trying to stay very, very, very still, because little movements with other parts of your body will affect the pitch, or sometimes if you're holding a low note, and breathing, you know, will make it ... (Tone rising out of key)…. I think of it almost like a yoga instrument, because it makes you so aware of every little crazy thing your body is doing, or just aware of what you don't want it to be doing while you're playing” (Kurstin 2002)

Axel Mulder's bodysuit had a similar problem:

“The low level of control intimacy resulted from the fact that the movements were represented as physical joint angles that were placed in a linear relation to the psycho-acoustical parameters representing the sounds. However, a performer rarely conceives of gestures in terms of single joint motions only: multiple joints are almost always involved in performance gestures. Therefore, considerable learning appeared to be necessary to gain control over the instrument and eliminate many unwanted sound effects.” (Mulder 1996 4)

In her thesis “A Gestural Media Framework”, Jessop states that “I have found that strong, semantically-meaningful mappings between gesture and sound or visuals help create compelling performance interactions, especially when there is no tangible instrument for a performer to manipulate” (Jessop 2010 15). Jessop points out that with these systems the performer must also be a programmer to gain the most reward; whilst this is true, it is possible to create a coherent GUI (Graphic User Interface) that obscures much of the programming from the user whilst allowing them to effectively calibrate and work with the system. Any controller that uses gestural input needs some kind of calibration stage to produce accurate results; this cannot be avoided when so much relies on, for instance, the amount a person can bend their finger or move their wrist (a minimal sketch of such a calibration stage follows below). This also creates the opportunity to build a system so customizable that it becomes a useful tool for those of impaired mobility: if the sensors are accurate enough that a large change can be made, in high resolution, over a small area of movement, you move towards a system that, with minimal training, anyone can use and benefit from. It is possible to create a control system whereby the gestures used can be changed over time, or varied to suit the specific performer and their needs. Axel Mulder proposes that the existing problems with instruments and controllers are inflexibility, “Due to age and/or bodily traumas, the physical and/or motor control ability of a performer may change. Alternatively, his or her gestural "vocabulary" may change due to personal interests, social influences and cultural trends…”, and standardization, “Most musical instruments are built for persons with demographically normal limb proportions and functionality” (Mulder 1996 2). Mulder's work is of particular interest as he focuses on using the hands and associated gestures to create a new type of musical interaction. The SensOrg project also looks at the idea of creating an adaptable gestural control system based on the movement of the hands: “the SensOrg hardware is so freely configurable that it is almost totally adaptable to circumstances. It can accommodate individuals with special needs, including physical impairments” (Ungvary and Vertegaal 176). The creators of this project state, “we consider sensory motor tools essential in the process of musical expression”.
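As a concrete illustration of what such a calibration stage might involve, here is a minimal C++ sketch; the struct name, the 10-bit sensor range and the 0-127 output range are my own assumptions rather than details from any of the systems discussed:

```cpp
// Minimal per-sensor calibration sketch (illustrative names only).
// During calibration the user moves through their comfortable range;
// afterwards every raw reading is rescaled so that even a small
// physical range maps onto the full 0-127 control range.
struct CalibratedSensor {
    int rawMin = 1023, rawMax = 0;   // assumes a 10-bit ADC reading

    void calibrate(int raw) {        // call repeatedly while the user flexes
        if (raw < rawMin) rawMin = raw;
        if (raw > rawMax) rawMax = raw;
    }
    int scaled(int raw) const {      // map a raw reading to 0..127
        if (rawMax <= rawMin) return 0;
        if (raw < rawMin) raw = rawMin;
        if (raw > rawMax) raw = rawMax;
        return (raw - rawMin) * 127 / (rawMax - rawMin);
    }
};
```

The point is that the rescaling, not the raw sensor, defines the playable range, so a performer with a small range of finger movement still gets the full resolution of the control.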
Jessop (in reference to dance) states, “We are now in an era in the intersection of technology and performance where the technology no longer needs to be the primary focus of a piece. The performance is not about the use of a particular technology; instead, the performance has its own content that is supported and explored through the use of that technology.” (Jessop 2010) Whilst this may be true in relation to dance, I feel that this stage has not yet been reached in electronic music. There is still a fundamental disconnection between the performer and the music they are playing, and between the audience and the performer. As electronic music is often explicitly about the use of technology and its application to create and manipulate sound, it seems strange that live electronic performers expose their audience to almost none of the technology they use, other than to show them that they have a laptop computer. We have not yet reached a stage where the audience can be assumed to know what the performer is doing with their laptop and controllers. Performers such as Tim Exile are attempting to change this with highly interactive and customizable live shows and controllers that allow room for surprise elements, mistakes and moments of inspiration. Most people use computers to play live in the most limited way, simply playing back tracks with basic alteration; this leaves no room for one of the elements that makes traditional live music so special, the fact that the performance will change every time and that you can change the structure of a song or rearrange it in a different style. It is my goal to move towards a system that allows a deep, user-defined interaction with the software you are using, whilst being unique to each user and adaptive to their performance demands. The application of this idea means that we must attempt to introduce systems into the controller that allow the audience to form a link between the command being performed and the sonic outcome. As each controller can be radically different in design and implementation, it is important that some kind of visual feedback system is introduced, in addition to the performer's gestures, that aids the audience's understanding of what is happening.

As such, systems must be designed that allow a high level of control, give the performer greater room to improvise within their defined parameters, and encourage them to take risks. With so many assignable controls available in computer software, and the ease of using multiple sensor inputs to the computer, it is possible, for example, to use the whole body to control a synthesizer, or indeed the whole arrangement of a piece. In this way the performer can embody many of the separate parts of the music whilst maintaining control of the whole. In many ways this is a utopian concept, whereby the performer has deep control of every aspect of the piece and can easily manipulate it in whatever way they desire, but can also introduce indeterminacy into the piece. Software such as Max/Msp and Puredata allows an almost infinite variety of control combinations to be remapped and recalibrated on the fly (a toy sketch of this kind of remappable routing follows below), and can even be used to provide a constantly mutating backing over which you can, for example, use your controller to play a solo.[2] This software also has the advantage of being open to the end user: Max and Puredata patches can be opened and reprogrammed to suit the user's needs if the original design is not flexible enough, and it is precisely this open-source attitude towards software that will see alternative controller solutions start to appear in many different contexts throughout the musical world. When something has the capability to be anything you desire it to be (with the proviso that you need some skill at programming to realize this), the possibilities for any artist are immediately apparent. These systems should also be designed with ease of use in mind, and the beauty of this approach is that whilst it is possible to provide a unified and coherent GUI on the surface for those that wish to use it, anyone who wishes to delve deeper into the inner workings of the controller is free to do so. It is this that allows one to design continually evolving controller concepts that can change based on the artist's intent or interests at the time.
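To make the idea of on-the-fly remapping concrete, here is a toy C++ sketch of a routing table that can be rewritten mid-performance; the class and function names are illustrative assumptions, not part of Max/Msp or Puredata:

```cpp
// Toy illustration of run-time remappable control: each physical
// sensor index points at a destination parameter, and that routing
// table can be rewritten mid-performance without touching the sensors.
#include <functional>
#include <map>

using ParamHandler = std::function<void(int value)>;

class ControlRouter {
    std::map<int, ParamHandler> routing;   // sensor id -> parameter
public:
    void assign(int sensorId, ParamHandler handler) {
        routing[sensorId] = handler;       // remap on the fly
    }
    void onSensor(int sensorId, int value) {
        auto it = routing.find(sensorId);
        if (it != routing.end()) it->second(value);
    }
};

// e.g. mid-set, reassign glove sensor 2 from filter cutoff to tempo
// (setTempo is a hypothetical host function):
// router.assign(2, [&](int v) { setTempo(90 + v); });
```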

With my project I have chosen to focus on the hand and wrist as the main method of control: “the hand is the most dexterous of human appendages and is therefore, for many performers, the best tool to achieve virtuosity” (Mulder 1996 5). By focusing on the hand I am attempting to provide a method of input that is understood by performer and audience alike and can provide a rich array of data with which to control aspects of the performance. The lack of tactile feedback from the controller and the use of empty-handed gesturing make my system unlike traditional instrument models, where every action is anchored to a physical device, but it retains some similarity in its simplest operation (playing notes), as a physical press of a key is still involved.
Systems such as Onyx Ashanti's 'Beatjazz'[3] involve a controller that provides tactile feedback to the user, with pressure applied to force-sensing resistors triggering notes and functions. This allows a much greater degree of flexibility in performance and remains true to the instruments that Onyx has traditionally played. Onyx is a skilled wind instrument player with a background in the saxophone and, more recently, the Yamaha WX5 wind controller. However, in designing his own controller system, rather than simply recreating a traditional wind controller, Onyx has attempted to create a new controller that takes the best aspects of that instrument and combines them with the expanded possibilities of home-built controllers. This is most simply seen in the layout of the controller, which takes the form of two hand-held units, a mouthpiece, and a helmet with visual feedback via TouchOSC. Whereas the traditional wind controller looks something like a clarinet, the Beatjazz controller has a separate wedge-shaped unit for each hand. Each hand features switches, accelerometers and lights to control his Puredata and Native Instruments Kore based computer setup. As this design uses force-sensing resistors as switches, the performer can assign multiple functions to each button depending on how hard it is pressed, which means that from a minimum number of buttons a huge array of controls can be manipulated.
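Ashanti's own code is not published in the sources cited here, but the pressure-zone idea is easy to sketch in Arduino-style C++; the pin number and thresholds below are guesses for illustration:

```cpp
// Sketch of one force-sensing resistor acting as a multi-function
// button: the analogue range is split into pressure zones, each
// triggering a different action. Pin and thresholds are illustrative.
const int FSR_PIN = A0;

void setup() {
    Serial.begin(9600);
}

void loop() {
    int pressure = analogRead(FSR_PIN);   // 0..1023
    if (pressure > 900)      Serial.println("hard press: function C");
    else if (pressure > 500) Serial.println("medium press: function B");
    else if (pressure > 100) Serial.println("light press: function A");
    delay(20);                            // crude debounce / poll rate
}
```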
When talking to Onyx about his controller system, he stressed the importance of visual feedback for the audience, and said that he had modified his wind controller with brighter LEDs so that the audience could see when a note was played or a breath was being blown. He has carried this idea through to his Beatjazz controller, using super-bright multi-color LEDs that change patterns depending on what is being done with the controller. This serves to reinforce the link between the audience and the performer's actions, and also draws the audience into the performance by creating a unique and performer-centric visual display.
These individual and highly specific performance systems are aimed at encouraging the use of the computer to produce a new kind of instrument: not one rooted in classical tradition, but an instrument that recognizes the power of the computer as a tool of complete agency over the music produced. Jessop states, “For these interactions between a performer's body and digital movement to be compelling, the relationships and connections between movement and media should be expressive and the performer's agency should be clear.” (Jessop 2010 15) This is a key principle of the system that I have designed. Although it is possible for subtle movements to produce great change within a performance, the controller and software should be calibrated so that there is a clear visual link between the movements being made and the sound being output. It is clear when using an instrument such as the Eigenharp that when a key is pressed a sound is output; however, when designing a more esoteric control system, it is up to the designer and user to ascribe meaning to certain gestures. Without the presence of a physical input such as a fretboard or breath controller, we must ensure that the audience understands which action corresponds to which gesture. This is given further importance by the fact that our control system is adaptive: we may use one button or gesture to perform a number of functions depending on its assignment at that time, and therefore we must ensure that these are clearly demarcated through the performance and gestures used. We must create a set of semantically meaningful gestures to support our performance. Using sensors such as accelerometers, gyroscopes or bend sensors, these gestures can be as simple or as complicated as the performer desires, from turning the hand from one direction to the other to control, for example, the cutoff of a filter (sketched below), to a complex gesture involving the placement of both hands. The user of the system should be free to define these interactions from within their code, and to choose gestures that feel natural to their playing technique without producing 'false positive' triggers during normal use. It is also important to consider the setting in which the performance is to take place when defining gestures within a control system, as the gestures associated with a conductor in a classical music setting are very different from those of a dance music event. The system I have made will be used mainly to create dance music within a club setting, and therefore appropriate gestures for this context must be considered, as well as the role of the performer within it and the breaking down of the performer/audience divide.
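As a worked instance of the hand-turning example, here is a hedged Arduino-style sketch mapping one accelerometer axis to a MIDI control change for filter cutoff; the pin, rest value, dead zone and CC number are all assumptions, and the dead zone is what guards against the 'false positive' triggers mentioned above:

```cpp
// Sketch: roll of the hand (one accelerometer axis) -> filter cutoff,
// sent as MIDI CC 74 over serial. A small dead zone around the rest
// position avoids accidental triggering during normal hand movement.
const int ACCEL_X_PIN = A1;      // illustrative wiring
const int REST = 512;            // resting-axis reading (calibrate this)
const int DEAD_ZONE = 40;

void sendCC(byte cc, byte value) {
    Serial.write(0xB0);          // control change, MIDI channel 1
    Serial.write(cc);
    Serial.write(value);
}

void setup() {
    Serial.begin(31250);         // standard MIDI baud rate
}

void loop() {
    int x = analogRead(ACCEL_X_PIN);
    if (abs(x - REST) > DEAD_ZONE) {
        int cutoff = constrain(map(x, REST - 300, REST + 300, 0, 127), 0, 127);
        sendCC(74, cutoff);      // CC 74 is conventionally filter cutoff
    }
    delay(10);
}
```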


[1] i.e. bending a string always produces a rise in pitch, blowing harder into a wind instrument produces an overblown note, and so on
[2] See the later section on Algorithmic variation
[3] See Appendix A 3.1-3.3 for images

Tuesday 10 May 2011

Adaptive Physical Controllers

Well, I haven't updated this for a while, but it's because I have been busy writing my dissertation for my MA in Sonic Arts. There are some videos of my project coming soon when I edit things together, but for now I will serialize my dissertation for you all to read if you're interested in these topics. It covers both the theory behind alternative controllers and what I have done with my own work. I'll be posting a couple of sections of it every day; although it's very text heavy, I hope it will be interesting to some of you who are also interested in these types of projects, and will explain a bit about it all. When I've released the whole thing on here I will also put up a download link to a printable PDF version which will include all the images, bibliography etc.

So without further ado, here is the first section, which covers the introduction and also some discussion of existing digital input devices for music.


Adaptive Physical Controllers

Introduction

I will be looking in particular at the linking of Max/Msp and Ableton Live, as it allows us to create complicated controller interfaces and devices whilst giving us access to the API of a powerful live music system. Harnessing the flexibility of Max/Msp with the more traditional, time-domain-orientated approach of Ableton Live allows the performer to create an adaptive system whereby a controller can be used to perform multiple tasks and control any parameter: for example the pitch, timbre and velocity of a synth, the tempo of a song, a parameter within an effect, which sounds are playing, and so on. It is this adaptive nature of home-built, personalized controllers that allows us to explore new ways of interacting with computers and music. Projects such as David Jeffrey Merrill's Flexigesture (Merrill 2004), “a personalizable and expressive physical input device”, and Onyx Ashanti's “Beatjazz” project (Ashanti 2010) move towards this goal, attempting to combine the best aspects of traditional hardware controllers with the possibilities that audio programming languages and custom controllers present to us.
In my project I have attempted to make a pair of gloves that can be used to create and manipulate music within Ableton Live. The aim is to be able to play live, improvised electronic dance music without interacting with the computer directly. In many aspects of live electronic music the excitement of performance has been lost: there is often little interaction between the performer and the computer, and even if the music is composed and created in real time there is little for the audience to visually identify with. Unlike a performance with traditional instruments, it is almost impossible for the audience member to visualize what the performer is doing. When using a computer, the 'wall of the screen' separates the performer from the audience and obscures their actions: “Conventional sonic display technologies create a “plane-of-separation” between the source/method of sound production and the intended consumer. This creates a musical/social context that is inherently and intentionally presentation (rather than process) orientated” (Bahn, Hahn et al. 2001 1). It is one of the central paradoxes for the electronic performer: although they have a box that is capable of creating almost any sound imaginable, the central mechanisms for creating these sounds are obscured from all but the user themselves. I wish to find a way to overcome this problem by creating a system that allows access to the many features computer music software offers, whilst removing the user from a fixed position in front of the computer screen and creating direct visual feedback for the audience.




There are clear benefits to modeling controllers on traditional instruments: by doing so you provide a safe reference point for the user and, in theory, reduce the learning curve required to play it (providing the user has previous instrument training). By working in a familiar framework you play to the existing strengths of the performer; however, there are limitations to traditional instruments that I believe make them unsuitable for use as a modern-day controller. Traditional use of a computer requires many keys and key combinations to perform specific functions. This is relatively easy using a keyboard and mouse, as every key is individually marked and key combinations are easily pressed; however, if you translate this idea of a grid of keys to the fretboard of a guitar you begin to see the problems that may occur. It is very difficult to translate a vast number of controls to a small number of keys, and direct MIDI instrument mapping often runs into the problem that software will only allow you to control one parameter at once, while many MIDI sequencers and live performance programs do not allow you to easily switch between channels and instruments.
There is obviously a great difference between an instrument that attempts simply to recreate the analogue in a digital domain and a controller that seeks to redefine performer and computer interaction. For example, the Yamaha WX5[1] seeks to recreate the experience of playing a woodwind instrument, but with extra keys for computer control, shifting octaves and so on; if you are seeking simply to replace a traditional instrument with a digital one, instruments like this are an effective choice. However, as we are looking to create a new type of computer control interface, the mechanics and implementation of these instruments are less relevant to us than something such as the Eigenharp[2], which bills itself as 'The most expressive electronic instrument ever made' and attempts to go further than simply recreating existing instrument designs. Indeed, it looks to incorporate aspects of many existing instruments and allows the user to play VST plugins, creating a hybrid design that straddles both traditional and digital instrument design. Undoubtedly the quality of the keys, their velocity sensing, and the ability to move them in both horizontal and vertical directions go a long way towards allowing the player to perform all the traditional expressions associated with musical instruments[3] in a way that has not been available in the past, and design features such as the inclusion of a breath controller and excellent instrument models allow the player to easily replicate, for example, wind instrument sounds. However, it is the more strictly digital interactions with this controller that may leave the end user wanting.
The Eigenharp, in a desire to remain as traditional as possible, takes the approach of using a complex set of key presses and lights to navigate through un-named menus on the instrument. Whilst usable, this requires the user to become familiar with a menu tree that has little visual guide; without the names of the menus appearing, and with only colored lights to mark where you are or which option is active, it is all too easy to choose the wrong option. In addition, built-in requirements such as having to reselect the instrument you are playing in order to exit the menu tree add unnecessary complexity. In this case, the desire to pretend that the computer the instrument is plugged into does not exist feels like a denial of the capability of the device, and negates much of the goal of presenting an instrument that can be quickly mastered by the user. Although a limited computer interface is provided with the Eigenharp, it is mainly for choosing sounds, scale modes and VSTs, and as such is more of a pre-performance configuration tool than something that can be used 'on the fly'.
I believe that the main fault with the Eigenharp model is that it binds the user to a specific predefined interface. The benefit of creating an alternative controller is that you can create an interface that combines well with your intuitive workflow and techniques. When using a powerful 'virtual' instrument that is linked to the computer, you have the opportunity to allow the user to reprogram settings to work in a way that suits their needs. This is one of the central tenets of adaptive controller design: the end user can specify how to work with the tool that is used for interaction. For playability the Eigenharp undoubtedly succeeds in creating an instrument that can replicate the experience and sound of playing a 'real' instrument, with all associated traits, but it is the user interface that stops it from being truly revolutionary and that does not allow the user to access the full capabilities of the instrument in a way that is complementary to their workflow.

“1st law of alternative controllers; adapt to the new reality. 2nd law of alternative controllers; adapt reality.” - Onyx Ashanti

“We feel that the physical gesture and sonic feedback are key to maintaining and extending social and instrumental traditions within a technological context” - (Bahn, Hahn et al. 2001 1)

I believe that a radical approach to instrument control systems is required to get the most from modern computers and audio software. Audio programming languages such as Max/Msp or Puredata and hardware interfaces such as the Arduino make it easy for a musician to design their own instrument and define their interaction with the computer in the way that is most appropriate to their performance. It is possible to create a “dynamic interactive system” (Cornock and Edmonds 1973) where the performer and computer feed back to each other to create ever-changing interactive situations. It has become simple to create a system whereby the action of different sensors is easily assignable and changeable from within the software, and it is this flexibility and almost unlimited expandability that makes these tools suitable for creating a truly futuristic control system.
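A minimal sketch of the hardware side of such a system, assuming an Arduino with a few analogue sensors: the microcontroller just streams raw index/value pairs over serial, and all assignment and scaling happens in the patch on the computer, which is exactly what makes the mapping changeable from within the software. Pin choices and the message format are my own assumptions:

```cpp
// Minimal sensor-to-serial sketch: the Arduino streams raw values;
// assignment and scaling happen in the patch on the computer, which
// is what makes the mapping reconfigurable without reflashing.
const int SENSOR_PINS[] = {A0, A1, A2, A3};   // e.g. four bend sensors
const int NUM_SENSORS = 4;

void setup() {
    Serial.begin(115200);
}

void loop() {
    for (int i = 0; i < NUM_SENSORS; ++i) {
        Serial.print(i);                      // sensor index
        Serial.print(' ');
        Serial.println(analogRead(SENSOR_PINS[i]));
    }
    delay(10);                                // roughly 100 frames per second
}
```

On the computer side, a serial-reading object in Max/Msp or Puredata can unpack these index/value pairs and route each index wherever the current mapping dictates.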


[1] see Appendix A 1.1 for picture
[2] see Appendix A 2.1 for picture
[3] Vibrato, pitch bend, slides, etc.