I stumbled into some interesting thoughts after my gig last night. I was hired to play bass at a Memorial Day event in Apalachicola. The gig went as well as one could expect. We had a relatively new lineup on stage, with Steve Cosper on guitar and Luke Pinagar on keyboard and flugelhorn. Dr. Hulon Creighton (sax), Joey Kirkland (drums) and Jeff McBride (vocals) rounded out the group.
For the 2nd time (ever!) I had taken Ableton Live on my MacBook Pro, along with a small mixer and an Akai Mini keyboard, on stage to add some synth bass lines to the group. I also had my iPad 2 iRigged up to use as a tuner. I actually got a fringe benefit from the iPad setup, since the iRig allowed me to use various iPad musical instruments on stage.
That is where my problems (and the solutions I am pondering) began.
We did the very simple but great tune – Let’s Get It On by Marvin Gaye. The bass line can be quite simple, yet very effective. So, on the fly, I tapped the tempo into Ableton Live and recorded the bass line for the verse. I know that Live allows for tempo nudging, but no matter how many times I recorded a MIDI bass line and tried to sync with the band, I was always out of sync.
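To make the tap-tempo problem concrete, here is a minimal sketch of the idea – estimating BPM by averaging the last few intervals between taps. The class and its names are my own invention for illustration; this is not how Live implements it. A single fixed estimate like this is exactly what drifts away from a human drummer whose tempo breathes:

```python
import time

class TapTempo:
    """Estimate BPM from tap timestamps by averaging recent intervals."""

    def __init__(self, max_taps=4):
        self.taps = []
        self.max_taps = max_taps

    def tap(self, t=None):
        """Record a tap; t defaults to the current time in seconds."""
        self.taps.append(time.monotonic() if t is None else t)
        self.taps = self.taps[-self.max_taps:]

    def bpm(self):
        """Average the intervals between the stored taps."""
        if len(self.taps) < 2:
            return None
        intervals = [b - a for a, b in zip(self.taps, self.taps[1:])]
        return 60.0 / (sum(intervals) / len(intervals))

# Taps exactly 0.5 s apart imply 120 BPM; a drummer who drifts
# even a few ms per beat will pull away from this fixed estimate.
tt = TapTempo()
for t in (0.0, 0.5, 1.0, 1.5):
    tt.tap(t)
print(tt.bpm())  # → 120.0
```

The tempo is locked in the moment you stop tapping, which is why a loop recorded against it slowly walks away from the band.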
That was a real education in live Live use.
I am sure I have a lot to learn about using Live live, but here is what I have come up with so far. It all comes down to what is being used to synchronize the group – in other words, where does the clock information originate?
First, if a group of musicians surrenders timekeeping to a machine (and a MIDI clock), then manipulating sounds during the performance is fairly straightforward. But if a live band is relying on a human drummer for its timekeeping, then it is very difficult to build up parts that work with the group – unless a part isn’t rhythmic in nature, say a pad that provides a background to an entire section of a song. If a part created on the fly is something the live band syncs to, then that is just a case of ceding timing authority to the computer.
Once the timekeeper in the group cedes metric authority to a machine, live sequencing is greatly simplified.
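For reference, the MIDI clock that the band would be following is just a stream of ticks sent at 24 pulses per quarter note (PPQN); followers derive the tempo from the spacing of the ticks. A small sketch of the arithmetic (illustrative Python of my own, not any particular device's code):

```python
PPQN = 24  # MIDI clock resolution: 24 pulses per quarter note

def tick_interval(bpm):
    """Seconds between MIDI clock ticks at a given tempo."""
    beat = 60.0 / bpm   # seconds per quarter note
    return beat / PPQN  # seconds per clock tick

# At 120 BPM a beat lasts 0.5 s, so ticks arrive every ~20.8 ms.
print(round(tick_interval(120) * 1000, 3))  # → 20.833
```

Whoever emits those ticks is the clock; everyone else, human or machine, is slaved to it.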
So, while I’m not giving up on on-the-fly recording with a live band (and would love to talk to those who have met this problem and conquered it), it gave rise to a few more thoughts. First, why do musicians NOT want to give over timing to a computer-based system? I can think of a few reasons. First, time from a machine is perfect. That is very difficult to play with, and it takes not only practice with a click but also very effective monitoring of the timing reference track (generally a click sent to the drummer’s headphones). I used this model of live musicians with a click in the late 1980s, when our band at the time used an MC-500 to play bass, keys, horns, percussion and a lighting rig. With enough rehearsal the group was able to play very tightly, with tempo changes and dynamic changes (the Isley Brothers’ “Shout” comes to mind – a little bit softer now, a little bit softer now, a little bit louder now, a little bit louder now). It was very effective, BUT it leads to the 2nd reason bands don’t like playing with a machine: spontaneity is gone. We had to program our dynamic and tempo changes and then rehearse like crazy to get tight with the machine.
How can machine produced elements be incorporated into live music and retain spontaneity?
It seems to me to be a war of the clock. If, for instance, the human drummer is the “clock” of the group, then, except in certain cases, note-based phrase layering will be very difficult or impossible to do. (A thought occurs to me that perhaps another way to use sounds from the computer with a live group is to use one-shot clips. The sound would happen quickly – say, for instance, the “Hey, hey, hey” from My Girl – and any timing problems would go by quickly enough not to be perceived.)
If the clock (time) comes from a machine, then predictability is achieved and on-the-fly composition is better facilitated. Perhaps techno music is a good example of this – time almost always comes from a MIDI clock, and the performer is free to improvise across rock-steady time.
If you have read this far… all I can say is; “wow”. Thanks.
So, one more thought about clocks. In my work with the Panama City POPS Orchestra, our conductor, Eddie Rackley, is the clock. Actually he is much more than that, but for the sake of this post, Eddie is a clock. (Please forgive the one-dimensional simplification of you, Eddie!) The conductor shows us where time is (acts as a clock) and musicians slave their internal clocks to his. Once we do this successfully we can speed up, slow down, receive dynamic information and more from his hands. I notice that one way we keep in good time with the conductor is the space between the notes. For instance, when he conducts a downbeat there is invariably an upbeat first. We SEE the upbeat and the downward motion after it, culminating as “1” at the bottom of the baton’s travel. We can anticipate (with varying degrees of success) where one will be – BEFORE IT ARRIVES. This is NOT true with computer-based systems. With a computer music sequencer, I know a beat arrives when it arrives, and unless I am lucky or have accurately guessed where the beat is based on the last beat, I am always a hair late. This effect is even more pronounced during accelerando and ritardando.
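That anticipation can be framed as prediction: extrapolating the next beat from the spacing of the beats heard so far. Here is a hypothetical Python sketch of my own (not any real sequencer’s logic). Note that averaging past intervals is always a step behind during a tempo change, which is one way to see why the “hair late” effect gets worse during an accelerando:

```python
def predict_next_beat(beat_times):
    """Extrapolate the next beat from the spacing of recent beats.

    This mirrors what a player does with a conductor's upbeat:
    use the motion so far to anticipate where "1" will land.
    """
    if len(beat_times) < 2:
        return None
    intervals = [b - a for a, b in zip(beat_times, beat_times[1:])]
    avg = sum(intervals) / len(intervals)  # lags behind tempo changes
    return beat_times[-1] + avg

# Beats at 0.0, 0.5 and 1.0 s suggest the next will land near 1.5 s.
print(predict_next_beat([0.0, 0.5, 1.0]))  # → 1.5
```

The conductor solves this by broadcasting the upbeat in advance; a sequencer gives you nothing until the beat has already happened.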
So the thought occurs to me: can two systems be created? First, the ability to conduct humans and machines at the same time – basically, the conductor’s baton would be tracked in both space and time, and that motion would be recorded (and editable!). Second, if a piece is conducted and the timing information is recorded into the computer, can a visual interface be developed that returns the visual cues imparted by the conductor to the performers?
So I went to the internet thinking I was on to something. Turns out some very bright people have been thinking about this for quite a while. I’ll add some cool links I found about this to this post. For now, as I sit here on my iPad typing this, I can visualize an iPad interface for recording and then playing back visual conductor information to performers. Or perhaps something like a Wii controller for gesture capture. Can it be done? Probably. Expensive? Perhaps. Would it be useful? I don’t know.
And that is as far as I have gotten today. Off to set the rig back up in the studio and keep learning.
Here are a few links to others that have thought about this: