Taki, the violin and Ableton Live

A new song called “Taki” is up on SoundCloud.

Here is a bit about the tune:

This song was written for and played by my good friend and great musician Takuya Horiuchi, aka “Taki”. I had the good fortune of meeting and playing with him in April on a pit orchestra gig for the play Annie at GCSC. He had just left his concertmaster post with the Cedar Rapids Symphony and was exploring musical opportunities. His playing was nothing less than inspiring to ALL of us in the pit. After the gig I found myself humming this melody. Taki was kind enough to drop by the studio and record it. Over the last few months I have been adding “a band” to that melody.

Special thanks to:
David “Fingers” Haines – drums (please find this gentleman on YouTube – it will blow your mind!)

Amanda Matthews – Piano – who deserves special thanks for cutting this track on her last erg of energy for that day

Steve Cosper – Guitar – I finally dragged that rascal out of his house – what a great player!

I am playing bass (six-string electric in this case).

And of course Taki for his inspiring violin work. You make me want to practice man!

Working in Ableton Live, I have tried to make this track feel like a bunch of musicians hangin’ out playing some music. Perhaps someday all the players can get together and do a few gigs. In the meantime, I hope you enjoy it!

I’ll post a downloadable lead sheet for this soon.

Balance and Form

Today’s thought – musical composition essentially comes down to a personal sense of Balance (to create equilibrium) between the elements used in the piece and their position, duration and recurrence in time – which I refer to as Form. Balance and Form. Why this thought? Why today? I’m not sure – but it seems that with Ableton Live and all the sampling I have been doing from my environment (for instance, cicada sounds recorded at night in the country and processed in iZotope’s Iris software), I spend most of my time tweaking how often, how long and how loud the different events occur in time.

Exciting music (to me) is when these elements of Balance and Form are presented in non-intuitive or non-typical ways. The music of Swayzak (for instance “No Sad Goodbyes”), the music of John Adams (Shaker Loops – if you haven’t heard this, get it!) and the music of Terry Riley are great examples of modern approaches to Balance and Form. Really, all music has these elements – actively listening for them is very rewarding.

It is easy to get caught up in ever-cooler and more interesting sounds – and that is a worthy and exciting exploration for sure (OK, it’s addicting!). But creating a piece seems more about the balance between the elements and the resulting form of the overall piece. Just as a 12-bar blues has a sense of returning to its beginning and then starting again, less formulaic music relies on a self-generated sense of these two elements. And since everyone has their own sense of what constitutes a balance between simplicity and complexity, loud and soft, fast and slow, etc., it seems that seeking your own unique Balance and Form in a piece creates an infinite set of possibilities for music.
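
To be concrete about the form I just mentioned, here is the standard 12-bar blues grid (one chord per bar, in Roman numerals; this is the common variant with the V–IV turnaround):

    | I  | I  | I  | I  |
    | IV | IV | I  | I  |
    | V  | IV | I  | I  |

Each four-bar line sets up an expectation, and bar 12 hands you back to bar 1 – Balance and Form at their most familiar.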

A thought experiment – how few elements can be used to create a balanced, well-formed piece of music – one, two, three? Is 100 too many?

Even in one part, say a bass line, Balance and Form come into play. Listening and feeling when a part is “right” is to perceive the elements of Balance and Form directly.

And that is the thought of the day.

Quick Ableton Live Tip – Levels that don’t clip on the fly

I haven’t posted for a while… but my New Year’s resolution was that I would post on this site more often. Let’s see, it’s June – not BAD! 😉

I have been using a simple technique to address a common issue I have in Ableton Live, and I thought I’d post a short description of it in case it helps someone in the same situation.

So I’m up late working on a tune. I want to hear just a little more of the rhythm guitar. A few minutes later I might want the accents on the conga to pop just a little bit more – up goes the velocity. +2 dB at 3.5 kHz on the vocal EQ, now the bass drum needs a little more, and on and on it goes. Then you look at your master channel output meters – oh man, the level is in the RED! I’m not one to drop a limiter on the master channel and forget it – nope, I’m too old school for that. I figure getting my levels optimized before a limiter goes on is a good idea (however, on a live gig, a limiter is just the thing).

Before I figured out this tip I would start the mix over at an overall lower level. This worked but took quite a bit of time. The fact is, I often like the relative levels of the tune I’m working on; it’s just that they are too loud overall.

THE TIP!

In either Arrangement or Session View, left-click on a Track Title Bar to select it. (In the default “New Live Set” the Track Title Bars say “Audio” on track one and “MIDI” on track two.) After you have clicked on a Track Title Bar, press and hold Command and tap the “A” key. This will select ALL of your Track Title Bars.

Now, while these are all selected, turn down one of the volume faders. You will see that ALL of them change. Nice, eh?

Say I have a mix I am really liking but I get a peak of +2.27 dB. (You can see this in the Peak Level Meter readout in Session View on the master channel, right above the Master Channel Fader. If you can’t see the Peak Level Meter, roll over the line just above the Master Channel Fader, then click and drag up on it. Clicking on the meter resets the readout, which lets you see current peak levels.) Select a Track Title Bar, press Cmd-A, and adjust the level of a fader on any selected track down 2.5 dB (or 3 dB for just a tad of headroom). You are done. No “in the red” level peaks for the overall mix.
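
The arithmetic behind that move is simple but worth spelling out: the trim is just the observed peak plus whatever headroom you want, negated. Here is a minimal Python sketch of the calculation (my own illustration – nothing built into Live):

    # How far to pull the selected faders down, given the peak shown
    # on Live's master Peak Level Meter.
    def fader_trim_db(observed_peak_db, headroom_db=0.5):
        """Return the dB offset that makes the mix peak at -headroom_db."""
        return -(observed_peak_db + headroom_db)

    def db_to_linear(db):
        """Convert a dB change to a linear gain factor (20*log10 convention)."""
        return 10 ** (db / 20.0)

    trim = fader_trim_db(2.27, headroom_db=0.5)  # the +2.27 dB example above
    print(f"Pull every selected fader down {abs(trim):.2f} dB")
    print(f"(a linear gain factor of {db_to_linear(trim):.3f})")

For the +2.27 dB example that works out to 2.77 dB of trim – which is why I reach for 2.5 or 3 dB above.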

This has saved me a LOT of time and lets me keep working rather than trying to optimize levels when I’m trying to stay in a creative or production-oriented state of mind.

A few notes:

1) If you have groups created for some tracks, this technique only turns down the group master – not the levels within the group itself. That is great, since many times – for instance with a string section – I want to keep the section balance intact but reduce its overall level in the context of the entire mix.

2) This technique can also be more granular – hold Shift and click the various Track Title Bars that you want to make an ad hoc group out of for global editing. Very handy!

Note – if your peak is really high, +6 dB or otherwise out of control, you might have other issues in your mix. There may be one sample or instrument that is just too loud at a particular instant. This tip is really aimed at a mix that doesn’t have other level problems – just one that, in the normal process of tweaking, tends to get a little hot. Oh, you might also need to bring up your playback system’s listening level a bit once you adjust the overall levels, since you turned down the entire mix. But clean mixes that never peak in the red are what I like to have, and this tip helps me keep in the flow while I am working.

BONUS TIP!

This technique of selecting all the tracks and making a change globally works for many, if not all, of the parameters in Live. So if you need to change a send level or a follow action, or expand, collapse or resize tracks in both Session and Arrangement View – pretty much anything you can think of – this can be a great tool for global edits.

Thanks for dropping by. Comments and additional tips welcome!

Tracking live musicians with Ableton Live

Since this is a post about tracking live musicians, first you need to find someone you want to record.

I just happened to meet Nick Savage, alto saxophonist with Bo Diddley, on a gig I did in Destin, Florida. I had taken Ableton Live on stage that night and was switching between my 5-string Ibanez bass and playing bass synth with patches loaded in Live. Nick became intrigued with the sounds I was pushing out, and on the break we talked a bit. Long story short, we agreed to get together again and see what could come of it.

Nick called one day and said he was coming back to the area. I asked if he might like to play on a track I was developing for my Ableton 1 class project at Berklee Music. He said sure, he’d love to play.

My education tracking live musicians with Live started right then.

Nick showed up with an MXL Genesis mic. What a gorgeous sound we got from that mic. The first night was rough. I hadn’t really figured out my headphone amp, nor how to create a submix using Ableton’s return tracks. Nonetheless we got some good takes, but the sound in the headphones was too loud (or too soft) and I just wasn’t getting how to create the submix. We agreed to get together the next night, and I knew I would spend the next day working with Ableton to get a great submix.

I had written Loudon Stearns (my Ableton instructor), basically saying help! I had tried to create the return track and route the audio to its own output on my Focusrite Saffire Pro 24, but it wasn’t working. Loudon was very helpful and gave me the answers I was seeking. But putting them into practice took me a little while to figure out.

I first set up an audio track with a test mic (an SM58) and verified I was receiving input from the mic / mic pre (my new VTB-1) / Saffire Pro 24 chain on the track. I optimized the gain at each point in the chain. I knew I was recording a single track of Nick (not stereo), so I set the Ableton audio track’s AUDIO FROM Input Chooser to EXT IN (the default setting) and the chooser below that to 1 (to record one mono channel).

I then created a return track for this submix (right-click on an open area in either Session or Arrangement View / Insert Return Track). On the master channel, with the I/O controls visible, I set the submix Return track’s Pre/Post toggle to PRE. This made the audio track’s volume fader have no effect on the level sent to the Return track by the SEND level control.

The Saffire Pro 24 has a Mix Control program that provides a GUI for setting signal routing to the various outputs. DAW 1 and 2 carried the main studio mix. I set DAW 3 and 4 to carry the headphone mix from the return track. I had patched two cables from the Saffire Pro outputs 3 & 4 into inputs 1 & 2 of the Mini Headphone Amp. I centered the balance and turned the volume on the Mini Headphone Amp up about halfway. If I got signal to that unit I would see its LEDs light up.

I talked into my test mic and brought up the Send level control for my submix Return track. And nothing worked. Zip. Nada. Not a peep or an LED lighting up.

After a bit of just thinking about what I would do if I were an audio signal trying to reach a headphone amp, I started trying things. (This is the HAA! method, meticulously documented here: http://www.goldfliesmusic.com/2011/08/haa/)

The final step (which seemed really obvious after I did it) was to change the setting in the AUDIO TO chooser of the Return track from MASTER to EXT. OUT. In the second chooser I then selected routing channels 3/4, and there was my signal being routed to my Mini Headphone Amp. Bingo!
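
For anyone wiring up something similar, here is the complete chain that finally worked, start to finish (the hardware names are from my rig – substitute your own interface and routing labels):

    Mic -> VTB-1 mic pre -> Saffire Pro 24 input 1
      -> Live audio track (AUDIO FROM: EXT IN, channel 1)
      -> Send (PRE) -> submix Return track
      -> Return track (AUDIO TO: EXT. OUT, channels 3/4)
      -> Saffire outputs 3 & 4 (DAW 3/4 in Mix Control)
      -> Mini Headphone Amp inputs 1 & 2 -> performer's headphones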

A good mix changes everything.

When Nick came by that second evening I was able to dial whatever we wanted into his headphone mix. So we started cutting tracks. What was interesting is that we would make a pass and stop. I would then:

  • Command-dragged the audio track below the one we had just cut, creating a new track with all my settings intact.
  • Clicked on the audio data that came with the copied track and deleted it.
  • Turned OFF the Track Activator toggle on the first track.
  • Clicked on the timeline where I wanted the next take to start.
  • Verified that the new track was record-enabled.
  • Clicked on the global record button and then hit play.

It sounds like a lot to do, but after a couple of passes this can be done very quickly. Keeping the flow of the session moving forward was critical. While I’m running Live on a MacBook Pro, Nick is trying to play cool, creative licks. These are two very different mindsets, and as the engineer on this session I wanted to get Nick into the flow of creativity and keep him there.

BTW, that is the goal of all of this – to get your performers into a mental state where the recording process is transparent and not a factor. You want their focus on music, expression, timing, pitch – the joy of a great, responsive sound.

It is in this last bit – creating a responsive sound – that I found an interesting technique I could use to inspire, and in fact influence, the performance. Once Nick and I finally got into a groove – me creating tracks and recording, Nick playing – I started thinking it would be cool to put some effects on the track. At first I tried just turning up the Send controls on a return track that had a reverb on it. I now know this would have worked had I assigned the Reverb Return track to the same EXT. OUT routing channels 3/4. When I didn’t get any reverb, I just grabbed a Reverb from the Audio Effects browser in Live in real time and dropped it on the submix Return track. I adjusted the dry/wet level and reverb length on the fly.

All of a sudden Nick came alive. Hearing his horn in an ambient space made him FEEL like his horn was creating a real, 3D performance. He was controlling his space with his horn. Nick was able to play with the sustain of the note he had just played, and his playing just opened up. That was cool.

Never one to leave things alone, I then dropped a delay on the track. Nick started playing staccato, pointed lines that worked with the timing of the delay. As I changed the reverb and delay settings, Nick would adjust his playing. This got to be really fun. So fun I added an Auto Filter and started working with his timbre in real time. He was digging it and dug even deeper for great horn lines.

When we came to the end of this process there was an unexpected bonus. All the changes I had made to the Live FX patches had been recorded as automation data. It was all there to use as we pleased.

I learned a lot that night. How to set up the submix. How important it is for the recording system to get out of the way of the performer. And how psychoacoustic effects really can change the mood and output of a performer. While I’m the first to admit I’m Ableton-centric, I couldn’t have been more pleased with the workflow of the software and its total real-time flexibility when tracking a live musician.

Later, for this tune, I tracked bassist Steve Gilmore, vocalist Tony Delamont and pianist Amanda Matthews. In each case the above techniques made the performer comfortable and able to give their best performance.

Here is the tune I’ve been talking about.

I’d love to hear about your tracking experiences. Feel free to comment or add your own tips and stories to this article.

HAA! & PHISE: Ableton Live and original musical parts

This is an article taken from my class work for my Berklee Music online class on Ableton Live (instructor Loudon Stearns). The subject – what are the benefits of original musical parts?

Benefits of creating original musical parts

By parts I mean not only parts determined by instrument and note order (such as a trumpet, timpani or guitar part) but also parts that are hybrid or purely synthesized/sampled sounds (not related to note order).

I have found a few approaches to building sounds and parts in Live.

The first I will title the “Happy Accident Approach” (HAA – sometimes written HAA!). This is the try-it-and-see-what-happens method. It is a very powerful approach given the extreme sound-shaping control available in Live. Load a patch, lob on some effects, twist some dials and, as often as not, be amazed by the cool sounds coming out of your monitors. This method, though time consuming, shouldn’t be underrated, as the results can be very useful. I have a few tools that help me with this:

1) Touchable for the iPad: Touchable has a very cool mode when working with devices, called Snap mode. When Snap is enabled, the parameter values in place at the time you enable it are “sticky”. Then, when you change a parameter (say filter type, resonance, etc.) you hear the change in sound while you move the device parameter, but once you take your finger off the iPad the parameter snaps back to its original value. This is a neat tool for experimentation. Although it can take some hand contortions to modify multiple parameters at once, you can get some very interesting sounds and then “snap” back to your original sound.

Also, the physically modeled (gravity/bounce) X/Y pad… major HAA! potential there.

2) Kapture Pad for the iPad: This is essentially a librarian for the settings currently in your Live Set. You can work with sounds, device and effect settings and more, and then “Kapture” a snapshot of the current state of Live. These snapshots can be grouped into BANKS, labeled and recalled non-sequentially. This is a simple and effective tool for allowing sound exploration without losing your previous settings. A cool thing about Kapture Pad is that the bank/setting info is stored on the host computer, not on the iPad, allowing any iPad with the Kapture Pad software to access the saved information. The information can also be archived (backed up) from the host for safe, off-site storage.

3) Konkreet Performer: Somewhat less intuitive but visually alluring is Konkreet Performer. This is an actual controller for Live with a librarian attached. The interface is a set of nodes controlled by touch. Individual or multiple parameters of Live’s devices are mapped to the nodes and can be controlled via touch. You sort of have to see this to get the idea, but the interface is visually breathtaking. Projecting this interface during a performance would make a very appealing visual for an audience.

All of these tools allow for creating and archiving “Happy Accidents”.
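
If you want the flavor of this randomize-and-archive workflow in code form, here is a toy Python sketch (purely illustrative – this is not the Live API or any of the iPad apps above, and the parameter names are made up):

    import random

    # Made-up parameter ranges standing in for a synth patch.
    PARAM_RANGES = {
        "filter_cutoff_hz": (200.0, 12000.0),
        "resonance": (0.0, 1.0),
        "delay_feedback": (0.0, 0.9),
    }

    bank = []  # archived "happy accidents", Kapture-style

    def twist_some_dials():
        """The 'try it and see' step: randomize every parameter."""
        return {name: round(random.uniform(lo, hi), 3)
                for name, (lo, hi) in PARAM_RANGES.items()}

    for _ in range(5):
        bank.append(twist_some_dials())  # snapshot each accident

    print("Recalled snapshot #3:", bank[3])  # non-sequential recall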

The second method I use for creating musical parts is what I would call “Pre-Hearing Intuitive Sound Envisioning” (PHISE).

This method is a bit less reliable but can be a HUGE time saver over HAA!. The core of PHISE is to pre-hear, in your head, what it is you want to hear in the real world. Here are a few of the ways this manifests:

1) Song idea inspiration – have you ever been riding in your car and started to sing a melody? Quick, grab the recorder on your phone and capture that melody. Now, when you sit down with Live, input that melody and start building out the tune. You have successfully used PHISE.

The Next Note Theory (NNT)

(briefly stated – at any moment in a piece of music, the most relevant information is the next note)

2) Using NNT, PHISE is expressed as follows: you have built a song/sound structure and you envision another part, as yet UNPLAYED. It can be a vague direction or a clear path to the next part, chord change, sound, effect, rhythm, whatever. That is PHISE at its best.

To use this method you have to allow yourself to be very cognizant of the bursts of creativity that your mind provides in terms of a sound choice, a chord change, an effect – any as-yet-unheard (imagined) part of the project you are working on (or have yet to start).

Warning – it is VERY easy to ignore these PHISE moments. I recommend you don’t. They are rarely convenient, can invalidate the work that came before them and can lead to too many directions from a very simple core. My recommendation is to document them all, and when you aren’t having PHISE moments, use the more rational mind and the HAA! approach to flesh out the inspirations.

Of course this process doesn’t have to be linear. You may be working on a project and pre-hear something on an unrelated project. STOP. Record or otherwise document that idea.

Finally, the two methods can work hand in hand. Happy Accidents lead to PHISE that then lead to Happy Accidents which then lead to… you get the idea.

So after doing all of that, what are the benefits?

1) No copyright infringement lawsuits
2) Other musicians dig what you are doing
3) Really great musicians and producers want to interact with original artists
4) You start to appreciate originality in other art forms (dance, architecture, painting)
5) In some cases, wealth, fame and a 401(k) can come with successful marketing of original musical parts.
6) It feels good to create original music parts.

Ableton’s Soniculture Partner Instrument – the Novachord

I found something interesting. I thought I’d share.

The debut of the Novachord in 1940

First, some quick background. Ableton and various partners offer add-on instruments for Ableton Live. These can be auditioned on the Ableton site. Recently they ran a special, and I picked up a few instruments.

One instrument that I had dismissed when I was auditioning the various sounds was the “Novachord”, an instrument created by the Hammond Organ Company in 1938. Fortunately, a partner instrument sampler issued by Ableton included a few sounds from the Novachord. The sounds were so compelling I created a new-age composition on the spot just to hear these great sounds in a musical context.

This morning I was reading the Wikipedia article about the Novachord (http://en.wikipedia.org/wiki/Novachord) and read that composer/arranger Ferde Grofé wrote music for the Novachord. If you aren’t familiar with Grofé, he is considered the “Father of Arrangers”. This article has interesting information on Mr. Grofé:

http://courses.wcupa.edu/frichmon/mue332/spring2002/dougballard/composer.html

Grofé’s Grand Canyon Suite is a collection of pieces that sonically depict the Grand Canyon. He was also instrumental in working with George Gershwin to create “symphonic jazz”, specifically orchestrating Rhapsody in Blue.

So why is this Ableton-related? Grofé wrote and conducted a piece featured at the 1939 World’s Fair that was performed on four Novachord synthesizers. And this morning I found out there is video of this!!

The opening bass sound (a combination of the drone of an old propeller airliner and a horror-movie bass pedal) is, even today, deeply moving and dramatic. The video then plays some pieces from the 1930s/40s era, but if you go to 2:30 in the video there is a gorgeous, warm, analog string sound. To my ear the sounds used for these interludes could still be relevant to current music productions.

Anyway, I think the Novachord sound set by Ableton/Soniculture shows just how much depth there is to electronic music and, as I am discovering, its association with some of the best musical minds of the 20th century.

The great contribution of Ableton is not only to break new ground with its techniques and innovative workflow but also to move the long and great tradition of electronic music forward while preserving the historic roots of the art form.

I hope you enjoyed that video as much as I did.

The War of the Clocks!

I stumbled into some interesting thoughts after my gig last night. I was hired to play bass at a Memorial Day event in Apalachicola. The gig went as well as one could expect. We had a relatively new lineup on stage, with Steve Cosper on guitar and Luke Pinagar on keyboard and flugelhorn. Dr. Hulon Creighton (sax), Joey Kirkland (drums) and Jeff McBride (vocals) rounded out the group.

For the second time (ever!) I had taken Ableton Live on my MacBook Pro, along with a small mixer and an Akai mini keyboard, on stage to add some synth bass lines to the group. I also had my iPad 2 rigged up with an iRig to use as a tuner. I actually got a fringe benefit from the iPad setup, since the iRig allowed me to use various iPad musical instruments on stage.

That is where my problems (and the solutions I am pondering) began.

We did the very simple but great tune “Let’s Get It On” by Marvin Gaye. The bass line can be quite simple, yet very effective. So, on the fly, I tapped the tempo into Ableton Live and recorded the bass line for the verse. I know that Live allows for tempo nudging, but no matter how many times I recorded a MIDI bass line and tried to lock in with the band, I was always out of sync.

That was a real education in live Live use.

I am sure I have a lot to learn about using Live live, but here is what I have come up with so far. It all comes down to what is being used to synchronize the group – or, in other words, where is the clock information originating?


Pickels and Bass

Tonight I had guests at the studio house. Thomas Pickels and Jamie, both of whom work at Lietz Music in Panama City, came by for a while. Thomas had contacted me on Facebook, wanting to know if I would play on some tracks he is putting together for a CD. He emailed some tracks over and I gave ’em a listen. Good playing – in tune, in time and well recorded. But something was missing. So I asked him to drop by and pick a bit.

I met them at my office and steered us all over to the studio. You know, you never know what is going to happen when you invite a player over. He carried in his amp and guitar and got situated. We listened to the tunes he sent in Live and talked for a bit. I shared a few of the current compositions I’ve been working on, and I think Live blew Jamie’s mind (he said he was going to learn to use it so he could demo it better at Lietz). Then I said hey, enough of the computer – let’s play!
