Spectral analysis is one of the more advanced ways to exert fine control over your audio material. Electronic musicians have been using this technique to refine the timbral qualities of audio ever since the Fairlight CMI was introduced in 1979.
Today, innovative tools such as Max/MSP, Pure Data, and the recently released Max for Live (which combines the functionality of Ableton Live and Max/MSP) unlock another order of magnitude of control for electronic musicians.
One way to tap into this power is through external software that extends these products. The SoundHack externals for Max/MSP present a myriad of paths to reach into an audio file and flip it inside out, or to glean portions of frequency from the material in order to recreate the sound entirely. To use an analogy, spectral synthesis is a bit like making the resin-based paint of a Jackson Pollock piece wet again, then swirling it around with a brush or slinging it onto a different canvas to make your own iteration.
The SoundHack Max/MSP externals allow us to accomplish audio engineering tasks previously thought impossible. As an audio engineer, I am frequently asked by friends whether it is possible to completely remove a vocal or crowd noise from a concert recording or media event. This is no small task: the "printed" audio material is intertwined with the neighboring frequencies of the recording, and is difficult to separate from them.
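To make the idea of "gleaning portions of frequency" concrete, here is a minimal sketch in Python (standard library only, using a naive DFT rather than the optimized FFT machinery inside SoundHack): analyze a two-tone mixture, keep a single frequency bin, and resynthesize just that partial. All the names and signal values here are hypothetical, purely for illustration.

```python
import cmath, math

def dft(x):
    """Naive discrete Fourier transform (fine for short demo signals)."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * math.pi * k * n / N) for n in range(N))
            for k in range(N)]

def idft(X):
    """Inverse DFT: rebuild the time-domain signal from frequency bins."""
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * math.pi * k * n / N) for k in range(N)).real / N
            for n in range(N)]

N = 64
# A toy "recording": two tones mixed together (at bins 3 and 10).
mix = [math.sin(2 * math.pi * 3 * n / N) + 0.5 * math.sin(2 * math.pi * 10 * n / N)
       for n in range(N)]

# Analyze, then keep only the bin-3 partial (and its conjugate mirror bin).
bins = dft(mix)
kept = [b if k in (3, N - 3) else 0 for k, b in enumerate(bins)]
tone = idft(kept)  # just the bin-3 sine survives
```

Real voices and instruments smear their energy across many overlapping bins, which is exactly why surgically removing a vocal from a finished mix is so hard.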
As a more creative application of sound hacking, you can employ granular synthesis techniques to segment audio into tiny bits, randomize playback of those audio "grains," and change their pitch and volume in real time.
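The core of that idea can be sketched in a few lines of Python. This is a hypothetical toy helper, not SoundHack's actual +bubbler algorithm: it chops a sample buffer into fixed-size grains, shuffles their order, and randomizes each grain's volume.

```python
import random

def granulate(samples, grain_size=4, seed=0):
    """Minimal granular-synthesis sketch: chop audio into fixed-size
    grains, shuffle their playback order, and vary each grain's gain."""
    rng = random.Random(seed)                 # seeded for repeatability
    grains = [samples[i:i + grain_size]
              for i in range(0, len(samples), grain_size)]
    rng.shuffle(grains)                       # randomize grain order
    out = []
    for g in grains:
        gain = rng.uniform(0.5, 1.0)          # randomize per-grain volume
        out.extend(s * gain for s in g)
    return out

source = [float(i) for i in range(16)]        # stand-in for real audio samples
result = granulate(source)
```

A real granulator would also overlap and window the grains to avoid clicks, and resample each grain to shift its pitch; the principle is the same.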
Check out a voicemail I received from my friend Jeannie, run through SoundHack's +bubbler external in Max.
The video above stems from a Processing patch by Joseph Grey running through a projBox video controller. One application I found particularly engaging recently is the interactive performance/installation called Synaesthesia, which pairs paint and sound using Max/MSP and Processing. I will expound on the union of audio and visuals as our study of Max/MSP Jitter progresses into working with Ableton Live via Max for Live, and even into integrating this with Adobe Creative Suite.
With Max for Live, you can wire your Max/MSP patches directly into Ableton Live, which really opens up the universe for creative musicians looking to push the envelope even further.
Let's take our voicemail, processed by SoundHack, and trigger notes from Max into Live using Max for Live. My idea was to build a chord out of the voicemail material, starting from source material without any fixed key (although my friend Jeannie's voice is quite beautiful already), and turn it into something more like a flute, or perhaps an organ.
We will need to create a chord structure, so let's take the root note of C3 and add two more notes, transposed up 4 and 7 semitones, for a C major chord. This gives us two extra chord tones to work with; by mapping the transposition amounts to knobs on our MIDI controller, we can change the chord dynamically if we like. Then we can resample the chords and play them back as a whole at different pitches to make a short chord progression. For this we will use Ableton's Simpler instrument to hold the processed voicemail, driven by a custom MIDI transposer and chord generator patch built in Max for Live.
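The chord logic itself is tiny. Here is a sketch of what the duplicated transpose patches compute for each key press, expressed in Python; `major_triad` is a hypothetical name for illustration, not a Max for Live API.

```python
def major_triad(root):
    """Return MIDI note numbers for a major chord:
    root, root + 4 (major third), root + 7 (perfect fifth)."""
    return [root, root + 4, root + 7]

# Ableton Live labels MIDI note 60 as C3 (note-naming conventions vary
# between products), so a C3 key press fans out to C, E, and G.
chord = major_triad(60)
```

Mapping the two offsets (+4 and +7) to controller knobs, as described above, simply turns those constants into live parameters.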
I downloaded the voicemail from Google Voice, saved it as an MP3, exported it to WAV in iTunes, and opened it in SoundHack's +bubbler external by clicking the "open" button (see Figure 2). I then clicked the "1" message to play the audio file (1 is "true", in other words "go ahead and play the file" in programmer-speak). I used QuickTime's audio capture feature to record the results. I could have routed audio from Max into Live via Max for Live, but I will show you how to do that in a future tutorial. I opened the resulting processed audio file and dropped it into a Simpler instrument in Ableton. Next, I stepped through a few Max for Live tutorials: in Ableton Live, I navigated to the library, opened the Max for Live tutorials, chose the 05-Processing Midi Notes lessons, and then opened M4L.md.04.Transpose.amxd. I copied the Transform section of the tutorial patch two more times (see Figure 2) so that a single note would become a chord, with all the keys sounding at exactly the same time, and so that I could map the transposition amounts to a MIDI controller, extending my ability to shape the sound further.
I opened the resulting custom *.amxd file (the Max for Live device format) and grouped it with the Simpler instrument, so that when I press a key on my Novation Remote 25 SL Compact MIDI controller, the customized MIDI note transposer adds two more transposed notes, making a chord out of the sampled audio.
I mapped the "start", "loop", "length", and "fade" controls to the macro controls in the device rack, and mapped those four macros to the first four knobs of my Ozone audio interface/MIDI controller. This way, I could gradually narrow the audio file during playback into such a small loop that you eventually hear the audio "shrinking" into a tiny looped waveform that sounds a bit like an electronic organ. The low resolution of the Google Voice MP3 combined with my QuickTime audio capture from the laptop speakers gives the result a crystal-clear sheen, a timbre I have not heard before. It all came from my friend Jeannie's voice (albeit only a few repeating microseconds of syllables), which makes for an interesting context for this experiment: her voice is, in essence, changed from speaking to singing via software!
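There is a neat bit of math behind that "shrinking" effect: once a loop gets short enough, its repetition rate lands in the audible range and you hear the loop itself as a pitch. A quick sketch (hypothetical helper name, standard arithmetic):

```python
def loop_pitch_hz(loop_len_samples, sample_rate=44100):
    """When a loop shrinks to a few milliseconds, it repeats fast enough
    to be heard as a pitched tone; the repetition rate (loops per second)
    is the perceived fundamental frequency."""
    return sample_rate / loop_len_samples

# e.g. a 100-sample loop at 44.1 kHz repeats 441 times per second,
# which is roughly an A above middle C.
pitch = loop_pitch_hz(100)
```

This is why narrowing Simpler's loop region sweeps the sound through organ-like tones: you are effectively tuning a one-cycle oscillator whose waveform is a sliver of Jeannie's voice.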
The resulting audio file sounds like this:
So there you have it: a random piece of audio completely transformed into something else. Time to get to work reversing the process! It's OK, I called Jeannie back already :0)