Ableton Live is a fantastic program. The one complaint I have kept hearing over roughly seven years of using Live is that you cannot record automation in the Session View. Max for Live lets you work around that limitation. One approach is to produce automation data with an automatic event generator. I have enlisted a few Max objects to produce not only automation data that follows a specified waveform, but also a note generator based on the same principle.
Figure 1 shows the volume envelope being modulated by what I call a three-dimensional waveform: it swells and recedes on three timescales at once, within each cycle, across a larger cycle, and over the whole passage. You can see the peaks and valleys of the result in this graphic.
Figure 1: a sine wave with three dimensions plotted to linear vectors
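Max patches aren't plain text, but the shape in Figure 1 can be sketched in ordinary code. Here is a minimal Python sketch of a sine that breathes on three timescales; the function name and the rates are my own illustrative choices, not anything built into Live or Max:

```python
import math

def layered_sine(t, f_fast=1.0, f_mid=0.1, f_slow=0.01):
    """A sine that rises and falls on three timescales:
    within each cycle, across a larger cycle, and overall."""
    fast = math.sin(2 * math.pi * f_fast * t)              # the cycle itself
    mid = 0.5 + 0.5 * math.sin(2 * math.pi * f_mid * t)    # larger cycle
    slow = 0.5 + 0.5 * math.sin(2 * math.pi * f_slow * t)  # whole-passage envelope
    return fast * mid * slow

# Sample it at regular intervals, like automation breakpoints.
env = [layered_sine(t / 10.0) for t in range(100)]
```

The product of the three terms stays within -1 to 1, so it can be rescaled onto any Live parameter range.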
Figure 2 shows a more sophisticated waveform that specifies each "sample" of the automation data before Live's vector algorithm simplifies the curves into lines for efficiency. This level of detail can be quite interesting to play with, and it becomes pivotal when we raise the frequency of this kind of waveform into the audible range.
Instead of letting this sine function determine the timbre of the sound, I employed its rise and fall to define note data from 0 to 127 in MIDI note numbers, which in Live spans C-2 to G8 on the keyboard. Figure 3 shows this result in the Arrangement View of Live.
Figure 3: spreading the waveform out manually with two MIDI controls
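As a rough sketch of that mapping, assuming we simply rescale a sine's -1 to 1 output onto the full MIDI range (the function name is illustrative, not part of Max or Live):

```python
import math

def sine_to_midi(phase):
    """Map one point of a sine cycle (phase in cycles) to a MIDI note 0-127."""
    s = math.sin(2 * math.pi * phase)  # -1.0 .. 1.0
    return round((s + 1) / 2 * 127)    # rescaled to 0 .. 127

# One full cycle sampled at sixteen points, as a stream of note numbers.
notes = [sine_to_midi(p / 16) for p in range(16)]
```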
You can see that the volume envelope below mirrors the sine function, so you have a duality happening. In fact, we can take this further by creating multiple note lines, one transposed up 7 semitones and one down 5 semitones, to create a rich unison harmony. The result is that pitch, harmony, and volume are all controlled by one function, loosely in the spirit of the process-driven music of minimalist composers such as Steve Reich and Philip Glass. However, this is quite a rudimentary proof of concept and in no way comes close to paralleling the brilliant works of those pioneers.
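That transposition scheme is easy to sketch. Assuming a small helper that clamps to the legal MIDI range, each incoming note yields three lines:

```python
def harmonize(note):
    """One note becomes three lines: the original, a fifth up (+7 semitones),
    and a fourth down (-5 semitones), clamped to the valid MIDI range."""
    def clamp(n):
        return max(0, min(127, n))
    return [clamp(note), clamp(note + 7), clamp(note - 5)]

# Middle C (60) becomes C, G above, and G below.
lines = harmonize(60)
```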
This method is simply one of the very early stages of making a structured, rule-based composition. It opens the door to a more academic approach to sound design and composition that has been a subject of fascination for me since I first heard and began studying the works of Reich and Glass.
With Max for Live, you can achieve the level of programmability that begins to weave the fabric of what your imagination might want to realize. Figure 4 shows the note data being duplicated across multiple lines for a voice.
Figure 4: three harmonies stemming from one voice, driven by the sine function
Figure 5 shows that this sine wave can have wow and flutter in it while still adhering to the same basic shape. This brings out the performance aspect of composition: you can impose changes manually on top of a system you have built to behave in a certain way. It is satisfying to finally have the tools to craft your song in such a way as this, and with a little fundamental knowledge of Max/MSP and its role in Max for Live, you can give your modulation sources free rein over everything you know Ableton Live is capable of.
Figure 5: modulating parameters manually which influence the overall song design
If the idea of programming in Max/MSP seems a little strange, let me explain the patch that creates the sine function, shown in Figure 6. You could adapt this patch to your own design to get a feel for how to construct a waveform in Max. From top to bottom: the two live.dial objects are what we map MIDI knobs to in Live. MIDI's inherent 0-127 range can be rescaled to whatever values the objects below need to accommodate how they work. For example, the cycle~ object produces a waveform by reading through a 512-sample wavetable, like scanning a graph along the x-axis, tracing out a cosine wave as it rises and falls.
The scale objects take the numbers that the live.dial objects hand out and remap them: to 0-3 on the left, and 30-300 on the right. This is because we want the cycle~ object to scan through all 512 wavetable values at a rate varying from 0 to 3 hertz, or complete cycles per second. The frequency passes through an abs~ object, short for absolute value, which simply discards anything below zero (we want the frequency to run from 0 to 3 Hz rather than -3 to 3 Hz, which doesn't quite make sense). On the right we feed a snapshot~ object, which takes the signal on its left, a dense stream of numbers, and thins it out so that one sample or "snapshot" of the waveform is taken every so many milliseconds, as defined by our right knob.
So far, we are saying: "make a wave of this varying frequency, and tell Ableton Live about it at a rate that I specify."
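To make that data flow concrete, here is a Python sketch of what the scale and abs~ objects are doing; the function is the same linear remap Max's scale object performs, and the dial values are arbitrary examples, not the settings in my patch:

```python
def scale(x, in_lo, in_hi, out_lo, out_hi):
    """Linear remap, the same arithmetic as Max's [scale] object."""
    return out_lo + (x - in_lo) * (out_hi - out_lo) / (in_hi - in_lo)

left_dial, right_dial = 64, 64  # example MIDI knob positions (0-127)

# Left dial -> oscillator frequency; abs() plays the role of [abs~].
freq_hz = abs(scale(left_dial, 0, 127, 0.0, 3.0))

# Right dial -> how often snapshot~ reports a value, in milliseconds.
interval_ms = scale(right_dial, 0, 127, 30.0, 300.0)
```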
Again, we "message" the data with another scale object to determine the note numbers we will hand to a makenote object, which takes note, velocity, and duration in its inlets from left to right. The note numbers are scaled so they don't sound too high-pitched, and the velocity and duration are scaled so the notes are neither too quiet nor too short. This feeds a small number object just to display which note is coming out, and the values are packed into a single message that the noteout object understands. In Ableton Live, this Max patch lives in a MIDI track and feeds an instrument, in this case the Tension instrument with a mallets preset.
Figure 6: a Max patch that makes a custom waveform and can change dynamically
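The final stage, turning each snapshot into the note, velocity, and duration that makenote and noteout pass along, might be sketched like this; the pitch range and constants here are illustrative, not the exact values in my patch:

```python
def sample_to_note(sample, lo=48, hi=84, velocity=90, duration_ms=250):
    """Turn one waveform snapshot (-1.0 .. 1.0) into the (pitch, velocity,
    duration) triple that makenote/noteout would pass to the instrument."""
    pitch = lo + round((sample + 1) / 2 * (hi - lo))
    return (pitch, velocity, duration_ms)

# A snapshot at the wave's trough plays the lowest note of the range.
event = sample_to_note(-1.0)
```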
You can hear the result of my work here: https://files.me.com/responsiblet/s7xtgx
It moves from atonal to a major scale over the course of several minutes. I hope you enjoy it!
Learn more about Ableton Live here.