We all tend to take modern digital audio editing features for granted, so it’s often a surprise when they don’t immediately work exactly as intended. A good example of this is timeshifting—though the many Warping and Flexing options available in today’s DAWs often do work well on the first try, as often as not the user needs to do at least a little work to get the best results. Every timeshifting feature includes a set of algorithms that optimize the processing for different situations, and that’s probably the first place to start. This article will take a quick look at the standard options available for timeshifting, and their specific applications.
Before jumping right into the various modes, it’s worth spending a minute to talk about the underlying technologies behind modern timeshifting features. With analog recording media like tape and turntables, if you wanted to speed up or slow down already-recorded audio, you’d vary the playback speed. But along with the change in speed comes a change in pitch—the familiar “Chipmunk effect”, where the tone of the audio gets either squeaky and high-pitched (speed-up) or deep and spooky (slow-down). Even with digital media, if you simply speed up or slow down the playback (i.e. change the sample rate), you’ll get the same artifacts.
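To make that speed/pitch coupling concrete, here’s a minimal pure-Python sketch (the helper names are invented for illustration): it “speeds up” a sine wave the analog way, by simply discarding every other sample, then counts waveform cycles to show that the pitch doubles right along with the speed:

```python
import math

def make_sine(freq_hz, duration_s, sample_rate=8000):
    """Generate a test tone as a plain list of float samples."""
    n = int(duration_s * sample_rate)
    return [math.sin(2 * math.pi * freq_hz * i / sample_rate) for i in range(n)]

def naive_speedup(samples, factor):
    """'Play back faster' by keeping every factor-th sample: the
    duration shrinks AND the pitch rises by the same factor."""
    return samples[::factor]

def upward_crossings(samples):
    """Count negative-to-positive zero crossings (one per cycle)."""
    return sum(1 for a, b in zip(samples, samples[1:]) if a < 0 <= b)

tone = make_sine(220, 1.0)        # one second of A3
fast = naive_speedup(tone, 2)     # half the samples

print(len(fast) / len(tone))      # 0.5: half the duration...
print(upward_crossings(tone))     # ...but the same number of cycles,
print(upward_crossings(fast))     # now squeezed into half the time = double the pitch
```

The same number of cycles packed into half the time is exactly the “Chipmunk effect” the article describes.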
Since timeshifting algorithms manage to change speed without any concurrent change of pitch, obviously they’re doing some additional processing under the hood. The first thing that has to happen is that the audio must be broken up into individual notes and syllables. This is accomplished via transient detection, and in some cases, also by frequency analysis (with material that has separate notes without strong transients). Of course the audio region or clip is not literally cut up in the edit window, but switching to one of the dedicated time-shift displays will reveal the individual markers for the newly-separated notes, which are embedded in the audio file’s header.
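As a rough illustration of that first analysis pass, here’s a toy transient detector, assuming a simple short-term energy threshold (real detectors are considerably more sophisticated, and the function name here is invented): it scans fixed-size frames and marks the first loud frame after each quiet stretch as the start of a new note:

```python
import random

def detect_transients(samples, frame=128, threshold=0.01):
    """Mark positions where short-term energy first rises above a
    threshold after a quiet stretch -- a crude stand-in for the
    transient detection a DAW performs. Resolution is one frame."""
    markers = []
    was_quiet = True
    for start in range(0, len(samples) - frame + 1, frame):
        block = samples[start:start + frame]
        energy = sum(x * x for x in block) / frame
        if energy > threshold and was_quiet:
            markers.append(start)          # a new note begins near here
        was_quiet = energy <= threshold
    return markers

# Two synthetic "drum hits": noise bursts starting at samples 1000 and 2500
random.seed(0)
burst = [random.uniform(-1, 1) for _ in range(500)]
signal = [0.0] * 1000 + burst + [0.0] * 1000 + burst
print(detect_transients(signal))   # two markers, each within one frame of the true onset
```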
For the best results, the timeshifting algorithm may need to be aware of the current tempo, and if possible, the original embedded tempo of the audio recording itself—if the latter isn’t known, the software will usually try to estimate it. Different timeshifting options (in different DAWs) may be more or less sensitive to this aspect of the process, and more or less effective if any tempo information is missing or unknown.
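When no tempo is embedded, one plausible fallback is to infer it from the detected note onsets. This sketch is hypothetical, not any particular DAW’s actual method; it simply treats the median gap between onsets as one beat:

```python
def estimate_bpm(onset_times_s):
    """Guess a tempo from note-onset times: treat the median gap
    between onsets as one beat. A real estimator would also check
    whether onsets land on subdivisions or multiples of the beat."""
    gaps = sorted(b - a for a, b in zip(onset_times_s, onset_times_s[1:]))
    median_gap = gaps[len(gaps) // 2]
    return 60.0 / median_gap

# Onsets roughly every half second, with one note played slightly late
print(estimate_bpm([0.0, 0.5, 1.0, 1.55, 2.0]))   # 120.0
```

The median makes the guess robust to a few loosely-played notes, which is why an estimate like this can still land on the right tempo for a human performance.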
Once the audio has been prepped, then the software can move the start and end points of the separated notes to effect time shifts. This can be done automatically, instructing the feature to match note positions to different tempos or changing bar and beat positions; alternatively, it can be done by hand, by dragging (or “warping”) the individual notes within a dedicated timeshifting display.
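The automatic case is mostly arithmetic: once note starts are expressed in bars and beats, matching a new tempo just means recomputing each note’s wall-clock position. A minimal sketch, with invented helper names:

```python
def beats_to_seconds(beat, bpm):
    """One beat lasts 60/bpm seconds."""
    return beat * 60.0 / bpm

def shift_note_positions(positions_beats, old_bpm, new_bpm):
    """Return each note's start time at the old and new tempos.
    Because positions are stored in beats, the musical positions
    stay fixed; only the clock times move."""
    old_times = [beats_to_seconds(b, old_bpm) for b in positions_beats]
    new_times = [beats_to_seconds(b, new_bpm) for b in positions_beats]
    return old_times, new_times

# Four quarter notes, one per beat
old_t, new_t = shift_note_positions([0, 1, 2, 3], old_bpm=100, new_bpm=120)
print(old_t)   # [0.0, 0.6, 1.2, 1.8] -- 0.6 s per beat at 100 BPM
print(new_t)   # [0.0, 0.5, 1.0, 1.5] -- 0.5 s per beat at 120 BPM
```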
Automatic & Manual timeshifting:
When a note needs to be shortened or lengthened, the software accomplishes this with a technology called granular synthesis—the audio is further divided into very tiny bits, which can then be removed or inserted to shorten or lengthen (respectively) a section of audio. That’s all handled invisibly under the hood (though if you really slow down a file, you may hear the granularization), and the actual editing process—dragging, “warping”, etc.—is usually pretty straightforward, but the quality of the results will depend heavily on that initial analysis and separation, and that’s where the various algorithms come in.
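Here’s what that granular trick might look like at its most basic. This is a sketch only, assuming simple triangular windows (production algorithms add pitch-aware grain alignment and far better windowing): grains are read from the source at one rate and overlap-added to the output at another, so each grain keeps its original pitch while the overall duration changes:

```python
def granular_stretch(samples, stretch, grain=256, overlap=128):
    """Time-stretch without pitch change: copy short overlapping
    'grains' of audio, advancing the read position slower (stretch > 1)
    or faster (stretch < 1) than the write position. Each grain is an
    untouched snippet of the original, so its pitch stays put."""
    hop_out = grain - overlap                   # output advances this much per grain
    hop_in = hop_out / stretch                  # input advances this much per grain
    out_len = int(len(samples) * stretch)
    out = [0.0] * (out_len + grain)
    # triangular window so overlapping grains cross-fade smoothly
    window = [min(i, grain - 1 - i) / (grain / 2) for i in range(grain)]
    read, write = 0.0, 0
    while write + grain < len(out) and int(read) + grain < len(samples):
        r = int(read)
        for i in range(grain):
            out[write + i] += samples[r + i] * window[i]
        read += hop_in
        write += hop_out
    return out[:out_len]

samples = [1.0] * 2000                  # a constant signal, to check levels
stretched = granular_stretch(samples, 2.0)
print(len(stretched))                   # 4000: twice as long
```

The overlapping triangular windows sum to a near-constant gain, which is why the stretched constant signal stays at its original level away from the edges.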
Most timeshifting software features some variation of the following algorithms for the initial analysis of the audio file: Slicing; Rhythmic; Monophonic; Polyphonic; and for effects, a traditional old-school mode. Each is optimized for a specific type of audio signal.
Basic Slicing is generally intended for drums and percussive instruments, and it’s the simplest method. This mode analyzes the audio for transients, and then marks each transient as the beginning of a new note. This is pretty much the equivalent of going through the file, eyeballing the transients, and cutting the Region at each one. It’s also the least invasive approach—there’s no actual timeshifting of audio, the individual slices are just moved around if the Tempo or rhythm is changed. For example, if the tempo is sped up, the slices will be moved closer together, with any extra slice length truncated (cut off).
But if the Tempo is slowed, since there’s no messing with the audio there’ll be gaps between notes, and the ends of notes that are not long enough to fill up the space between beats at the slower tempo may also cut off abruptly. To smooth this over, the algorithm might provide a Decay control that will fade the ends of any slices that are cut off, for a more natural sound—but the resulting gaps will still remain. This may work fine for percussive tracks, but not so well for instruments with longer decays, where a legato-style phrase could be unintentionally turned into a series of detached notes.
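A sketch of this basic Slicing behavior (invented function, pure Python) shows both symptoms at once: speeding up truncates slice tails, slowing down leaves gaps, and a Decay-style fade softens the cut-offs without filling the holes:

```python
def slice_and_requantize(samples, slice_points, old_bpm, new_bpm, decay=64):
    """Move transient slices onto a new tempo grid WITHOUT stretching
    the audio. Speeding up truncates slice tails; slowing down leaves
    silent gaps. A short linear fade-out (the 'Decay' control) softens
    any abrupt slice endings."""
    ratio = old_bpm / new_bpm                       # > 1 slows down, < 1 speeds up
    bounds = slice_points + [len(samples)]
    out_len = int(len(samples) * ratio)
    out = [0.0] * out_len
    for start, end in zip(bounds, bounds[1:]):
        new_start = int(start * ratio)
        new_end = min(int(end * ratio), out_len)        # room on the new grid
        length = min(end - start, new_end - new_start)  # truncate if needed
        for i in range(length):
            gain = 1.0
            if i >= length - decay:                     # fade the slice's tail
                gain = (length - i) / decay
            out[new_start + i] = samples[start + i] * gain
    return out

hits = [1.0] * 1000                    # a sustained level, split into two 'notes'
faster = slice_and_requantize(hits, [0, 500], old_bpm=100, new_bpm=200)
slower = slice_and_requantize(hits, [0, 500], old_bpm=100, new_bpm=50)
print(len(faster), len(slower))        # 500 2000
print(slower[600])                     # 0.0 -- the gap after the first note
```

The zeros in `slower` are exactly the detached-note artifact described above: the audio itself was never lengthened, only repositioned.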
Another way to approach this uses the same basic method of transient detection and note-slicing, but also gives the algorithm the ability to stretch or squeeze the audio of the resulting slices. So, in the same scenario where the tempo is slowed, instead of gaps between the slices, the audio in each slice can be extended to fill any gaps, making it possible to maintain legato playing when necessary. Extending the audio can be done by looping a small segment—a technique once used with samplers—or by applying granular synthesis, depending on how low the designer is trying to keep the CPU hit. This method is labelled Rhythmic in some DAWs, like Logic, but that term may be used in others to describe the more basic Slicing approach. If the algorithm provides some settings, they may make it clear what’s going on under the hood; otherwise a torture test of a major slowdown of a legato phrase (bass or something with cymbals) should provide the answer. Once you know what to expect, you’ll know what kind of audio files that method would be best used on.
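The loop-the-tail variant can be sketched in a few lines (hypothetical helper; real implementations crossfade at the loop point to avoid clicks):

```python
def extend_slice(slice_samples, target_len, loop_len=200):
    """Fill a longer slot by repeating the slice's tail -- the old
    sampler trick for sustaining a note past its recorded length.
    (A real implementation would crossfade at the loop point.)"""
    if target_len <= len(slice_samples):
        return slice_samples[:target_len]      # shorter slot: just truncate
    out = list(slice_samples)
    loop = slice_samples[-loop_len:]           # the sustain portion to repeat
    while len(out) < target_len:
        out.extend(loop)
    return out[:target_len]

note = list(range(1000))                       # stand-in for a recorded note
longer = extend_slice(note, 1500)
print(len(longer))                             # 1500
print(longer[1000])                            # 800 -- the loop wraps back to the tail
```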
Monophonic algorithms are, naturally, designed to work specifically on monophonic audio parts—melodies or phrases with only one note at a time, like vocals or basslines. But watch out—if your guitarist or bass player occasionally hits a dyad (two notes together), or a vocal part has a brief harmony mixed in here and there, that’ll throw off the Monophonic analysis. Monophonic mode may take longer to analyze the file, because it has to not only detect transients, but also account for notes that are played without a new attack, like hammer-ons, slides (as on a fretless bass) or melismatic vocal flourishes. To detect these transient-free notes, Monophonic modes also analyze the frequency to determine when it’s changed enough to indicate a different note. Despite the potential for mis-detections, most Monophonic algorithms seem to do quite well with all the examples I just cited, and this mode should be the choice for most solo parts (with the caveats mentioned above) and individual vocal tracks.
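The frequency-analysis side can be illustrated with a deliberately crude pitch tracker (zero-crossing counting, purely for illustration; real monophonic modes use much more robust pitch detection): a note boundary is flagged wherever the tracked pitch jumps, even though no transient exists:

```python
import math

def estimate_freq(frame, sample_rate):
    """Very rough pitch estimate from upward zero-crossings. Fine for
    a clean synthetic tone; far too crude for real recordings."""
    crossings = sum(1 for a, b in zip(frame, frame[1:]) if a < 0 <= b)
    return crossings * sample_rate / len(frame)

def note_boundaries(samples, sample_rate=8000, frame=1024, tolerance=0.15):
    """Flag positions where the tracked pitch jumps by more than ~15%
    (a couple of semitones), catching note changes that produce no
    transient. Resolution is one analysis frame."""
    markers, prev = [], None
    for start in range(0, len(samples) - frame + 1, frame):
        f = estimate_freq(samples[start:start + frame], sample_rate)
        if prev and abs(f - prev) / prev > tolerance:
            markers.append(start)
        prev = f
    return markers

# A 220 Hz tone that 'hammers on' to 330 Hz with no new attack
sr = 8000
tone = [math.sin(2 * math.pi * 220 * i / sr) for i in range(sr)]
tone += [math.sin(2 * math.pi * 330 * i / sr) for i in range(sr)]
print(note_boundaries(tone, sr))   # one marker, near the pitch change at sample 8000
```

The wide tolerance is the interesting design choice: set it too tight and the estimator’s own jitter (or a singer’s vibrato) generates false note boundaries, which mirrors the mis-detection problem described above.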
Polyphonic mode is designed to be able to handle any audio that contains multiple simultaneous notes—full polyphony—which covers everything from strumming guitars to pianos and keyboards to full mixes. As might be expected, this mode will usually be the most CPU-intensive, but it should be able to handle pretty much any recording. In fact, most DAWs make Polyphonic mode the default algorithm for that reason, though for many non-pitched drum or percussion tracks, one of the less CPU-intensive Slicing/Rhythmic modes might be a more efficient choice, and Monophonic mode—with its specialized algorithm—may provide better-sounding results on appropriate tracks. But for everything else, Polyphonic is the go-to mode.
Modern timeshifting software does an excellent job on most audio, but there still may be some files that can prove troublesome. For some reason, strumming acoustic guitars are often difficult to get good results from—perhaps the combination of multiple rapid attacks (as the pick is dragged across the strings), the slightly detuned vibrations of the strings, and the bright tone with its strong higher harmonics and overtones makes this a torture test, but it’s the type of instrument I’ve most often seen timeshifting features struggle with.
If there are settings available to tweak, it might be necessary to experiment a bit. In some cases (for example Pro Tools) an off-line option is provided for recordings troublesome enough to not respond well to the normally realtime processing. Others (Logic) provide an “advanced” mode, which may work significantly better in realtime, even with problem children, but at the cost of an extra CPU hit. But the vast majority of the time, standard Polyphonic timeshifting seems to be able to handle whatever is thrown at it.
There’s usually at least one more timeshifting algorithm provided, which goes by different names in different DAWs—this one mimics the behavior of analog tape and turntable time shifts or old-style sample rate adjustments, with the time and pitch changing together. Obviously, this is intended for special effects, like simulating a classic turntable slowdown/shutoff, and it should be pretty easy to generate those effects by automating the time/pitch change with the DAW’s standard automation options.
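These varispeed-style effects are easy to sketch: read through the audio with a playback rate that ramps to zero, and pitch falls with speed, just like a powered-down turntable. A minimal, hypothetical version:

```python
import math

def turntable_stop(samples, stop_time_s, sample_rate=8000):
    """Simulate a turntable shutoff: the playback rate ramps linearly
    from 1.0 to 0.0, so speed and pitch drop together (the opposite
    of timeshifting, which keeps them independent)."""
    out = []
    pos = 0.0
    n_out = int(stop_time_s * sample_rate)
    for i in range(n_out):
        rate = 1.0 - i / n_out          # 1.0 -> 0.0 over the ramp
        pos += rate                     # the 'read head' slows down
        idx = int(pos)
        if idx >= len(samples) - 1:
            break
        frac = pos - idx
        # linear interpolation between neighbouring samples
        out.append(samples[idx] * (1 - frac) + samples[idx + 1] * frac)
    return out

def upward_crossings(x):
    """Cycle counter: negative-to-positive zero crossings."""
    return sum(1 for a, b in zip(x, x[1:]) if a < 0 <= b)

sr = 8000
tone = [math.sin(2 * math.pi * 440 * i / sr) for i in range(2 * sr)]
slowed = turntable_stop(tone, 1.0, sr)
# pitch at the start of the ramp vs the end: it falls along with the speed
print(upward_crossings(slowed[:1000]), upward_crossings(slowed[-1000:]))
```

In a DAW you’d get the same result by automating the varispeed/pitch parameter rather than writing code, as the article notes; the ramp here just makes the mechanism visible.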
So there it is—a variety of algorithms for specific timeshifting tasks. Many people tend to ignore these options, simply enabling the feature and applying it (most likely with the default Polyphonic algorithm), never trying the alternatives, one of which may provide significantly better results. But if you know they’re there, and what they do, then you have much better odds of getting the best results even on difficult recordings—a few minutes’ experimentation can be well worth the effort.