I've just bought the excellent Tom Rice tutorial video 207: Logic's Latency Toolbox, and have found it to be the best video I've bought! The information is essential for anyone working with hardware synths and Logic. I've gone through it once already, but have a few questions about the setup procedure for MIDI latency. Hoping Tom Rice will be able to share his expertise on this.

I use a MOTU multi-input audio interface with all my hardware synths routed to its inputs, and a Unitor8 multi-port MIDI interface. Monitoring is through Logic using software monitoring with a buffer size of 32.

Up until now, my method for setting up external hardware synths was as follows. I'd first work out my recording delay, and I was already using the "ping" method as Tom describes. This ensured that recorded audio would be placed where it should be.

However, my method for measuring each synth's MIDI latency (its reaction time to MIDI data) differs from Tom's. I'd set up a MIDI track containing four bars of same-pitch, same-velocity quarter notes to trigger the synth. I'd then set up an audio track with its inputs set to the channels the synth is plugged into on the MOTU (i.e. 1+2), with input monitoring switched on. Then I'd record-enable this audio track, so as to record the synth signal coming in as it was triggered by the MIDI track.

Once I had a recorded audio file, I'd open the sample editor and check how late off the grid the recorded audio was. I'd take an average figure, and that figure would be the synth's reaction time/latency to incoming MIDI. As an example, say I found the synth averaging 3.1 ms late from the grid: I'd enter a negative value of -3.1 ms in the track delay parameter box. Running the test again, the recorded audio would now be more or less bang on the grid, where it should be. I'd repeat this for each hardware synth in turn, working out each one's latency to incoming MIDI.
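The averaging step I describe boils down to a tiny calculation. Here's a sketch of it (the offset values are made-up example measurements, not figures from my actual setup):

```python
# Measured note-onset offsets from the grid, in milliseconds, for the
# four-bar quarter-note test (hypothetical example values).
offsets_ms = [3.0, 3.2, 3.1, 3.1]

# Average reaction time of the synth to incoming MIDI.
avg_ms = sum(offsets_ms) / len(offsets_ms)

# The value to enter in the track delay parameter is the negative of the average.
track_delay_ms = -avg_ms

print(f"average latency: {avg_ms:.1f} ms")
print(f"track delay to enter: {track_delay_ms:.1f} ms")
```

With these example numbers the average comes out at 3.1 ms, so -3.1 ms goes into the track delay box, matching the example above.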
And each time I'd enter a negative value in milliseconds in each hardware synth's instrument track delay parameter.

That was how I did it... until I saw Tom Rice's method, which has left me really confused. Why is Tom not recording to an audio track, but instead recording to a bus? That has totally confused me!! In the video called "compensating for midi playback latency 2" there is an example of slicing with the marquee tool, working out averages, etc. Surely it's simpler to just count the samples from the grid markers to the starts of the transients, work out an average value, and then enter that number as a negative amount in the instrument track delay parameter? Tom's method seems way over-complex... unless my method is doing something very, very wrong? I hope Tom, or someone else, can explain why (or whether) my method is way off.
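For what it's worth, the sample-counting variant I'm suggesting is just a unit conversion followed by the same average. A sketch, with made-up sample counts and an assumed project sample rate of 44.1 kHz:

```python
SAMPLE_RATE = 44100  # Hz; assumed project sample rate

# Hypothetical distances (in samples) from each grid marker to the
# start of the corresponding transient in the recording.
sample_offsets = [137, 140, 135, 138]

# Convert each offset from samples to milliseconds, then average.
offsets_ms = [s / SAMPLE_RATE * 1000 for s in sample_offsets]
avg_ms = sum(offsets_ms) / len(offsets_ms)

print(f"enter {-avg_ms:.2f} ms in the track delay parameter")
```

At 44.1 kHz these example offsets average out to roughly 3.12 ms, i.e. -3.12 ms of track delay. At a different project sample rate the same sample counts would of course give a different millisecond figure.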