Welcome to the Digido Fora!
There is a lot for you still to absorb, and you’ve asked a multi-faceted question! Let’s start with the concept of a buffer. A buffer is always required for digital audio playback, because the samples have to be assembled off of storage before they can be played; they are not stored in the order of playback. And if you make an edit, your DAW has to access one set of samples from one location and then marry them with another set of samples from another location.
That all takes time. To handle it, the samples are assembled into a buffer memory and then played out. The shorter the buffer, the lower the latency, the delay between the playback of any sample and its output to the world. But the shorter the buffer, the greater the chance of glitches as the computer tries to keep up with all the data. The longer the buffer, the more stable the playback, leaving room for complex operations like EQ, compression, etc.
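To put rough numbers on that trade-off: one buffer’s worth of delay is just the buffer length divided by the sample rate. Here’s a back-of-the-envelope sketch, assuming a 48 kHz session and a few common buffer sizes (the figures are illustrative arithmetic, not measurements from any particular DAW):

```python
# Rough buffer latency: buffer size in samples divided by sample rate.
# 48 kHz is an assumed session rate for illustration.
SAMPLE_RATE = 48000  # samples per second

def buffer_latency_ms(buffer_samples: int, sample_rate: int = SAMPLE_RATE) -> float:
    """One buffer's worth of delay, in milliseconds."""
    return buffer_samples / sample_rate * 1000

for size in (64, 256, 1024):
    print(f"{size:5d} samples -> {buffer_latency_ms(size):6.2f} ms")
# 64 samples is about 1.33 ms; 1024 samples is about 21.33 ms.
```

So a small buffer keeps you close to real time, while a large one quickly climbs past 20 ms, which a performer will notice while overdubbing.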
DAWs have a built in mechanism where what you see on the screen (the waveform) can be made to appear synchronous with the sound. It’s a trick with the buffer where the computer delays the waveform view until the audio buffer has completed.
If you are overdubbing, buffers and latency can get in the way. So it’s important then to use as short a buffer as your system can tolerate without glitching or becoming unstable. Some DAWs have programming tricks which allow you to overdub with a minimum delay of only a few milliseconds, which hopefully is not perceptible.
Yes, every converter is different, but most modern converters have a delay of only 2 or 3 milliseconds at most. And the higher the sample rate, the shorter the delay.
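The reason higher sample rates shorten converter delay is that a converter’s digital filter delay is roughly fixed in samples, so the same number of samples takes less time at a higher rate. A quick sketch, assuming a hypothetical converter with a 40-sample filter delay (an invented figure for illustration, not a spec from any real unit):

```python
# Hypothetical converter filter delay, fixed in samples.
# 40 samples is an assumed figure for illustration only.
FILTER_DELAY_SAMPLES = 40

def converter_delay_ms(sample_rate: int) -> float:
    """Converter delay in milliseconds at a given sample rate."""
    return FILTER_DELAY_SAMPLES / sample_rate * 1000

for rate in (44100, 96000, 192000):
    print(f"{rate:6d} Hz -> {converter_delay_ms(rate):.3f} ms")
# Doubling the sample rate halves the delay in milliseconds.
```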
I suggest you experiment with the longest buffer that Pro Tools provides and try to overdub. You’ll soon experience issues with the latency. Admin Ryan Sutton is a Pro Tools expert and could answer your questions about overdubbing in Pro Tools specifically. I think Ryan may see this thread and “amplify” here!