I wonder if it'll work on dirty sounds too. All the sounds in the video are clean. Curious to know if all the extra overtones & distortion would confuse the program.
Also... If you were to plonk a full song into it, do you think it would pull the entire thing apart into its separate parts?
But even with these questions playing on my mind I do think it's pretty amazing.
It wouldn't pull separate instruments out, because it only works on the harmonics, not the formants of the sound. It doesn't know whether a note came from a violin, a guitar, or a piano; it just works on the harmonics the notes produce. I don't think we'll see that happening for at least 5 or 10 years, the human brain is still far better than the computer at this kind of thing :)
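To see why harmonics alone can't separate instruments, here's a minimal sketch (my own toy example, not how Melodyne actually works): two made-up "instruments" play the same note with different harmonic amplitudes (different timbre), but their harmonics land on exactly the same frequencies, so a spectrum of the mix shows one merged set of peaks.

```python
import numpy as np

SR = 44100  # sample rate, Hz

def tone(f0, harmonic_amps, dur=1.0, sr=SR):
    """Sum of harmonics of f0 with the given amplitudes.
    Two instruments playing the same note share the same harmonic
    frequencies; only the amplitudes (the timbre) differ."""
    t = np.arange(int(sr * dur)) / sr
    return sum(a * np.sin(2 * np.pi * f0 * (k + 1) * t)
               for k, a in enumerate(harmonic_amps))

# Hypothetical timbres: same fundamental (220 Hz), different spectra.
a = tone(220.0, [1.0, 0.5, 0.3])   # "instrument A"
b = tone(220.0, [0.4, 1.0, 0.2])   # "instrument B"
mix = a + b

spectrum = np.abs(np.fft.rfft(mix))
freqs = np.fft.rfftfreq(len(mix), 1 / SR)
peaks = freqs[spectrum > 0.1 * spectrum.max()]
# The peaks sit at 220, 440 and 660 Hz for both instruments combined:
# from the harmonics alone there's no way to tell the two apart.
```

Telling them apart would mean un-mixing overlapping energy at the very same frequency bins, which is exactly the part that's still hard for a computer.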
You can hear artifacts in the video though, so don't expect to have perfect songs from poorly played parts.
I'd imagine it's only one small step (and a large amount of processing power) away from working in real time, but it might not come out how you expect: if you played a note on fret 16, how would it know whether you meant fret 15 or 17? Both could be part of the same scale.
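The fret 16 problem is easy to show concretely. On the high E string, fret 15 is G (MIDI 79) and fret 17 is A (MIDI 81); fret 16 is G# (MIDI 80), which sits exactly between them in C major. A hedged sketch of a naive scale-snapping corrector (my own illustration, nothing to do with Melodyne's actual algorithm):

```python
# Pitch classes of C major: C D E F G A B.
C_MAJOR = {0, 2, 4, 5, 7, 9, 11}

def snap_to_scale(midi_note, scale=C_MAJOR):
    """Return the nearest in-scale MIDI note(s).
    A tie means the corrector can't know which note you intended."""
    candidates = sorted(
        (n for n in range(midi_note - 2, midi_note + 3)
         if n % 12 in scale),
        key=lambda n: abs(n - midi_note))
    best = abs(candidates[0] - midi_note)
    return [n for n in candidates if abs(n - midi_note) == best]

print(snap_to_scale(80))  # G# on fret 16 -> [79, 81]: G and A tie
print(snap_to_scale(79))  # G on fret 15 -> [79]: already in the scale
```

Any real-time corrector would have to break that tie somehow (guess from context, or just pick a direction), which is why the result might not be the note you meant.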
It's a great idea, and a clever piece of programming from the Melodyne guys, but I don't think it'll be heavily used in the bigger studios. For demos and stuff it'd be great though, you could just correct the guitar parts and have a whole demo bashed out in a few hours.