The computer vision wizards at Microsoft (including Internet sensation Johnny Lee, the guy who hacked a Wiimote into a low-cost interactive whiteboard) have been busy working on Project Natal, a controller-less technology that, apparently, can sense shapes and forms and track their motions.
Imagine the uses for puppetry or mime! In the video above, the boy gets to perform the rampages of a giant Japanese monster. The girl drives a car by miming the hands on a steering wheel. I can see this being used for virtual Muppets, where a simple two-handed rod puppet could drive a virtual puppet decorated to look like whatever you want.
Some questions to ponder: Can Project Natal track depth accurately? What's the latency? How many things can it track? If a tracked object gets occluded and then reappears, is there a delay before it gets picked up again?
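That occlusion question is the interesting one for puppetry. I have no idea how Natal actually handles it, but a common trick in tracking systems is to "coast" through a brief occlusion by extrapolating from the last known velocity, so the tracked point keeps moving plausibly until it's re-detected. Here's a toy sketch of that idea (hypothetical code, not anything from Microsoft):

```python
class OcclusionBridgingTracker:
    """Toy 1-D tracker that coasts through occlusion gaps
    using constant-velocity extrapolation."""

    def __init__(self):
        self.pos = None   # last estimated position
        self.vel = 0.0    # estimated velocity (units per frame)

    def update(self, observation):
        """observation is a 1-D position, or None while occluded."""
        if observation is None:
            # Occluded: keep moving on the last known velocity.
            if self.pos is not None:
                self.pos += self.vel
        else:
            # Visible: update velocity estimate and snap to observation.
            if self.pos is not None:
                self.vel = observation - self.pos
            self.pos = observation
        return self.pos

tracker = OcclusionBridgingTracker()
# Object moves +1 per frame, disappears for two frames, reappears.
frames = [0.0, 1.0, 2.0, None, None, 5.0]
print([tracker.update(f) for f in frames])
# [0.0, 1.0, 2.0, 3.0, 4.0, 5.0] -- the gap is bridged smoothly
```

A real system would use something like a Kalman filter instead of this naive extrapolation, but the principle is the same: a puppet limb shouldn't freeze or teleport just because a hand briefly passed in front of it.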
Low-cost motion capture / digital puppetry inches closer and closer. I hope Microsoft opens this up to XNA so that indie developers can play with it.