updated 03:02 pm EDT, Fri May 25, 2012
iPad and glove combination used to control virtual objects
Researchers at the MIT Media Lab have created a Minority Report-style glove and iPad input system for three-dimensional virtual object manipulation called T(ether). The system uses motion tracking of the gloves to create and alter virtual items, with the iPad serving as a point-of-view window onto the scene being manipulated. A demonstration video shows multiple users simultaneously affecting the computer-generated world, both with and without the glove.
The iPad and glove carry markers that are seen by Vicon motion-capture cameras, which in turn feed the positioning data to a central server. The data includes the states of individual virtual objects, as well as the co-ordinates and orientation of the tablet, so that the scene can be rendered correctly. Gestures performed with the glove, such as pinching a finger and thumb together, tell the server to add, remove or alter an object. Users can also draw on the iPad: the system works out where on the screen the finger is pressing and combines that with the iPad's position to create three-dimensional line drawings.
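The drawing feature described above amounts to mapping a 2-D touch point into world space using the tablet's tracked pose. The sketch below is purely illustrative (not the actual T(ether) code): it assumes the motion-capture server reports the tablet's centre position and a 3x3 orientation matrix, and uses a guessed physical screen size.

```python
import math

def rotation_matrix_yaw(theta):
    """Rotation about the vertical (y) axis by theta radians."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c, 0.0, s],
            [0.0, 1.0, 0.0],
            [-s, 0.0, c]]

def touch_to_world(touch_xy, screen_px, tablet_pos, tablet_rot):
    """Map a touch (in pixels) to a world-space point on the tablet's plane.

    touch_xy   -- (x, y) touch location in pixels
    screen_px  -- (width, height) of the screen in pixels
    tablet_pos -- (x, y, z) tablet centre in world units (metres)
    tablet_rot -- 3x3 rotation matrix giving the tablet's orientation
    """
    # Assumed physical screen dimensions of an iPad-era tablet,
    # roughly 0.197 m x 0.148 m; an illustration, not a spec.
    phys_w, phys_h = 0.197, 0.148
    # Offset of the touch from the screen centre, in metres, in tablet space.
    local = [(touch_xy[0] / screen_px[0] - 0.5) * phys_w,
             (0.5 - touch_xy[1] / screen_px[1]) * phys_h,
             0.0]
    # Rotate into world space, then translate by the tablet's position.
    return [sum(tablet_rot[r][c] * local[c] for c in range(3)) + tablet_pos[r]
            for r in range(3)]
```

Sampling the touch point each frame while the tablet moves would trace out a 3-D curve, which is one plausible way the line-drawing behaviour could work.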
Because the system relies on an external motion-capture rig rather than the camera on the iPad, it is unlikely to appear on iPads for consumer use any time soon. It does have potential in corporate environments, most likely for animation and design work.
The object-tracking capabilities invite comparison with Google's recent Project Glass patent filing, which described using two camera points to track a uniquely decorated item such as a ring. That system, however, would only be able to handle volumetric data within close range of the wearer's eyeline, and based on the demonstration video, Google appears more concerned with menu navigation. [via Cult of Mac]