updated 05:00 pm EDT, Thu April 17, 2008
AT&T launches "Surface"
AT&T on Thursday became the first company to deploy Microsoft Surface, a new computing paradigm that combines projection, cameras, and a computer running Windows Vista with a special management layer. As announced earlier this month, the company rolled out 22 of the new "multi-touch surfaces" at five locations in four different cities around the US: New York, Atlanta, San Antonio, and San Bruno ("San Francisco"). Microsoft Surface consists of a 30-inch screen built into the top of a rugged, clear plastic table that can sense touches and gestures, and can read barcode-like tags to identify products placed on the surface and present information about them.
Unlike traditional touchscreens, Microsoft Surface does not use capacitive components; instead, it uses cameras to read gestures on its surface. Like Apple's multi-touch gesture recognition on the iPhone, the surface responds to a variety of hand motions, including push/pull (dragging), zoom, and rotate. Unlike the iPhone, however, the surface supports multiple users, multiple simultaneous gestures, different viewing angles through a 360-degree UI, and object sensing (via ID tags on the phones). Leveraging simple physics, and perhaps extending collaboration to a new level, the surface lets users "push" information across the tabletop -- as if it were sliding -- to other users at the table.
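For readers curious about the math behind two-finger zoom and rotate gestures, the sketch below shows one common way to derive them from a pair of tracked contact points. This is a generic illustration, not Microsoft's actual vision-based pipeline, and the function name is hypothetical:

```python
import math

def pinch_params(p1_old, p2_old, p1_new, p2_new):
    """Derive scale and rotation from two tracked contact points.

    A generic two-finger gesture calculation for illustration only;
    Surface's camera-based recognition is proprietary and not shown here.
    Points are (x, y) tuples.
    """
    def dist(a, b):
        return math.hypot(b[0] - a[0], b[1] - a[1])

    def angle(a, b):
        return math.atan2(b[1] - a[1], b[0] - a[0])

    # Zoom factor: how much the distance between the fingers changed.
    scale = dist(p1_new, p2_new) / dist(p1_old, p2_old)
    # Rotation: how much the line between the fingers turned (radians).
    rotation = angle(p1_new, p2_new) - angle(p1_old, p2_old)
    return scale, rotation

# Fingers move apart and the pair rotates a quarter turn counter-clockwise:
scale, rot = pinch_params((0, 0), (2, 0), (0, 0), (0, 4))
# scale == 2.0, rot == pi/2
```

The same deltas would typically be applied directly to the on-screen object, which is what makes the dragging, zooming, and rotating feel physical.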
AT&T reportedly used Microsoft's SDK to build its own application and will continue to refine it based on customer feedback to make it more engaging. The default screen is the AT&T network coverage map, which lets users visually check coverage of AT&T's network at any location across the US on a color-coded map that can be zoomed to street level. Users can check both EDGE and faster "3G" data coverage along commutes, at travel destinations, or at other locations.
Engaging with each object on the screen is fairly intuitive, but embedded elements can sometimes confuse customers. Each information "container" may have its own elements within it, like a video with a play button or a list of features within a scrolling window (which automatically adjusts the font size in proportion to the zoom of the window). However, trying to resize an object that contains a scrolling text element is not only difficult but counterintuitive: users must apply the zoom gesture to the (small) portion of the window that lies outside the scroll element but inside the container. Some sort of visual feedback (e.g. a highlight) would help users understand which portion of the object they are interacting with and which gestures are appropriate.
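The proportional font-size behavior described above can be sketched in a few lines. This is a minimal illustration of the idea, not code from the Surface SDK; the function name and the clamping limits are assumptions:

```python
def scaled_font_size(base_size, container_zoom, min_size=8.0, max_size=72.0):
    """Scale text in proportion to its container's zoom level.

    Illustrative only -- not the Surface SDK's API. The result is
    clamped to an assumed legible range so text never becomes
    unreadably small or absurdly large.
    """
    return max(min_size, min(max_size, base_size * container_zoom))

# Zooming a container to 150% scales 12pt text up to 18pt:
scaled_font_size(12.0, 1.5)  # 18.0
```

Clamping is the detail worth noting: without a floor, shrinking a container would make its feature list illegible rather than simply smaller.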
The Surface is designed around a multi-user interface that enables several people to gather at the table and view information from different points of view. Although interaction is limited to one item at a time, users can move, zoom, and rotate information "containers" holding text, movies, or photos, letting them work with the information from any position around the table regardless of orientation.
The collaborative aspect of Microsoft Surface is intriguing, but it does have some limitations: placing a second phone on the surface while someone else is viewing information on another automatically puts the surface into its phone feature comparison mode. Future versions may allow users to simultaneously explore information and specs for multiple phones on the same surface, representatives said.
While not revolutionary, the technology is engaging and, at the retail level, gives customers easy access to information and lets them interact with it in new ways. It certainly has the potential to revolutionize the way shoppers engage with and learn about products.