Joined: Aug 2009
10-09-2009, 05:35 PM
Microsoft Surface is a computer with a different look and feel: it has no keyboard or mouse and uses a multi-touch screen as its user interface. It turns an ordinary tabletop into a vibrant, interactive surface with a 30-inch display, and it can recognize physical objects that carry a bar-code tag. It is currently available only in restaurants, hotels, retail and public entertainment venues, and promises to transform the way people shop, dine, entertain and live. What is surface computing? A new way of working with computers that moves beyond the traditional mouse-and-keyboard experience; it is a natural user interface. Evolution: An Idea Inspired by Cross-Division Collaboration. In 2001, Stevie Bathiche of Microsoft Hardware and Andy Wilson of Microsoft Research began brainstorming concepts for an interactive table. Their vision was to mix the physical and virtual worlds to provide a rich, interactive experience. Evolution: 2003, Humble Beginnings on an IKEA Table. In early 2003, the team presented the idea to Bill Gates, Microsoft Chairman, and within the month the first prototype was born, based on an IKEA table. In 2004, the team grew and became the Surface Computing group. Surface prototypes, functionality and applications were continuously refined; the team built more than 85 early prototypes for use by software developers, hardware developers and user researchers. Evolution: Hardware Design. By late 2004, the Microsoft Surface development platform was established and attention turned to its form. A number of experimental prototypes were built, including a "tub" model encased in a rounded plastic shell, a desk-height model with a square top and cloth-covered sides, and even a bar-height model. After extensive testing and user research, the current look and feel of Surface was finalized in 2005.
From Prototype to Product. Today, Microsoft Surface is a 30-inch-diagonal display table that is easy for individuals or small groups to use collaboratively. With a sleek, translucent surface, people engage with Surface using natural hand gestures, touch and physical objects placed on the surface. Hardware Specification. Screen: a large horizontal multi-touch screen; Surface can recognize objects by reading coded domino tags. Infrared: Surface uses an 850-nm infrared light source. CPU: a Core 2 Duo processor, 2GB of RAM and a 256MB graphics card. Projector: Surface uses the DLP light engine found in many rear-projection HDTVs. Specification. Features: multi-touch display, horizontal orientation. Requirements: standard American 110-120V power. System: the Surface custom software platform runs on Windows Vista and has wired 10/100 Ethernet, 802.11b/g wireless and Bluetooth 2.0 connectivity. Surface applications are written using either Windows Presentation Foundation or Microsoft XNA technology. Dimensions: 30-inch (76 cm) display in a table-like form factor; 22 inches (56 cm) high, 21 inches (53 cm) deep and 42 inches (107 cm) wide. Materials: the Surface tabletop is acrylic, and its interior frame is powder-coated steel. Specification (contd.). At Microsoft's MSDN conference, Bill Gates told developers of the "maximum" setup the Microsoft Surface was going to have: an Intel quad-core Xeon "Woodcrest" at 2.66GHz, 4GB of DDR2-1066 RAM, a 1TB 7200RPM hard drive, and a custom motherboard form factor about the size of two ATX motherboards. How Surface Works. At a high level, Surface uses five cameras to sense objects. This user input is then processed and the result is displayed on the surface using rear projection. Microsoft Surface can also identify physical objects that carry bar-code-like tags (domino tags).
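The domino-tag scheme described above amounts to reading a small grid of infrared-visible dots as a numeric ID. The real Surface tag format is proprietary; the sketch below simply assumes a hypothetical 2x4 dot grid read row by row as an 8-bit value, to illustrate the idea.

```python
# Hypothetical sketch of "domino"-style tag decoding. Assumption: a
# 2x4 grid of IR-reflective dots encodes an 8-bit ID, most significant
# bit first. The actual Surface tag layout is not public.

def decode_tag(dots):
    """dots: 2x4 grid of booleans (True = dot present).
    Returns the tag ID as an integer in 0..255."""
    tag_id = 0
    for row in dots:
        for dot in row:
            # shift previous bits left and append this dot as a bit
            tag_id = (tag_id << 1) | (1 if dot else 0)
    return tag_id

# Example: the pattern 1010 0001 decodes to tag ID 161
pattern = [[True, False, True, False],
           [False, False, False, True]]
print(decode_tag(pattern))  # 161
```

An application would then map the decoded ID to an object (a loyalty card, a glass, a phone model) in its own lookup table.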
Four Key Attributes: direct interaction, multi-user experience, multi-touch contact, and object recognition. Direct interaction: we interact with the Surface directly with our fingers; no other input device is needed, which gives a natural interface. Multi-user experience: a single touch screen can support more than one user, and each user can interact with the surface independently. Multi-touch contact: ordinary touch screens provide only single-touch sensing, whereas the Surface can recognize many touches at the same time. Object recognition: done using special bar codes called domino tags, infrared-sensitive patterns read by the infrared cameras inside the surface. Applications: digital photo handling with the fingertips (we can handle images directly with our fingers, and manipulating them is even easier than handling real photos); instant comparison while shopping (different products can be compared simply by placing them on the surface, using object-recognition technology); interaction with digital content (digital images can be shared, dragged and dropped, and sent via technologies like Wi-Fi and Bluetooth); the Surface restaurant (orders can be placed on the Surface from a sliding menu); music (huge play lists can be browsed quickly by dragging a favorite song to the current track); and support for complex shopping decisions. Disadvantages: incredibly expensive, and currently available only to restaurants, hotels and similar venues.
Further disadvantages: it needs dim lighting to avoid washing out the screen, and objects must carry bar-code tags for the system to recognize them. Microsoft once promised "a computer on every desktop." Now we say, "every desktop will be a computer."
Joined: Jun 2010
18-12-2010, 05:19 PM
MICROSOFT SURFACE.doc (Size: 5.95 MB / Downloads: 192)
This article is presented by:
INFORMATION SCIENCE AND ENGINEERING
Dept. of CS&E
The name Surface comes from "surface computing", and Microsoft envisions the coffee-table machine as the first of many such devices. Surface computing uses a blend of wireless protocols, special machine-readable tags and shape recognition to seamlessly merge the real and the virtual world, an idea the Milan team refers to as "blended reality". The table can be built with a variety of wireless transceivers, including Bluetooth, Wi-Fi and (eventually) radio frequency identification (RFID), and is designed to sync instantly with any device that touches its surface. It supports multiple touch points (Microsoft says "dozens and dozens") as well as multiple users simultaneously, so more than one person can use it at once, or one person can do multiple tasks. There is no keyboard or mouse. All interaction is done by touching the surface of the computer's screen with hands or brushes, or via wireless interaction with devices such as smartphones, digital cameras or Microsoft's Zune music player. Because of the cameras, the device can also recognize physical objects, for instance credit cards or hotel "loyalty" cards.
Over the past couple of years, a new class of interactive device has begun to emerge, of which Microsoft Surface is the best-known example.
The Surface table top typically incorporates a rear-projection display coupled with an optical system that captures touch points by detecting shadows from below. Different approaches to detection have been used, but most employ some form of IR illumination coupled with IR cameras. With today's camera and signal-processing capability, reliable, responsive and accurate multi-touch sensing can be achieved.
Microsoft Surface (codename Milan) is a multi-touch product from Microsoft which is developed as software and hardware combination technology that allows a user, or multiple users, to manipulate digital content by the use of gesture recognition. This could involve the motion of hands or physical objects.
Figure 1.1 Table-Top
Picture a surface that can recognize physical objects from a paintbrush to a cell phone and allows hands-on, direct control of content such as photos, music and maps. Surface turns an ordinary tabletop into a vibrant, dynamic surface that provides effortless interaction with all forms of digital content through natural gestures, touch and physical objects. Consumers will be able to interact with Surface in hotels, retail establishments, restaurants and public entertainment venues etc. The intuitive user interface works without a traditional mouse or keyboard, allowing people to interact with content and information on their own or collaboratively with their friends and families, just like in the real world. From digital finger painting to a virtual concierge, Surface brings natural interaction to the digital world in a new and exciting way.
It was announced on May 29, 2007 at D5 conference. Initial customers will be in the hospitality businesses, such as restaurants, hotels, retail, public entertainment venues and the military for tactical overviews. The preliminary launch was on April 17, 2008, when Surface became available for customer use in AT&T stores. The Surface was used by MSNBC during its coverage of the 2008 US presidential election; and is also used by Disneyland’s future home exhibits, as well as various hotels and casinos. The Surface is also featured in the CBS series CSI: Miami and Entertainment news. As of March 2009, Microsoft had 120 partners in 11 countries that are developing applications for Surface's interface.
1.1 Interface paradigm shift:
1.1.1. Command-line Interface:
A Command-line interface (CLI) is a mechanism for interacting with a computer operating system or software by typing commands to perform specific tasks. This method of instructing a computer to perform a given task is referred to as "entering" a command: the system waits for the user to conclude the submitting of the text command by pressing the "Enter" key (a descendant of the "carriage return" key of a typewriter keyboard). A CLI then receives, analyses, and executes the requested command. The command-line interpreter may be run in a text terminal or in a terminal emulator window. Upon completion, the command usually returns output to the user in the form of text lines on the CLI. This output may be an answer if the command was a question, or otherwise a summary of the operation.
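The receive-analyse-execute cycle described above can be sketched as a small command dispatcher. This is a minimal illustration, not any real shell; the two commands (`echo` and `upper`) are invented for the example.

```python
# Minimal sketch of a CLI's receive-analyse-execute loop with two toy
# commands. A real shell would dispatch to external programs instead.

def run_command(line):
    """Parse one command line and return its text output."""
    parts = line.split()
    if not parts:
        return ""
    cmd, args = parts[0], parts[1:]
    if cmd == "echo":    # the output answers the command directly
        return " ".join(args)
    if cmd == "upper":   # the output summarizes an operation's result
        return " ".join(a.upper() for a in args)
    return f"unknown command: {cmd}"

print(run_command("echo hello world"))  # hello world
print(run_command("upper surface"))     # SURFACE
```

An interactive version would wrap `run_command` in a loop that reads a line, waits for Enter, and prints the returned text, exactly the cycle the paragraph describes.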
Figure 1.1.1 Command Line Interface
1.1.2. Graphical user interface:
A Graphical user interface (GUI) (sometimes pronounced "gooey") is a type of user interface that allows people to interact with devices (computers; hand-held devices such as MP3 players, portable media players or gaming devices; household appliances and office equipment) through images rather than typed text commands. A GUI offers graphical icons and visual indicators, as opposed to text-based interfaces with typed command labels or text navigation, to represent the information and actions available to a user. Actions are usually performed through direct manipulation of the graphical elements.
The term GUI might earlier have been applied to other high-resolution, non-generic types of interfaces, such as video games, or to interfaces not restricted to flat screens, like volumetric displays.
Figure 1.1.2 Graphical User Interface
1.1.3. Natural user interface:
A Natural user interface (NUI) is the common parlance used by designers and developers of computer interfaces to refer to a user interface that is effectively invisible, or becomes invisible with successive learned interactions, to its users. The word natural is used because most computer interfaces use artificial control devices whose operation has to be learned. A NUI relies on a user being able to carry out relatively natural motions, movements or gestures that they quickly discover control the computer application or manipulate the on-screen content. The most descriptive identifier of a NUI is the lack of a physical keyboard and/or mouse.
Figure 1.1.3 Natural User Interface
1.2 Multi-touch Technology:
Multi-touch is a method of interacting with a computer screen or smartphone. Instead of using a mouse or stylus pen, multi-touch allows the user to interact with the device by placing two or more fingers directly onto the surface of the screen. The movement of the fingers across the screen creates gestures, which send commands to the device. Multi-touch has been implemented in several different ways, depending on the size and type of interface. Both touch tables and touch walls project an image through acrylic or glass, and then backlight the image with LEDs.
Figure 1.2.1 Multi-touch
When a finger or an object touches the surface, causing the light to scatter, the reflection is caught with sensors or cameras that send the data to software which dictates response to the touch, depending on the type of reflection measured. Touch surfaces can also be made pressure-sensitive by the addition of a pressure-sensitive coating that flexes differently depending on how firmly it is pressed, altering the reflection. Handheld technologies use a panel that carries an electrical charge. When a finger touches the screen, the touch disrupts the panel's electrical field. The disruption is registered and sent to the software, which then initiates a response to the gesture.
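The step from raw touch points to a gesture can be illustrated with the familiar pinch/spread example: track the distance between two fingers across frames and classify its change. The threshold and gesture names below are illustrative assumptions, not Surface's actual gesture engine.

```python
import math

# Sketch: classify a two-finger gesture from consecutive frames. If the
# distance between the fingers grows the gesture is a spread ("zoom in");
# if it shrinks it is a pinch ("zoom out"). The 5-pixel threshold is an
# arbitrary illustrative choice.

def classify_pinch(prev_points, curr_points, threshold=5.0):
    """Each argument is a pair of (x, y) touch coordinates."""
    def distance(points):
        (x1, y1), (x2, y2) = points
        return math.hypot(x2 - x1, y2 - y1)

    delta = distance(curr_points) - distance(prev_points)
    if delta > threshold:
        return "zoom in"
    if delta < -threshold:
        return "zoom out"
    return "no gesture"

print(classify_pinch([(100, 100), (200, 100)],
                     [(80, 100), (220, 100)]))  # zoom in
```

Rotation and drag gestures follow the same pattern, comparing the angle or midpoint of the touch pair between frames instead of the distance.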
Multi-touch surfaces allow a device to recognize two or more simultaneous touches by more than one user. Some can recognize objects by distinguishing differences in the pressure and temperature of what is placed on the surface. Depending on the size of the surface and the applications installed, two or more people can run different, independent applications on the device at once. Multi-touch computing is the direct manipulation of virtual objects, pages and images, allowing users to swipe, pinch, grab, rotate, type and command them, eliminating the need for a keyboard and mouse. Everything can be done with the fingertips.
HISTORY OF MICROSOFT SURFACE
In 2001, Stevie Bathiche of Microsoft Hardware and Andy Wilson of Microsoft Research began working together on various projects that took advantage of their complementary expertise in hardware and software. In one of their regular brainstorming sessions, they started talking about an idea for an interactive table that could understand the manipulation of physical pieces while remaining practical for everyone to use.
Joined: Jul 2011
06-02-2012, 10:38 AM
To get information about the topic "surface computing" (full report, ppt and related topics), refer to the link below.
Joined: Apr 2012
05-10-2012, 01:26 PM
MICROSOFTsurface.ppt (Size: 887 KB / Downloads: 70)
What is Microsoft Surface?
Project began in 2001
Introduced in 2007
A surface computing platform from Microsoft.
“Microsoft Surface represents a fundamental change in the way we interact with digital content. With Surface, we can actually grab data with our hands, and move information between objects with natural gestures and touch. Surface features a 30-inch tabletop display whose unique abilities allow for several people to work independently or simultaneously. All without using a mouse or a keyboard.”
What is surface computing?
A form of computing that offers "a natural way of interacting with information," rather than the "traditional user interface."
Direct Interaction: The ability to "grab" digital information with the hands, interacting with touch and gesture rather than with a mouse or keyboard.
Multi-Touch: The ability to recognize multiple points of contact at the same time, not just one (e.g. one finger, as with most touch screens), but dozens.
Multi-User: The Surface's screen is horizontal, allowing many people to come together around it and experience a "collaborative, face-to-face" computing experience.
Object Recognition: Physical objects (e.g. cell phones, cameras and glasses) can be placed on the Surface's screen to "trigger different types of digital responses."
How is the Surface used?
Wireless! Transfer pictures from camera to Surface and cell phone. “Drag and drop virtual content to physical objects.”
Digital interactive painting
At a phone store? Place a cell phone on the Surface and get information, compare different phones, select a service plan and accessories, and pay at the table!
At a restaurant? View the menu and order drinks and meals at your table! It's a durable surface you can eat off (it withstands spills, etc.). Need separate checks? Split the bill and pay at the table.
Play games and use the Internet.
Jukebox! Browse music, make play lists.
Billboard for advertising
How does it work?
(1) Screen: A diffuser turns the acrylic tabletop into a large "multi-touch" screen that can process multiple inputs and recognize objects by their shapes or coded "domino" tags.
(2) Infrared: The "machine vision" light source is aimed at the screen. When an object touches the tabletop, the light reflects back and is picked up by infrared cameras.
(3) CPU: Uses components similar to those in current desktop computers: a Core 2 Duo processor, 2GB of RAM and a 256MB graphics card. Wireless communication is via Wi-Fi and Bluetooth antennas (with RFID planned for the future). The operating system is a modified version of Windows Vista.
(4) Projector: Uses a DLP light engine, as found in rear-projection HDTVs.
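The camera-side step of the pipeline above, turning an infrared frame into touch points, can be sketched as thresholding followed by blob grouping. This is a toy illustration on a tiny intensity grid; a real system would use calibrated cameras and optimized connected-component code.

```python
# Sketch: threshold an IR frame (2-D list of 0-255 intensities) and
# group bright pixels into blobs with a flood fill, returning one
# (row, col) centroid per blob, i.e. one touch point per finger.

def find_touches(frame, threshold=200):
    rows, cols = len(frame), len(frame[0])
    seen = [[False] * cols for _ in range(rows)]
    touches = []
    for r in range(rows):
        for c in range(cols):
            if frame[r][c] >= threshold and not seen[r][c]:
                # flood-fill this blob, collecting its pixels
                stack, blob = [(r, c)], []
                seen[r][c] = True
                while stack:
                    y, x = stack.pop()
                    blob.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and not seen[ny][nx]
                                and frame[ny][nx] >= threshold):
                            seen[ny][nx] = True
                            stack.append((ny, nx))
                # centroid of the blob = reported touch point
                cy = sum(p[0] for p in blob) / len(blob)
                cx = sum(p[1] for p in blob) / len(blob)
                touches.append((cy, cx))
    return touches

frame = [[0,   0,   0,   0, 0],
         [0, 255, 255,   0, 0],
         [0, 255, 255,   0, 0],
         [0,   0,   0,   0, 255]]
print(find_touches(frame))  # [(1.5, 1.5), (3.0, 4.0)]
```

The same machinery handles object recognition: a tagged object produces a distinctive dot pattern of blobs rather than a single fingertip blob.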
Who’s using the Surface today?
Currently the Surface is commercially available only in the retail, hospitality, automotive, banking and healthcare industries.
Current customers are AT&T, T-Mobile, the Rio All Suite Hotel & Casino in Las Vegas, Sheraton Hotels, Disney Innovations House in California, Hotel 1000 in Seattle, Harrah’s Entertainment, and Starwood Hotels and Resorts Worldwide.
Microsoft Surface’s Future…
Surface will continue to be sold to and used by restaurants, retail, leisure and public entertainment venues.
According to Pete Thompson, Microsoft's general manager for surface computing, the Surface could potentially be available to the "broader consumer market" as soon as 2010. Microsoft's goal is for consumers to test the Surface in commercial settings and then want one in their own households. Microsoft wants to expand to the consumer market by making a product people can use in their home environment (using other surfaces such as desks, or making a version that hangs on the wall).
Computer scientists hope to incorporate this kind of technology in peoples’ daily lives… Future goals are to surround people with intelligent surfaces (look up recipes on your kitchen counter or table, control TV with coffee table, etc.)
"I firmly believe that in the near future, we will have wallpaper displays in every hallway, in every desk. Every surface will be a point of interaction with a computer, and for that to happen, we really need interfaces like this." - Jeff Han, founder of Perceptive Pixel and NYU professor