[Main Text]
d so that above-table gestures will be tracked in three dimensions by determining the disparity between the two images. To allow more robust skin detection, the system will calibrate the skin color parameters based on the colors being projected on the screen rather than on predefined values. Tangible objects can be integrated because the skin tracking algorithm can be readily adjusted for any color or intensity, thereby making objects placed on the surface distinguishable from projected images. Current interactions have been fairly basic. White reflects detected skin tone; blue shows user A's touches. By having one user stand on each side of the table, each touch can be associated with the corresponding user with very little risk of error. With the added information of which hand created a touch, role-specific functions can be given to different users for increased collaborative power. Users should avoid overlapping hands to prevent occlusions in the overhead camera.

Appendix A: Source Text of the Translation

Enhancing Multiuser Interaction with Multitouch Tabletop Displays Using Hand Tracking
K. C. Dohse 1, Thomas Dohse 2, Jeremiah D. Still 1, Derrick J. Parkhurst 1,3
Human Computer Interaction Program 1, Computer Science Department 2, Psychology Department 3
Iowa State University
{ kcd | dohse | derrick | jeremiah }

Abstract

A rear-projection multitouch tabletop display was augmented with hand tracking utilizing computer vision techniques. While both touch detection and hand tracking can be independently useful for achieving interaction with tabletop displays, these techniques are not reliable when multiple users in close proximity simultaneously interact with the display. To solve this problem, we combine touch detection and hand tracking techniques in order to allow multiple users to simultaneously interact with the display without interference.
Our hope is that by considering activities occurring on and above a tabletop display, multiuser interaction will become more natural and useful, which should ultimately support collaborative work.

1. Introduction

Large displays are useful for information visualization when multiple people must jointly use the information to work together and accomplish a single goal. The social interactions that result from using a shared display can be highly valuable [1]. However, these displays can fail to allow multiple users to simultaneously interact with the information. Tabletop interfaces can provide a large shared display while simultaneously accepting natural and direct interaction from multiple users through touch detection. For example, multitouch surfaces that utilize the phenomenon of frustrated total internal reflection (FTIR) have received widespread attention recently [2]. FTIR detection techniques allow the system to track a large number of simultaneous touch points with very high spatial and temporal frequency. FTIR has several advantages over other multitouch detection technologies, such as being both low cost and scalable [3]. Other computer vision based tracking systems have a limited ability to distinguish touches from near touches, which is an important element of interacting with the table surface. FTIR tracking alone has two shortcomings compared to other methods of touch tracking. Each touch in FTIR appears as an independent event. Although inferences based on the distance between touch points can be leveraged to guess which touches are part of the same event, each touch ultimately remains a standalone piece of data. As the number of users and the complexity of their actions increase, so does the probability of incorrectly grouping touch points with a single user. The other issue is that the system is inherently susceptible to problems with spurious IR noise (e.g., poor lighting conditions or flash photography).
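The distance-based grouping heuristic described above can be sketched in a few lines. This is only an illustration of the idea, not the paper's implementation: the 60-pixel threshold and the representation of touches as (x, y) pixel pairs are assumptions chosen for the sketch.

```python
from math import hypot

def group_touches(points, max_dist=60.0):
    """Greedily group touch points whose pairwise distance is below
    max_dist, as a stand-in for 'touches belonging to one event'.
    points: list of (x, y) pixel coordinates; max_dist is assumed."""
    n = len(points)
    parent = list(range(n))  # union-find forest over touch indices

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path compression
            i = parent[i]
        return i

    def union(i, j):
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[rj] = ri

    # Merge every pair of touches closer than the threshold.
    for i in range(n):
        for j in range(i + 1, n):
            if hypot(points[i][0] - points[j][0],
                     points[i][1] - points[j][1]) < max_dist:
                union(i, j)

    # Collect the resulting groups.
    groups = {}
    for i in range(n):
        groups.setdefault(find(i), []).append(points[i])
    return list(groups.values())
```

The failure mode follows directly: when two users reach into the same area, touches from different hands fall within the threshold and are merged into a single group, which is exactly the ambiguity that motivates fusing touch points with tracked hand positions.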
To solve the lighting and touch differentiation problems, we augmented an FTIR tabletop display with an overhead camera. Using the camera, hands on or over the table can be tracked using skin color segmentation techniques. With hand coordinates available, touch points can be assigned the ownership necessary to support multiple users and to correctly identify events comprised of multiple touches. This technique works well even when gestures are made by multiple users in close proximity because it does not need to differentiate touches based on closeness. The fusion of hand and touch point locations also increases the robustness of touch sensing in the presence of unwanted IR light because of the redundancy of each point's location. Additionally, tracking hands allows users to generate interactions without touching the surface, by instead making movements above the table. This creates a hybridization of the two interaction techniques that is still being explored.

Figure A1: Three users working together using a rear-projection multitouch tabletop display augmented with hand tracking using an overhead camera.

2. Related Work

Techniques and technologies used for interaction detection on tabletop displays are rapidly maturing, but many researchers are still seeking better methods for capturing natural interactions made by multiple users within the context of real world applications. There are a number of approaches to tracking user interactions with a tabletop display. One successful method is to use a surface material that is laden with sensors, such as the commercially available DiamondTouch system. This system uses a technique in which a circuit is capacitively closed when the user touches the table [4]. Interfaces like this one use front projection du