Sunday, September 25, 2011

Paper Reading #12: Enabling beyond-surface interactions

References
Li-Wei Chan, et al. "Enabling Beyond-Surface Interactions for Interactive Surface with an Invisible Projection". UIST '10: Proceedings of the 23rd Annual ACM Symposium on User Interface Software and Technology. ACM, New York, NY, USA, 2010.

Author Bios
Li-Wei Chan is a Ph.D. student in the Graduate Institute of Networking and Multimedia at National Taiwan University. He received his master's degree in Computer Science from National Taiwan University and his bachelor's degree from Fu Jen Catholic University.

Hsiang-Tao Wu, Hui-Shan Kao, and Home-Ru Lin are students at the National Taiwan University.

Ju-Chun Ko is a Ph.D. student at the Computer & Information Networking Center, National Taiwan University. He received his master's in Informatics from Yuan Ze University.

Mike Y. Chen is a professor in the Department of Computer Science at National Taiwan University. His research interests lie in mobile technologies, HCI, social networks, and cloud computing.

Jane Hsu is a professor of Computer Science and Information Engineering at National Taiwan University. Her research interests include intelligent multi-agent systems, data mining, service-oriented computing, and web technology.

Yi-Ping Hung is a professor in the Graduate Institute of Networking and Multimedia at National Taiwan University. He received his bachelor's from National Taiwan University and his master's and Ph.D. from Brown University.


Summary
  • Hypothesis - Using an IR (infrared) projector to place invisible markers on the tabletop, and IR cameras to track them, will improve reliability and enable beyond-surface interactions for interactive tabletops.
  • Method - For this experiment, they used a custom interactive tabletop prototype. It projected both color and IR images from under the table, and used two IR cameras under the table to detect touches. The IR projector also selectively projects white space on the tabletop to perform multi-touch detection. The tabletop itself is composed of two layers: a diffuser layer and a touch-glass layer. The reflective nature of the touch-glass caused problems whether it was above or below the diffuser layer. When it was above, it reflected the visible light of projections from above the tabletop, which not only degraded the luminance of the projection but also shone the light into observers' eyes. When the glass was under the diffuser layer, it partially reflected the IR rays from beneath the table, resulting in dead zones for the image processing. They found that they could fix the dead-zone problem by using two IR cameras instead of one, so they implemented the table with the touch-glass underneath the diffuser layer. The IR markers were dynamically moved and resized to track the mobile devices' views (a rough sketch of the camera-to-table registration this enables appears after this list). They proposed three different projection systems: the i-m-Lamp, the i-m-Flashlight, and the i-m-View. The first was a combination pico-projector/IR camera that looked like a simple table lamp; its small dimensions were thought to be ideal for integration with personal tabletop systems. The second, the i-m-Flashlight, is a handheld version of the i-m-Lamp: users can inspect fine details of a region by pointing it at the desired location. The i-m-View is a tablet PC attached to an IR camera, intended for intuitively exploring 3D geographical information; they used it to view 3D buildings above a 2D map shown on the prototype tabletop system. Five users were asked to try out the systems and were encouraged to think aloud.
  • Results - The main problem found with the i-m-Lamp was that, because the i-m-Lamp and the tabletop system both project onto the same surface, the overlapped region showed a blue artifact. To avoid it, they masked the tabletop projection wherever the projections overlapped (see the masking sketch after this list). For the i-m-Flashlight, they encountered a focus problem: the pico-projector's lens had to be focused manually. This limited usability; however, they proposed that replacing the projector with a laser-based one (such as the Microvision ShowWX) would provide an image that is always in focus. The largest problem with the i-m-View was that users easily got lost in the 3D view and could not pay as much attention to the 2D map. They fixed this by showing the boundaries of the 2D map inside the 3D view, letting the user simultaneously see what was changing on the table and what it represented in the 3D view. Users also often found that the buildings in the 3D view were too tall for the view; they wished to either pan up or rotate the tablet to get a portrait view of the landscape, neither of which was supported at the time. Another problem was that the i-m-View occasionally got lost when no IR markers entered its field of view; this was handled by continuously updating the i-m-View's estimated orientation. The overall feedback from users was positive, and the remaining problems are to be addressed in future work.
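How a mobile device registers itself to the table from the invisible markers is the technical heart of the system. As a rough illustration only (not the authors' actual pipeline), the sketch below, in Python with OpenCV, computes a homography from four detected marker positions to their known table-plane coordinates and uses it to find where the device is aimed. All coordinates here are invented placeholders.

    import numpy as np
    import cv2

    # Known table-plane positions of four invisible IR markers (mm).
    # These values are made up for illustration.
    table_pts = np.array([[0, 0], [400, 0], [400, 300], [0, 300]], dtype=np.float32)

    # Pixel positions where a (hypothetical) detector found those markers
    # in one frame from the device's IR camera.
    image_pts = np.array([[102, 95], [512, 88], [530, 390], [90, 405]], dtype=np.float32)

    # Homography that maps IR-camera pixels onto the table plane.
    H, _ = cv2.findHomography(image_pts, table_pts, cv2.RANSAC)

    # Map the image center through H to find where the device is aimed.
    center = np.array([[[320.0, 240.0]]], dtype=np.float32)
    aim = cv2.perspectiveTransform(center, H)
    print("Device aimed at table position (mm):", aim.ravel())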
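The masking fix for the blue artifact can be sketched in the same spirit: once calibration gives the i-m-Lamp's footprint in tabletop-projector pixels, the tabletop simply blanks that quad before projecting. The corner values below are hypothetical, assuming such a calibration step exists.

    import numpy as np
    import cv2

    # Tabletop projector framebuffer; a flat gray test image stands in
    # for the real map rendering.
    frame = np.full((768, 1024, 3), 128, dtype=np.uint8)

    # Corners of the i-m-Lamp's projection footprint, expressed in
    # tabletop-projector pixels (hypothetical calibration output).
    lamp_quad = np.array([[300, 200], [620, 210], [600, 480], [280, 470]], dtype=np.int32)

    # Blank the overlapped quad so only the lamp projects there,
    # removing the blue artifact the authors describe.
    cv2.fillConvexPoly(frame, lamp_quad, (0, 0, 0))
    cv2.imwrite("masked_tabletop_frame.png", frame)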

Discussion
Quite frankly, I found this entire paper awesome. Much of it is quite advanced, a huge step forward in HCI. While it may not have much application for me personally (I can't readily see it augmenting programming in many ways), it would have huge impacts on artists, the military, modelers, designers, engineers (such as civil or mechanical), and many more. Artists could use it to selectively edit only certain portions of their work without the cumbersome selection methods used in today's art development programs. The military could quite easily use it for strategic purposes such as battle maps or location coordination. Modelers and engineers could use it to select and edit individual pieces of a 3D model or blueprint. In short, there is a huge range of applications that would make great use of this technology, and I hope to see it distributed widely soon.
