Thursday, September 1, 2011

Paper Reading #1: Imaginary Interfaces

References:
Sean Gustafson, Daniel Bierwirth, and Patrick Baudisch. "Imaginary Interfaces: Spatial Interaction with Empty Hands and without Visual Feedback." UIST '10: Proceedings of the 23rd Annual ACM Symposium on User Interface Software and Technology. ACM, New York, NY, USA, 2010.

Author Bios:
Sean Gustafson received his degree from the University of Manitoba. Before his research career, he participated in the development of medical devices using embedded systems. He now works under Patrick Baudisch at the Hasso Plattner Institute. His main interests lie in gestural input and eyes-free portable technology.

Daniel Bierwirth received his master's degree at the Hasso Plattner Institute. He has recently worked as a software developer for start-up companies and as an independent contractor. His area of study centers on user-centered mobile software development and design.

Patrick Baudisch studied Computer Science at the Darmstadt University of Technology. He has worked for Xerox and Microsoft as a research scientist, and is currently a professor at the Hasso Plattner Institute. Most of his work deals with touch screens.

Summary:
  • Hypothesis - Humans can use surface-based input technology efficiently without visual feedback, relying only on their imagination and short-term visual memory (their visuospatial memory).
  • Method - A random sample of test subjects from the institute was selected. During the experiment, subjects were required to draw several types of shapes and characters in mid-air using Imaginary Interfaces (the device used for testing). The device worked by recognizing a plane of input when the user formed an L with the non-dominant hand. This L-shape served as a frame outlining the x-y plane of a 3D coordinate system. The device compensated for the plane's orientation during movement analysis and then discarded the z-coordinate entirely, flattening the mid-air strokes into 2D shapes on a 2D plane.
    The subjects were not given any screen to show them what they were drawing; instead, they were shown an image and, once they had committed it to memory, tried to duplicate it in the air without the reference. Throughout the experiment, several different tests were performed during and after the drawing of the images. In the first test, subjects were simply asked to draw different shapes and characters.
    In the second test, they were asked to draw simple glyphs, rotating 90° either during or after drawing, and then point to particular spots on the glyph (indicated to the subject only after the drawing was finished). This test showed how much movement and a change of background could hinder a subject's frame of reference.
    The third test asked subjects to pick out points in a user-defined coordinate space. For example, a user would be told to point to the coordinate (2,1) with the crook of their 'L' as the origin. The user would then need to find the point whose x-coordinate was twice the length of their thumb and whose y-coordinate was the length of their forefinger; the units were defined by the thumb and forefinger for the x- and y-axes respectively.
    The last test measured how much accuracy degraded as shapes required more strokes (i.e., became more complicated).
  • Results - The first test found that people could draw simple shapes and characters with a surprisingly high degree of accuracy: in a test of 6 one-stroke characters and 12 users, only 4 of the possible 72 characters were not recognized as accurate by the computer. In the second test, the researchers were generally disappointed by the results but identified which factors hinder human visuospatial memory. The third test found that the closer to the origin users chose their point, the more accurately they selected the correct coordinate; users also struggled to identify points with negative coordinates. The last test showed that a person's accuracy decreased greatly as more strokes were involved in drawing an object, which was used to project how a person's visuospatial memory fades over time.
  • Contents - The contents of this paper were the experiment and results of users interfacing with Imaginary Interfaces. The authors analyzed how people performed at different tasks and published their results and statistics. Note that the main point of the experiment was not to prove that the device worked; that technology was already available. Instead, it was to show that humans can use such a device without feedback with a high degree of accuracy. The conclusion was that human visuospatial memory is much more capable than past research had suggested, and that technology like this may be a viable option for mobile devices.
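The plane-based tracking described in the Method section can be sketched in a few lines. This is not the authors' code, just a minimal illustration assuming the tracker already supplies the origin of the 'L' and unit vectors along the thumb and forefinger:

```python
import numpy as np

def to_plane_coords(point, origin, x_axis, y_axis):
    """Express a tracked 3D fingertip position in the 2D frame of the 'L'.

    origin: 3D position of the crook of the L (the coordinate origin).
    x_axis, y_axis: 3D unit vectors along the thumb and forefinger.
    The component normal to the plane (the z-coordinate) is discarded,
    mirroring how the system flattens mid-air strokes into 2D shapes.
    """
    v = np.asarray(point, float) - np.asarray(origin, float)
    return float(np.dot(v, x_axis)), float(np.dot(v, y_axis))

# Example: plane at the world origin, thumb along +X, forefinger along +Y.
# A fingertip hovering 0.7 units in front of the plane still maps to (2, 1);
# the depth offset is simply dropped.
x, y = to_plane_coords([2.0, 1.0, 0.7], [0, 0, 0], [1, 0, 0], [0, 1, 0])
```

Because the axes are re-read from the hand on every frame, this same projection also accounts for the user moving or rotating the 'L' mid-drawing, which is the orientation compensation the summary mentions.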
Discussion:
I was personally very excited about this technology. If I were to get into the field of HCI, which is a distinct possibility, this is the area I would like to join. Ever since I read the book Daemon by Daniel Suarez, I have been fascinated by the concept of "d-space" user interfaces: the idea that, with visual feedback from glasses or contact lenses, additional information is projected onto things in the real world, and the user manipulates this "d-space" with hand and body gestures and voice commands. I believe that someday this alternate reality parallel to our own will be realized; the technology in this article is a first stepping stone toward that end. However, regarding visuospatial memory, I believe that for any useful application, humans will always require some sort of feedback, whether visual or touch-based. If we truly wish to create user interfaces without feedback, human imagination and short-term memory (which has been declining in recent years) need to improve drastically. A few seconds (or even a minute) of spatial memory is not enough to finish anything practical: short notes, perhaps, or even a shopping list, but an essay could never be written on something like this.
