Saturday, November 26, 2011

Paper Reading #5: A Framework for Robust and Flexible Handling of Inputs with Uncertainty

References
Julia Schwarz, Scott Hudson, Jennifer Mankoff, and Andrew D. Wilson. "A Framework for Robust and Flexible Handling of Inputs with Uncertainty." In Proceedings of the 23rd Annual ACM Symposium on User Interface Software and Technology (UIST '10). ACM, New York, NY, USA, 2010.

Author Bios
Julia Schwarz is a Ph.D. student at Carnegie Mellon University.

Scott Hudson is a professor at Carnegie Mellon University, where he is the founding director of the HCII PhD program.

Jennifer Mankoff is a professor at the Human Computer Interaction Institute at Carnegie Mellon University.  Mankoff earned her PhD at the Georgia Institute of Technology.

Andrew Wilson is a senior researcher at Microsoft Research.  Wilson received his Ph.D. at the MIT Media Laboratory and researches new gesture-related input techniques.
Summary 
  • Hypothesis - The researchers hypothesized that preserving the uncertainty of an input throughout the interpretation process, rather than resolving it immediately, produces more accurate results.
  • Method/Content - They built several example applications to test their framework. The framework works by delivering the user's input to every possible recipient of the gesture, along with information about the input; for each recipient, it also calculates the probability that that recipient is the correct choice. The evaluation is lazy; that is, the system waits as long as possible before committing to an action. Once the gesture or gesture sequence is complete (signaled by a terminating event, such as lifting a finger off the screen or a sufficiently long period of silence), it commits to the most probable interpretation. One of their examples involved voice recognition, where spoken inputs such as 'q' and '2' sound alike and remain ambiguous until later input disambiguates them.
  • Results - All of their tests supported the hypothesis: the system increased interpretation accuracy by a large margin. With conventional gesture recognition, four motor-impaired participants had between 5 and 20% of their inputs interpreted incorrectly. With the new system, fewer than 3% of inputs were misinterpreted for all four participants.
  • Content - The researchers first discuss the limitations of current gesture recognition systems. They argue that discarding uncertainty at the beginning of the input pipeline greatly increases the chance of misinterpreting a gesture. They then created a system that retains these uncertainties and their associated information. Using this new system, they tested several participants and compared it against the conventional approach. Their hypothesis was supported, and they go on to discuss possible future applications.
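The dispatch-and-defer idea described above can be sketched as toy code. This is a minimal illustration, not the paper's actual framework; all class and method names here are invented for the example:

```python
from dataclasses import dataclass

@dataclass
class Interpretation:
    target: str        # the widget or action that would receive the event
    probability: float # likelihood that this is what the user intended

class UncertainInputDispatcher:
    """Toy model: deliver input to all possible recipients, defer the decision."""

    def __init__(self):
        self.alternatives: list[Interpretation] = []

    def deliver(self, interpretations):
        # Forward the event to every possible recipient, keeping each
        # alternative alive together with its probability instead of
        # committing to one immediately.
        self.alternatives.extend(interpretations)

    def finalize(self):
        # On a terminating event (e.g. the finger lifting off the screen),
        # commit to the most probable surviving interpretation.
        return max(self.alternatives, key=lambda i: i.probability).target

# An ambiguous touch that might have hit either of two buttons:
dispatcher = UncertainInputDispatcher()
dispatcher.deliver([Interpretation("small_button", 0.3),
                    Interpretation("large_button", 0.7)])
print(dispatcher.finalize())  # large_button
```

The key design point the sketch illustrates is that nothing is decided inside `deliver`; the ambiguity survives until `finalize`, so later input could in principle reweight or prune the alternatives before a choice is made.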
 Discussion
The researchers' insight was that instead of discarding uncertainty at the start, when the system has nothing else to go on, resolution should be delayed until the end, when later input may have narrowed down the choices. This approach has seen surprisingly little adoption so far. Especially today, when screens are getting smaller and touch interaction is the norm, being able to select actions based on probability is a must. This would be a great thing to integrate into many systems in use today, such as smartphones and tablet PCs.
