Saturday, November 26, 2011

Paper Reading #22: Mid-air Pan-and-Zoom on Wall-sized Displays

References
Mathieu Nancel, Julie Wagner, Emmanuel Pietriga, Olivier Chapuis, and Wendy Mackay "Mid-air pan-and-zoom on wall-sized displays". CHI '11 Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. ACM New York, NY, USA ©2011.

 Author Bios
Mathieu Nancel is a Ph.D. student in HCI at the University Paris-Sud XI.

Julie Wagner is a Ph.D. student at the InSitu lab in Paris.

Emmanuel Pietriga is the interim leader of the INRIA team InSitu, where he is a full-time research scientist.

Olivier Chapuis is a research scientist at LRI. He is also a member of the InSitu research team.

Wendy Mackay is a research director with INRIA Saclay in France. She is in charge of the research group InSitu.

Summary 
  • Hypothesis - The researchers hypothesized that they could improve interaction with wall-sized displays by studying several factors of gesture interaction. They made an individual hypothesis for each factor:
    • Two-handed gestures are more accurate and easier to use than one-handed gestures.
    • Two-handed gestures are faster than one-handed gestures.
    • Users will prefer clutch-free gestures.
    • Linear gestures map best to zooming but will ultimately be slower due to the aforementioned clutching.
    • Gestures using the fingers rather than larger muscle groups will be faster.
    • One-dimensional path gestures will be faster when there is less haptic feedback.
    • Three-dimensional gestures will be more tiring.
  • Method - Their study involved 12 participants, each of whom used every pattern of interaction. The patterns were completed over a number of sessions, preventing inaccurate data caused by exhaustion and memory loss.
  • Results - The results supported the second hypothesis, as well as the fifth and sixth. The fourth hypothesis was refuted: linear gestures actually turned out to be faster than circular gestures. Hypothesis seven was also confirmed. The results for the remaining hypotheses were inconclusive.
  • Content - This paper discusses the best ways a user might interact with a screen too large to interact with directly. The authors formed several hypotheses about how well certain gestures would work and designed an experiment around them to test their key points.
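
To make the mapping question concrete, here is a small illustrative sketch, entirely mine rather than the authors', of how a linear hand displacement or a circular gesture angle might be turned into a zoom factor; the gain constants are invented.

```python
import math

def zoom_from_linear(displacement_m, gain=4.0):
    """Map a 1D hand displacement (metres) onto a multiplicative zoom.
    An exponential mapping keeps it symmetric: pulling back the same
    distance exactly undoes the zoom. Range is limited by arm reach,
    hence the clutching the authors worried about."""
    return math.exp(gain * displacement_m)

def zoom_from_circular(angle_rad, gain_per_turn=2.0):
    """Map a knob-like circular gesture onto zoom. The angle is
    unbounded, so no clutching is ever needed."""
    turns = angle_rad / (2 * math.pi)
    return gain_per_turn ** turns

print(round(zoom_from_linear(0.10), 2))        # 10 cm push -> ~1.49x
print(round(zoom_from_circular(math.pi), 2))   # half a turn -> ~1.41x
```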
 Discussion
This paper was very interesting. I thought their methods may have been a little off, but their results were for the most part expected. I can't say whether or not they thoroughly proved their points, but the paper was well done. Technology like this makes me excited every time I read about it; we keep getting closer to virtual reality rooms and smooth motion gestures for all interactions. I can't wait for the day that I have a display on all of my walls and can change anything I want with a word and a hand gesture.

Paper Reading #21: Human model evaluation in interactive supervised learning

References
Rebecca Fiebrink, Perry R. Cook, and Dan Trueman "Human model evaluation in interactive supervised learning". CHI '11 Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. ACM New York, NY, USA ©2011.

 Author Bios
Rebecca Fiebrink is an assistant professor in Computer Science and affiliated faculty in Music. Until recently she was a postdoc at the University of Washington.

Perry Cook earned his PhD from Stanford University. His research interests lie in physics-based sound synthesis models.

Dan Trueman is a professor at Princeton University.  In the last 12 years he has published 6 papers through the ACM. He is also a musician, primarily with the fiddle and the laptop.

Summary 
  • Hypothesis - The researchers hypothesized that interactive machine learning (IML) would be a useful tool for improving the generic machine learning processes currently in use.
  • Method - The researchers first developed an IML system to help with music composition, called the Wekinator (its core loop is sketched after this list). They then conducted three studies. The first, aimed at improving the system itself, involved several PhD students who used the software while composing their own music and met regularly to discuss their experiences and suggest improvements. The second study involved undergraduates, who used the software in an assignment specifically geared towards supervised learning in interactive music performance systems. In the third and final study, a professional cellist used the system to create a gesture recognition system, with gestures meant to capture musical articulations such as staccato.
  • Results - Although some results were expected, the researchers also ran into a few things they had not anticipated. For one, users tended to overcompensate; that is, they provided more than enough information to make sure the system got it right. Also, the system's performance sometimes surprised users, encouraging them to expand their ideas of the desired goal.
  • Content - The researchers observed users as they interacted with the machine learning software. They found that while users liked cross-validation, most of them preferred direct evaluation, trying the trained model out and judging the results themselves. The IML approach was determined to be useful because it lets users continuously improve the effectiveness of the learning model itself.
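
The Wekinator itself is a Java application built on Weka, but the interaction loop the studies revolve around is easy to sketch. Below is a minimal, hypothetical Python version using scikit-learn; the class and method names are mine, not the authors'.

```python
# A minimal sketch of an interactive machine-learning loop in the spirit of
# the Wekinator; everything here is illustrative, not the paper's code.
from sklearn.neighbors import KNeighborsClassifier

class InteractiveLearner:
    def __init__(self):
        self.examples, self.labels = [], []
        self.model = None

    def add_example(self, features, label):
        """The user demonstrates a gesture/sound and names it."""
        self.examples.append(features)
        self.labels.append(label)

    def train(self):
        """Retrain from scratch; cheap enough to do after every edit."""
        self.model = KNeighborsClassifier(n_neighbors=1)
        self.model.fit(self.examples, self.labels)

    def predict(self, features):
        return self.model.predict([features])[0]

learner = InteractiveLearner()
learner.add_example([0.1, 0.9], "staccato")
learner.add_example([0.8, 0.2], "legato")
learner.train()
print(learner.predict([0.2, 0.8]))   # -> "staccato"
# Direct evaluation: the user plays, inspects the output, and if it is
# wrong, adds corrective examples and retrains -- the loop the studies saw.
```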
 Discussion
This paper was very well done. The experiments were well thought out, carried out, and explained. They proved their hypothesis and were successful in explaining why. Using three independent studies, they were able to compile a large amount of data to use. I think that these results will be very useful, not just in the application they chose but in a widespread realm of problems.

Paper Reading #20: The aligned rank transform for nonparametric factorial analyses using only ANOVA procedures

References
Jacob O. Wobbrock, Leah Findlater, Darren Gergle, and James J. Higgins "The aligned rank transform for nonparametric factorial analyses using only ANOVA procedures". CHI '11 Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. ACM New York, NY, USA ©2011.

 Author Bios
Jacob Wobbrock is an associate professor in the Information School at the University of Washington.  He directs the AIM Research Group which is part of the DUB Group.

Leah Findlater is currently a professor at the University of Washington.

Darren Gergle is an associate professor at the Northwestern University School of Communication.

James Higgins is a professor in the Department of Statistics at Kansas State University.

Summary 
  • Hypothesis - The researchers hypothesized that modifying the Aligned Rank Transform to support an arbitrary number of factors would be useful for researchers in analyzing data.
  • Method - The researchers developed the method for the expanded ART and then created a desktop tool (ARTool) and a Java-based version (ARTWeb). After creating these tools, the researchers reanalyzed three sets of previously published data; this analysis allowed them to show the effectiveness and usability of their software. (The core align-and-rank step is sketched after this list.)
  • Results - The results were positive: reexamining the old studies revealed effects that had not shown up before. In one case, the method surfaced data that could not be examined by a Friedman test. The second case showed how the new system can free analysts from ANOVA's distributional assumptions. And when the last was rerun using the nonparametric ART method, new information was revealed.
  • Content - The authors presented their Aligned Rank Transform (ART) tool, which supports nonparametric analysis of factorial experiments. They discuss the process in detail and show three examples of how it is useful and where it is applicable, demonstrating that the tool can reveal relationships between variables that cannot be seen with other analyses.
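
For readers who, like me, found the statistics dense: the heart of ART is an "align, then rank" preprocessing step before an ordinary ANOVA. Here is a rough sketch of that step for one main effect in a two-factor design, using pandas; the data and column names are invented, and ARTool of course handles every effect automatically.

```python
# A rough sketch of the ART "align then rank" step for one main effect
# in a two-factor design; illustrative data, not the paper's.
import pandas as pd

df = pd.DataFrame({
    "A": ["a1", "a1", "a2", "a2", "a1", "a2"],
    "B": ["b1", "b2", "b1", "b2", "b2", "b1"],
    "Y": [3.0, 5.0, 9.0, 7.0, 4.0, 8.0],
})

grand = df["Y"].mean()
cell = df.groupby(["A", "B"])["Y"].transform("mean")
effect_A = df.groupby("A")["Y"].transform("mean") - grand

# Align: strip out everything except the effect of interest...
df["Y_aligned_A"] = (df["Y"] - cell) + effect_A
# ...then rank the aligned responses (midranks for ties).
df["Y_rank_A"] = df["Y_aligned_A"].rank()

print(df)
# A full-factorial ANOVA is then run on Y_rank_A, but only the F-test
# for A is interpreted; the same cycle repeats for B and for A x B.
```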
 Discussion
Honestly, this paper went way over my head. It did seem obvious to me, however, that the authors were able to effectively support their hypothesis and create a very useful tool for analysts. The amount of information analysts get out of data greatly affects their ability to extrapolate. I thought their examples were well chosen and explained well (even at a broad level) how the ART system can produce more specific and more accurate results.

Paper Reading #19: Reflexivity in Digital Anthropology

References
Jennifer A. Rode "Reflexivity in digital anthropology". UIST '11 Proceedings of the 23rd annual ACM symposium on User interface software and technology.  ACM New York, NY, USA ©2011.

 Author Bios
Jennifer Rode is an assistant professor at Drexel's School of Information. Rode has produced several interface design projects. She received her PhD from the University of California, Irvine.

Summary 
  • Hypothesis - Rode hypothesized that some methods of digital anthropology can be utilized by researchers while conducting field studies.
  • Method - This paper did not contain any user studies, and thus no methods were presented.
  • Results - The author spent most of the paper describing different methods of ethnographic study, namely positivist and reflexive approaches, with definitions drawn from previous research. She then argues why many of these underused methods could be beneficial in digital research.
  • Content - The author discussed the various forms of ethnography and how reflexivity can help the design process in HCI.
 Discussion
This paper seemed to me to be a lot of explanation for not a lot of concept. It wasn't very concise, and the author tended to ramble a bit. It was a bit hard to get through, but it had a good overall concept: don't ignore the users you're developing for. However, she offered no evidence to support her point, and few of the ideas were her own in the first place. Her main point was that bringing together certain already established methodologies can aid HCI.

Paper Reading #18: Biofeedback game design: using direct and indirect physiological control to enhance game interaction

References
Lennart Erik Nacke, Michael Kalyn, Calvin Lough, and Regan Lee Mandryk "Biofeedback game design: using direct and indirect physiological control to enhance game interaction". CHI '11 Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. ACM New York, NY, USA ©2011.
 
Author Bios
Lennart Erik Nacke is an assistant professor for HCI and Game Science at the University of Ontario Institute of Technology.

Michael Kalyn is currently a graduate student in Computer Engineering at the University of Saskatchewan. 

Calvin Lough is a student at the University of Saskatchewan.

Regan Lee Mandryk is an associate professor at the University of Saskatchewan.

Summary 
  • Hypothesis - They hypothesized that they could increase the enjoyment of video games by using physiological input to change the dynamics of the game.
  • Method - The study used three versions of a shooter game: the first was a control, with no physiological input, while the other two used physiological input to augment a game controller (the direct and indirect mappings are sketched after this list). After playing each version, the players filled out a questionnaire about their experiences.
  • Results - The participants preferred physiological input mapped to natural actions, like flexing muscles to get more power out of something (jumping, for instance). They enjoyed the increased level of involvement, but they also expressed concern that it made gameplay more complicated. They commented that it was a novel idea and that some of the sensors had a learning curve; once past that curve, however, the experience was on the whole more rewarding.
  • Content - The authors developed a simple shooter game that integrated physiological input with controllers. The study explored how players learned the new control methods and how effectively they could use them. The authors concluded that physiological sensors can increase the enjoyment of video games, though the indirect controls proved less enjoyable because they did not provide instant feedback.
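
As an illustration of the direct-versus-indirect distinction, here is a toy sketch. It is not the authors' code: the sensor names, ranges, and constants are all invented.

```python
class DirectControl:
    """Direct mapping: flexing harder instantly raises jump power."""
    def __init__(self, emg_max=1.0):
        self.emg_max = emg_max

    def jump_power(self, emg_level):
        return 1.0 + 2.0 * min(emg_level / self.emg_max, 1.0)

class IndirectControl:
    """Indirect mapping: arousal (heart rate) slowly drifts a game
    parameter, so feedback is delayed -- the quality players found
    less satisfying in the study."""
    def __init__(self, resting_hr=70.0):
        self.resting_hr = resting_hr
        self.spawn_rate = 1.0

    def update(self, heart_rate):
        target = max(heart_rate / self.resting_hr, 0.5)
        self.spawn_rate += 0.05 * (target - self.spawn_rate)  # per frame
        return self.spawn_rate

direct = DirectControl()
print(round(direct.jump_power(0.8), 2))   # flex hard -> 2.6x jump

indirect = IndirectControl()
for _ in range(60):                       # one second at 60 fps
    rate = indirect.update(95.0)
print(round(rate, 2))                     # creeps toward ~1.36
```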
 Discussion
This is the direction I've been expecting gaming to go for a long time. They already have certain aspects of physiological feedback in games, such as using music to increase a person's anxiety or excitement levels; it was only a matter of time before they started using physiological input to change the way a character moves or the way something can be used. In particular, breathing too hard in a stealthy game could cause enemies to become aware of you more easily; being more relaxed while sniping could decrease reticle movement, and so on. I'm ready for this type of thing to be implemented on a wide scale.

Paper Reading #17: Privacy Risks Emerging from the Adoption of Innocuous Wearable Sensors in the Mobile Environment

References
Andrew Raij, Animikh Ghosh, Santosh Kumar, and Mani Srivastava. "Privacy risks emerging from the adoption of innocuous wearable sensors in the mobile environment". CHI '11 Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. ACM New York, NY, USA ©2011.

 Author Bios
Andrew B. Raij is a postdoctoral fellow at the University of Memphis.

Animikh Ghosh spent time as a researcher at the University of Memphis.

Santosh Kumar is an associate professor at the University of Memphis.

Mani Bhushan Srivastava is a well known researcher from AT&T Bell Laboratories. He is currently a professor at UCLA.

Summary 
  • Hypothesis - The researchers hypothesized that users' privacy concerns about mobile devices would grow with the increasing number of personal devices that hold personal information, and that concern would be highest when the data at stake is one's own.
  • Method - The researchers divided 66 participants into two groups. One group was monitored for basic information for a week, while the other (the control group) was not monitored at all. Both groups filled out a survey at the beginning of the study to indicate their feelings about potentially private information in the coming week. After the week was over, both groups took a similar survey again; before the second survey, however, the monitored group was shown the data collected on them and the conclusions drawn from it.
  • Results - The second survey clearly indicated that those with a personal stake in the data expressed more concern about privacy than those without, and the monitored group's concern increased over the week. The researchers also found that the level of concern changed with the group of people the information would be shared with, and that it increased when a schedule of behaviors or a timeline was constructed. The two main areas of concern were stress and conversations. Some participants worried that the wrong conclusions would be drawn because observers wouldn't have the whole picture.
  • Content - This paper discussed the growing public concern about privacy risks in the information age. The study's results show that when the stakes are personal, the level of concern rises. The researchers proposed removing as much information from transmissions as possible.
 Discussion
This paper didn't register as a big deal for me. I purposefully keep things I want private off my cell phone and Facebook. I don't post things I don't want random people to see, for it's going to be seen, and I know that. If I have private information on my computer(s), I ensure that it has varying levels of security, depending on how sensitive the information is. It seems to me that, especially nowadays, this study's results should have been relatively obvious. At any rate, if people are that concerned about privacy, they should take steps to protect their information themselves and not demand that larger companies or technology manufacturers be the ones responsible.

Paper Reading #16: Classroom-Based Assistive Technology

References
Meg Cramer, Sen H. Hirano, Monica Tentori, Michael T. Yeganyan, and Gillian R. Hayes. "Classroom-based assistive technology". CHI '11 Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. ACM New York, NY, USA ©2011.

 Author Bios
Meg Cramer and Sen Hirano are both currently graduate students in Informatics at UC Irvine in the School of Information and Computer Science.

Monica Tentori is currently an assistant professor in computer science at UABC in Mexico, and is a post-doc scholar at UC Irvine.
 
Michael T. Yeganyan is an Informatics STAR Group researcher at UC Irvine and holds an MS in Informatics.

Gillian R. Hayes is currently an assistant professor in Informatics in the School of Information and Computer Science at UC Irvine.  She also directs the STAR group.

Summary

  • Hypothesis - The vSked system is an improvement over current systems to help autistic students in school.
  • Method - Development of vSked proceeded in three main stages. During each stage, the teachers and aides were interviewed and asked to comment on the system. The students were not interacted with directly, as they demonstrated minimal communication skills. Several measures were taken into account when evaluating the system, such as the consistency and predictability of the schedule, student anxiety, and teacher awareness. The researchers analyzed the data to ensure that the system was meeting teacher and student needs.
  • Results - The results were highly positive, and the teachers expressed a large amount of surprise at some of them. With the new system, students were able to learn concepts much faster from the images given to them and to answer questions previously deemed too complicated. Students progressed through the day's activities with much less prompting, and they were much more comfortable with the new calendar system.
  • Content - This paper introduced the vSked system, engineered to help autistic students succeed in school. The researchers tested it by introducing it to a class and interviewing the teachers about student progress and how well the students used the system. The results were positive across the board; it was noted that the system was somewhat inflexible, but that it had a lot of room for changes.
 Discussion
Although I am not particularly interested in this kind of technology, this was a large step in a good direction. Helping these students succeed is a great goal. Instead of producing a new type of technology, the researchers aimed to improve existing concepts and implement them in a real-world environment. They were able to show that their system was a large improvement over current systems, and it is highly possible this system will be used widely in the future.

Paper Reading #15: Madgets: actuating widgets on interactive tabletops

References
Malte Weiss, Florian Schwarz, Simon Jakubowski, and Jan Borchers.  "Madgets: actuating widgets on interactive tabletops". UIST '10 Proceedings of the 23rd annual ACM symposium on User interface software and technology.  ACM New York, NY, USA ©2010.

 Author Bios
Malte Weiss is currently a PhD student at the Media Computing Group.  His research focuses on interactive surfaces and tangible user interfaces.

Florian Schwarz is currently an assistant professor of linguistics at the University of Pennsylvania.  He received a PhD in Linguistics from the University of Massachusetts.

Simon Jakubowski is currently a Research Scientist at AlphaFix. He was a research scientist at the University of Texas Medical School in Houston. 

Jan Borchers is currently a professor at RWTH Aachen University.  He received a PhD in Computer Science from Darmstadt University of Technology.

Summary 
  • Hypothesis - The researchers hypothesized that they could create small, lightweight physical widgets to be used on top of an interactive touch display, with the display itself able to modify the position of the widgets.
  • Method - The goal was an actuated interactive tabletop that was low-cost and lightweight. The researchers realized this with magnetic widgets and an electromagnetic array below the screen, using infrared reflectors and sensors to classify a given widget and track its location. By changing the polarities and strengths of the magnets below the display, they are able to move the Madgets along a calculated path (a toy version of this control loop is sketched after this list).
  • Results - The researchers were able to construct their prototypes as well as several other types of widgets. The widgets themselves do not take long to build, although registering new controls can take considerably longer (up to two hours). The developers are working on making the process faster to allow for rapid prototyping.
  • Content - The authors presented Madgets, a method of integrating physical objects with the virtual world. They also demonstrated that the system can do much more with the widgets than just move them: it can alter their properties and make them perform more complicated tasks, such as ringing a bell or acting as a physical button. The paper offers a complete description of how the system works and why it is beneficial.
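
The actuation loop is the clever part, so here is a toy sketch of it. The grid, step size, and function names are mine; the real system modulates coil polarity and PWM strength from the IR-tracked widget position rather than nudging coordinates.

```python
# A toy sketch (mine, not the authors') of moving a magnetic widget across
# an electromagnet grid: at each step, energize the coil nearest the next
# waypoint so the permanent magnet in the widget is pulled toward it.
import math

GRID = [(x, y) for x in range(10) for y in range(10)]   # coil centres, cm

def nearest_coil(point):
    return min(GRID, key=lambda c: math.dist(c, point))

def step_towards(widget_pos, waypoint):
    coil = nearest_coil(waypoint)
    # Stand-in for setting coil strength/polarity: nudge the tracked
    # position a fixed increment toward the energized coil.
    dx, dy = coil[0] - widget_pos[0], coil[1] - widget_pos[1]
    dist = math.hypot(dx, dy) or 1.0
    return (widget_pos[0] + 0.5 * dx / dist, widget_pos[1] + 0.5 * dy / dist)

pos = (0.0, 0.0)
for waypoint in [(3, 0), (3, 4), (7, 4)]:               # calculated path
    while math.dist(pos, waypoint) > 0.6:
        pos = step_towards(pos, waypoint)
print(f"widget parked near {pos}")
```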
 Discussion
All in all, I rather enjoyed this concept. Although it wasn't a research paper per se, it did hold some interesting ideas. While reading it I toyed with the idea of playing chess on a system such as this. It could also be useful for representing a military battle, physically depicting troop movements without pieces being moved inaccurately by hand. It also made me think of the hologram game in the original Star Wars movie (Episode IV), which looked a little like chess but with monsters. Not quite the same concept, as that dealt with holograms, but it did remind me of it.

Paper Reading #14: TeslaTouch: electrovibration for touch surfaces

References
Olivier Bau, Ivan Poupyrev, Ali Israr, and Chris Harrison.  "TeslaTouch: electrovibration for touch surfaces". UIST '10 Proceedings of the 23rd annual ACM symposium on User interface software and technology.  ACM New York, NY, USA ©2010.

 Author Bios
The authors were all researchers at Disney Research, except for Chris Harrison who was a graduate student at Carnegie Mellon.

Summary 


  • Hypothesis - The researchers hypothesized that electrovibration can be used as an effective method for haptic feedback on touch surfaces.
  • Method/Content - Electrovibration was produced by placing a transparent electrode layer between a glass plate and a thin insulation layer. Tests were then run to find the thresholds of what humans could detect with their fingers: ten participants were tested for thresholds in frequency and amplitude and were asked to describe the "feeling" of each level. After the user study was conducted, the researchers developed several applications to show off their findings. (The drive-signal idea is sketched after this list.)
  • Results - Users found that higher frequencies felt smoother and lower frequencies rougher. The effect of amplitude depended upon the underlying frequency: increasing the amplitude of a high-frequency signal increased the perceived smoothness, whereas a low-amplitude, low-frequency vibration induced a perception of stickiness. It was also noted that while users could feel the sensation of friction, they were able to perceive the vibration at the same time.
  • Content - The paper introduced TeslaTouch, a method of tactile feedback that requires no moving or mechanical parts. The authors tested different levels of electrovibration and categorized user perceptions, compared the new technology with existing mechanical feedback and discussed its advantages, and then discussed possible practical uses.
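
As a sketch of what electrovibration means in signal terms: a periodic high-voltage waveform is applied to the electrode layer, with frequency steering the rough/smooth percept and amplitude its strength. The numbers below are illustrative, not the paper's measured thresholds.

```python
# Hypothetical texture presets driving a sinusoidal electrode signal.
import math

def drive_sample(t, freq_hz, amplitude_v):
    """One sample of the drive signal at time t (seconds)."""
    return amplitude_v * math.sin(2 * math.pi * freq_hz * t)

def texture(name):
    # Follows the reported trend: high frequency ~ smooth, low ~ rough.
    # The exact frequency/voltage pairs here are invented.
    presets = {"smooth": (400.0, 100.0), "rough": (80.0, 100.0)}
    return presets[name]

freq, amp = texture("rough")
samples = [drive_sample(i / 8000.0, freq, amp) for i in range(8)]
print([round(s, 1) for s in samples])
```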
 Discussion
This technology is very close in concept to an idea I had a long time ago for a screenless feedback device, so this paper was very relevant to me; it gave me a few ideas. Beyond that, I loved the idea of this technology; I believe it can be very useful, and it moves us further towards virtual reality environments. Can you imagine a full-body suit of electrovibration that let you feel different sensations on different parts of your body? It would be an amazing experience.

Paper Reading #8: Gesture Search: A Tool for Fast Mobile Data Access

References
Yang Li.  "Gesture search: a tool for fast mobile data access". UIST '10 Proceedings of the 23rd annual ACM symposium on User interface software and technology.  ACM New York, NY, USA ©2010.

 Author Bios
Yang Li received his Ph.D. from the Chinese Academy of Sciences and conducted postdoctoral research at the University of California at Berkeley. Li helped found the Design Use Build community while a professor at the University of Washington. He is now a Senior Research Scientist at Google.

Summary 

  • Hypothesis - Yang Li theorized that some methods of phone input are inappropriate in certain situations: voice commands, for example, are generally frowned upon in a quiet environment, and in other situations touch-based typing is difficult. The hypothesis is that Gesture Search is a better alternative for accessing mobile data.
  • Method/Content - Li developed an Android app and made it available within the company (Google). The application logged usage data, which after a certain period was used to analyze performance. Users were not required to use the app at any time and could stop using it whenever they wanted; however, when choosing which data to analyze, Li required that the user had used it at least once per week. He then set up a study in which he asked users to perform certain actions on a mobile device using standard GUI interfaces. They were not told what the study was about, so as to create a natural environment.
  • Results - After comparing the two types of mobile interaction, the hypothesis was supported in certain situations. Users typically used Gesture Search for contacts, occasionally for apps, and very rarely for web pages. The majority of searches completed quickly, in under 3 seconds. The average rating for the app was 4/5 stars, with few outliers. Most users were happy with the app because they could find information without going through a hierarchy of menus.
  • Content - Yang Li developed an Android application that uses gesture input to search a smartphone for contacts, music, and other such information. It resolves several problems of ambiguity, for instance using a time-out to decide whether a new stroke still belongs to the same letter. It also gives search results weights: when something is selected, its weight increases, and with time weights fade so stale items no longer show at the top of the results (this scheme is sketched below). After developing the application, Li studied people's usage of it and compared it to GUI-based operation.
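
The weighting scheme is easy to picture in code. This is my own reconstruction, not Li's implementation; the decay constant and data structures are invented.

```python
# Select-to-boost, decay-over-time ranking, as the summary describes it.
import time

class SearchIndex:
    def __init__(self, decay_per_day=0.9):
        self.weights = {}            # item -> (weight, last_selected)
        self.decay = decay_per_day

    def select(self, item):
        w, _ = self.weights.get(item, (0.0, 0.0))
        self.weights[item] = (w + 1.0, time.time())

    def score(self, item):
        w, last = self.weights.get(item, (0.0, time.time()))
        days = (time.time() - last) / 86400.0
        return w * (self.decay ** days)   # exponential fade

    def rank(self, matches):
        return sorted(matches, key=self.score, reverse=True)

idx = SearchIndex()
idx.select("Mom")
idx.select("Mom")
idx.select("Matt")
print(idx.rank(["Matt", "Mom", "Maria"]))   # Mom first, Maria last
```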
 Discussion
I actually downloaded the app and used it for a while myself. It was useful occasionally, but mostly only when I was holding something in one hand. However, the biggest thing I didn't like about it was that it didn't search through files. That was the one thing I wanted it to do; finding a contact for me takes about 5 seconds anyway. However, going into a file manager, navigating through folders and scrolling through files can take a good bit of time, especially when there are many small files in the same folder. It seems to me that it would not take a huge amount of effort to extend this application to search through the phone's memory, so I was disappointed to find that it did not support this functionality.

Paper Reading #7: Performance Optimizations of Virtual Keyboards for Stroke-Based Text Entry on a Touch-Based Tabletop

References
Jochen Rick.  "Performance optimizations of virtual keyboards for stroke-based text entry on a touch-based tabletop". UIST '10 Proceedings of the 23rd annual ACM symposium on User interface software and technology.  ACM New York, NY, USA ©2010.

 Author Bios
Jochen Rick received his Ph.D. from Georgia Tech. He is currently a professor at Saarland University.

Summary 

  • Hypothesis - The layout of a keyboard drastically influences user performance when using stroke-based input.
  • Method/Content - The author first created a mathematical model of user input performance in order to compare different keyboard layouts. He built the model from a user study that collected data such as the speed of strokes in different directions and turning speed, which he could then represent visually. Once he had the model, he fitted constants for each equation so that it scaled well, then applied the equations both to newly created keyboards based on this information and to the standard layouts in use (such as QWERTY and Dvorak). (A simplified cost model of this kind is sketched after this list.)
  • Results - The results were as expected: the optimized keyboards outperformed the standard key layouts, although the standard layouts remain effective for tap-typing. He noted that this was expected, as those layouts were designed for typing with ten fingers, whereas the compact layouts were designed specifically with one-fingered strokes in mind.
  • Content - The author argues that stroke-based input needs more optimized keyboard layouts. He goes into the history of keyboard layouts and the reasons behind the most common ones, then develops a model with which to compare layout efficiency. Using this model, he compares optimized keyboards to current keyboards and proposes new layouts that show overall improvement over current ones.
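
Here is a much-simplified sketch of that kind of evaluation model: score a layout by summing a distance-based cost over successive key pairs in a word. Rick's actual model also accounts for turning and uses constants fitted from his study; the constants and key geometry below are invented.

```python
# Toy layout-scoring model: lower total cost means faster stroking.
import math

QWERTY_ROWS = ["qwertyuiop", "asdfghjkl", "zxcvbnm"]
KEY_POS = {ch: (col + 0.5 * row, row)             # staggered rows
           for row, line in enumerate(QWERTY_ROWS)
           for col, ch in enumerate(line)}

def stroke_time(word, key_pos, a=0.05, b=0.1):
    """Sum a Fitts-like cost a + b*log2(d + 1) over successive keys."""
    total = 0.0
    for k1, k2 in zip(word, word[1:]):
        d = math.dist(key_pos[k1], key_pos[k2])
        total += a + b * math.log2(d + 1)
    return total

for w in ["the", "hello", "minimum"]:
    print(w, round(stroke_time(w, KEY_POS), 3))
# An optimizer would permute letter positions to minimize the
# corpus-weighted average of this cost, yielding improved layouts.
```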
 Discussion
This paper seems pretty intuitive at every point, yet it appears no one had done a study specifically on it until now. I'm all about efficiency, though, so it held my interest pretty well. I personally tap-type, so I can't relate to stroke-based typing; however, if a good keyboard layout were implemented for stroke typing, I might give it a try. It originally seemed to me that if I were going to use a standard keyboard layout, why use a different input method? I was especially interested in the hexagonal layout; it is very different from the norm and looks fun to try.

Paper Reading #6: TurKit: Human Computation Algorithms on Mechanical Turk

 References
Greg Little, Lydia B. Chilton, Max Goldman, and Robert C. Miller.  "TurKit: human computation algorithms on mechanical turk". UIST '10 Proceedings of the 23rd annual ACM symposium on User interface software and technology.  ACM New York, NY, USA ©2010.

 Author Bios
Greg Little is a graduate of MIT.

Lydia Chilton is a graduate student at the University of Washington.  She has interned for Microsoft Research in Beijing.  She is also a graduate of MIT.

Max Goldman is a professor at MIT and is part of the User Interface Design Group.

Rob Miller is also a professor at MIT.  Miller is the leader of the User Interface Design Group.

Summary 

  • Hypothesis - The researchers hypothesize that TurKit can aid the development of algorithmic tasks for mTurk at a modest cost in efficiency.
  • Method/Content - They based their design on what they call the "crash and re-run" paradigm, essentially persistent memoization: expensive functions run only once, and subsequent calls to the same function simply retrieve the result stored in a database. Thus when the program crashes, the user can re-run it and pick and choose which functions to run again, without needing to redo the entire thing. (A minimal sketch of the idea follows this list.)
  • Results - The results were positive overall. The main complaint was that TurKit scripts need to be deterministic, for if a script changed with different inputs it would need to be run again anyway. Another complaint was that some users did not know about certain features, but this may be because they were using TurKit early in its development, before those features had been implemented. The researchers did, however, find that the running time of the TurKit script itself is negligible next to nearly all of the human function calls.
  • Content - TurKit is a tool that automates mTurk for ease of use and repetition. It allows users to specify which parts of the program to rerun in the event of a crash. The paper goes into depth on several of the features and describes user reactions to TurKit.
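
The crash-and-re-run idea is compact enough to sketch. Below is a hypothetical Python analogue (TurKit itself is JavaScript): a decorator memoizes each completed call to a JSON file, so a re-run after a crash replays finished work instantly. Note that the script must be deterministic, which is exactly the limitation users complained about.

```python
import functools, json, os

DB = "turkit_trace.json"                      # hypothetical store
trace = json.load(open(DB)) if os.path.exists(DB) else {}

def once(fn):
    """Memoize fn persistently; a deterministic script reaches the
    same calls in the same order on every re-run."""
    @functools.wraps(fn)
    def wrapper(*args):
        key = f"{fn.__name__}:{json.dumps(args)}"
        if key not in trace:                  # first run: do the real work
            trace[key] = fn(*args)
            with open(DB, "w") as f:          # persist before moving on
                json.dump(trace, f)
        return trace[key]                     # after a crash: instant replay
    return wrapper

@once
def ask_worker(question):
    # Stands in for posting a HIT and waiting hours for a human answer.
    return input(question + " ")

first = ask_worker("Describe this image:")
print(ask_worker("Improve this description: " + first))
```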
 Discussion
The researchers successfully developed a useful tool to help with mTurk. Its crash-and-re-run design paradigm complements the slow human function runtimes very well. They did state, however, that while it is useful for research, it is doubtful that it will be useful in any large-scale project. Whether human computational resources are truly valuable is a question for mTurk itself, but TurKit is indeed a useful tool to use alongside it. I hope they continue developing TurKit so they can scale the tool for large-scale projects.

Paper Reading #5: A Framework for Robust and Flexible Handling of Inputs with Uncertainty

References
Julia Schwarz, Scott Hudson, Jennifer Mankoff, and Andrew D. Wilson.  "A framework for robust and flexible handling of inputs with uncertainty". UIST '10 Proceedings of the 23rd annual ACM symposium on User interface software and technology.  ACM New York, NY, USA ©2010.

 Author Bios
Julia Schwarz is a Ph.D. student at Carnegie Mellon University.

Scott Hudson is a professor at Carnegie Mellon University, where he is the founding director of the HCII PhD program.

Jennifer Mankoff is a professor at the Human Computer Interaction Institute at Carnegie Mellon University.  Mankoff earned her PhD at the Georgia Institute of Technology.

Andrew Wilson is a senior researcher at Microsoft Research.  Wilson received his Ph.D. at the MIT Media Laboratory and researches new gesture-related input techniques.
Summary 
  • Hypothesis - The researchers hypothesized that retaining the uncertainty of a gesture throughout the interpretation process would give more accurate results.
  • Method/Content - They created several examples to test their framework. The framework works by sending the user input, along with information about it, to all possible recipients of the gesture, calculating for each the probability that it is the correct choice. The evaluation is lazy; that is, it waits until the last possible moment to decide which action to take. Once the gesture or gesture sequence is completed (signaled by specific actions, such as lifting a finger off the screen or a pause of a specific length), the framework decides by probability which action to take. One of the examples they used was voice recognition, where 'q' and '2' sound the same. (A toy version of this dispatch is sketched after this list.)
  • Results - All of their tests came out positive, finding that the system increased the accuracy of interpretation by a large margin. With conventional motion gesture recognition, four movement-impaired subjects had between 5 and 20% of all inputs interpreted incorrectly; with the new system, fewer than 3% were misinterpreted for all four of them.
  • Content - The researchers first discuss the limitations of current gesture recognition systems. They put forth the claim that discarding uncertainty at the beginning of the pipeline greatly increases the chances of misinterpreting a gesture. They then created a system that keeps these uncertainties and the associated information. Using this new system, they tested several participants and compared it with the current approach. The hypothesis was supported, and they go on to discuss possible future applications.
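
A toy version of probability-weighted, lazy dispatch might look like the following; the structure and names are mine, not the authors' framework.

```python
# Deliver an uncertain touch to every plausible target with a likelihood
# attached; commit to an action only when the interaction ends.
import math

def gaussian(d, sigma=20.0):
    return math.exp(-(d * d) / (2 * sigma * sigma))

class Button:
    def __init__(self, name, x, y):
        self.name, self.x, self.y = name, x, y

def dispatch(touch, buttons):
    """Return every candidate with a normalized probability."""
    scores = {b: gaussian(math.dist((b.x, b.y), touch)) for b in buttons}
    z = sum(scores.values()) or 1.0
    return {b: s / z for b, s in scores.items()}

ui = [Button("Save", 100, 100), Button("Delete", 130, 100)]
candidates = dispatch((112, 98), ui)          # ambiguous tap between both

# ...later events would refine these weights; only on finger-up commit:
winner = max(candidates, key=candidates.get)
print(winner.name, round(candidates[winner], 2))
```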
 Discussion
The developers' insight was that rather than getting rid of uncertainties at the beginning, when they have nothing else to go on, a system should wait to resolve them until the end, when something inputted later may narrow down the choices. This is a surprisingly new concept that has not been widely implemented yet. Especially in today's world, where screens are getting smaller and touch interaction is the norm, being able to select actions based upon probability is a must. This would be a great thing to integrate into many systems in use today, like smart phones and tablet PCs.

Paper Reading #4: Gestalt

References
Kayur Patel, Naomi Bancroft, Steven M. Drucker, James Fogarty, Andrew J. Ko, and James Landay "Gestalt: integrated support for implementation and analysis in machine learning". UIST '10 Proceedings of the 23rd annual ACM symposium on User interface software and technology.  ACM New York, NY, USA ©2010.

 Author Bios
Kayur Patel is a PhD student at the University of Washington specializing in machine learning.

Naomi Bancroft is a senior undergraduate researcher at the University of Washington. Her interests lie in HCI.

Steven M. Drucker is a Principal Researcher at Microsoft Research who specializes in HCI. He received his PhD from MIT.

James Fogarty is currently an assistant professor at UW. His research focuses on HCI and ubiquitous computing. He received his PhD from Carnegie Mellon.

Andrew J. Ko is also currently an assistant professor at UW. His research focuses on the “human aspects of software development”. He also received his PhD from Carnegie Mellon.

James A. Landay is a professor at UW whose research focuses on automated usability evaluation, demonstrational interfaces, and ubiquitous computing. He received his PhD from Carnegie Mellon as well.

Summary 
  • Hypothesis - A general-purpose machine learning tool that allows developers to analyze the information pipeline will lead to greater efficiency and fewer errors.
  • Method/Content - The researchers created two problems, one involving movie reviews and one involving gesture recognition. Eight testers were then given a program for each problem; each program had five bugs in it, and within an hour they were asked to find and fix as many as they could. One of the tools used was the newly developed Gestalt framework; the other was a customized version of Matlab. Each participant was asked to solve each problem with each tool (four tests in all).
  • Results - The results showed that participants were able to find significantly more errors while using the Gestalt framework. Some even tried to create Gestalt functionality within Matlab. All eight of the users preferred Gestalt over Matlab, and most of them stated that they would likely benefit from using Gestalt in their work.
  • Content - This paper presented Gestalt, a new tool for developers of machine learning systems. The authors conducted a user study comparing it with similar software and found it to be a good tool, then discussed its strengths and weaknesses. Its main strength lies in letting users view the information pipeline (sketched below).
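
My reading of Gestalt's central contribution, sketched hypothetically: make each pipeline stage a named step whose intermediate output is kept around for inspection. Everything below is my own illustration, not Gestalt's API.

```python
# A toy inspectable classification pipeline.
class Pipeline:
    def __init__(self, *steps):
        self.steps = steps                    # [(name, function), ...]
        self.snapshots = {}                   # name -> intermediate output

    def run(self, data):
        for name, fn in self.steps:
            data = fn(data)
            self.snapshots[name] = data       # keep for inspection
        return data

pipe = Pipeline(
    ("tokenize",  lambda text: text.lower().split()),
    ("featurize", lambda toks: {t: toks.count(t) for t in toks}),
    ("classify",  lambda f: "positive" if f.get("great") else "negative"),
)
print(pipe.run("A great, great movie"))
# Debugging a misclassification starts by reading
# pipe.snapshots["featurize"] instead of re-running the experiment blind.
```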
 Discussion
Although a general-purpose tool cannot necessarily perform all of the same tasks as well as a domain-specific tool, it is often flexible enough to still be powerful. Gestalt seems as if it has a good ways to go before it sees general use, but the results were promising. Throughout the paper, the greatest thing I saw in the framework was its ability to let you view (and manipulate) the information pipeline. This is key for many applications, especially machine learning. Although their testing methods were not robust, they did serve to show a general sentiment of how Gestalt can be useful to developers.