Thursday, December 8, 2011

Paper Reading #27: Sensing cognitive multitasking for a brain-based adaptive user interface

References
Erin Treacy Solovey, Francine Lalooses, Krysta Chauncey, Douglas Weaver, Margarita Parasi, Matthias Scheutz, Angelo Sassaroli, Sergio Fantini, Paul Schermerhorn, Audrey Girouard, Robert J.K. Jacob. "Sensing cognitive multitasking for a brain-based adaptive user interface". CHI '11 Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. ACM, New York, NY, USA ©2011.

 Author Bios
All of the people involved in this research paper are affiliated with Tufts University; everyone on the list is a graduate student, professor, or researcher at or for the university.

Summary 
  • Hypothesis - The hypothesis is that an fNIRS sensor can capture the tasking state of a human mind as effectively as an fMRI machine. The researchers also hypothesized that a system built on this tool could aid users in their tasks.
  • Method - The experiment had participants interact with a robot simulation in which they sorted rocks on Mars. Based on the pattern of rock classifications, the researchers divided tasking into three groups: delay, dual-tasking, and branching. They also tested whether the fNIRS tool would truly be as effective as fMRI: they asked users to perform varying tasks and analyzed how often the fNIRS tool correctly classified their tasking state (a rough sketch of this kind of classification appears after this summary).
  • Results - The fNIRS was able to correctly classify some tasking states, but it was not very accurate. The researchers noted certain modifications that could improve this statistic. They also created a system based on this data in order to test a system's ability to help users multitask.
  • Content - The researchers compared the fNIRS machine to an fMRI machine and noted that while fNIRS is neither as accurate nor as powerful, it is much more feasible for real-world applications. They then created a system that uses fNIRS to help users with their multitasking. The system showed some good results, hinting that it may be a good direction to follow.
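
To make the classification step concrete, here is a minimal sketch of how windowed fNIRS readings might be turned into tasking-state labels. Everything here is illustrative: the features (mean, slope, variance), the window length, and the k-nearest-neighbor classifier are my assumptions, not the pipeline the researchers actually used.

    # Hypothetical sketch: classifying multitasking state (delay / dual-task /
    # branching) from windowed fNIRS signals. Features and classifier are
    # assumptions for illustration, not the paper's actual pipeline.
    import numpy as np
    from sklearn.neighbors import KNeighborsClassifier

    def window_features(signal, window=160):
        """Summarize each window of a 1-D oxygenation signal with simple
        statistics (mean, slope, variance)."""
        feats = []
        for start in range(0, len(signal) - window + 1, window):
            w = signal[start:start + window]
            slope = np.polyfit(np.arange(window), w, 1)[0]
            feats.append([w.mean(), slope, w.var()])
        return np.array(feats)

    # Toy training data: one fNIRS channel per labeled trial.
    rng = np.random.default_rng(0)
    trials = [rng.normal(loc=m, scale=0.1, size=160) for m in (0.0, 0.5, 1.0) * 10]
    labels = ["delay", "dual-task", "branching"] * 10

    X = np.vstack([window_features(t) for t in trials])
    clf = KNeighborsClassifier(n_neighbors=3).fit(X, labels)

    # Classify a new window of readings.
    print(clf.predict(window_features(rng.normal(0.5, 0.1, 160))))
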
 Discussion
This paper held my interest well. I (attempt to) multitask all the time, and frequently am bogged down by the time it takes to switch between the proper state of mind for each task. Going from phone to programming to email and back to programming takes much longer than it could, because I often do not wait to get to a stopping point before switching tasks. As I just did. And I lost my train of thought...well I suppose this blog's done.

Paper Reading #26: Embodiment in Brain-Computer Interaction

References
Kenton O'Hara, Abigail Sellen, Richard Harper. "Embodiment in Brain-Computer Interaction". CHI '11 Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. ACM, New York, NY, USA ©2011.

 Author Bios
Kenton O'Hara is a Senior Researcher at Microsoft Research.
Abigail Sellen is a Principal Researcher at Microsoft Research. She received her Ph.D. from the University of California, San Diego.
Richard Harper is a Principal Researcher at Microsoft Research. He received his Ph.D. from Manchester.


Summary 
  • Hypothesis - The researchers hypothesized that in brain-computer interaction, the whole body and the surrounding social interactions are just as important to study as the brain signals themselves.
  • Method - The study used a game called MindFlex. Three groups were given the game to take home and asked to video themselves playing it. They were encouraged to bring in others, such as extended family and friends, to play as well, and they were told to choose where and when to play, allowing for a natural environment.
  • Results - The analysis revealed many behaviors that had no direct effect on the game. For example, body position played a large role in how users interacted with the game, even though it had no influence on the outcome. Another was that users frequently resorted to imagery, such as picturing or telling the ball to go up, when all they actually needed to do was concentrate harder.
  • Content - The researchers analyzed the interactions between players, and between player and game, in order to better understand how brain-computer interaction works. They told the different groups to play the game while acting naturally, in order to get a representation that was as little skewed as possible, and they observed many patterns common across players.
 Discussion
I thought that the game itself was interesting, but found that the paper didn't seem to hold much application. It more or less argued that we need to study brain-computer interaction more in order to expand the field, which seems pretty obvious to me. However, the paper was well done and I enjoyed the experiment itself. Some of the observations they made surprised me. I would like to play that game, though I'm not sure I would do very well; unlike some people, I can't ever stop thinking and can't "turn my brain off".

Paper Reading #25: TwitInfo: Aggregating and Visualizing Microblogs For Event Exploration

References
Adam Marcus, Michael S. Bernstein, Osama Badar, David R. Karger, Samuel Madden, Robert C. Miller. "TwitInfo: Aggregating and Visualizing Microblogs For Event Exploration". CHI '11 Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. ACM, New York, NY, USA ©2011.

 Author Bios
Adam Marcus is a graduate student at MIT. He received his bachelor's from Rensselaer Polytechnic Institute.

Michael S. Bernstein's research interests lie in social computing and crowdsourcing.

Osama Badar is a graduate student at MIT.

David R. Karger used to work for Google, and is now a part of the AI lab at MIT.

Samuel Madden is an associate professor at MIT. He has worked on systems for mTurk.

Robert C. Miller is an associate professor at MIT. He currently leads the User Interface Design Group.

Summary 
  • Hypothesis - A study of massive numbers of microblog posts, in this case tweets, can provide accurate feedback on major events in real time.
  • Method - The researchers developed TwitInfo, which analyzes all posts carrying specific tags set for an event, such as a World Cup game. They developed a user interface that was intuitive for users and analyzed events in real time, flagging spikes in post volume as likely sub-events (a sketch of this kind of peak detection appears after this summary). They then evaluated its usefulness by having average users test it, along with a professional journalist.
  • Results - The results were favorable. The evaluation showed that TwitInfo was able to accurately identify when events occurred. It was also able to analyze, at a basic level, people's reactions to those events: during a World Cup game, for example, it could show where people were generally happy or generally unhappy about certain events, such as goals. The journalist maintained that this knowledge was too shallow to rely on alone, but that it was a useful tool for gaining a basic understanding at a higher level.
  • Content - The paper presented a tool for analyzing Twitter data to gain accurate information on world events. It was able to do so rather successfully, and users generally gave positive feedback. The limitations of this system were that it was not able to surface all of the major events, such as a yellow card in the World Cup game, and that the information gathered is too shallow to use as a sole source of data.
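
The spike detection itself can be sketched in a few lines. The paper adapts the running mean-and-deviation update from TCP's round-trip-time estimator to flag bins whose tweet counts jump above the recent trend; the version below is a rough approximation, and the smoothing constant, threshold, and initial deviation are my guesses rather than the paper's values.

    # Rough sketch of TwitInfo-style peak detection over binned tweet counts.
    def detect_peaks(counts, alpha=0.125, threshold=3.0):
        """Return indices of per-minute bins whose counts spike above trend."""
        mean = counts[0]
        meandev = 1.0  # start nonzero to damp early false alarms
        peaks = []
        for i, c in enumerate(counts[1:], start=1):
            if abs(c - mean) / meandev > threshold and c > mean:
                peaks.append(i)
            # Exponentially weighted updates, as in TCP's RTT estimation.
            meandev = alpha * abs(c - mean) + (1 - alpha) * meandev
            mean = alpha * c + (1 - alpha) * mean
        return peaks

    counts = [3, 4, 3, 5, 4, 40, 55, 30, 6, 4, 3]  # a goal around bin 5
    print(detect_peaks(counts))  # -> [5, 6]
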
 Discussion
This tool impressed me. I rather liked how it was developed, and it was pretty fun to look at the UI (while it was still up and working). I liked how it was able to analyze the massive amounts of data in real time. The sentiment calculation left something to be desired, but such is our current technology. I particularly liked the World Cup game, as it was interesting to see which events were portrayed accurately and what went undetected.

Paper Reading #24: Gesture avatar: a technique for operating mobile user interfaces using gestures

References
Hao Lu, Yang Li. "Gesture avatar: a technique for operating mobile user interfaces using gestures". CHI '11 Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. ACM, New York, NY, USA ©2011.

 Author Bios
Yang Li received his Ph.D. from the Chinese Academy of Sciences and conducted postdoctoral research at the University of California at Berkeley. Li helped found the Design Use Build (DUB) community while a professor at the University of Washington. He is now a Senior Research Scientist at Google.

Hao Lu is a graduate student at the University of Washington. He is a member of the DUB group.

Summary 
  • Hypothesis - The researchers had three hypotheses:
    • Gesture Avatar would be slower than Shift on large targets, but faster on small ones.
    • Gesture Avatar will be less error-prone than Shift.
    • The error rate for Gesture Avatar will not be affected as much by walking as Shift's.
  • Method - Participants tested both systems; half of them learned Gesture Avatar first, half Shift. The researchers measured the time from gesture to selection under many variables, such as walking versus sitting, the number of repeated letters on screen, and the size of the targets. They then compared the results for Gesture Avatar and Shift.
  • Results - The results confirmed hypothesis 1: Shift was much faster on large targets, but much slower on small ones. The error rate for Gesture Avatar remained mostly constant, while Shift's went up as targets became smaller. The third hypothesis was also confirmed: Gesture Avatar's performance remained constant between sitting and walking, while Shift's decreased significantly. Only one participant in the study preferred Shift over Gesture Avatar.
  • Content - The paper presented the implementation of Gesture Avatar, a technique geared toward minimizing errors in touch-screen target selection, along with minor adjustments and modifications to the system that are possible and may be desirable. The researchers built their system and tested it against Shift to probe its strengths and weaknesses. The results matched their hypotheses, and Gesture Avatar was received well overall (a sketch of the underlying matching idea appears after this summary).
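
At its core, the technique has to decide which on-screen target the user meant when they drew a letter near it. One plausible way to combine a character recognizer's shape score with proximity is sketched below; the Gaussian weighting and its width are my assumptions, not necessarily the paper's exact model. Once a target is chosen, a large avatar is bound to it, so the final interaction happens on a big object instead of a tiny link.

    import math

    # Hypothetical Gesture Avatar-style matching: shape score times proximity.
    def pick_target(gesture_center, letter_scores, targets, sigma=120.0):
        """targets: list of (label, x, y) in pixels; letter_scores: label ->
        shape-match score in [0, 1] from any character recognizer."""
        gx, gy = gesture_center
        best, best_score = None, -1.0
        for label, x, y in targets:
            dist2 = (x - gx) ** 2 + (y - gy) ** 2
            proximity = math.exp(-dist2 / (2 * sigma ** 2))  # nearer is likelier
            score = letter_scores.get(label, 0.0) * proximity
            if score > best_score:
                best, best_score = (label, x, y), score
        return best

    links = [("a", 40, 60), ("b", 55, 62), ("a", 300, 400)]
    scores = {"a": 0.9, "b": 0.4}  # the stroke looks most like an "a"
    print(pick_target((50, 58), scores, links))  # -> ('a', 40, 60), the nearby "a"
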
 Discussion
I appreciated the clarity of this paper, and also how relevant its application is to the general populace. I personally have some trouble (having fat fingers) selecting small text in a web browser. I have not used Gesture Avatar yet but may pick it up, as it seems others have had good results. I don't particularly like letter-based gestures as a whole, since they can fail to recognize letters correctly. However, given the relative simplicity of this system's use, gesture errors may be minimized because it does not require as much input.

Saturday, November 26, 2011

Paper Reading #22: Mid-air Pan-and-Zoom on Wall-sized Displays

References
Mathieu Nancel, Julie Wagner, Emmanuel Pietriga, Olivier Chapuis, and Wendy Mackay. "Mid-air pan-and-zoom on wall-sized displays". CHI '11 Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. ACM, New York, NY, USA ©2011.

 Author Bios
Mathieu Nancel is a Ph.D. student in HCI at the University Paris-Sud XI.

Julie Wagner is a Ph.D. student at the InSitu lab in Paris.

Emmanuel Pietriga is the interim leader of the INRIA team InSitu, where he is a full-time research scientist.

Olivier Chapuis is a research scientist at LRI. He is also a member of the InSitu research team.

Wendy Mackay is a research director with INRIA Saclay in France. She is in charge of the research group InSitu.

Summary 
  • Hypothesis - The researchers hypothesized that they could improve interaction with wall-sized displays by studying several factors of gesture interaction. They made an individual hypothesis for each factor:
    • Two-handed gestures are more accurate and easier to use.
    • Two-handed gestures are faster than one-handed ones.
    • Users will prefer clutch-free gestures.
    • Linear gestures map most naturally to zooming but should ultimately be slower due to the aforementioned clutching.
    • Gestures using fingers instead of larger muscle groups will be faster.
    • One-dimensional path gestures should be faster with less haptic feedback.
    • Three-dimensional gestures will be more tiring.
  • Method - The study involved 12 participants, each of whom was asked to use all patterns of interaction. Participants completed these patterns over a number of sessions, preventing inaccurate data caused by exhaustion or memory effects.
  • Results - The results supported the second hypothesis, as well as the fifth and sixth. The fourth hypothesis was refuted: linear gestures actually turned out to be faster than circular ones. Hypothesis seven was also confirmed. The results for the remaining hypotheses were inconclusive.
  • Content - This paper discusses how a user might best interact with a screen too large to interact with directly. The researchers formed several hypotheses about how well certain gestures would work and conducted an experiment designed around those hypotheses to test their key points (a sketch of one candidate zoom gesture appears after this summary).
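
To make one of the compared designs concrete, here is a sketch of a clutch-free circular zoom gesture: the finger's accumulated angle around a center maps to a zoom factor, so the user can keep circling without ever lifting or resetting. The gain (one full turn doubles the zoom) is my assumption for illustration.

    import math

    # Hypothetical clutch-free zoom: accumulated circling angle -> zoom factor.
    def zoom_from_circle(points, center, factor_per_turn=2.0):
        """points: successive (x, y) finger positions circling `center`."""
        angles = [math.atan2(y - center[1], x - center[0]) for x, y in points]
        total = 0.0
        for a0, a1 in zip(angles, angles[1:]):
            delta = a1 - a0
            # Unwrap so crossing the -pi/pi boundary doesn't count as a turn.
            if delta > math.pi:
                delta -= 2 * math.pi
            elif delta < -math.pi:
                delta += 2 * math.pi
            total += delta
        turns = total / (2 * math.pi)  # signed: direction sets zoom in or out
        return factor_per_turn ** turns

    # A quarter turn counter-clockwise around the origin:
    path = [(1, 0), (0.707, 0.707), (0, 1)]
    print(zoom_from_circle(path, (0, 0)))  # -> 2 ** 0.25, about 1.19x
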
 Discussion
This paper was very interesting. I thought their methods may have been a little off, but their results were for the most part expected. I can't say whether or not they thoroughly proved their points, but the paper was well done. Technology like this makes me excited every time I read about it; we keep getting closer to virtual reality rooms and smooth motion gestures for all interactions. I can't wait for the day that I have a display on all of my walls and can change anything I want with a word and a hand gesture.

Paper Reading #21: Human model evaluation in interactive supervised learning

References
Rebecca Fiebrink, Perry R. Cook, and Dan Trueman. "Human model evaluation in interactive supervised learning". CHI '11 Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. ACM, New York, NY, USA ©2011.

 Author Bios
Rebecca Fiebrink is an assistant professor in Computer Science and affiliated faculty in Music. Until recently she was a postdoc at the University of Washington.

Perry Cook earned his PhD from Stanford University.  His research interests lie in Physics-based sound synthesis models.

Dan Trueman is a professor at Princeton University.  In the last 12 years he has published 6 papers through the ACM. He is also a musician, primarily with the fiddle and the laptop.

Summary 
  • Hypothesis - The researchers hypothesized that interactive machine learning (IML) would be a useful improvement over the generic machine learning processes currently in use.
  • Method - The researchers first developed an IML system to help with music composition, called the Wekinator, and then conducted three studies. The first included several PhD students and was aimed at improving the system itself: they used the software while composing their own music and met regularly to discuss their experiences and suggest improvements. The second study involved undergraduates, who were told to use the software in an assignment specifically geared toward supervised learning in interactive music performance systems. In the third and final study, a professional cellist used the system to create a gesture recognition system whose gestures mapped to musical articulations, such as staccato (a sketch of the underlying train-and-retrain loop appears after this summary).
  • Results - Although some results were expected, the researchers also ran into a few things they had not anticipated. For one, users tended to overcompensate; that is, they provided more than enough information to make sure the system got it right. Also, the system's performance sometimes surprised users, encouraging them to expand their ideas of the desired goal.
  • Content - The researchers observed users as they interacted with the machine learning software. They found that while users liked cross-validation, most of them preferred direct evaluation. IML was determined to be useful because it allows users to continuously improve the effectiveness of the learning model itself.
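
The loop the participants worked in can be sketched in a few lines: demonstrate examples, train, try the model directly, and add corrective examples where it fails. This is a generic illustration rather than the Wekinator's actual code, and the nearest-neighbor learner is my stand-in; the real system offers several algorithms.

    # Hedged sketch of an interactive-supervised-learning loop.
    from sklearn.neighbors import KNeighborsClassifier

    examples, labels = [], []

    def add_example(features, label):
        """The user demonstrates an input (e.g. a bowing gesture) and names it."""
        examples.append(features)
        labels.append(label)

    def retrain():
        return KNeighborsClassifier(n_neighbors=1).fit(examples, labels)

    # Round 1: two demonstrations per gesture class.
    add_example([0.10, 0.20], "staccato")
    add_example([0.15, 0.25], "staccato")
    add_example([0.80, 0.90], "legato")
    add_example([0.85, 0.80], "legato")
    model = retrain()

    # Direct evaluation: the user plays and checks the model's response.
    print(model.predict([[0.45, 0.40]]))  # -> ['staccato'], not what was wanted

    # So the user demonstrates the troublesome case and retrains immediately.
    add_example([0.45, 0.40], "legato")
    model = retrain()
    print(model.predict([[0.45, 0.40]]))  # -> ['legato']
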
 Discussion
This paper was very well done. The experiments were well thought out, well executed, and clearly explained. The authors supported their hypothesis and succeeded in explaining why. Using three independent studies, they were able to compile a large amount of data to draw on. I think these results will be very useful, not just in the application they chose but across a widespread realm of problems.

Paper Reading #20: The aligned rank transform for nonparametric factorial analyses using only ANOVA procedures

References
Jacob O. Wobbrock, Leah Findlater, Darren Gergle, and James J. Higgins. "The aligned rank transform for nonparametric factorial analyses using only ANOVA procedures". CHI '11 Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. ACM, New York, NY, USA ©2011.

 Author Bios
Jacob Wobbrock is an associate professor in the Information School at the University of Washington.  He directs the AIM Research Group which is part of the DUB Group.

Leah Findlater is currently a professor at the University of Washington.

Darren Gergle is an associate professor at the Northwestern University School of Communication.

James Higgins is a professor in the Department of Statistics at Kansas State University.

Summary 
  • Hypothesis - The researchers hypothesized that extending the Aligned Rank Transform to support an arbitrary number of factors would give researchers a useful tool for analyzing data.
  • Method - The researchers developed the method for the expanded ART and then created a desktop tool (ARTool) and a Java-based version (ARTWeb). After creating these tools, the researchers reanalyzed three sets of previously published data. This analysis allowed them to show the effectiveness and usability of their software.
  • Results - The results were positive. Reexamining the old studies revealed findings that had not shown up before. In one, the ART exposed effects that a Friedman test could not examine. The second case showed how the new procedure frees analysts from ANOVA's distributional assumptions. When the last data set was rerun using the nonparametric ART method, new information was revealed.
  • Content - The authors presented their Aligned Rank Transform (ART) tool, which enables nonparametric analysis of factorial experiments. They discuss the procedure in detail and give three examples of where it is useful and applicable (the core align-and-rank step is sketched after this summary). The tool can expose relationships between variables that cannot be seen with other analyses.
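
The heart of the procedure is easy to state: to test one effect, strip every effect out of the response (leaving residuals), add back in only the estimated effect of interest, rank the result, and run an ordinary factorial ANOVA on those ranks. Below is a minimal sketch of the align-and-rank step for a main effect in a two-factor design; the follow-up ANOVA (via any standard statistics package) is omitted, and the toy data are mine.

    # Hedged sketch of the ART alignment for factor A's main effect.
    import pandas as pd

    def aligned_ranks_main(df, factor, other, y="Y"):
        grand = df[y].mean()
        cell = df.groupby([factor, other])[y].transform("mean")
        level = df.groupby(factor)[y].transform("mean")
        residual = df[y] - cell              # strip ALL effects
        estimate = level - grand             # add back only the effect of interest
        return (residual + estimate).rank()  # average ranks for ties

    df = pd.DataFrame({
        "A": ["a1", "a1", "a2", "a2"] * 2,
        "B": ["b1", "b2"] * 4,
        "Y": [3, 5, 9, 11, 4, 6, 10, 12],
    })
    df["art_A"] = aligned_ranks_main(df, "A", "B")
    print(df)  # a full-factorial ANOVA on art_A now tests only A's main effect
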
 Discussion
Honestly, this paper went way over my head. It did seem obvious to me, however, that the authors were able to effectively support their hypothesis and to create a very useful tool for analysts. The amount of information analysts get out of data greatly affects their ability to extrapolate. I thought their examples were well chosen and explained well (even at a broad level) how the ART system can produce more specific and more accurate results.