Thursday, December 8, 2011

Paper Reading #27: Sensing cognitive multitasking for a brain-based adaptive user interface

References
Erin Treacy Solovey, Francine Lalooses, Krysta Chauncey, Douglas Weaver, Margarita Parasi, Matthias Scheutz, Angelo Sassaroli, Sergio Fantini, Paul Schermerhorn, Audrey Girouard, Robert J.K. Jacob "Sensing cognitive multitasking for a brain-based adaptive user interface". CHI '11 Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. ACM New York, NY, USA ©2011.

 Author Bios
All of the authors of this paper are from Tufts University; each is a graduate student, professor, or researcher at the university.

Summary 
  • Hypothesis - The hypothesis is that an fNIRS device can capture a person's cognitive multitasking state as effectively as an fMRI machine. The researchers also hypothesized that a system built around this device could aid users in their tasks.
  • Method - The experiment had participants interact with a robot simulation in which they sorted rocks on Mars. Based on the pattern of rock classifications, the researchers divided tasking into three states: delay, dual-tasking, and branching. They also tested whether the fNIRS device would truly be as effective as the fMRI: they asked users to perform varying tasks and analyzed how often the fNIRS tool correctly classified their tasking state (a toy sketch of such a classification step appears after this list).
  • Results - The fNIRS device was able to classify some tasking states correctly, but it was not very accurate. The researchers noted certain modifications that could improve this accuracy. They also created a system based on this data in order to test the ability of such a system to help users multitask.
  • Content - The researchers compared the fNIRS device to an fMRI machine and noted that fNIRS is neither as accurate nor as powerful, but that it is far more feasible in real-world applications. They then created a system that uses fNIRS to help users with multitasking. The system showed promising results, hinting that this may be a good direction to pursue.
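
As a concrete illustration of the classification step mentioned above: the fNIRS readings are reduced to feature vectors, and a classifier assigns each vector one of the three tasking states. This is only a toy sketch; the two-number features and the nearest-centroid classifier are my own assumptions, not the authors' actual pipeline.

# Toy sketch: label fNIRS-derived feature vectors as one of the paper's
# three tasking states. Features and classifier choice are illustrative
# assumptions, not the authors' method.

def centroid(vectors):
    # Mean of a list of equal-length feature vectors.
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def train(examples):
    # examples: {state: [feature_vector, ...]} -> {state: centroid}
    return {state: centroid(vecs) for state, vecs in examples.items()}

def classify(model, features):
    # Pick the state whose centroid is nearest (squared Euclidean distance).
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(model, key=lambda s: dist(model[s], features))

# Toy usage with made-up "oxygenation" features per state:
model = train({
    "delay":        [[0.10, 0.20], [0.15, 0.25]],
    "dual-tasking": [[0.50, 0.60], [0.55, 0.65]],
    "branching":    [[0.90, 0.40], [0.85, 0.45]],
})
print(classify(model, [0.52, 0.61]))  # -> dual-tasking

Even this toy version makes the accuracy discussion concrete: with noisy, overlapping feature vectors, misclassifications like the ones the researchers reported are unavoidable.
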
 Discussion
This paper held my interest well. I (attempt to) multitask all the time, and am frequently bogged down by the time it takes to switch into the proper state of mind for each task. Going from phone to programming to email and back to programming takes much longer than it should, because I often do not wait to reach a stopping point before switching tasks. As I just did. And I lost my train of thought...well, I suppose this blog's done.

Paper Reading #26: Embodiment in Brain-Computer Interaction

References
Kenton O'Hara, Abigail Sellen, Richard Harper "Embodiment in Brain-Computer Interaction". CHI '11 Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. ACM New York, NY, USA ©2011.

 Author Bios
Kenton O'Hara is a Senior Researcher at Microsoft Research.
Abigail Sellen is a Principal Researcher at Microsoft Research. She received her Ph.D. from the University of California, San Diego.
Richard Harper is a Principal Researcher at Microsoft Research. He received his Ph.D. from the University of Manchester.


Summary 
  • Hypothesis - The researchers hypothesized that in brain-computer interaction, the whole body and the surrounding social interactions are as important to study as the brain signals themselves.
  • Method - The study used a commercial game called MindFlex, in which players raise and lower a ball by concentrating. Three groups were given the game to take home and asked to video themselves playing it. They were encouraged to invite others, such as extended family and friends, to play as well, and they chose where and when to play, allowing for a natural environment.
  • Results - The analysis revealed many behaviors that were unnecessary for playing the game. For example, body position played a large role in how users interacted with the game, although it had no direct effect on it. Another was that users frequently relied on imagery, such as imagining or telling the ball to go up, when all they needed to do was concentrate harder (see the sketch after this list for how little of this the game can actually sense).
  • Content - The researchers analyzed the interactions between players, and between player and game, in order to better understand how brain-computer interaction works. They told the groups to play the game while acting naturally, to keep the observations as unskewed as possible, and they observed many common patterns across players.
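
For context on the imagery observation above: MindFlex exposes exactly one control channel, an EEG-derived attention level that drives a fan, which in turn raises or lowers the ball. Below is a minimal sketch of that one-dimensional loop; the baseline and gain values are invented for illustration, not the toy's real calibration.

# Minimal sketch of MindFlex-style control: a single attention score in
# [0, 1] maps to fan power, which raises or lowers the ball. The baseline
# and gain here are illustrative assumptions, not the toy's real tuning.

def fan_power(attention, baseline=0.4, gain=150.0):
    # Clamp to a 0-100% duty cycle.
    return max(0.0, min(100.0, (attention - baseline) * gain))

# Whatever the player imagines, the game only ever sees this one number.
for attention in (0.3, 0.5, 0.9):
    print(attention, "->", fan_power(attention), "% fan power")

This is why the imagery had no direct effect: imagining the ball rising matters only if it happens to raise the player's single measured concentration score.
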
 Discussion
I thought that the game itself was interesting, but found that the paper didn't seem to have much practical application. It more or less said that we need to study brain-computer interaction more in order to expand the field, which seems pretty obvious to me. However, the paper was well done and I enjoyed the experiment itself. Some of the observations were surprising to me. I would like to play that game, though I'm not sure I would do very well; unlike some people, I can't ever stop thinking and can't "turn my brain off".

Paper Reading #25: TwitInfo: Aggregating and Visualizing Microblogs For Event Exploration

References
Adam Marcus, Michael S. Bernstein, Osama Badar, David R. Karger, Samuel Madden, Robert C. Miller "TwitInfo: Aggregating and Visualizing Microblogs For Event Exploration". CHI '11 Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. ACM New York, NY, USA ©2011.

 Author Bios
Adam Marcus is a graduate student at MIT. He received his bachelor's from Rensselaer Polytechnic Institute.

Michael S. Bernstein's research interests lie in social computing and crowdsourcing.

Osama Badar is a graduate student at MIT.

David R. Karger used to work for Google, and is now a part of the AI lab at MIT.

Samuel Madden is an associate professor at MIT. He has worked on systems for mTurk.

Robert C. Miller is an associate professor at MIT. He currently leads the User Interface Design Group.

Summary 
  • Hypothesis - Analysis of massive numbers of microblog posts, in this case tweets, can provide accurate feedback on major events in real time.
  • Method - The researchers developed TwitInfo, which analyzes all posts containing specific tags set for an event, such as a World Cup game. They developed a user interface that was intuitive for users and that analyzed events in real time (a sketch of this kind of spike detection appears after this list). They then evaluated its usefulness by having average users test it, as well as a professional journalist.
  • Results - The results were favorable. The evaluation showed that TwitInfo was able to accurately detect when events occurred. It was also able to analyze, on a basic level, people's reactions to those events. For example, during a World Cup game it could gauge where people were generally happy or unhappy about particular events, such as goals. The journalist maintained that this knowledge was too shallow to rely on alone, but that it was a useful tool for gaining a basic understanding at a high level.
  • Content - The paper presented a tool for analyzing Twitter data to gain accurate information about world events. It was able to do so rather successfully, and users generally gave positive feedback. The system's limitations were that it could not surface all of the major events, such as a yellow card in the World Cup game, and that the information gathered is too shallow to use as a sole source of data.
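
The event detection boils down to flagging moments when tweet volume spikes well above its recent average. Here is a minimal sketch in that spirit, using an exponentially weighted moving average and deviation; the alpha, threshold, and warm-start values are my assumptions for illustration.

# Sketch of spike-based event detection over binned tweet counts, in the
# spirit of TwitInfo's peak detection: track a moving average of the rate
# and flag bins that far exceed it. Parameter values are illustrative.

def detect_peaks(counts, alpha=0.125, threshold=2.0):
    # Returns indices of bins whose count jumps past mean + threshold * dev.
    mean = float(counts[0])
    dev = max(counts[0] / 2.0, 1.0)  # crude warm start for the deviation
    peaks = []
    for i, c in enumerate(counts[1:], start=1):
        if c - mean > threshold * dev:
            peaks.append(i)  # sudden surge in tweet volume -> candidate event
        dev = (1 - alpha) * dev + alpha * abs(c - mean)
        mean = (1 - alpha) * mean + alpha * c
    return peaks

# Toy usage: tweets per minute around a goal in a soccer match.
counts = [10, 12, 11, 13, 80, 95, 30, 12, 11]
print(detect_peaks(counts))  # -> [4, 5]

The journalist's "too shallow" complaint lives downstream of this step: detecting that something happened is much easier than characterizing what people felt about it.
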
 Discussion
This tool impressed me. I rather liked how it was developed, and it was pretty fun to explore the UI (while it was still up and working). I was impressed that it could analyze such massive amounts of data in real time. The sentiment calculation left something to be desired, but such is our current technology. I particularly liked the World Cup example, as it was interesting to see which events were portrayed accurately and which went undetected.

Paper Reading #24: Gesture avatar: a technique for operating mobile user interfaces using gestures

References
Hao Lu, Yang Li "Gesture avatar: a technique for operating mobile user interfaces using gestures". CHI '11 Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. ACM New York, NY, USA ©2011.

 Author Bios
Yang Li received his Ph.D. from the Chinese Academy of Sciences and conducted postdoctoral research at the University of California at Berkeley. Li helped found the Design Use Build (DUB) community while a professor at the University of Washington. He is now a Senior Research Scientist at Google.

Hao Lu is a graduate student at the University of Washington. He is a member of the DUB group.

Summary 
  • Hypothesis - The researchers had three hypotheses:
    • Gesture Avatar would be slower than Shift on large targets, but faster on small ones.
    • Gesture Avatar would be less error-prone than Shift.
    • Gesture Avatar's error rate would be affected less by walking than Shift's.
  • Method - Participants tested both systems; half learned Gesture Avatar first, half Shift. The researchers measured the time from gesture to selection under many conditions, such as walking versus sitting, the number of repeated letters, and the size of the targets. They then compared the results between Gesture Avatar and Shift.
  • Results - The results confirmed the first hypothesis: Shift was much faster on large targets, but much slower on small ones. Confirming the second, the error rate for Gesture Avatar remained mostly constant, while Shift's went up as targets became smaller. The third hypothesis held as well: Gesture Avatar's performance remained constant between sitting and walking, while Shift's decreased significantly. Only one participant in the study preferred Shift over Gesture Avatar.
  • Content - The paper presented the implementation of Gesture Avatar, a technique aimed at minimizing errors in touch-screen target selection (a toy sketch of the underlying idea appears after this list). The researchers suggested minor adjustments and modifications to the system that are possible and may be desirable. They developed their product and tested it against Shift to probe its strengths and weaknesses. The results matched their hypotheses, and Gesture Avatar had a good reception overall.
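
My rough reading of how the selection works: the user draws a letter near the intended target, and each on-screen target is scored by combining the recognizer's confidence that the gesture matches the target's label with the target's distance from the gesture. The Gaussian distance weighting and per-letter confidence input below are illustrative assumptions, not the authors' exact formulation.

import math

# Illustrative sketch of Gesture Avatar-style target selection: combine
# (a) gesture/label match confidence with (b) proximity to the gesture.
# The scoring details are assumptions, not the paper's exact model.

def pick_target(targets, letter_confidence, gesture_center, sigma=120.0):
    # targets: list of (label, x, y) on-screen items (e.g., links).
    # letter_confidence: {letter: recognizer confidence in [0, 1]}.
    # gesture_center: (x, y) center of the drawn gesture's bounding box.
    gx, gy = gesture_center
    best, best_score = None, 0.0
    for label, x, y in targets:
        match = letter_confidence.get(label[0].lower(), 0.0)
        dist2 = (x - gx) ** 2 + (y - gy) ** 2
        proximity = math.exp(-dist2 / (2 * sigma ** 2))  # Gaussian falloff
        score = match * proximity
        if score > best_score:
            best, best_score = label, score
    return best

# Toy usage: two links start with "n"; the nearer one wins.
targets = [("news", 100, 200), ("notes", 400, 600), ("home", 120, 210)]
print(pick_target(targets, {"n": 0.9, "h": 0.1}, (110, 220)))  # -> news

Weighting by distance is what lets the same drawn letter disambiguate between several matching targets, which is plausibly why the error rate stays flat even on tiny targets.
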
 Discussion
 I appreciated the clarity of this paper, and also how relevant its application is to the general populace. I personally have some trouble (having fat fingers) selecting small text in a web browser. I have not used Gesture Avatar yet but may pick it up, as it seems others have had good results. I don't particularly like letter-based gestures as a whole, as they can fail at recognizing letters correctly. However, because this technique requires relatively little input, such recognition errors may be minimized.