Wednesday, April 6, 2011

Paper Reading #20: A Multimodal Labeling Interface for Wearable Computing

Reference Information:
Title: A Multimodal Labeling Interface for Wearable Computing
Authors: Shanqing Li and Yunde Jia
Presentation: (Conference Paper) IUI '10, February 7-10, 2010, Hong Kong, China.

Summary: In this paper, the authors address the inconvenience of labeling objects with portable keyboards and mice in wearable computing environments. They developed an interface in which visual and audio modalities work together to achieve the desired result: the integrated camera tracks the wearer's pointing gesture to locate the object, and then, using a speech recognition library, the user speaks the label for that object. Besides the gesture tracking system, they also propose a virtual touchpad interface with which the wearer can identify objects in a more intuitive way.

The system was evaluated by having users encircle several circular regions of different radii and assign labels to them. The application discussed in the paper is online learning in wearable computing environments.

Discussion: Even though I think this is a really interesting interface, their evaluation methods were really poor. The system leaves room for some really interesting user evaluations that would have provided the researchers with better feedback than what they got. I also feel this system could be applied to a variety of applications, and if the authors had discussed them at the beginning of the paper, it would have made a great difference in, at least, my reaction to it.

2 comments:

  1. From reading your summary, I believe I saw a video some time ago about new language-translation technology. The idea was to point at an object, and the translator would display the item's "label," also translated into your language of preference. I am not sure if this is the same thing you are talking about, but the idea of the paper sounds very interesting.

  2. They definitely need to run a more widespread user study of this to get more feedback. I like the concept a lot, but a user study could help them make it more ready for commercial use than it probably is at the moment.
