
Wednesday, April 27, 2011

Paper Reading #24: The Why UI

Comments:
Comment1
Comment2

Reference Information:
Title: The Why UI: Using Goal Networks to Improve User Interfaces
Authors: D. A. Smith, H. Lieberman
Presentation: IUI '10, February 7-10, 2010, Hong Kong, China

Summary: This paper discusses the idea of integrating the user's goals into interfaces in order to provide users with a better experience and better results when using the interface. In the first part of their development, the authors analyzed data from 43Things.com, a web site that gathers goals from users. The data is analyzed to determine the subgoals that a user might also want to consider in order to accomplish their goal. The second part of their work was to develop an application where users can indicate their goal and receive information about what other people did in order to accomplish the same goal.
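To make the idea concrete, here is a minimal Python sketch (my own illustration, not the authors' code) of how goal-to-subgoal links mined from a site like 43Things.com could be stored and queried; the goals and the network structure are invented examples:

```python
from collections import defaultdict

# Toy goal network: edges point from a goal to subgoals that
# other users pursued on the way to it (illustrative data only).
goal_network = defaultdict(list)

def add_subgoal(goal, subgoal):
    goal_network[goal].append(subgoal)

add_subgoal("buy a house", "save for a down payment")
add_subgoal("buy a house", "find a realtor")
add_subgoal("save for a down payment", "make a monthly budget")

def suggest(goal, depth=2):
    """Collect subgoals reachable within `depth` steps."""
    if depth == 0:
        return []
    found = []
    for sub in goal_network[goal]:
        found.append(sub)
        found.extend(suggest(sub, depth - 1))
    return found

print(suggest("buy a house"))
# ['save for a down payment', 'make a monthly budget', 'find a realtor']
```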


Discussion: The idea of analyzing users' goals in order to help them accomplish those goals is really interesting. However, I'm not sure in what kinds of settings this tool would be of much help. The paper presents a scenario where the goal is to buy a house, so the user obtains information about realtors. However, the authors do not present any data demonstrating the application's efficiency or appeal to users.

Tuesday, April 26, 2011

Paper Reading #23: Automatic Generation of Research Trails in Web History

Comments:
Adam Friedli
Zack Henkel

Reference Information:
Title: Automatic Generation of Research Trails in Web History
Authors: E.R. Pedersen, K. Gyllstrom, S. Gu, P.J. Hong.
Presentation: IUI'10, February 7-10, 2010, Hong Kong, China

Summary: This conference paper discusses the development of an interface that provides researchers with automatically generated trails of their research. Based on an ethnographic study, the authors recognized the lack of a system that would provide users with information about the websites they visited, when they visited them, and what information they found there.

As previous work, they describe the history features provided in most web browsers and Google's history feature. However, these interfaces are not specific to one topic, and they do not provide users with any information beyond URLs. Their system would actually keep track of individual research trail objects, analyzing the data semantically as well as the events of each visit.

Their system is currently implemented using a user interface and a model server. The user interface is reached through the New Tab page that most web browsers have; from there, users can see their most recent research trails. The model server currently uses Google history data. They assessed their development internally, using colleagues' history data as they were building the system.
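As a rough illustration of the trail idea, here is a small sketch that groups history visits into trails using two simplifying assumptions of mine (a small time gap or shared page terms keep visits in the same trail); the real system performs much richer semantic analysis:

```python
from datetime import datetime, timedelta

# Illustrative history records: (timestamp, url, terms extracted from the page).
visits = [
    (datetime(2010, 2, 1, 9, 0),  "wiki/Fitts_law",   {"fitts", "pointing"}),
    (datetime(2010, 2, 1, 9, 5),  "acm.org/paper123", {"pointing", "cursor"}),
    (datetime(2010, 2, 3, 14, 0), "recipes.com/soup", {"soup", "recipe"}),
]

def build_trails(visits, max_gap=timedelta(hours=2)):
    """Start a new trail when the time gap is large and no terms are shared."""
    trails = []
    for visit in visits:
        ts, _, terms = visit
        if trails:
            last_ts, _, last_terms = trails[-1][-1]
            if ts - last_ts <= max_gap or terms & last_terms:
                trails[-1].append(visit)
                continue
        trails.append([visit])
    return trails

for i, trail in enumerate(build_trails(visits)):
    print(f"Trail {i}:", [url for _, url, _ in trail])
```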

Discussion: This is a great idea! I can't believe there isn't something like this available already. I think it has happened to all of us that while doing web searches we forget what website we were on and how to get back there. Something else I liked was how they describe their motivation and how they believe this tool can be useful for people performing different types of research, not just continuous research. If someone goes back to work on something they haven't worked on for a while, they can know exactly what the last things they worked on were.

However, I would have loved to see some images of their implementation, and a user study. With a user study they can find out more accurately how appealing this would be to users. We know it would be useful, but would users really use this tool?

The picture above shows the current implementation Google provides for its users, similar to the one mentioned in the paper.

Paper Reading #25: Finding Your Way in a Multi-dimensional Semantic Space with Luminoso

Comments:
Reference Information:
Title: Finding Your Way in a Multi-Dimensional Semantic Space with Luminoso
Authors: R. Speer, C. Havasi, N. Treadway, H. Lieberman
Presentation: IUI'10, February 7-10, 2010, Hong Kong, China

Figure caption: The selected point is a canonical document representing the expected content of a good review. The gray line connecting the point to the origin is always shown, as a reference for comparing with other points.
Summary: Luminoso is a system that allows researchers and users to explore associations in data. The input data are text files that are dropped into a folder, where the analysis of the data can be found as well. The interface then "grabs" a point and allows the user to see the related data and analysis. This kind of work is a form of data mining, and it is known that better results can be obtained when the user is active in the process. To obtain such common sense from the user, they utilize a resource called ConceptNet. The data is analyzed, for example, by looking into word repetition and semantic context associations. One of the applications mentioned in the paper is that of creating semantic networks. Because Luminoso provides a visual way of displaying data, it becomes relevant when great amounts of data are to be displayed.
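For a sense of what a "multi-dimensional semantic space" means here, the following is a toy numpy sketch of one standard approach: factor a term-document matrix with SVD and keep two dimensions so each document gets a plottable point. Luminoso itself builds on ConceptNet and more sophisticated machinery, so treat this purely as an analogy; all names and counts are made up:

```python
import numpy as np

# Toy term-document matrix: rows are terms, columns are documents
# (counts are invented for illustration).
terms = ["battery", "screen", "taste", "price"]
docs = ["review1", "review2", "review3"]
counts = np.array([
    [3, 0, 1],   # battery
    [2, 1, 0],   # screen
    [0, 4, 0],   # taste
    [1, 1, 2],   # price
], dtype=float)

# SVD factors the matrix; keeping the top two singular vectors gives
# each document a 2-D coordinate in a "semantic space" for plotting.
U, S, Vt = np.linalg.svd(counts, full_matrices=False)
doc_points = (np.diag(S[:2]) @ Vt[:2]).T

for name, (x, y) in zip(docs, doc_points):
    print(f"{name}: ({x:+.2f}, {y:+.2f})")
```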

Discussion: Even though I think the concept they are trying to develop is really interesting, their explanations of the system and their motivations were not very clearly described. They also did not present an evaluation of the system, which makes me doubt how useful it could be. If they had included some type of user study and discussed it, the reader could obtain a clearer understanding of what such an application is useful for. The image presented above demonstrates the user display of the interface.

Paper Reading #22: Vocabulary Navigation Made Easier

Comments:

Reference Information:
Title: Vocabulary Navigation Made Easier
Authors: S. Nikolova, X. Ma, M. Tremaine, P. Cook
Presentation: IUI'10, February 7-10, 2010, Hong Kong, China

Figure caption: Related words in ViVA are displayed above the basic hierarchy, e.g. tea and dessert are associated with food.
Summary: This paper discusses the development of ViVA, a visual interface that makes navigation easier when trying to find words. The challenge the authors face is implementing an interface that will help individuals with lexical disorders, such as aphasia. The majority of past and current work in this area consists of interfaces with some kind of hierarchy or categorization of words, which may lead to disorganization and long search trails.

ViVA is a visual vocabulary interface that allows a more efficient way of finding words by modeling a "mental lexicon." ViVA organizes vocabulary based on contextual associations. For example, if you are looking for the word milk, you can find it in the kitchen category. They based their development on an already existing vocabulary hierarchy called Lingraphica and added the associative features.

They conducted a user study with sixteen individuals. Participants were provided with a set of missing words, and their task was to find them. The group was divided into two: one half using a simple hierarchical vocabulary, and the other using ViVA. The results show a great improvement in the users' experience with the interface.

Discussion: Reading about technology that aids people with disabilities is always appealing to me. It helps me see that technology is not just about making things simpler or more productive; it is also there for those who really need the help.

I had never heard about this condition before, but from the point of view of an English-as-a-second-language speaker I can tell that even for us, this kind of interface could work more efficiently than a translation dictionary alone. This interface may aid our thought process, helping us learn new words and exercise our language skills.

Wednesday, April 13, 2011

Paper Reading #21: Epistemology-based Social Search

Comments:
Comment 2

Reference Information
Title: Supporting Exploratory Information Seeking by Epistemology-based Social Search
Authors: Y. Mao, H. Shen, C. Sun
Presentation: (Conference Paper) IUI'10, February 7-10, 2010, Hong Kong, China

Summary: In this paper the authors present a system called Baijia, an epistemology-based social search solution for the problem of finding the proper keywords and evaluating results in search engines. The system reuses and refines previous successful searches to provide users with successful, accurate and desired information.

The system uses packages of information derived from a number of resources (search processes, queries, results, rankings, annotations, comments, etc.) in order to provide the user with a successful search process even if he or she entered keywords that are vague or not very relevant. Basically, the system has an epistemology repository; every time a user enters a query, it is added to the repository, where it can be linked to previous epistemology records, comments, web pages, etc., all with the purpose of presenting the user with information about previous searches.
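A hedged sketch of the repository idea as I understand it: new queries get linked to earlier search records that share keywords. The record structure and matching rule below are my own inventions, not Baijia's actual design:

```python
# Each "epistemology record" bundles a past search: the query,
# useful result pages, and user comments (illustrative structure).
repository = [
    {"query": "cheap flights europe", "pages": ["skyscanner.net"],
     "comments": ["book on tuesdays"]},
    {"query": "europe travel visa", "pages": ["travel.state.gov"],
     "comments": []},
]

def related_records(new_query):
    """Link a new query to prior records by shared keywords."""
    words = set(new_query.lower().split())
    matches = []
    for record in repository:
        overlap = words & set(record["query"].split())
        if overlap:
            matches.append((len(overlap), record))
    # Records sharing more keywords come first.
    return [r for _, r in sorted(matches, key=lambda m: -m[0])]

for record in related_records("travel to europe"):
    print(record["query"], "->", record["pages"])
```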

Even though they have not yet performed user studies, their experimental evaluations have shown that such a system outperforms conventional search systems.

Discussion: Even though I think this may be a good idea for improving search results, I don't think the paper did a great job of explaining the system. I suppose such an application can be useful for people who are trying to find out about something but don't really have the words to describe it. Also, if users have the capability of commenting on search results, they can warn each other about websites that are not useful.


Wednesday, April 6, 2011

Paper Reading #20: A Multimodal Labeling Interface...

Comments:
Comment 1
Comment 2

Reference Information:
Title: A Multimodal Labeling Interface for Wearable Computing
Authors: Shanqing Li and Yunde Jia
Presentation: (Conference Paper) IUI '10, February 7-10, 2010, Hong Kong, China.

Summary: In this paper, the authors provide a solution to the inconvenience of labeling objects with portable keyboards and mice in wearable environments. They developed an interface that uses visual and audio modalities, which work together to achieve the desired result. The wearer visually tracks the object with the integrated camera and the pointing-gesture tracking feature, and then, using a speech recognition library, speaks out the label for the object. Besides the gesture tracking system, they also propose a virtual touchpad interface where the wearer can identify the object in a more intuitive way.
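Here is a toy sketch of the fusion step, pairing a pointed-at region with a spoken label that arrives within a short time window. The event format and the two-second window are assumptions of mine; in the real system the events would come from the camera pipeline and the speech recognition library:

```python
# Events from the two modalities, as (time_in_seconds, payload).
# In the real system these would come from the gesture tracker
# and the speech recognizer; here they are hard-coded.
gesture_events = [(1.0, {"region": (120, 80, 40)})]   # x, y, radius
speech_events = [(1.6, {"label": "coffee mug"})]

def fuse(gestures, speech, window=2.0):
    """Attach a spoken label to a selected region if they are close in time."""
    labeled = []
    for g_time, g in gestures:
        for s_time, s in speech:
            if abs(s_time - g_time) <= window:
                labeled.append({"region": g["region"], "label": s["label"]})
    return labeled

print(fuse(gesture_events, speech_events))
# [{'region': (120, 80, 40), 'label': 'coffee mug'}]
```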

The system was evaluated by encircling several circular regions with different radii and giving them labels. The application given for this interface and discussed in the paper is online learning in wearable computing environments.




Discussion: Even though I think this is a really interesting interface, I think their evaluation methods were really poor. This development leaves room for some really interesting user evaluations that would have provided the researchers with better feedback than what they got. Also, I feel like this system could be implemented for a variety of applications, and if they had discussed them at the beginning of the paper it would have made a great difference in, at least, my reaction to the paper.

Monday, April 4, 2011

Paper Reading #19: A $3 Gesture Recognizer

Comments:

Reference Information:
Title: A $3 Gesture Recognizer - Simple Gesture Recognition for Devices Equipped with 3D Acceleration Sensors
Authors: Sven Kratz and Michael Rohs
Presentation: (Conference Paper) IUI'10, February 7-10, 2010, Hong Kong, China

Summary: In this paper, the authors present a very simple gesture recognizer for input devices with 3D acceleration sensors, such as the Nintendo Wii controller. Their development is simple to use and to implement, which makes it a great system for testing prototypes. The authors based much of their research and design on Wobbrock's "$1 Recognizer," a 2D gesture recognizer; their recognizer is an extension of it that remains just as simple. The main advantage and major contribution of this system is its "true" ability to recognize 3D motion.
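To show the flavor of this family of recognizers, here is a condensed sketch of template matching on motion traces: resample each trace to a fixed number of points, then pick the stored template with the smallest average point-to-point distance. The paper's actual preprocessing and scoring details differ, and the traces below are made up:

```python
import math

def resample(trace, n=32):
    """Linearly resample a 3-D trace to n evenly spaced points."""
    out = [trace[0]]
    step = (len(trace) - 1) / (n - 1)
    for i in range(1, n):
        pos = i * step
        lo = int(pos)
        hi = min(lo + 1, len(trace) - 1)
        t = pos - lo
        out.append(tuple(a + (b - a) * t for a, b in zip(trace[lo], trace[hi])))
    return out

def distance(a, b):
    """Average point-to-point distance between two equal-length traces."""
    return sum(math.dist(p, q) for p, q in zip(a, b)) / len(a)

def recognize(trace, templates):
    """Return the template class with the smallest average distance."""
    trace = resample(trace)
    return min(templates, key=lambda name: distance(trace, resample(templates[name])))

# Toy acceleration traces (x, y, z); real input would come from a WiiMote.
templates = {"punch": [(0, 0, 0), (5, 0, 0), (9, 0, 0)],
             "lift":  [(0, 0, 0), (0, 4, 0), (0, 9, 0)]}
print(recognize([(0, 0, 0), (4, 1, 0), (8, 0, 0)], templates))  # -> punch
```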
Figure 1. The reference gesture vocabulary containing the gesture classes used for the preliminary evaluation: (b) describes a clockwise circular motion, (c) a wrist rolling motion, (e) stands for a gesture resembling the serve of a tennis player, and (j) represents a repeated rapid forward-backward motion.
They evaluated the system with a user study with twelve participants and a set of 10 unique gesture classes. The study consisted of having each participant enter a gesture class fifteen times using a WiiMote. Their results demonstrate an 80% recognition rate; however, the individual recognition rates varied from 58% to 98%. The authors do state that their recognition rate is lower than that shown in previous work, but they believe this is an expected rate since they utilized simpler methods and implemented more gesture classes with less gesture training per class.

Discussion: I don't really know much about this type of recognizer, so I wouldn't be certain whether there are any other developments supporting 3D gesture recognition. Even though it seems like a very simple development, and the results are not very high or better than previous work, this may just be a start. I like the idea of them finding a use for their system; using it for prototype testing sounds like a reasonable application.




Sunday, April 3, 2011

Paper Reading #18: Activity Awareness

Comments:

Reference Information:
Title: Activity Awareness in Family-Based Healthy Living Online Social Networks
Author: S. Kimani, S. Berkovsky, G. Smith, J. Freyne, N. Baghaei, D. Bhandari.
Presentation: (Conference Presentation) IUI'10. February 7-10, 2010, Hong Kong, China

Summary: In this paper, the authors describe an activity awareness user interface combined with a social network system. They believe that social relationships and family involvement can improve family members' health management. Thus, they developed a system where more interaction between family members can be achieved and where each member can track and access tools relating to their healthy living activities.

The social network system is structured around families, where a community has many member families. The Activity Awareness Interface allows them to note their healthy activities online through interactions such as forum posts, an activity diary, and a blog. The activity diary is where individuals can write about their real-world healthy living activities. Each activity recorded in the diary can later be displayed as a report card with graphs about their performance, as well as their social interaction performance.

In order to evaluate the system, a user study was carried out. They measured how the Activity Awareness Interface contributed to healthy living in contrast to the family-oriented healthy living social network alone. They targeted families of four: two parents and two children. Some of the families utilized the social network on its own, while the rest used the social network along with the activity awareness interface.

The study was divided into three stages: pre-interaction, interaction, and post-interaction. In the pre-interaction stage, participants were introduced to the system. In the interaction stage, participants utilized the system for three weeks, and in the post-interaction stage participants filled out experience questionnaires. Overall, they noted the benefits of using both systems together instead of just the social network.

Discussion:
I think that this is definitely an innovative way to help families maintain healthy lives. However, I do not think the paper was as informative as it could have been. I would have liked to see more explanation about how the social network worked and how it interacted with the interface. In the study they mention how they compared the effectiveness of the interface against the results obtained from the families using the social network only. However, they did not explain what features were available in the social network and how those would also help families maintain a healthy life.


Tuesday, March 29, 2011

Paper Reading #17: Language Complexity

Comments: 

Reference Information: 
Title: Using Language Complexity to Measure Cognitive Load for Adaptive Interaction Design
Authors: M. A. Khawaja, F. Chen, N. Marcus
Presentation: (Conference Paper) IUI'10, February 7-10, 2010, Hong Kong, China.

Summary: Cognitive load is the mental load that a person's memory carries when performing a problem-solving task. There is a limit to the capacity a person can maintain for processing new information, so when a person is going through a process with large amounts of information and time limitations, the load can become excessive. Such a load can be experienced by users of an interactive system.

In this study, researchers investigated patterns of speech from cognitively low-load and high-load tasks. Their intention is to use the results obtained from the study to see how they can be applied to user interface evaluation and interaction design improvements. They believe that if a system can determine the user's cognitive load, it could adjust to the user in order to provide him or her with a better experience.

The researchers believe that the choice of words and the form of speech are very different from written content, since there is not really much time to analyze and think about the way we want to present our ideas. Thus, it was better to carry out their studies on transcribed speech. They obtained the data from members of bushfire management teams in Australia.

The complexity measurements they concentrated on included: Lexical Density (ratio of unique words to the total number of words), Complex Word Ratio (ratio of complex words, three syllables or more, to the total number of words), Gunning Fog Index (based on sentence lengths and complex words), Flesch-Kincaid (estimates the number of years of education a person would require to understand the text), SMOG Grade (also focuses on the education a person requires to fully comprehend text), and Lexile Level (a measure of complexity).
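To make a couple of these measures concrete, here is a small sketch using the standard published formulas for Gunning Fog and Flesch-Kincaid; the syllable counter is a crude vowel-group heuristic of my own, so the numbers are approximate:

```python
import re

def syllables(word):
    """Crude heuristic: count groups of consecutive vowels."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def metrics(text):
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    complex_words = [w for w in words if syllables(w) >= 3]
    n = len(words)
    return {
        "lexical_density": len(set(w.lower() for w in words)) / n,
        "complex_word_ratio": len(complex_words) / n,
        # Gunning Fog: 0.4 * (avg sentence length + 100 * complex word ratio)
        "gunning_fog": 0.4 * (n / sentences + 100 * len(complex_words) / n),
        # Flesch-Kincaid grade: 0.39*(words/sentence) + 11.8*(syllables/word) - 15.59
        "flesch_kincaid_grade": 0.39 * n / sentences
                                + 11.8 * sum(map(syllables, words)) / n - 15.59,
    }

sample = "The operator requested additional resources. Units responded immediately."
for name, value in metrics(sample).items():
    print(f"{name}: {value:.2f}")
```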

The majority of the results obtained from the study were consistent with the hypotheses they presented. With these results, they may be able to measure the cognitive load imposed by interactive systems and develop something that will aid the interaction experience for the user.

Discussion: This is the first conference paper I read that is not directly describing a technology development. In this study, researchers were interested in obtaining data that could later help in the development of a system. It was interesting to learn about the measurements that they use in order to determine the cognitive load of users. I would be really interested in reading about how the results from the study are implemented into the design of an interaction system.

When I was trying to find a relevant picture for the blog, of which there weren't many, I found an image on a Scholastic website. The content of the page explained what the Lexile Level is and how it is used with kids in order to measure their reading levels. It was interesting to read about how these complexity measurements are being used in the education field.

Thursday, March 24, 2011

Paper Reading #16: The Satellite Cursor

Comments:

Reference Information:
Title: The Satellite Cursor: Achieving MAGIC Pointing without Gaze Tracking using Multiple Cursors
Authors: C. Yu, Y. Shi, R. Balakrishnan, X. Meng, Y. Suo, M. Fan, Y. Qin.
Presentation: UIST '10, October 3-6, 2010, New York, New York.

Summary:
The Satellite Cursor is a technique that uses multiple cursors with the goal of improving pointing performance. The developers achieve this goal by reducing input movement, that is, how much movement the user needs to make before reaching the target. Previous techniques were based on Fitts' Law; they tried to improve pointing performance by manipulating the amplitude of movements and the width of targets in motor space. However, these techniques do not focus on reducing the distraction caused by bypassed targets.
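For reference, Fitts' Law predicts movement time from the amplitude A of the movement and the width W of the target; here is a quick sketch (the constants a and b are device-specific values I made up):

```python
import math

def fitts_mt(amplitude, width, a=0.1, b=0.15):
    """Shannon form of Fitts' Law: MT = a + b * log2(A / W + 1)."""
    return a + b * math.log2(amplitude / width + 1)

# Halving the distance the cursor must travel lowers the predicted time,
# which is the lever the satellite cursor pulls on.
print(fitts_mt(amplitude=800, width=40))  # farther target
print(fitts_mt(amplitude=400, width=40))  # same target, half the distance
```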

The Satellite Cursor focuses on achieving an appropriate layout of targets in motor space as well as minimal distraction. It employs one cursor per target, but only one cursor is actually on a target when the user is selecting it; in other words, only one cursor is able to select a target at a time. Thus, there is one constraint that must be met: targets cannot overlap. When the user moves the mouse, all cursors move synchronously. The developers propose a two-step algorithm, "Aggregate and Expand." In the 'Aggregate' step, all targets are aggregated to the main cursor. Then, in the 'Expand' step, the locations of all satellite cursors are calculated in order to distribute the targets to the satellite cursors.
In this image, there are four different satellite cursors that move synchronously, demonstrating how only one cursor is able to select a target at a time.
In order to evaluate the Satellite Cursor, the developers carried out two experiments: one was a simple pointing task, while the second was a more complex task with multiple targets in layouts of varying density. Based on their results, there are two main areas where the Satellite Cursor is successful: it can save significant mouse movement to reach a target, and it is especially beneficial for sparse target layouts. They concluded that satellite cursor performance can be modeled successfully using Fitts' Law.

Discussion: Even though I think this is a very creative development, and the results show that it should decrease the mouse movement needed to reach a target, I am not sure how effective this would be. Having all these cursors floating around could cause more distraction than bypassed targets do. I cannot tell for sure, since I have never tried something like this before. They do discuss in the paper how clutter can affect the visual aspect of the technique, which is why they affirm it is more effective in sparse layouts. I would like to try it out and see how confusing it does or does not get.

Tuesday, March 22, 2011

Paper Reading #15: Enhanced Area Cursors

Comments:

Reference Information:
Title: Enhanced Area Cursors: Reducing Fine Pointing Demands for People with Motor Impairments
Authors: L. Findlater, A. Jansen, K. Shinohara, M. Dixon, P. Kamb, J. Rakita, J.O. Wobbrock.
Presentation: UIST '10, October 3-6, 2010, New York, New York.

Summary: This paper describes the development of four different enhanced area cursors that can alleviate some of the challenges faced by computer users with motor impairments. Clicking with a pointing device involves two phases: a ballistic phase and a corrective phase. The corrective phase, where precision and control are very important, is the one that presents the most problems for motor-impaired users. The authors' main concern was to develop something that would help users with small targets in reduced spaces.

The four enhanced area cursors developed are as follows:
  • Click-and-Cross: the user moves a circular area cursor over the area where the desired target is located. Once the circular area is in place, the user activates it with a click. In order to make their selection, the user needs to cross the cursor through the arc assigned to the small target.
  • Cross-and-Cross: the user moves the area cursor to the desired location; then, by crossing the red trigger arc, the area is activated. Then, to select the desired target, the user needs to cross the target's arc, just as is done in Click-and-Cross.
  • Motor-Magnifier: the user moves the area cursor to the desired location and clicks once. Then a Bubble cursor appears, and the user must select the target by pointing and clicking (see the sketch after this list).
  • Visual-Motor-Magnifier: the user moves the cursor to the desired area and clicks once. Then the area is magnified visually, in addition to the motor magnification of the Motor-Magnifier. A Bubble cursor is used once again, and the user must select and click.
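The Bubble cursor used in the two magnifier variants is essentially nearest-target selection; here is a minimal sketch of that selection rule, with invented target coordinates:

```python
import math

def bubble_select(cursor, targets):
    """Pick the target whose edge is closest to the cursor position."""
    def edge_distance(target):
        (x, y, radius) = target
        return math.hypot(cursor[0] - x, cursor[1] - y) - radius
    return min(targets, key=edge_distance)

# Targets as (x, y, radius); the cursor never has to land exactly on one.
targets = [(100, 100, 10), (140, 105, 8), (300, 40, 12)]
print(bubble_select((132, 103), targets))  # -> (140, 105, 8)
```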
Two enhanced area cursors. Click-and-Cross: an area cursor (top-left) transforms covered targets into crossing arcs (top-right). Visual-Motor-Magnifier: an area cursor (bottom-left) expands visual and motor space for point-and-click selection (bottom-right).
There were two other designs that were abandoned. Even though the developers believed they were promising, informal evaluation demonstrated otherwise. In both of these designs, the Ballistic Square and the Scanning Area Cursor, the selection process was longer: in the first the user must decrease the selection area manually, while in the second the cursor iterates through all the targets found inside the area cursor, and the user selects when the desired target is highlighted.

They evaluated their designs with a user study involving both motor-impaired users and able-bodied participants. Even though they were focusing on how these cursors would aid motor-impaired users, the information they collected from able-bodied participants was helpful for comparison purposes. They presented participants with a testing environment containing many distracting targets (grey) and one desired target (green). All the testing scenarios were presented to the participants randomly, and they tested all four enhanced cursors.

They concluded that the Visual-Motor-Magnifier and Click-and-Cross cursors were the most successful of the four. These successfully eased the selection process for small, dense targets and reduced the corrective-phase challenges users faced.

Discussion: I really enjoyed reading about this research area. It is really important to develop technology for all people, and design it to fulfill the needs of those who are in physical disadvantage as well. 

I don't think they specified how easy it would be, or what the process was, to use these cursors in real applications. They mentioned some testing in Microsoft Word and on a website, but I wonder how difficult it would be to implement.


Tuesday, March 8, 2011

Paper Reading #14: Chronicle

Comments:
Comment 2

Reference Information:
Title: Chronicle: Capture, Exploration, and Playback of Document Workflow Histories
Authors: T. Grossman, J. Matejka, G. Fitzmaurice
Presentation: (Conference Paper) UIST 2010/2009

Summary: Keeping track of operations carried out in a document is something that the majority of user interfaces do in today's technology. However, once the document is saved, there is no way to obtain that information. Chronicle is software that supports "graphical document workflow exploration and playback." Basically, it is able to store the history of the modifications done to a document and save it as part of the document itself.

The authors discuss some related work that has been done in the area, for example operation history and undo management, video summarization and browsing, as well as multimedia tutorials; however, the aim of these kinds of software is to help the user not lose their information. Chronicle, in contrast, focuses on helping users understand the workflow that was applied to a certain document.

Chronicle: a) main Chronicle window, b) the timeline, c) application/playback window.
Chronicle was implemented in an imaging application. It allows users to watch a video of the modifications made to an image. There are three main parts to the application: 1) the main Chronicle window, which shows a hierarchy of modifications. All modifications that have been made to an image are condensed into seven steps; the user is able to select one and see seven modifications done within that one step, and so on. 2) At the bottom of the window, the user can see an interactive timeline for moving quickly through the history of the modifications. 3) The main application window, with the window playing the video on top of it.
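Here is a toy sketch of the "condensed into seven steps" idea: recursively chunking a long edit history so each level of the hierarchy shows at most seven groups. Chronicle's real grouping criteria are surely smarter than even slicing:

```python
def condense(operations, fanout=7):
    """Split a flat edit history into at most `fanout` chunks per level."""
    if len(operations) <= fanout:
        return operations
    size = -(-len(operations) // fanout)   # ceiling division
    return [condense(operations[i:i + size], fanout)
            for i in range(0, len(operations), size)]

history = [f"op{i}" for i in range(100)]    # 100 recorded modifications
top_level = condense(history)
print(len(top_level))          # 7 top-level steps
print(len(top_level[0]))       # each expands into at most 7 sub-steps
```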

The developers evaluated Chronicle by carrying out a short user study. They recruited eight participants who had at least three years of experience with image editing software. Participants were given a short introduction to the software and a walk-through of some activities in order to present and explain its functionality. They were then given five tasks to complete individually, and they filled out a survey about their experience. The authors note that the majority of the comments and ratings were positive.

Chronicle can be used in various settings, such as supporting team collaboration, serving as a learning or tutorial aid, and even helping the user who wants to know how he got to his current state in the application. Some of the future work they are considering involves memory consumption and management as well as enhancements to display settings.

Discussion: I would like to see this kind of software implemented in different kind of applications, for example word documents, video editing, and others. A system like Chronicle could be very useful in the education environment, for teaching students how to use the software required for their courses. 
However, I think privacy issues could very easily be raised by the use of this software. If unauthorized users have access to the complete process of developing a certain image, or solving a problem, they could very easily replicate that piece of work. I found it interesting that this was not mentioned in the paper at all. It might be because this software is not released yet, but it could definitely become an issue.

I think the user study they carried out could have provided better or more accurate results if participants had used this tool for an extended period of time, not only a couple of hours. Even though they treated their evaluation as a usability study, I would consider it more a measure of how easy it is to be introduced to the software.

Saturday, March 5, 2011

Paper Reading #12: Twin Space

Comments:

Reference Information:
Title: TwinSpace: an Infrastructure for Cross-Reality Team Spaces
Authors: D.F. Reilly, H. Rouzati, A. Wu, J. Y. Hwang, J. Brudvik, W. K. Edwards.
Presentation: (Conference Paper) UIST 2010/2009

Summary: TwinSpace is a software infrastructure able to combine interactive workspaces with collaborative virtual worlds that allow for remote participants. Smart spaces have been a topic of research for quite some years now, providing spaces that support group work in an interactive environment. However, these spaces are designed for collaborative work with all members of the team in the same physical space. This is a limitation, since remote participants cannot access the technology available in the smart space.

The creators of TwinSpace intended to fuse these two settings by creating a balance between both sets of participants. They developed a virtual smart space where remote collaborators can meet and use advanced technology from within the virtual space. Their main challenge was how to combine the physical and virtual spaces and make the virtual space somewhat similar to the physical one.

Their development provides four different contributions to research in this area: a communications layer, a common model, mapping capabilities, and the virtual clients for the smart virtual spaces. They developed two implementations of TwinSpace and discuss their case studies with them. The first was an activity-mappings implementation, which currently has two modes, a brainstorming mode and a presentation mode, as can be observed in the picture below.

The second case study was a cross-reality collaborative game which places two teams working toward a common goal. This prototype focuses on studying the asymmetry in both game controls and team dynamics.

Discussion: When I first started reading this paper I immediately thought about the discussions we've had in class about Second Life, and sure enough, they do mention it later in the paper. I was also able to picture this as a game, and at the end of the paper they describe the game implementation they developed. I am not a gaming person, but I imagine that if I were in a meeting like this one, I would feel as if I were in a game... However, I can totally see this as a useful tool, since many organizations are now global and have to interact with people across the world on a daily basis.



Friday, March 4, 2011

Paper Reading #13: D-Macs

Comments:
Comment 1
Comment 2

Reference Information:
Title: D-Macs: Building Multi-Device User Interfaces by Demonstrating, Sharing and Replaying Design Actions
Authors: Jan Meskens, Kris Luyten, Karin Coninx
Presentation: (Conference Paper) UIST 2010/2009

Summary: In today's technology, developers need to adapt and deploy applications on multiple platforms: interactive TVs, mobile phones, tablet PCs, and more. Designing suitable user interfaces manually for each platform would be time consuming and repetitive. According to the authors, there is not yet a system that can automate the transformation and design of UIs across platforms. This paper presents D-Macs, a multi-device design tool.

Figure 1: In D-Macs, designers can demonstrate an action sequence, share these action sequences with other designers, and replay previously recorded actions.
Design tool Macros (D-Macs) is a multi-device GUI builder that allows designers to design a UI only once and automatically obtain the equivalents for other devices. It supports three major steps: 1) the designer "demonstrates" the sequence of actions that needs to be automated; 2) a repository allows designers to share their recorded steps; and 3) there are replay and edit capabilities.

D-Macs is developed on the basis of multi-device user interface design, programming by demonstration, and community-shared expertise. The paper gives a description of the key features of D-Macs and an overview of the interaction techniques and UI elements it offers, and discusses the architecture and implementation details.
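A bare-bones sketch of the demonstrate/share/replay loop as I picture it; the action names and the dictionary-based device design are mine, while D-Macs works on real multi-device GUI designs:

```python
# A recorded "action sequence": each step is (action, parameters).
# Designers would demonstrate these on one device's design.
recorded = [
    ("resize", {"widget": "header", "height": 40}),
    ("move",   {"widget": "menu", "dock": "top"}),
]

shared_repository = {"compact-layout": recorded}   # the sharing step

def replay(actions, design):
    """Apply a stored action sequence to another device's design."""
    for action, params in actions:
        widget = design.setdefault(params["widget"], {})
        if action == "resize":
            widget["height"] = params["height"]
        elif action == "move":
            widget["dock"] = params["dock"]
    return design

phone_design = {}
print(replay(shared_repository["compact-layout"], phone_design))
# {'header': {'height': 40}, 'menu': {'dock': 'top'}}
```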

Currently, designers need to search through the repository for previously stored action sequences and decide for themselves whether a sequence is useful for the current design. The developers believe this effort can be lowered by developing a recommendation system that displays relevant action sequences designers can choose from. In the future, the creators of D-Macs want to release it as open-source software.

Discussion: In recent discussions with teammates, the topic of developing software for different devices has come up. It made me realize the importance of this area, since in today's world we as users want to have the same software capabilities on every device we own (my point of view as a user!). I would really like to read more about this topic and see how an application like this one can reduce the repetitiveness of designing UIs for various devices.

Thursday, February 24, 2011

Paper Reading #11: Eden: Supporting Home Network Management Through Interactive Visual Tools

Comments:

Reference Information:
Title: Eden: Supporting Home Network Management Through Interactive Visual Tools
Authors: J. Yang, W. K. Edwards, D. Haslem. 
Presentation: (Conference Paper) UIST 2010/2009

Summary: 
In today's world, network management has become a household task. There are many devices, including computers, printers, mobile devices, routers, etc., that people need to know how to configure, which becomes a difficult task for those with little or no background in networking. For this reason, existing technology in the form of wizards or built-in tools allows users to set up these configurations. However, such tools do not provide the knowledge necessary to understand even the minimum of what is going on, knowledge users will eventually need when they have to troubleshoot their systems.

In this paper, the authors introduce Eden. Eden is an interactive home network management system that allows direct manipulation and interaction from the user, but also provides an understandable model of what is going on, so that users understand enough to create a mental model of their network. As mentioned before, previous research has developed useful tools for network management; however, they are directed at network professionals rather than household users. The tools that are targeted at home networks lack the characteristic of giving the user enough information to understand their network. There is only one system, Network Magic, that can be compared to Eden; however, it lacks direct interaction.

Screen shot of Eden's User Interface
The researchers first gathered data regarding the features households desired in their network management systems. Three main areas were discovered: membership management, access control and network monitoring, and QoS policies for bandwidth management. Users want to be able to quickly know the status of their network and each device on it, in order to make troubleshooting easier. They also want security: they want to make sure they are not granting guests access to all of their devices, and they want parental controls for children. And third, they want their network to be up and running, especially for their most important applications.

The result: a spatial + logical user interface giving users the ability to see what is inside their 'Home' network and allowing them to group their devices into different 'Rooms.' This grouping can be useful in different ways, depending on the user's choice: the rooms could represent where the devices physically are, or different configurations and settings given to each room. The system is also able to display badges, which represent added settings and provide useful information about each device.

The system was evaluated by 20 participants with ages ranging from the 20s to the 50s and with different kinds of backgrounds. The majority had technical backgrounds, but their knowledge of network management varied. The testing was composed of two evaluations: a conceptual evaluation and a functionality + usability evaluation. The researchers obtained positive results from the evaluations and expressed their desire to continue expanding the application. One feature they want to add is making the system accessible remotely, perhaps as a web application.

Discussion:
I feel like my summary is really long, but I was trying to highlight the most important points about this development - and even then, I don't think I covered them all! This is a great technology; I think the user interface allows users to understand what is going on with their network, which will actually teach them at least the very basics of networking. I am one of those people who understand better if they put things on paper, maybe a drawing, or a sentence, or something... visualizing the network will definitely be helpful.

This reminds me of the little map shown in the Windows Network and Sharing Center, the one that maps your computer to the network access point and to the Internet. Each component in the map has an icon that helps you understand and create a mental model of your Internet connection. For example, if something goes wrong, it will usually display a big X over one of the icons, indicating what went wrong.

Overall I think Eden is a great system to implement, and I would definitely like to try it.

Monday, February 21, 2011

Paper Reading #10: Gesture Search: A Tool for Fast Mobile Data Access

Comments:
Comment 2

Reference Information:
Title: Gesture Search: A Tool for Fast Mobile Data Access
Author: Yang Li
Presentation: UIST 2010/2009

Summary: 
Mobile phones' power and storage capacity are increasing; however, access to all this data is not as efficient. Just like on personal computers, mobile phones' interfaces are not very efficient for searching data. This article discusses how touch-screen gestures are being employed in the process of searching data. However, current gesture systems are somewhat inefficient, since users need to remember the shortcut gestures assigned to each file. Li discusses how Gesture Search provides a more efficient way to search data by using shape writing: the user only needs to remember what they are searching for and start writing part of the file name.
 
The author provides an example of how the Gesture Search system works. Besides looking at the gesture characters entered, the system also considers frequency and search history when displaying the matching results (the order in which they are displayed matters). The author emphasizes that this is not only handwriting recognition technology; it is coupled with search techniques.
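As a simplified sketch of a ranking rule in that spirit, the following matches items against the recognized characters and weights them by how often each item was opened; the weighting scheme is invented, not Gesture Search's actual scoring:

```python
# Illustrative data items with access counts from a usage log.
items = {"Anna": 12, "Andrew": 3, "Maps": 25, "Anagram Fun": 1}

def rank(recognized_prefix, items):
    """Score items matching the recognized characters, weighted by frequency."""
    prefix = recognized_prefix.lower()
    scored = [(count, name) for name, count in items.items()
              if name.lower().startswith(prefix)]
    # More frequently accessed items float to the top of the results list.
    return [name for count, name in sorted(scored, reverse=True)]

print(rank("An", items))   # ['Anna', 'Andrew', 'Anagram Fun']
```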

Gesture Search allows for a maximized input area because it overlays the gesture input area on top of the results list. The system implements modeless input, which means it is able to identify whether the user is entering a gesture or trying to scroll or tap on the results list. In order to separate the GUI and gesture systems, they had to study the differences between the touch traces of the two.

Gesture Search has already been implemented in Java using the Android SDK 2.0, and it has been tested on various devices already out in the market. The developers also carried out a longitudinal user study with over a hundred mobile phone users before its release. In their study, they collected qualitative data through a user survey and quantitative data through a log that saved data from each user. The study revealed that users use this tool most to find contacts, rather than music, applications, and web page bookmarks. Users noted that the current way to invoke Gesture Search is not very convenient and recommended a few ideas.

Discussion: 
The distinction made between gesture recognition alone and its combination with search techniques is very important. I have only used gesture recognition before with the text input application my phone has, but I think the combination of both is a great idea. This is an interesting application, and the fact that they overlay the gesture input on the results list is really appealing; it seems more effective than being limited to a dedicated input space. Something else I liked was the search history feature and how it enhances the search.

Thursday, February 17, 2011

Paper Reading #9: Performance Optimizations of Virtual Keyboards...

Comments:

Reference Information:
Title: Performance Optimizations of Virtual Keyboards for Stroke-Based Text Entry on a Touch-Based Tabletop
Author: Jochen Rick
Affiliation: Department of Computing, The Open University, Milton Keynes, MK7 6AA, UK

Summary: In this article, Rick discusses his studies on enhancing and optimizing the performance of text entry on tabletops. He explains that even though a physical keyboard could be attached to such a device, it is impractical; it defeats the purpose of the interaction between the user and the tabletop. According to him, not much research has been done in this area, and his goal is to find a viable technique that enhances shape writing (stroking through all the letters of a word on a virtual keyboard) and to study how this technique is affected by different keyboard layouts.

Rick provides some background about the history of keyboard layouts: how the QWERTY layout came about, and why it has stayed the most popular and standardized keyboard layout internationally. As he discusses each of the layouts, he analyzes how stroke-based text entry would work with it. Much of the work that has been done in this area has been based on Fitts' Law. Rick conducted a user study to investigate the role of distance and angle in a sequence of strokes. With the information revealed in this study, he was able to create a mathematical model. He then applied this model and, in order to recognize a word from a sequence of strokes, used a list of the 40,000 most popular words in English from Project Gutenberg. The basis of his model was a three-part stroke (a beginning, middle, and end, as shown in the picture below), with calculations based on the distance and angle of the strokes.
He then evaluated the technique's performance with different keyboard layouts. He finds that it yields a 17.3% gain in performance when used with the QWERTY layout, but a 29.5% gain when using the OPTI II layout - much faster than QWERTY.
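To see why layout matters for shape writing, here is a rough sketch that sums the distances between consecutive letters of a word on two key grids; the coordinates are invented stand-ins, not the real QWERTY or OPTI II geometry, and Rick's model also accounts for stroke angles:

```python
import math

# Toy key coordinates (column, row) for a few letters on two layouts.
QWERTY = {"t": (4, 0), "h": (5, 1), "e": (2, 0)}
ALT = {"t": (2, 1), "h": (3, 1), "e": (2, 0)}   # invented compact layout

def stroke_length(word, layout):
    """Total distance a stroke travels through the word's letters."""
    points = [layout[c] for c in word]
    return sum(math.dist(a, b) for a, b in zip(points, points[1:]))

for name, layout in [("QWERTY", QWERTY), ("ALT", ALT)]:
    print(name, round(stroke_length("the", layout), 2))
# The shorter total path on the compact layout is the kind of gain
# a layout optimized for stroking can offer.
```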


Discussion: I have heard of this kind of application before, but I have never tried it myself. Hopefully my next phone will have such an application. Even though I am very used to the QWERTY layout on both my computers and my phone, I think it would be interesting to use a different layout for a shape-writing application. I think a better layout could be created specially for shape writing: maybe one where the vowels are placed in convenient locations, and where letters that are often used together are placed near each other. As Rick explains, the QWERTY layout originally placed letters that were often used one after the other far apart to avoid mechanical problems. In this application it would be far more useful if those letters were placed next to each other instead of at opposite extremes of the layout. From a user's point of view, I would prefer something that looks like the Hexagon OSK layout.

Tuesday, February 15, 2011

Paper Reading #8: Planz to Put Our Digital Information in Its Place

Comments:
Jessica Gonzales
Kevin Casey

Reference Information:
Title: Planz to Put Our Digital Information in Its Place
Authors: W. Jones, J. Gemmell, D. Hou, B.D. Sethanandha, S. Bi.
Presentation: (Conference Paper)  CHI 2010. April 10-15, 2010, Atlanta, GA, USA


Summary:
This article explores and discusses the importance of digital space, the past and current research that has been done in this area, and the opposing views on the topic. The authors describe some of the problems that users have had with storing information in their digital space, including the fact that they would rather navigate their file system than search it. Some researchers have concentrated on exploring "placeless alternatives" to enhance users' interaction with digital space, while others, like the authors of the application this article describes, are trying to obtain a greater sense of "place" and "placing" of digital information that will allow users to better interact with their file system.
Planz allows users to manage their file system through a document-like overlay. When modifications are made through the file system they are reflected in Planz, and when they are made from Planz the file system is updated as well. Users are able to create and modify projects from this document-like planner, to write notes on it as if it were a word document, and to link folders, files, email, etc.

The testing was not very extensive; only eight people tried the application for the completion of a project. Basically, they worked on two similar projects: one they managed with the tools they had used in the past, and one with Planz. At the end, they were interviewed and filled out a survey. The results were not very conclusive since, as the authors state, participants had been using their previously preferred tools for a long time, so of course they found them beneficial.

Discussion:
This is a very interesting project. I was a little surprised their testing methodology wasn't more extensive. Even as a student, I have so many files saved that I forget they even exist. I would really like to try this tool; it seems very promising, at least for me, since I try to be very organized with my files. The integration of email into the organization system is very innovative to me; I hadn't heard of it before. I'm not sure how I can relate email to documents or folders, but I'm sure that once I had the functionality I would find an application for it. I'm still not sure how this document overlay would work; I mean, does it cover the complete file system, or only projects that you create and attach or link files to from the file system? Hopefully I get to read more about it, and try it out!

While looking for a picture online, I found the website where Planz can be downloaded: http://kftf.ischool.washington.edu/planz_install.html
 
 

Monday, February 7, 2011

Paper Reading #7: Manual Deskterity

Comments:

Reference Information:
Title: Manual Deskterity: An Exploration of Simultaneous Pen + Touch Direct Input
Authors: K. Hinckley, K. Yatani, M. Pahud, N. Coddington, J. Rodenhouse, A. Wilson, H. Benko, B. Buxton
Presentation: (Conference Paper) CHI 2010, April 10-15, Atlanta, GA, USA.

Summary:
This paper discusses research on the shift from traditional GUIs to more direct manual input from the user. The development being discussed is a scrapbooking application where the user can utilize both pen and touch input; the authors state that the combination of these is present in only a few systems. The basis of their application is the interaction between a designer and a physical design board and notebooks, along with previous work on the subject. The main idea of using both pen and touch, which they repeat all throughout the article, is to have a division of labor: the pen writes, touch manipulates, and when the user combines both input options, new tools are created.

The authors discuss related work that has been done in the field. They state that devices in the early 1990s utilized either pen or touch but did not differentiate between the inputs, and current applications usually implement one or the other.

The developers conducted an observational study in which they gained knowledge of how people usually interact with pens, tools, and pieces of paper. They noticed nine relevant behaviors in the group of eight people that participated in the study. These behaviors range from how the user alternates between pen work and manual work, to how they interact with clippings (those they want to use and those they don't), to how they reuse parts of paper that they had cut, etc. The developers tried to support these behaviors in Manual Deskterity.

Manual Deskterity's functionality ranges from using multi-touch interactions to zoom, flip pages, move and select objects, and create new objects, to writing, where only the pen can produce ink strokes (with the exception of finger painting). As mentioned above, the majority of the functionality is divided between pen and touch; however, other functions are reached by using both at a time. For example, stapling items into a stack, cutting and tearing objects with the X-acto knife, carbon-copying items, using one object as a ruler, and more. All these functions involve holding an object with one hand and doing something to it with the pen.

After a prototype was ready, they performed a usability evaluation with the help of seven professional designers. Overall, the feedback they received was very promising. Their future work for this development focuses on demonstrating the usability and effectiveness of the device and proving that it can enhance the user experience with applications such as this one.

Discussion:
I am one of those girls who really enjoys scrapbooking; I am no professional designer, but I see it as a hobby. I think I would really enjoy having such an application. It is really interesting that the design they have given it works for both professional and non-professional design; I can easily see such an application being used in design courses or something along those lines. I would like to find out if there is any other research being done on the whole concept of simultaneous pen and touch input; it would be really interesting to see this kind of application on devices that users use every day, for example phones, computers, etc.

Paper Reading #6: Adaptive Mouse

Comments:

Reference Information:
Title: Adaptive Mouse: A Deformable Computer Mouse Achieving Form-Function Synchronization
Authors: S.K. Tang, W.Y. Tang.
Presentation: (Conference Paper) CHI 2010, April 10-15, 2010, Atlanta, Georgia, USA

Summary:
This paper describes the implementation of a mouse that demonstrates the concept of "form-function synchronization." The idea is to have a deformable mouse that shapes itself according to the pressure the user applies when holding it, and the click functionality of the mouse is triggered by simply holding the fore or middle finger still.
The authors go on to describe the implementation of the Adaptive Mouse. The mouse has a deformation sensing module composed of a piece of foam, a Hall-effect sensor, and a magnet. This component is capable of detecting any deformation and sends a signal to the Micro-Controller Unit (MCU). Underneath the sensing module there is a reserved space for the optical sensor, batteries, and circuit boards.
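Here is a sketch of the "hold the finger still to click" behavior described above, under thresholds of my own choosing: if the tracked finger position barely moves over a short window of samples, a click fires:

```python
def detect_click(finger_positions, window=5, tolerance=1.5):
    """Fire a click when the last `window` samples stay within `tolerance`."""
    if len(finger_positions) < window:
        return False
    recent = finger_positions[-window:]
    xs = [p[0] for p in recent]
    ys = [p[1] for p in recent]
    return (max(xs) - min(xs) <= tolerance) and (max(ys) - min(ys) <= tolerance)

# Simulated finger samples from the sensing module: moving, then held still.
samples = [(0, 0), (4, 1), (8, 2), (9, 2), (9, 2), (9, 3), (10, 2), (9, 2)]
print(detect_click(samples))   # True: the finger has settled
```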


The developers recruited 30 subjects and had them experiment with the mouse in order to record data that would help them design an algorithm to recognize users' palms and the locations where their fore and middle fingers would be found. With the results of this study, they developed predictions to determine where the dynamic buttons should be placed.

After they developed the first prototype of the Adaptive Mouse, they had the same 30 subjects come and try the mouse. Some of the benefits they identified were high-accuracy feedback and an intuitive "hold then click" action. They also identified some drawbacks, including the inaccuracy of the mouse when it is held with only the thumb and little finger, and the fact that for some female users the mouse was too big to fit their palms.

The authors discussed related work and how it compares to the Adaptive Mouse, as well as the future work they have ahead of them. They concluded that operating such a device can create a "magic-like" effect for users. They also saw it as an advantage that the Adaptive Mouse could be used in the dark, since fewer visual cues are necessary to operate it. The developers recognize that there is a lack of accuracy, effectiveness, and efficiency, and set these as goals for future work.


Discussion:
Even though I think this idea is quite interesting, I don't think the design is quite as great. Maybe that is because it is just a prototype, but I would like to see it made from other kinds of flexible materials, and maybe a bit smaller. The article does not specify a diameter, but from what I could observe in the pictures, and given the comment that it was quite big for girls' palms, I can assume I wouldn't be comfortable using it. In their conclusion, they note that the size and other factors lower the effectiveness and efficiency of the Adaptive Mouse. I would like to read a follow-up on their studies if it becomes available and see what the improvements are.