Art and Ability: Cardinal

This project begins to examine the special physical needs of individuals with complex disabilities through the lens of their artistic and expressive needs. It proposes to develop and incorporate an art/research methodology, including stages of creation and analysis of prototypical tools, to address these overlapping needs of the participants. It is anticipated that these newly developed tools will have potential benefits for a broader spectrum of the user's needs, as well as for other users with or without disabilities. This iterative inquiry will take the form of collaborative art-creation sessions involving both researcher/artists and participant/artists with severe physical disabilities. Analysis of the impediments to these exercises in self-expression will guide the rapid development of new, prototypical art-making tools, techniques and materials. At the conclusion of the research, we will examine the effectiveness of the art/research methodology in refining and addressing the emerging research question: how can communication models be developed and employed for artistic expression by individuals with disabilities, and how can those models be applied to their other communication needs?

Cardinal: Eye Gesture Screenless Communication System

Several observations of current eye-gaze and eye-gesture systems point towards the potential benefits of a low-strain, computer-assisted, natural tool for users whose eyes are their primary means of communicating.

Three existing systems are instructive: early Bliss boards, myTobii computers and the EyeWriter. These are the salient features of each:

The Bliss board was a physical tool that allowed a trained user to communicate with a trained "listener" through eye gestures. A 2–3 foot square sheet of clear Plexiglas had its centre cut out, leaving a frame about 6 inches wide. The two conversants would face each other. The frame was divided into a grid of cells, each containing a smaller grid of alphabetic characters; for example, the top-left cell might contain the letters A, B, C, D, E and F arranged in a grid. The user would make a two-glance gesture to indicate a letter to the listener: up and to the right, followed by up and to the left, might combine to signify the upper-right letter in the upper-left cell.
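As a rough illustration of this encoding, the sketch below decodes a glance pair under one possible reading of the example above, in which the first glance selects the position within a cell and the second selects the cell. The cell layout, direction names and letter assignments are all hypothetical, not a record of any actual Bliss board chart.

    # Hypothetical two-glance decoder for a Bliss-style board.
    # Assumptions: four cells of six letters each, letters laid out
    # three wide by two tall inside each cell. Real boards varied.

    CELLS = {
        "up-left":    ["A", "B", "C", "D", "E", "F"],
        "up-right":   ["G", "H", "I", "J", "K", "L"],
        "down-left":  ["M", "N", "O", "P", "Q", "R"],
        "down-right": ["S", "T", "U", "V", "W", "X"],
    }

    # Positions within a cell, reading its 3x2 letter grid row by row.
    POSITIONS = ["up-left", "up", "up-right", "down-left", "down", "down-right"]

    def decode(position_glance, cell_glance):
        """Map a glance pair to a letter: position within a cell, then cell."""
        return CELLS[cell_glance][POSITIONS.index(position_glance)]

    # "Up and to the right, followed by up and to the left" signifies
    # the upper-right letter (C) of the upper-left cell.
    assert decode("up-right", "up-left") == "C"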

Two features of this system stand out. First, the goal of communicating with a listener is enhanced by keeping the face-to-face view of the conversants uninterrupted: they look at each other through the large hole in the centre of the board and glance to the edges of their field of gaze to signal alphabetic letters. Second, once both users have become accustomed to the system, the board itself can be removed and the pattern of eye gestures can still be interpreted.

In early use, the communicator might actually look at the squares in question, but later they simply gesture towards the squares, whether these are physically present or not. This sparks a differentiation between eye-gaze (looking) and eye-gesture (glancing).

The myTobii uses infrared cameras to track the communicator's gaze and maps it to a flexible set of on-screen buttons. The camera and motion-tracking software make a very workable tool. Unfortunately, the computer screen must constantly be the focus of the communicator's gaze, and it effectively becomes a barrier between the conversants. In theory, the cameras could track eye gestures that go beyond the edges of the screen; a "pause" feature used to be activated by glancing down beyond the bottom edge of the screen, although that feature appears to have been removed.

The EyeWriter glasses use an eye-tracking system that is not linked to a particular on-screen representation. In their first instantiation they were used with an on-screen software program to facilitate graffiti tagging, but the glasses themselves (the input device) are not tied to any screen the way the myTobii is.

The synthesis of these systems suggests a model in which a user could use their eyes to gesture towards abstract referents: hypothetical buttons that exist outside the field of attention. A user might look at a conversation partner and then glance left and right, which a computer-vision system would interpret as the letter D. Right then left might be O; up then left might be G. Because the communicator never attends to an on-screen representation, they are able to assess the impact of what they are saying, word by word, as we do in normal speech, rather than having to type out an entire phrase (while ignoring the conversation partner) and then play it back, with a highly intermediated effect.
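A minimal sketch of how such an interpreter might work, assuming a stream of (x, y) gaze samples from any tracker. The velocity threshold, the four-direction classifier and the letter table are illustrative placeholders, not the project's actual mapping; the key idea is that only fast movements count as gestures, so ordinary looking produces no output.

    import math

    VELOCITY_THRESHOLD = 300.0  # px/s; slower movement is "looking", not gesturing

    LETTERS = {                 # illustrative pairs from the examples above
        ("left", "right"): "D",
        ("right", "left"): "O",
        ("up", "left"):    "G",
    }

    def direction(dx, dy):
        """Classify a movement vector into one of four coarse directions."""
        if abs(dx) >= abs(dy):
            return "right" if dx > 0 else "left"
        return "down" if dy > 0 else "up"

    def gestures(samples, dt=1 / 60.0):
        """Collapse fast gaze movements into discrete directional glances.

        Consecutive fast samples in one direction merge into a single
        glance; slow samples reset the merge, so dwelling on the
        conversation partner is ignored entirely.
        """
        out, last = [], None
        for (x0, y0), (x1, y1) in zip(samples, samples[1:]):
            dx, dy = x1 - x0, y1 - y0
            if math.hypot(dx, dy) / dt < VELOCITY_THRESHOLD:
                last = None
                continue
            d = direction(dx, dy)
            if d != last:
                out.append(d)
                last = d
        return out

    def decode(samples):
        """Pair successive glances and look each pair up as a letter."""
        gs = gestures(samples)
        return [LETTERS.get(pair, "?") for pair in zip(gs[::2], gs[1::2])]

Because detection here is velocity-based rather than position-based, the same logic would work whether or not a screen or board is present, which is what a screenless mode requires.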

In the first test, the object of attention (a Google map) is situated in the middle of the screen, where the user can study it at will without triggering any buttons (as would happen with the myTobii system). Glancing towards any edge causes the map to scroll in that direction. Scrolling is triggered by a "mouse-over" effect, which does not require the user to look at, pause on, or otherwise fixate on a button; a simple glance suffices.
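The edge-glance behaviour can be sketched in a few lines, assuming the gaze point has already been mapped to screen coordinates; the zone width and scroll step below are placeholder values.

    EDGE = 0.1        # outer 10% of the screen on each side acts as a scroll zone
    SCROLL_STEP = 20  # map pixels scrolled per frame while gaze is in a zone

    def scroll_vector(gx, gy, width, height):
        """Return the (dx, dy) map scroll for a gaze point.

        Anywhere in the central region returns (0, 0), so the user can
        study the map freely; entering an edge zone scrolls immediately,
        with no dwell time or fixation required.
        """
        dx = dy = 0
        if gx < width * EDGE:
            dx = -SCROLL_STEP
        elif gx > width * (1 - EDGE):
            dx = SCROLL_STEP
        if gy < height * EDGE:
            dy = -SCROLL_STEP
        elif gy > height * (1 - EDGE):
            dy = SCROLL_STEP
        return dx, dy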

A subsequent instantiation will allow the user to wear EyeWriter glasses and look at a physical symbol board to spell words. After rudimentary training, we will test whether the user can continue to spell by glancing with their eyes, without the board present.

Further open-source software and hardware models will explore whether a sub-$100 device could be produced to facilitate communication (and control) without the presence of a computer screen.

 

Publications & Presentations

Haagaard, A., G. Shea, N. Chitty and T. Lal. Cardinal: Typing with Low-Specificity Eye Gestures and Velocity Detection. International Workshop on Pervasive Eye Tracking and Mobile Eye-Based Interaction, Sweden, 2013. (under review)

Shea, G., N. Chitty, A. Haagaard and T. Lal. Cardinal: An Eye Gesture Based Communication System. Best Poster Award: Eye Tracking Conference on Behavioral Research, Boston, 2013.

Shea, G., N. Chitty, A. Haagaard and T. Lal. Cardinal: An Eye Gesture Based Communication System. Demo and Talk: Disrupting Undoing: Constructs of Disability, Toronto, 2013.

Shea, G. and A. Haagaard. Artists Reflecting on Their Practice and Disability, Ethnographica Journal on Culture and Disability, Katholieke Universiteit Leuven, (under review).

Shea, G., Understanding the Work of Artists with Diverse Abilities: Applying Art, Design, Ethnography and Computer Science. Research Rendezvous, OCAD University, Toronto, 2012.

Shea, G., Art and Disability Research. A presentation to the Doctoral Program at SmartLab, Dublin, 2012.
