Visualizing Emergence

A project currently in development, Visualizing Emergence seeks to explore and visualize phenomena of emergence in data representing technologically mediated human communication and exchange within a techno-social complex adaptive system (CAS).

Using textual analysis and other data as substrate, research will focus on data from CIV-DDD partners, IBM Cognos and public sources, possibly including Twitter and other accessible APIs. In time we expect to aggregate data from additional sources. Leveraging senior researcher and student contributions from OCAD and York Universities, the project will explore and exploit a synthesis of scientific, artistic and aesthetic techniques, supported by software from partners including IBM Cognos.

Project challenges include:

  • Finding the right data set and evaluating data quality
  • Representing and managing multivariate data
  • Choosing models and metaphors; ensuring legibility and navigation

Visualizing Emergence will examine model-based scientific visualization of complex data sets as well as emergent systems, data mining techniques and visualization. We will test, review and select the most appropriate software approach for developing the data models and generating dynamic results. The work will also deliver findings tied to the following CIV-DDD project aims: appropriateness of 2D or 3D visualizations, visualization aesthetics, and use of specific vs. generic tools.

 

For more information, please visit http://slab.ocadu.ca/project/visualizing-emergence.

Visualizing Emergence is supported by NCE-GRAND. This project is funded in part by the Centre for Information Visualization and Data Driven Design established by the Ontario Research Fund (ORF).

 


sBook: Futures of the Book

The goals of the sBook project are to develop a unifying information-architecture framework for readers, writers and publishers that ties together emerging standards, and to invent new forms of functionality and interoperability to achieve our design vision. The name “sBook” refers to the qualities of the intended experience:

  • Simple: the pleasure and beauty of human readable pages
  • Social: developing context and community through social media tools
  • Searchable: the power and practicality of electronic text
  • Smart: intelligent recommendations both within and beyond the work
  • Sustainable: effective use of material and energy throughout the lifecycle
  • Synchronized: can be updated by author and publisher
  • Scalable: open platform supporting new products, services, experiences

sLab's vision goes beyond the limited model of most existing ebook systems (such as Amazon’s Kindle) by fully supporting annotating, quoting, comparing, searching, taking notes, and sharing. Together these practices may be described as “active reading,” which many commentators view as the threshold that must be met to support true knowledge work rather than simple leisure reading [Golovchinsky 2008, Sellen and Harper 2002]. sLab claims that emerging digital text infrastructures (search and retrieval systems, social media) are increasingly good at facilitating collective and institutional textual practices such as citing, referencing, curating, publishing and managing. However, they are not very good at facilitating personal textual practices such as highlighting, commenting and annotating. This bias stands in contrast to that of paper texts, which facilitate personal practices while making social and institutional ones more complex.

A number of competing systems, open and proprietary, exist for sorting, delivering and engaging with texts. The focus of this project will be to explore why, when and how these solutions need to interoperate, and to develop new pathways, 'middleware' and interface technologies that help connect the pieces and experiences together. The first design task is to create a framework that maps and relates emerging standards, systems and devices, working together, and with external partner organizations, to bridge digital and paper text solutions in innovative ways.
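
As a thought experiment of what such bridging middleware might exchange, the sketch below shows one possible format-neutral annotation record that carries both a digital and a print locator for the same passage. All class and field names here are invented for illustration; they are not part of any sBook specification or existing ebook standard.

```python
# Hypothetical sketch: a minimal, format-neutral annotation record that could
# travel between reading systems. Names and fields are illustrative only and
# are not part of any existing sBook specification or ebook standard.
from dataclasses import dataclass, field
from typing import Optional
import json


@dataclass
class Locator:
    """Points at a passage in a work, independent of any one rendering."""
    work_id: str                            # e.g. an ISBN or DOI for the work
    digital_anchor: Optional[str] = None    # e.g. an EPUB CFI-like string
    print_anchor: Optional[str] = None      # e.g. "edition 2, p. 141, para 3"
    quoted_text: Optional[str] = None       # fallback: the passage itself


@dataclass
class Annotation:
    """A personal act of active reading (highlight, note, share target)."""
    locator: Locator
    kind: str                               # "highlight" | "note" | "quote"
    body: str = ""                          # the reader's own words, if any
    tags: list[str] = field(default_factory=list)
    shared_with: list[str] = field(default_factory=list)  # people or groups

    def to_json(self) -> str:
        """Serialize for exchange between devices, apps, or institutions."""
        return json.dumps(self, default=lambda o: o.__dict__, indent=2)


if __name__ == "__main__":
    note = Annotation(
        locator=Locator(work_id="urn:isbn:9780000000000",
                        quoted_text="Reading is an active process..."),
        kind="note",
        body="Compare with Sellen and Harper on paper affordances.",
        tags=["active-reading"],
    )
    print(note.to_json())
```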

Following from this phase will be the development of prototype displays, applications, and devices that seek to make use of and extend this framework, calling attention to the advantages of an open, shared and accessible infrastructure. In addition to these human experiential benefits, the sBook framework seeks to foster significant advances in sustainability by developing expectations and business models for print-on-demand, reducing needless inventory. The development of the sBook framework starts from three specific attributes of reading we see as important and in need of critical attention and material support:

  • Reading occurs in a variety of spaces and places, and at different times
  • Reading is a social practice that involves other people, collectives, and institutions
  • Reading is an active process in the productive trajectory of intellectual work (that might include thinking, writing, making, linking, etc.) rather than a passive process of consumption.

Given these precepts, the sBook framework is oriented towards conserving the valuable aspects of both digital and paper-based text. Current text solutions clearly foster and develop these aspects of reading to different degrees, and for different reasons. Digital text solutions make personal rather than institutional distribution of texts more possible, but are currently limited in order to maintain traditional economic models of publishing. Ebook software standards and devices make markup and highlighting of text (important aspects of active reading) difficult, whereas paper copies encourage these practices. Key to our understanding of these issues is that they involve material and technical development as well as institutional change. The sBook framework does not discriminate between social, organizational and technical development; it encompasses all three.

 

For more information, please visit http://slab.ocadu.ca/project/sbook-futures-of-the-book.

 


Tweetris

In Tweetris, pairs compete in front of a large display to form random Tetris pieces (tetrominoes) with their bodies. A picture is taken of the winner of each round as they make the shape, and this picture is tweeted to the TweetrisTO account. Anyone can then play a game of Tetris using these tetromino images in real time on their smartphone or web browser. Tweetris was a curated exhibit at Scotiabank Nuit Blanche on October 1, 2011, has since been demoed at the Digifest 2011 conference and the Dalhousie University Open House, and will be submitted as an art exhibit to TEI 2012. Research is currently underway to examine factors impacting gameplay (setting, audience participation, observation of prior players) over the course of Nuit Blanche.
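
The heart of each round is judging how closely a player's pose matches the target shape. The sketch below shows one plausible way such a comparison could be scored, assuming the camera feed has already been reduced to a 2x4 binary occupancy grid per player; the grid size and function names are illustrative assumptions, not the actual Tweetris implementation.

```python
# Hypothetical sketch of round scoring: compare a player's binary occupancy
# grid against the target tetromino. This is an illustration only, not the
# actual Tweetris implementation.

# The seven tetrominoes laid out on a 2x4 grid (1 = cell that must be filled).
TETROMINOES = {
    "I": [[1, 1, 1, 1],
          [0, 0, 0, 0]],
    "O": [[1, 1, 0, 0],
          [1, 1, 0, 0]],
    "T": [[1, 1, 1, 0],
          [0, 1, 0, 0]],
    "S": [[0, 1, 1, 0],
          [1, 1, 0, 0]],
    "Z": [[1, 1, 0, 0],
          [0, 1, 1, 0]],
    "J": [[1, 0, 0, 0],
          [1, 1, 1, 0]],
    "L": [[0, 0, 1, 0],
          [1, 1, 1, 0]],
}


def score(player_grid: list[list[int]], shape: str) -> float:
    """Fraction of grid cells where the player's silhouette matches the target."""
    target = TETROMINOES[shape]
    cells = [(r, c) for r in range(2) for c in range(4)]
    hits = sum(player_grid[r][c] == target[r][c] for r, c in cells)
    return hits / len(cells)


def winner(grid_a, grid_b, shape: str) -> str:
    """Return which of two simultaneous players matched the shape better."""
    return "A" if score(grid_a, shape) >= score(grid_b, shape) else "B"
```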

http://forum.grand-nce.ca/index.php/Artifact:4325

 


Common Pulse Symposium 2011

A partnership between OCAD University and the Durham Art Gallery, COMMON PULSE created a forum for presentations and discussion during a three-day symposium. Twelve artists and curators were invited to present their experience creating work in the context of university research. These presentations sparked a dialogue among all of the participants that examined current developments in digital media production and consumption within contemporary art practice, and how they predict, reflect or refute parallel media phenomena within North American culture in general. We looked at societal shifts in authorship brought about by file-sharing, sampling and the open source movement, as well as collaborative initiatives sparked by mobile media such as citizen journalism, wiki culture and flash mobs. In each model of research-informed digital media art practice, the flow back and forth between analysis and production is strongest and most focused in the artist-led research labs of the symposium contributors.

 

  

Common Pulse Transcripts
Proceedings from the Symposium

The Common Pulse Symposium brought together twelve prominent media artists to discuss their approaches to four issues:

  • Social Authorship: Where do Ideas Come From?
  • Digital Identity: The Public Self
  • Users and Viewers: The Role of Participation
  • The Artist in the Research Lab

This book presents the contributors speaking about art, interactivity, media and the shifting landscape of Canadian culture: David Clark, Brooke Singer, Marcel O'Gorman, Jim Ruxton, Martha Ladly, Michelle Kasprzak, Jason Edward Lewis, Jean Bridge, Steve Daniels, David Jhave Johnston and Jessica Antonio Lomanowska. Edited by Geoffrey Shea.

The book is available on Amazon or as a downloadable PDF.

Common Pulse Website

http://commonpulse.ca/symposium.php


Art and Interactive Projections: Tentacles

This research employs interactive public video projection to explore emerging social constructions involving play and ad hoc communities. In these installations the viewer is encouraged to participate in unstructured play. As with every interactive experience (and in fact, most other things in life) there is the initial satisfaction resulting from simply figuring out how one’s decisions, gestures and actions cause reactions and create effects in the surrounding environment.

The interplay of scale – the small screen in the palm of one’s hand contrasted with the large public screen on the facade of a building – parallels other central human experiences. The intimacy of touch, for example, is contrasted by the dominance of projected, broadcast visual stimuli, while the screen – the sign – forms a kind of text waiting to be read. Your personal space simultaneously shrinks and expands as the tiny gestures you make with your fingers are magnified for all to see. Public and private stand in stark contrast, highlighting dichotomies like wireless and wired, perception and cognition, knowing and being.

Operating from within the crowd, viewers or players have the opportunity to step onto the stage of the projected environment – to display themselves in action, engaged with other virtual beings. Movements, gestures and displays become part of this spontaneous public performance, suggestive of the activity on a dance floor, where typical rules about decorum, reservation, engagement with strangers and physical contact are suspended. Each private, gestural experience is amplified publicly as a by-product of being within a crowd. Taking action in public in this way constitutes one layer in the creation of community. Our behaviours and others’ meld to generate simultaneous effects, creating a joint awareness that forms the cornerstone of our collectivity.

Play is presented as a free-form, creative activity – a childlike enthrallment with exploration, skill-learning and sharing. The scale and location of the displays encourage parallel play and a growing awareness of the activities of other players nearby. The public nature of the experience creates the opportunity for ambient performance, where other players’ awareness of you subtly influences and rewards your behaviour. Finally, these factors combine with the ambiguous structures and activities built into each project to encourage social play and collaboration in an emerging, shared activity.

http://www.tentacles.ca
 

Exhibitions
Talk to Me, Museum of Modern Art, New York City, USA, July – November 2011
Transmission, GLOBAL SUMMIT 2011, Victoria, Canada, February 2011
MediaCity 2010, Bauhaus University, Weimar, Germany, October 2010
Festival du nouveau cinéma, Montreal, Canada, October 2010
Mobilefest, Museum of Image and Sound, São Paulo, Brazil, September 2010
Nuit Blanche, Lennox Contemporary Gallery, Toronto, Canada, October 2009

Publications
Geoffrey Shea and Michael Longford. Large Screens and Small Screens: Public and Private Engagement with Urban Projections. In Media City: Interaction of Architecture, Media and Social Phenomena, J. Geelhaar, F. Eckardt, B. Rudolf, S. Zierold and M. Markert (Eds.), Bauhaus-Universität, Weimar, Germany, pp. 201-210, 2010
Geoffrey Shea, Michael Longford, Elaine Biddiss. Art and Play in Interactive Projections: Three Perspectives. ISEA, Istanbul, 2011
Geoffrey Shea and Michael Longford. Identity Play in an Artistic, Interactive Urban Projection. CHI Workshop: Large Displays in Urban Life, Vancouver, 2011

Presentations
M. Longford, Connecting Talent in Digital Media, MITACS and the NCE GRAND, Mississauga, Canada, September 2010
M. Longford, “Digital Media: Successes and Accomplishments in Canadian Digital Media Research,” Canada 3.0, Stratford, Canada, May 2010
R. King, International Centre for Art and New Technologies (CIANT), Prague, Czech Republic, March 2010
G. Shea, Mobile Experience Innovation Centre (MEIC), Ontario College of Art and Design University, Toronto, Canada, February 2010
G. Shea, M. Longford, R. King, Discovery 2010, Ontario Centres of Excellence, Toronto, Canada, May 2010
R. King, Music in a Global Village Conference, Budapest, Hungary, December 2009
M. Longford, G. Shea, iPhone Developer's Group, Augmented Reality Lab, York University, Toronto, Canada, November 2009
M. Longford, Project Demonstration - A New Media Gathering, Town of Markham, Markham, Canada – October 2009
M. Longford, “Tentacles: Design, Technology and Interdisciplinary Collaboration in the Mobile Media Lab” PEKING/YORK SYMPOSIUM: Interdisciplinarity, Art and Technology, York University, Toronto, Canada, October 2009
G. Shea, “Artifact or Experience: Presenting Network Mediated Objects,” Interacting with Immersive Worlds, Brock University, St. Catharines, Canada, June 2009


Art and Ability: Cardinal

This project begins to examine the special physical needs of individuals with complex disabilities through the lens of their artistic and expressive needs. It proposes to develop and incorporate an art/research methodology, including stages of creation and analysis of prototypical tools to address these overlapping needs of the participants. It is anticipated that these newly developed tools will have potential benefits for a broader spectrum of the user’s needs, as well as for other users with or without disabilities. This iterative inquiry will take the form of collaborative art creation sessions involving both researcher/artists and participant/artists with severe physical disabilities. Analysis of the impediments to these exercises in self expression will guide the rapid development of new, prototypical, art making tools, techniques or materials. At the conclusion of the research, we will examine the effectiveness of the art/research methodology in refining and addressing the emerging research question of how communication models can be developed and employed for artistic expression by individuals with disabilities, and how they can be applied to their other communication needs.

Cardinal: Eye Gesture Screenless Communication System

Several observations of current eye-gaze and eye-gesture systems point towards the potential benefits of a low-strain, computer-assisted, natural tool for users who rely on eye control as their primary means of communicating.

Three existing systems were examined: early Bliss boards, myTobii computers and the EyeWriter. The salient features of each are as follows.

The Bliss board was a physical tool that allowed a trained user to communicate with a trained “listener” through eye gestures. A 2-3 foot square sheet of clear Plexiglas had the centre cut out, leaving a frame about 6 inches wide. The two conversants would face each other. The grid around the frame contained cells, each holding a square grid of alphabetic characters; for example, the top-left cell might contain the letters A, B, C, D, E, F arranged in a grid. The user would make a two-gesture glance to indicate a letter choice: up and to the right, followed by up and to the left, might combine to signify the upper-right letter in the upper-left square.

Two features stand out with this system. First, the goal of communicating with a listener is enhanced by keeping the face-to-face view of the conversants uninterrupted: they look at each other through the large hole in the centre of the board and glance to the edges of their field of gaze to signal alphabetic letters. Second, once both users have become accustomed to the system, the board itself can be removed and the pattern of eye gestures can still be interpreted.

In early usage, the communicator might look at the squares in question, but later they simply gesture towards the squares, whether the squares are physically there or not. This sparks a differentiation between eye-gaze (looking) and eye-gesture (glancing).
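
To make the two-glance encoding concrete, the sketch below decodes a gesture pair against a toy board layout, following the example above (first glance: the letter's position within its cell; second glance: which cell is meant). The layout, direction labels and letter assignments are invented for illustration and do not reproduce an actual Bliss board.

```python
# Hypothetical sketch of decoding a two-glance Bliss-board selection.
# Following the example in the text, the first glance picks the letter's
# position inside a cell and the second glance picks which cell; the 2x2
# layout and letter assignments below are illustrative only.

# Four corner cells of the frame, each holding four letters laid out in the
# same four corner positions (a toy 2x2-of-2x2 layout, not the real board).
BOARD = {
    "up-left":    {"up-left": "A", "up-right": "B", "down-left": "C", "down-right": "D"},
    "up-right":   {"up-left": "E", "up-right": "F", "down-left": "G", "down-right": "H"},
    "down-left":  {"up-left": "I", "up-right": "J", "down-left": "K", "down-right": "L"},
    "down-right": {"up-left": "M", "up-right": "N", "down-left": "O", "down-right": "P"},
}


def decode(first_glance: str, second_glance: str) -> str:
    """Turn a two-glance gesture into a letter.

    first_glance  -- position of the letter within its cell
    second_glance -- which cell of the frame is meant
    """
    return BOARD[second_glance][first_glance]


# Example from the text: up-right then up-left selects the upper-right
# letter of the upper-left cell ("B" in this toy layout).
assert decode("up-right", "up-left") == "B"
```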

The myTobii uses infrared cameras to track the communicator's gaze and maps it to a flexible set of on-screen buttons. The camera and motion-tracking software create a very workable tool. Unfortunately, the computer screen must constantly be the focus of the communicator's gaze and effectively becomes a barrier between the conversants. In theory, the cameras could track eye gestures that go beyond the edges of the screen. A “pause” feature used to be activated by glancing down beyond the bottom edge of the screen, although that feature appears to have been removed.

The EyeWriter glasses use an eye-tracking system that is not linked to a particular on-screen representation. In their first instantiation they were used with an on-screen software program to facilitate graffiti tagging, but the glasses themselves (the input device) are not linked to any screen the way the myTobii is.

The synthesis of these systems suggests a model in which a user could use their eyes to gesture towards abstract referents – hypothetical buttons that exist outside the field of attention. So a user might look at a conversation partner and then glance left and right, which would be interpreted by a computer vision system as the letter D. Right and left might be O. Up and left might be G. Because the communicator never attends to an on-screen representation, they are able to assess the impact of what they are saying, word by word, as we do in normal speech, rather than having to type out an entire phrase (while ignoring the conversation partner) and then play it back, with a highly intermediated effect.
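
A minimal sketch of that idea follows: glance pairs are looked up directly in a table and assembled into words, with no on-screen representation. Only the three pairings named above come from the text; the rest of the table and the function names are placeholders, not Cardinal's actual encoding.

```python
# Hypothetical sketch: map pairs of eye gestures straight to letters, with no
# on-screen buttons. Only the three pairings named in the text are real
# examples; everything else here is a placeholder, not Cardinal's encoding.

GLANCE_PAIRS = {
    ("left", "right"): "D",
    ("right", "left"): "O",
    ("up", "left"):    "G",
    # further pairs would be assigned during user training
}


def decode_stream(gestures: list[str]) -> str:
    """Consume a flat stream of gesture labels two at a time and spell a word."""
    word = []
    for first, second in zip(gestures[0::2], gestures[1::2]):
        letter = GLANCE_PAIRS.get((first, second))
        if letter is not None:
            word.append(letter)
    return "".join(word)


# The user glances left-right, right-left, up-left while still looking at
# their conversation partner between gestures.
print(decode_stream(["left", "right", "right", "left", "up", "left"]))  # DOG
```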

In the first test, the object of attention (a Google map) is situated in the middle of the screen, where the user can study it at will without triggering any buttons (as would happen with the myTobii system). Glancing towards any edge causes the map to scroll in that direction. Scrolling is triggered by a “mouse-over” effect that does not require the user to look at, pause on, or otherwise fixate on a button; a simple glance suffices.
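
The sketch below illustrates the kind of edge-glance logic described here, assuming the tracker already provides a gaze estimate in normalized screen coordinates; the margin width and pan step are arbitrary illustrative values, not parameters of the actual prototype.

```python
# Hypothetical sketch of the edge-glance scrolling behaviour: a glance into
# any screen margin pans the map in that direction, with no dwell time or
# fixation required. Gaze coordinates are assumed to arrive normalized to
# [0, 1] x [0, 1]; the margin width and pan step are arbitrary choices.

MARGIN = 0.1       # outer 10% of the screen on each side acts as a button
PAN_STEP = 25      # pixels to pan per update while a margin is glanced at


def pan_for_gaze(x: float, y: float) -> tuple[int, int]:
    """Return (dx, dy) to pan the map for one gaze sample.

    Looking anywhere in the central region returns (0, 0), so the user can
    study the map freely without triggering anything.
    """
    dx = -PAN_STEP if x < MARGIN else PAN_STEP if x > 1 - MARGIN else 0
    dy = -PAN_STEP if y < MARGIN else PAN_STEP if y > 1 - MARGIN else 0
    return dx, dy


# A glance toward the right edge pans right; the centre does nothing.
assert pan_for_gaze(0.95, 0.5) == (25, 0)
assert pan_for_gaze(0.5, 0.5) == (0, 0)
```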

A subsequent instantiation will allow the user to wear EyeWriter glasses and look at a physical symbol board to spell words. After rudimentary training, we will test whether the user can continue to spell by glancing with their eyes, without the presence of the board.

Further open-source software and hardware models will explore whether a sub-$100 device could be produced to facilitate communication (and control) without the presence of a computer screen.

 

Publications & Presentations

Alexandra Haagaard, Geoffrey Shea, Nell Chitty, Tahireh Lal. Cardinal: Typing with Low-Specificity Eye Gestures and Velocity Detection. International Workshop on Pervasive Eye Tracking and Mobile Eye-Based Interaction, Sweden, 2013. (under review)

Geoffrey Shea, Nell Chitty, Alexandra Haagaard, Tahireh Lal. Cardinal: An Eye Gesture Based Communication System. Best Poster Award: Eye Tracking Conference on Behavioral Research, Boston, 2013.

Geoffrey Shea, Nell Chitty, Alexandra Haagaard, Tahireh Lal. Cardinal: An Eye Gesture Based Communication System. Demo and Talk: Disrupting Undoing: Constructs of Disability, Toronto, 2013.

Shea, G. and A. Haagaard. Artists Reflecting on Their Practice and Disability, Ethnographica Journal on Culture and Disability, Katholieke Universiteit Leuven, (under review).

Shea, G., Understanding the Work of Artists with Diverse Abilities: Applying Art, Design, Ethnography and Computer Science. Research Rendezvous, OCAD University, Toronto, 2012.

Shea, G., Art and Disability Research. A presentation to the Doctoral Program at SmartLab, Dublin, 2012.


Body Editing

Body Editing is a gesture and biosensor platform that returns feedback (in the form of music, sound and visuals) to users. Users can move, gesture or provide biometric data, for example, to paint a picture, form a fractal, or create a soundscape or musical composition. It explores the relationship between the experience of movement and of biodata and the generative production of data aesthetics. In this installation, users are tracked in the installation space, while passersby are captured by a motion-capture camera and contribute to the feedback; together they create a layered data-visualization experience.
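
As a toy illustration of the kind of mapping involved, the sketch below turns a tracked hand height and a heart-rate reading into a pitch and a tempo; the input names, ranges and output parameters are invented and do not document the installation's actual pipeline.

```python
# Hypothetical sketch of a gesture/biosensor-to-sound mapping of the kind the
# installation describes. The input names, ranges and output parameters are
# invented for illustration; they do not document the actual Body Editing
# pipeline.

def map_to_sound(hand_height: float, heart_rate_bpm: float) -> dict:
    """Map a normalized hand height (0-1) and a heart rate to synth parameters."""
    hand_height = min(max(hand_height, 0.0), 1.0)
    pitch_hz = 110.0 + hand_height * (880.0 - 110.0)   # raise hand, raise pitch
    tempo_bpm = min(max(heart_rate_bpm, 40.0), 180.0)  # clamp to a musical range
    return {"pitch_hz": round(pitch_hz, 1), "tempo_bpm": tempo_bpm}


# A raised hand and an elevated pulse produce a higher, faster texture.
print(map_to_sound(hand_height=0.8, heart_rate_bpm=95))
```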

 
