Art Intersection Meetup: Jeremy Bailey and Midi Onodera

Jeremy Bailey and Midi Onodera are two artists who engage with online networks as an intrinsic component of their work. They will present their work as part of the Mobile Experience Lab's Art Intersection Meetup, a gathering place for artists, moving-image makers, gamers and technologists who are experimenting with art-related digital content and with how the moving image is presented in a connected world. Digital culture, social media and networks encourage new ways of storytelling, image making, idea sharing and collaboration. This Meetup celebrates artists and innovators who are embracing change and leading the next wave of creativity.

About Jeremy Bailey:


Throughout his career, the Toronto-born artist Jeremy Bailey has explored software in a performative context. As Rhizome author Morgan Quaintance has written, "Since the early noughties Bailey has ploughed a compelling, and often hilarious, road through the various developments of digital communications technologies. Ostensibly a satire on, and parody of, the practices and language of "new media," the jocose surface of Bailey’s work hides an incisive exploration of the critical intersection between video, computing, performance, and the body."

Specifically, Bailey’s works consist of all manner of performances that exist as videos, software, websites, inventions, institutions and ephemera all created and presented by his alter ego, Famous New Media Artist Jeremy Bailey.

Bailey studied at the University of Toronto from 1998 to 2002 and completed his Master of Fine Arts at Syracuse University in 2006. He has participated in residencies at the Banff Centre in Alberta, Canada; FACT in Liverpool, UK; and Quartier 21 in Vienna, Austria, and has been the subject of numerous solo and group exhibitions internationally.

For more information about his work visit jeremybailey.net and parinadimigallery.com/jeremy-bailey

About Midi Onodera:

Midi Onodera is an award-winning Canadian filmmaker who has been directing, producing and writing films for over twenty years. She has over twenty-five independent short films to her credit as well as a theatrical feature film and several video shorts. Her recent works feature a collage of formats and mediums ranging from 16mm film to Hi8 video to digital video and “low end” digital toy formats such as a modified Nintendo Game Boy Camera, the Intel Mattel computer microscope, the Tyco and Trendmasters video cameras.

Onodera's films have been critically recognized and included in numerous exhibitions and screenings internationally. Some highlights include the Andy Warhol Museum; the International Festival of Documentary and Short Films, Bilbao, Spain; the Rotterdam International Film Festival; the Berlin International Film Festival; the National Gallery of Canada; and a number of screenings at the Toronto International Film Festival.

For more information about her work visit midionodera.com

Tuesday, January 17, 2017 - 6:30pm

Common Pulse Symposium 2011

A partnership between OCAD University and the Durham Art Gallery, COMMON PULSE created a forum for presentations and discussion during a three-day symposium. Twelve artists and curators were invited to present their experience creating work in the context of university research. These presentations sparked a dialogue among all of the participants that examined current developments in digital media production and consumption within contemporary art practice, and how they predict, reflect or refute parallel media phenomena within North American culture in general. We looked at societal shifts in authorship brought about by file-sharing, sampling and the open source movement, as well as collaborative initiatives sparked by mobile media such as citizen journalism, wiki culture and flash mobs. In each model of research-informed, digital media art practice, the flow back and forth between analysis and production is strongest and most focused in the artist-led research labs of the symposium contributors.


Common Pulse Transcripts
Proceedings from the Symposium

The Common Pulse Symposium brought together twelve prominent media artists to discuss their approaches to four issues:

  • Social Authorship: Where do Ideas Come From?
  • Digital Identity: The Public Self
  • Users and Viewers: The Role of Participation
  • The Artist in the Research Lab

This book presents the contributors speaking about art, interactivity, media and the shifting landscape of Canadian culture: David Clark, Brooke Singer, Marcel O'Gorman, Jim Ruxton, Martha Ladly, Michelle Kasprzak, Jason Edward Lewis, Jean Bridge, Steve Daniels, David Jhave Johnston and Jessica Antonio Lomanowska. Edited by Geoffrey Shea.

Get it on Amazon - or - Download PDF

Common Pulse Website

http://commonpulse.ca/symposium.php

Wednesday, February 15, 2012 - 4:30pm
Lab Member: 
Geoffrey Shea
Ilse Gassinger
Martha Ladly
Rita Camacho
Justine Smith
Nell Chitty

Portage

How can users of a public urban space engage in a multi-sensory, multimedia outdoor experience? PORTAGE is transforming John Street, in the heart of Toronto's entertainment shopping district, into a Broad Locative Environment (BLE) - a space that will allow visitors to engage with outdoor multimedia installations and other mobile users.

Users will be able to navigate from Grange Park down John Street through a GPS- and Wi-Fi-enabled virtual theatre. Along the way they will interact with installed musical sculptures, create and share interactive audio components, trigger swarms of electronic cicadas residing in city trees and view themselves on a surveillance camera that they themselves control.

Through this installation Portage will investigate how cultural content delivery is made possible by emerging multi-capability mobile devices. These devices include cell phones, handhelds and PDAs with Wi-Fi, Bluetooth, GPS and GSM access. Portage will also examine the processes by which these technologies can be used in conjunction with each other and with environmental sensors and displays to move the mobile experience beyond the phone and to create an interactive and immersive environment.

http://www.mobilelab.ca/portage

Publications

Gardner, P., Shea, G., and Davila, P., Locative Urban Mobile Art Interventions: Methods for facilitating politicized social interactions. Aether: The Journal of Media Geography, 5B, 2010
Geoffrey Shea, Portage: Locative Streetscape Art, International Multimedia Conference, Proceeding of the 16th ACM International Conference on Multimedia, Vancouver, 1131-1132, 2008
Geoffrey Shea and Paula Gardner. PORTAGE: A Locative Streetscape Theatre, at Bauhaus-Universität, Weimar, Germany, 2007
Paula Gardner, Geoffrey Shea, Patricio Davila. PORTAGE: Locative Media at the Intersection of Art, Design and Social Practice, University of Siegen, Germany, 2007
Geoffrey Shea, Sound as a Basis for Locative Media Experience, Interacting with Immersive Worlds Conference at Brock University, June 2007
Shea, G. “Inside Out Experience Design”, in Ladly, M., and Beesley, P. (eds). Mobile Nation, Canadian Design Research Network, March 2007
Shea, G. Art, Design, Education and Research In Pursuit of Interactive Experiences. Proceedings of the 7th ACM Conference on Designing Interactive Systems, Cape Town, South Africa, 342-349, 2008
Shea, G., Donaldson, T., Gardner, P., Using the Mobile Experience Engine (MEE) to Create Locative Audio Experiences, 5th International Conference on Pervasive Computing at the University of Toronto, May 2007. Published in: Advances in Pervasive Computing, Austrian Computer Society, 2007
Shea, Geoffrey. Artifact or Experience: Presenting Network Mediated Objects, Interacting with Immersive Worlds Conference, Brock University, June 2009
Shea, Geoffrey. i = i + 1: Participatory Art and Design, Inter(pr)axis Conference, Toronto, 2008
Shea, Geoffrey and Patricio Davila. Catastrophe, conference presentation at DigiFest, Design Exchange, 2008

Saturday, February 23, 2013 - 4:30pm
Lab Member: 
Paula Gardner
Geoffrey Shea
Martha Ladly
David McIntosh
Patricio Davila
Ken Leung
Peter Todd
Leighanne Pahapill
Jennie Ziemiannin
Bryn A. Ludlow
Yvon Julie
Serena Lee
Jennifer Johnson

Measured Mile

This research is an exploration of the technology and strategies required for large-scale audio mesh networks, for eventual use in interactive, locative audio art installations. The use of wireless transmitters coupled with Arduino microcontrollers, sound generation and playback, and geolocation will be explored for the development of an extensible, modular network.

Iteration 1

One XBee attached to a computer running Processing serves as a coordinator, sending playback commands to several Arduinos with WaveShields from Adafruit. Processing requires Andrew Rapp's XBee API library and the Arduinos use his XBee Arduino library. The XBees are Series 2 in API mode. The WaveShields use SD cards (8 GB, FAT32, formatted with Mac Disk Utility) to store the audio files in WAV format (mono, 44 kHz, 16 bit).
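As a rough illustration of what the coordinator sends over the serial link, the sketch below builds an XBee Series 2 API-mode Transmit Request frame (type 0x10) carrying a one-byte playback command. The one-byte command protocol and the destination address are assumptions for illustration; the actual Processing/Arduino code linked below is the working implementation.

```python
def xbee_tx_frame(dest64: bytes, payload: bytes, frame_id: int = 1) -> bytes:
    """Build an API-mode ZigBee Transmit Request frame for an XBee Series 2."""
    assert len(dest64) == 8
    frame_data = (
        bytes([0x10, frame_id])   # frame type (Transmit Request), frame ID
        + dest64                  # 64-bit destination address
        + b"\xff\xfe"             # 16-bit address (0xFFFE = unknown)
        + b"\x00\x00"             # broadcast radius, options
        + payload                 # RF data: our playback command byte(s)
    )
    # Checksum: 0xFF minus the low byte of the sum of the frame data.
    checksum = 0xFF - (sum(frame_data) & 0xFF)
    length = len(frame_data)
    return bytes([0x7E, length >> 8, length & 0xFF]) + frame_data + bytes([checksum])

# Hypothetical example: tell one node to play audio clip 3.
frame = xbee_tx_frame(bytes.fromhex("0013a20040aabbcc"), b"\x03")
```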

Sample Processing 2.0 and Arduino code here

Iteration 2

The viewer's location is used to determine sound playback. An iOS app created in PhoneGap 3.1 and Xcode sends location information to a web-based database. Processing queries the database for the latest user locations. (This hack eliminates the need for a web service running on the controller computer that runs Processing, and sidesteps the issues around getting a static, public IP address.)

This assumes you have a local web server with PHP and MySQL. (Settings in your Apache config file may need to be adjusted to accept connections from clients other than 'localhost'.) I've configured a local database table with the fields: user, long, lat, id and time.
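The "latest location per user" query that Processing issues against that table might look like the following sketch. Here sqlite3 stands in for MySQL, and the table contents are invented for illustration; only the schema (user, long, lat, id, time) comes from the setup above.

```python
import sqlite3

# In-memory stand-in for the MySQL table described above.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE locations (id INTEGER PRIMARY KEY, user TEXT, "
    "long REAL, lat REAL, time INTEGER)"
)
# Invented sample fixes; the second alice row is the most recent.
rows = [
    ("alice", -79.39, 43.65, 100),
    ("alice", -79.40, 43.66, 200),
    ("bob",   -79.38, 43.64, 150),
]
conn.executemany(
    "INSERT INTO locations (user, long, lat, time) VALUES (?, ?, ?, ?)", rows
)

def latest_locations(conn):
    """Return each user's most recent (long, lat), keyed by user name."""
    cur = conn.execute(
        "SELECT user, long, lat, MAX(time) FROM locations GROUP BY user"
    )
    return {user: (lng, lat) for user, lng, lat, _ in cur}
```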

A whitelist entry needs to be added to one of the config.xml files in your PhoneGap / Xcode project so that jQuery's requests can reach the remote server.

Sample PhoneGap (Xcode), Processing and PHP files here

Presentations
Shea, Geoffrey and Alan Boulton. Telegraph: Transmission in a streetscape audio artwork. Trans-X Symposium, Toronto, 2012.

Sunday, March 3, 2013 - 4:30pm
Lab Member: 
Geoffrey Shea
Alan Boulton

Art and Interactive Projections: Tentacles

This research employs interactive public video projection to explore emerging social constructions involving play and ad hoc communities. In these installations the viewer is encouraged to participate in unstructured play. As with every interactive experience (and in fact, most other things in life) there is the initial satisfaction resulting from simply figuring out how one’s decisions, gestures and actions cause reactions and create effects in the surrounding environment.

The interplay of scale – the small screen in the palm of one’s hand contrasted with the large public screen on the facade of a building – parallels other central human experiences. The intimacy of touch, for example, is contrasted by the dominance of projected, broadcast visual stimuli, while the screen – the sign – forms a kind of text waiting to be read. Your personal space simultaneously shrinks and expands as the tiny gestures you make with your fingers are magnified for all to see. Public and private stand in stark contrast, highlighting dichotomies like wireless and wired, perception and cognition, knowing and being.

Operating from within the crowd, viewers or players had the opportunity to step onto the stage of the projected environment – to display themselves in action, engaged with other virtual beings. Movements, gestures and displays become part of this spontaneous public performance, suggestive of the activity on a dance floor, where typical rules about decorum, reservation, engagement with strangers and physical contact are suspended. Each private, gestural experience is amplified publicly as a by-product of being within a crowd. Taking action in public in this way constitutes one layer in the creation of community. Our behaviours and others’ meld to generate simultaneous effects, creating a joint awareness that forms the cornerstone of our collectivity.

Play is presented as a free-form, creative activity – a childlike enthrallment with exploration, skill-learning and sharing. The scale and location of the displays encourages parallel play and the growing awareness of the activities of other players nearby. The public nature of the experience creates the opportunity for ambient performance, where other players’ awareness of you subtly influences and rewards your behaviour. Finally, these factors combine with the ambiguous structures and activities built into each project to encourage social play and collaboration in an emerging, shared activity.

http://www.tentacles.ca
 

Exhibitions
Talk to Me, Museum of Modern Art, New York City, USA, July – November 2011
Transmission, GLOBAL SUMMIT 2011, Victoria, Canada, February 2011
MediaCity 2010, Bauhaus University, Weimar, Germany, October 2010
Festival du nouveau cinema, Montreal, Canada, October 2010
Mobilefest, Museum of Image and Sound, Sao Paulo, Brazil, September 2010
Nuit Blanche, Lennox Contemporary Gallery, Toronto, Canada, October 2009

Publications
Geoffrey Shea and Michael Longford. Large Screens and Small Screens: Public and Private Engagement with Urban Projections. Media City: Interaction of Architecture, Media and Social Phenomena. J. Geelhaar, F. Eckardt, B. Rudolf, S. Zierold, M. Markert (Eds.) Bauhaus-Universität, Weimar, Germany, 201-210, 2010
Geoffrey Shea, Michael Longford, Elaine Biddiss. Art and Play in Interactive Projections: Three Perspectives. ISEA, Istanbul, 2011
Geoffrey Shea and Michael Longford. Identity Play in an Artistic, Interactive Urban Projection. CHI Workshop: Large Displays in Urban Life, Vancouver, 2011

Presentations
M. Longford, Connecting Talent in Digital Media, MITACS and the NCE GRAND, Mississauga, Canada, September 2010
M. Longford, “Digital Media: Successes and Accomplishments in Canadian Digital Media Research,” Canada 3.0, Stratford, Canada, May 2010
R. King, International Centre for Art and New Technologies (CIANT), Prague, Czech Republic, March 2010
G. Shea, Mobile Experience Innovation Centre (MEIC), Ontario College of Art and Design University, Toronto, Canada, February 2010
G. Shea, M. Longford, R. King, Discovery 2010, Ontario Centres of Excellence, Toronto, Canada, May 2010
R. King, Music in a Global Village Conference, Budapest, Hungary, December 2009
M. Longford, G. Shea, iPhone Developer's Group, Augmented Reality Lab, York University, Toronto, Canada, November 2009
M. Longford, Project Demonstration - A New Media Gathering, Town of Markham, Markham, Canada – October 2009
M. Longford, “Tentacles: Design, Technology and Interdisciplinary Collaboration in the Mobile Media Lab” PEKING/YORK SYMPOSIUM: Interdisciplinarity, Art and Technology, York University, Toronto, Canada, October 2009
G. Shea, “Artifact or Experience: Presenting Network Mediated Objects,” Interacting with Immersive Worlds, Brock University, St. Catharines, Canada, June 2009

Wednesday, February 15, 2012 - 4:30pm
Lab Member: 
Rob King
Michael Longford
Geoffrey Shea
Ken Leung
David Green

Art and Ability: Cardinal

This project begins to examine the special physical needs of individuals with complex disabilities through the lens of their artistic and expressive needs. It proposes to develop and incorporate an art/research methodology, including stages of creation and analysis of prototypical tools to address these overlapping needs of the participants. It is anticipated that these newly developed tools will have potential benefits for a broader spectrum of the user’s needs, as well as for other users with or without disabilities. This iterative inquiry will take the form of collaborative art creation sessions involving both researcher/artists and participant/artists with severe physical disabilities. Analysis of the impediments to these exercises in self-expression will guide the rapid development of new, prototypical, art-making tools, techniques or materials. At the conclusion of the research, we will examine the effectiveness of the art/research methodology in refining and addressing the emerging research question of how communication models can be developed and employed for artistic expression by individuals with disabilities, and how they can be applied to their other communication needs.

Cardinal: Eye Gesture Screenless Communication System

Several observations of current eye-gaze and eye-gesture systems point towards the potential benefits of a low-strain, computer-assisted, natural tool for users with eye control as their primary means of communicating.

The three existing systems include early Bliss boards, myTobii computers and the Eye Writer. These are the salient features of each:

The Bliss board was a physical tool that allowed a trained user to communicate with a trained “listener” through eye gestures. A 2-3 foot square sheet of clear Plexiglas had its centre cut out, leaving a frame about 6 inches wide. The two conversants would face each other. The frame contained a grid of cells, each cell holding its own grid of alphabetic characters; for example, the top left cell might contain the letters A, B, C, D, E, F, arranged in a grid. The user would use a two-gesture glance to indicate a letter choice to the listener: up and to the right, followed by up and to the left, might combine to signify the upper right letter in the upper left square.
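The two-glance addressing scheme can be sketched in a few lines. This assumes, as one plausible convention, that the first glance selects the cell and the second selects the letter inside it; the 2x2 layout and letter assignment are illustrative, not the historical Bliss board arrangement.

```python
import string

# Glance directions mapped to (row, col) positions in a 2x2 grid.
DIRS = {"UL": (0, 0), "UR": (0, 1), "LL": (1, 0), "LR": (1, 1)}

def make_board():
    """Build a 2x2 grid of cells, each a 2x2 block of letters, A..P in order."""
    letters = iter(string.ascii_uppercase)
    # board[cell_row][cell_col][letter_row][letter_col]
    return [[[[next(letters) for _ in range(2)] for _ in range(2)]
             for _ in range(2)] for _ in range(2)]

def decode(board, cell_glance, letter_glance):
    """First glance picks the cell, second picks the letter within it."""
    cr, cc = DIRS[cell_glance]
    lr, lc = DIRS[letter_glance]
    return board[cr][cc][lr][lc]

board = make_board()
```

Note that nothing in `decode` requires the physical board: once both conversants have internalized the layout, the same glance pairs carry the same letters, which is exactly the second feature described below.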

Two features stand out with this system. First, the goal of communicating with a listener is enhanced by keeping the face-to-face view between the conversants uninterrupted: they look at each other through the large hole in the centre of the board and glance to the edges of their field of gaze to signal alphabetic letters. Second, once both users have become accustomed to the system, the board itself can be removed and the pattern of eye gestures can still be interpreted.

In its early usage, the communicator might look at the squares in question, but later they just gesture towards the squares, whether the squares are physically there or not. This sparks a differentiation between eye-gaze (looking) and eye-gesture (glancing).

The myTobii uses infrared cameras to track the communicator's gaze and maps it to a flexible set of on-screen buttons. The camera and motion-tracking software create a very workable tool. Unfortunately, the computer screen must constantly be the focus of the communicator's gaze, and it effectively becomes a barrier between the conversants. In theory, the cameras could track eye gestures that go beyond the edges of the screen. A “pause” feature used to be activated by glancing down beyond the bottom edge of the screen, although that feature seems to be gone.

The Eye Writer glasses use an eye-tracking system that is not linked to a particular on-screen representation. In their first instantiation they were used with an onscreen software program to facilitate graffiti tagging, but the glasses themselves (the input device) are not linked to any screen the way the myTobii is.

The synthesis of these systems suggests a model in which a user could use their eyes to gesture towards abstract referents – hypothetical buttons which exist outside of the field of attention. So a user might look at a conversation partner and then glance left and right, which would be interpreted by a computer vision system as the letter D. Right and left might be O. Up and left might be G. But because the communicator never attends to an onscreen representation, they are able to assess the impact of what they are saying, word by word, as we do in normal speech, rather than having to type out an entire phrase (while ignoring the conversation partner) and then play it back, with a highly intermediated effect.
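A minimal decoder for this model could pair up successive glances and look each pair up in a table. The first three pairings below are the ones mentioned above (left+right = D, right+left = O, up+left = G); the remaining entries are hypothetical placeholders, as is the whole mapping scheme.

```python
# Glance-pair to letter table. Only the first three pairs come from the
# description above; the rest are invented for illustration.
GESTURE_MAP = {
    ("left", "right"): "D",
    ("right", "left"): "O",
    ("up", "left"): "G",
    ("up", "right"): "E",      # hypothetical
    ("down", "left"): "T",     # hypothetical
}

def decode_gestures(glances):
    """Pair up successive glances and map each pair to a letter.

    Unrecognized pairs decode to '?' rather than raising, since a real
    system would need to tolerate stray eye movements.
    """
    pairs = zip(glances[::2], glances[1::2])
    return "".join(GESTURE_MAP.get(pair, "?") for pair in pairs)
```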

In the first test, the object of attention (a Google map) is situated in the middle of the screen, where the user can study it at will without triggering any buttons (which would be the case with the myTobii system). Glancing towards any edge causes the map to scroll in that direction. Glances are triggered by a “mouse-over” effect, which does not require the user to look at, pause on, or otherwise fixate on a button. A simple glance suffices.
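The edge-glance trigger amounts to dividing the screen into a central "safe" region, where the map can be studied freely, and four edge bands that each scroll in their direction as soon as the gaze point enters them, with no dwell required. A sketch of that mapping, with the band width as an assumed parameter:

```python
def scroll_direction(x, y, width, height, band=0.15):
    """Map a gaze point to a scroll direction, or None if it is on the map.

    `band` is the fraction of each screen dimension treated as an edge
    zone; 0.15 is an assumption, not a measured value from the study.
    """
    if x < width * band:
        return "left"
    if x > width * (1 - band):
        return "right"
    if y < height * band:
        return "up"
    if y > height * (1 - band):
        return "down"
    return None  # gaze is in the central region: study the map, no scroll
```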

A subsequent instantiation will allow the user to wear EyeWriter glasses and look at a physical symbol board to spell words. After rudimentary training, we will test if the user can continue to spell by glancing with their eyes, without the presence of the board.

Further open source software and hardware models will explore if there is a sub-$100 device which could be produced to facilitate communication (and control) without the presence of a computer screen.

 

Publications & Presentations

Alexandra Haagaard, Geoffrey Shea, Nell Chitty, Tahireh Lal. Cardinal: Typing with Low-Specificity Eye Gestures and Velocity Detection. International Workshop on Pervasive Eye Tracking and Mobile Eye-Based Interaction, Sweden, 2013. (under review)

Geoffrey Shea, Nell Chitty, Alexandra Haagaard, Tahireh Lal. Cardinal: An Eye Gesture Based Communication System. Best Poster Award: Eye Tracking Conference on Behavioral Research, Boston, 2013.

Geoffrey Shea, Nell Chitty, Alexandra Haagaard, Tahireh Lal. Cardinal: An Eye Gesture Based Communication System. Demo and Talk: Disrupting Undoing: Constructs of Disability, Toronto, 2013.

Shea, G. and A. Haagaard. Artists Reflecting on Their Practice and Disability, Ethnographica Journal on Culture and Disability, Katholieke Universiteit Leuven, (under review).

Shea, G., Understanding the Work of Artists with Diverse Abilities: Applying Art, Design, Ethnography and Computer Science. Research Rendezvous, OCAD University, Toronto, 2012.

Shea, G., Art and Disability Research. A presentation to the Doctoral Program at SmartLab, Dublin, 2012.

Wednesday, February 15, 2012 - 4:30pm
Lab Member: 
Geoffrey Shea
Alexandra Haagaard
Tahireh Lal

Portage: Cell Phone Xylophone

Cell Phone Xylophone was originally developed as part of PORTAGE, an artist and designer driven research project, led by Geoffrey Shea and Paula Gardner in the Mobile Experience Lab. It allows participants using a simple cell phone to control and play a networked, mechanical instrument.

Viewers simply dial a toll-free number and are then prompted to enter key presses. The patterns they enter correspond to arpeggios and loop several times. The phone connects to a VOIP service, which delivers the key presses to an Asterisk server, which in turn relays them to the control machine wired up to the xylophone's custom controller board. This version was presented at MobileFest in São Paulo, Brazil.
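The digit-to-arpeggio step can be sketched as a simple lookup and loop. The note assignments and loop count here are hypothetical; the actual PORTAGE patterns are not documented on this page.

```python
# Each DTMF digit selects an arpeggio pattern. These note choices are
# invented for illustration.
ARPEGGIOS = {
    "1": ["C4", "E4", "G4"],   # hypothetical C major arpeggio
    "2": ["D4", "F4", "A4"],   # hypothetical D minor arpeggio
    "3": ["E4", "G4", "B4"],   # hypothetical E minor arpeggio
}

def play_sequence(digits, loops=2):
    """Expand a string of key presses into the note stream sent to the
    xylophone controller, repeating the whole pattern `loops` times.
    Digits with no assigned arpeggio are ignored."""
    pattern = [note for d in digits if d in ARPEGGIOS for note in ARPEGGIOS[d]]
    return pattern * loops
```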

Exhibitions

MobileFest, Museum of Sound and Image, São Paulo, Brazil, 2008

Interactive Arts, ACM Multimedia, Science World, Vancouver, British Columbia, 2008
 

Publications

Geoffrey Shea, Portage: Locative Streetscape Art, International Multimedia Conference, Proceeding of the 16th ACM International Conference on Multimedia, Vancouver, 1131-1132, 2008

Geoffrey Shea and Paula Gardner. PORTAGE: A Locative Streetscape Theatre, at Bauhaus-Universität, Weimar, Germany, 2007

Paula Gardner, Geoffrey Shea, Patricio Davila. PORTAGE: Locative Media at the Intersection of Art, Design and Social Practice, University of Siegen, Germany, 2007

Geoffrey Shea, Sound as a Basis for Locative Media Experience, Interacting with Immersive Worlds Conference at Brock University, June 2007

Sunday, February 24, 2013 - 4:30pm
Lab Member: 
Paula Gardner
Geoffrey Shea
Ken Leung
Peter Todd
Leighanne Pahapill
Patricio Davila
Jennie Ziemiannin