Digital Futures grad student and visual artist Afaq Ahmed Karadia has designed a machine learning system that uses cognitive data and gesture technologies to recognize and interpret movements of the human body, creating sound and animation.

Afaq’s performance uses a “virtual instrument” controlled by gesture-based movements. The instrument creates sound, and those sounds generate visualizations in real time, replacing pre-made animations built from shadow and light.
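
As a rough illustration of how a gesture-to-sound-to-visual pipeline like this might be structured, here is a minimal sketch. All class names, parameters, and mappings below are hypothetical and are not taken from Afaq's actual system.

```python
# Hypothetical sketch of a gesture -> sound -> visual pipeline.
# The features and mappings are illustrative assumptions only.
from dataclasses import dataclass


@dataclass
class GestureFrame:
    hand_height: float  # normalized 0.0 (low) to 1.0 (high)
    hand_speed: float   # normalized 0.0 (still) to 1.0 (fast)


@dataclass
class SoundEvent:
    pitch_hz: float
    amplitude: float


def gesture_to_sound(frame: GestureFrame) -> SoundEvent:
    """Map gesture features to sound parameters (illustrative mapping)."""
    pitch = 220.0 + frame.hand_height * 660.0  # 220-880 Hz range
    amplitude = min(1.0, frame.hand_speed)     # faster movement -> louder
    return SoundEvent(pitch_hz=pitch, amplitude=amplitude)


def sound_to_visual(event: SoundEvent) -> dict:
    """Derive real-time visual parameters from the generated sound."""
    return {
        "brightness": event.amplitude,            # louder -> brighter
        "hue": (event.pitch_hz - 220.0) / 660.0,  # pitch maps to color
    }


if __name__ == "__main__":
    frame = GestureFrame(hand_height=0.75, hand_speed=0.4)
    sound = gesture_to_sound(frame)
    visual = sound_to_visual(sound)
    print(sound, visual)
```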

The prototype is part of his larger research project, which examines the non-functional characteristics of gesture, such as expressivity, a quality that remains difficult for computers to interpret.
