MOME Robotics studio / Reflective Robotics | Human–Robot Interaction Workshop: From Gestures to AI Systems

Davide Gomba Workshop at MOME Reflective Robotics:

From Gestures to Systems


At the MOME Robotics Studio, we explore how emerging technologies can be approached not only as tools, but as materials within a design process. In the Reflective Robotics course, this takes the form of hands-on experimentation, where technical systems and embodied experience are considered together. In this context, we invited Davide Gomba to contribute by sharing his approach to working with AI through accessible, system-oriented prototyping.

Davide Gomba’s workshop draws on Bruno Munari’s Supplemento al Dizionario Italiano (1963), which presents gestures as a parallel language operating alongside speech. This perspective is relevant to design because it treats gestures as embodied expressions that carry meaning, intention, and cultural context. From this starting point, the workshop explores how such forms of knowledge can be translated into a computational setting. Students work with a small set of iconic Italian hand gestures, including “cosa vuoi”, the corno, and “non c’è niente”. These are not treated simply as inputs to be recognized, but as expressive interfaces. The focus is on how they can be captured, interpreted, and transformed through machine learning.

In preparation for the session, Davide shared a video and an open repository outlining a workflow that integrates SenseCraft AI with Node-RED and TouchDesigner (https://www.youtube.com/watch?v=Qcg3zmJe890). The system runs locally, allowing students to see and modify each step from data capture to interpretation and output. This makes the process transparent and easier to understand. Within this setup, machine learning is treated not as a fixed solution, but as a design material within a Physical AI approach, where computation is embedded in real-time, embodied systems. Students build a simple pipeline that connects gesture, data, interpretation, and response. The goal is not only to make the system work, but to understand how it works.
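The interpretation step of such a pipeline can be sketched in a few lines. This is a minimal, hypothetical Python illustration, not the workshop’s actual SenseCraft AI / Node-RED / TouchDesigner code: the label names, the confidence field, and the response tokens are all assumptions made for the example.

```python
# Hypothetical sketch of the gesture -> interpretation -> response step.
# Labels, confidence format, and response tokens are assumptions for
# illustration; the real workflow routes detections through Node-RED.

from dataclasses import dataclass


@dataclass
class Detection:
    label: str          # classifier output for one frame
    confidence: float   # score in [0.0, 1.0] reported with the label


# Interpretation: map recognized gestures to expressive responses.
RESPONSES = {
    "cosa_vuoi": "question",      # "what do you want?"
    "corno": "warding",           # the horn gesture
    "non_c_e_niente": "absence",  # "there is nothing"
}


def interpret(det: Detection, threshold: float = 0.6):
    """Return a response token, or None if the detection is weak or unknown."""
    if det.confidence < threshold:
        return None
    return RESPONSES.get(det.label)
```

Keeping this step as an explicit, editable mapping mirrors the workshop’s aim: students can see exactly where a gesture becomes a meaning, and change that decision themselves.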

pipelines connecting gesture → interpretation → response

This workshop connects to the broader direction of Reflective Robotics, where the aim is not only performance or efficiency, but reflection through making. We are interested in how systems shape interaction and meaning, and how embodied knowledge can be translated without reducing it to data alone. Working with small-scale, interpretable systems supports this approach. For us, AI becomes a material for design: something that can be shaped, questioned, and explored through practice.

In addition to the software workflow, students will work with small camera modules to capture gesture input directly. These setups make it possible to test how sensing, positioning, and data quality affect system behavior in real time.
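One concrete way data quality shapes behavior is frame-to-frame noise: a single misclassified camera frame should not trigger a response. A simple debouncer, sketched below under the same assumptions as before (this is an illustrative pattern, not the workshop’s actual code), emits a gesture only after it has been seen in several consecutive frames above a confidence threshold.

```python
# Hypothetical debouncer for noisy per-frame gesture detections:
# emit a gesture event only after it appears in N consecutive frames
# above a confidence threshold. Parameters are illustrative defaults.


class GestureDebouncer:
    def __init__(self, frames_required: int = 5, threshold: float = 0.7):
        self.frames_required = frames_required
        self.threshold = threshold
        self._current = None  # label currently being tracked
        self._count = 0       # consecutive confident frames seen

    def update(self, label: str, confidence: float):
        """Feed one frame; return the label once it is stable, else None."""
        if confidence < self.threshold or label != self._current:
            # Low confidence or a new label resets the streak.
            self._current = label if confidence >= self.threshold else None
            self._count = 1 if self._current else 0
            return None
        self._count += 1
        if self._count == self.frames_required:
            return label  # fire exactly once per stable streak
        return None
```

Tuning `frames_required` and `threshold` against different camera positions and lighting is exactly the kind of real-time experiment these setups make visible.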

Course Details

Human–Robot Interactions (HRI) University Course
Date: March 27, 2026
Language: English
Format: Intensive, studio-based workshop

Course Focus

Drawing on Munari’s gestural dictionary, students work with a set of iconic Italian hand gestures as an entry point into the system. Rather than focusing on reproducing these gestures accurately, the intention is to use them as a starting structure for experimentation. From there, students are free to extend, reinterpret, or move beyond gestures entirely, developing their own models and interactions based on what they find meaningful within the system.

Guest Lecturer

Davide Gomba

Davide Gomba is a designer and educator working at the intersection of AI, interaction, and system-oriented prototyping. His practice focuses on making machine learning accessible and understandable through hands-on workflows that connect embodied input, real-time systems, and design experimentation.


Academic Coordination

The course was made possible with the support of MOME Global Voices, whose program enables international knowledge exchange by bringing leading practitioners to the university.

Renata Dezso