MOME Robotics Studio / Reflective Robotics Blog

// UNFOLDING

Team members: Nóra Barna, Márton Hunyadi and Ines Nirkko

As a team — Marci, a product designer; Nóra, a dancer and media artist; and Ines, an animator — we were united by a shared curiosity for human–robot interaction, somaesthetics, and the diverse ways bodies can be animated, both biologically and mechanically.

INSPIRATIONS

// Embodied interaction

// Machine mimicry

// Movement dialogue

// Sensorial feedback

// Foldable structures

// Kinetic sculpture

Our inspiration stemmed from shapeshifting robots and mutable objects — forms in constant flux. We drew upon the work of Brazilian artist Lygia Clark, particularly her Bichos sculptures, which inherently function as living bodies: movable, interactive structures that invite the viewer to become a creator. Through this reference, we sought to expand the notion of horizontality and explore the presence of diverse living entities within space, mediated through technology.


Embracing Vulnerability in Robotic Companions

The aim of the project was to develop an interaction between humans and robots that takes both parties into account in a thoughtful and empathetic manner. We employed a more-than-human design approach, recognizing the importance of robots as active participants that have agency. The interaction tries to create empathy in the human actor towards the robot, which may well have concerns about interacting with humans, just as humans have fears of technology.

Inspiration

Before the semester started, I had a course week about art & tech installations, during which my concept to encourage people to slow down to fully enjoy an artwork did not get a great reception, because the interaction I proposed was practically no interaction (do not move). I wanted to explore the same concept further, involving another system: a robot.

My intention was to get people engaged with and empathetic towards this creature we are building: a sentient being with trust issues around people. The participant should be patient and tame the robot.

Interest

My major is interaction design, which is why the focus of the process was to research the ways we could interact with another entity, what the experience feels like, and what kinds of reflections we may have while considering the involvement of that other entity.

Documentation

Throughout our process we had many iterations, and we shaped our next steps based on the reflections we gathered along the way. The project helped us learn about 3D modelling and printing, so we became more confident in the field. Experimenting with the electronics and conductive paint gave us a whole new mindset regarding prototyping tangible interfaces, which we were already able to utilize in other courses.

The final setup includes the following elements. First, we 3D printed a publicly available pin-art object and a linear actuator for a servo motor. We fixed interchangeable 3D-printed shapes to the top of the actuator. The border of the top panel is painted with conductive paint so that it works as an antenna. Through the screws that hold the object together, we connected a Touch Board microcontroller that controls the servo motor based on the proximity of the approaching hand.
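
The proximity-to-servo mapping boils down to a few lines of logic. Here is a minimal Python sketch of the idea (the real project runs as Arduino code on the Touch Board; the linear mapping and the 0–90° range are illustrative assumptions, not our actual calibration):

```python
def servo_angle(proximity: float, max_angle: int = 90) -> int:
    """Map a normalized proximity reading (0.0 = far, 1.0 = touching)
    to a servo angle that retracts the shape as the hand gets closer.
    The range and the linear mapping are illustrative assumptions."""
    proximity = min(max(proximity, 0.0), 1.0)   # clamp out sensor noise
    return int(max_angle * (1.0 - proximity))   # far: extended, close: retracted

print(servo_angle(0.0))  # fully extended: 90
print(servo_angle(1.0))  # fully retracted: 0
```

A real sketch would also smooth the raw antenna reading over time, so the shape withdraws gently instead of twitching at every fluctuation.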

Reflection

The entire semester was a great learning experience, which makes me wish it had been longer so we could go on and explore further. Compared to other courses or my experience in design so far, applying the research-through-design methodology was at first a bit unfamiliar, as we couldn’t see where we would end up, but as we progressed, it became more and more enjoyable. My biggest achievement was breaking down my frustration regarding 3D modelling and reaching a point where, starting from zero experience, I could design a simple object that I had in mind. Another notable gain is that at this point I approach 3D printing without any fears.

Current Status

By the end of the semester, we were able to finalize the proof-of-concept object. We showed it to many students and teachers at the university, and everyone enjoyed playing with it, which brings me to the conclusion that we succeeded in providing a way to interact with a non-human entity in a thoughtful way.

Future Development

We have already started to work on the second version of the concept, which has a 30×30 cm surface, and we are exploring sideways movements and following with the help of MediaPipe’s hand landmark detection and an XY movement mechanism. It will also contain lights to convey the robot’s attitudes better. We kept working with hexagon shapes and based our 3D model on a NASA space-fabric experiment’s model.

Shout out

Thank you, Renáta and Kálmán, for the continuous support and for letting us take a peek into your world. I hope our cooperation doesn’t stop here and that we can work together on the second version of ‘the thing’ we are creating.

Massimo Banzi Visits MOME Robotics

This week was truly special — we had the honor of hosting Massimo Banzi at MOME Robotics at Moholy-Nagy University of Art & Design, with the support of MOME Global Voices. Over two inspiring days, Massimo connected with our students, researchers, and faculty, and we even got to know him on a personal level. We are grateful for the conversations and insights. It was a joy to learn from someone who has empowered so many makers, educators, and dreamers around the world.

Design / Build / Test / Repeat

Nearly twenty years ago, Massimo Banzi co-founded the Arduino platform in Ivrea, Italy—ushering in a new way of learning, making, and thinking about technology that continues to influence the world today. In his talk, Open Sourcing Innovation, he explored how open-source practices are shaping the future of design, education, and global maker communities.

During his visit, Banzi didn’t just deliver a lecture—he spent time engaging with our students, faculty, and researchers in meaningful conversations. He shared his thoughts on design, interaction, contemporary technology, business models, and manufacturing processes—core aspects that have defined his remarkable career. Banzi has played a pivotal role in democratizing electronics and fostering a global community of innovators. Earlier in his career, he was an Associate Professor at the Interaction Design Institute Ivrea, where Arduino was originally developed as an accessible tool for designers and artists. He has also worked as a consultant for world-renowned brands including Prada, Artemide, Persol, Whirlpool, and Adidas. In addition to his contributions as a practitioner, Banzi is the author of the widely used book Getting Started with Arduino, and was instrumental in launching Italy’s first FabLab, Officine Arduino. Today, he continues to shape the future of creative technology as a faculty member at the Copenhagen Institute of Interaction Design (CIID).

First Steps into the DanceHack Method led by Reka Gerlits

On March 7th, as the first hints of spring sunshine filtered through the windows at MOME, we welcomed Réka Gerlits to lead the opening session of our Dance Hack-inspired workshop. As part of our semester-long Reflective Robotics course, this was already the students’ third class—but the first dedicated to movement, motion systems, and embodied experimentation. Réka, who is the Hungarian partner leader of the EU Dance Hack and an integral part of the Central Europe Dance Theatre (CEDT), brought her rich experience and deep familiarity with the methodology, which she learned from the TaikaBox team in Oulu and has been evolving ever since, most recently in Budapest.

Réka guided the students through a three-and-a-half-hour reconstruction of a Dance Hack residency day. The session unfolded in three parts—warm-up, cooperative work, and cool-down—and invited students to shift their focus from theoretical design into full-body exploration. Through rhythm games, partner work, and shared improvisation, participants connected with their own bodies, each other, and with a responsive projection that mirrored their movement. As Réka noted, at first, the relationship between movement and external material was logical and simple—but over time, it evolved into more layered, abstract interactions. A highlight was the cooperative segment using a painter’s foil as a shared object of play, metaphor, and choreography. We ended with the meditative “snail dance,” a gentle group improvisation that perfectly captured the sensitivity and awareness at the heart of the Dance Hack method.

Thanks to Réka’s engaging presence and to Borka Moravcsik (indirectly), the session marked a beautiful beginning to a more embodied exploration of somatic movements and design.

A Deep Sea Creature

We imagined a robot resembling a phosphorescent deep-sea creature, with light sensors, living in a dark box. When a light pops up in the box, the robot starts to follow it, and when the light is turned off, the robot stops. This repeats whenever it sees another light source.

For now it works with cables and without the light sensors, but the structure is almost finished.
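
Once the light sensors are in place, the follow-the-light behavior could work as simple differential steering. A rough Python sketch of that idea (the two-sensor layout and the on/off threshold are assumptions; the actual robot is still cabled):

```python
def steer(left_lux: float, right_lux: float, on_threshold: float = 10.0):
    """Return (left_motor, right_motor) drive levels in [0, 1] for a robot
    that follows the brighter side and rests when the box goes dark.
    The sensor layout and threshold are illustrative assumptions."""
    if max(left_lux, right_lux) < on_threshold:
        return (0.0, 0.0)                 # light off: the creature stops
    total = left_lux + right_lux
    # drive the motor opposite the brighter sensor harder, turning toward the light
    return (right_lux / total, left_lux / total)
```

With equal readings the robot swims straight ahead; when one side brightens, the opposite motor speeds up and the body curves toward the light.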

OUR INSPIRATION CAME FROM

The correlation between black boxes, i.e. what AI and modern technology generate, and the unknown deep waters, and the relationship of these creatures with humans:

undiscovered, unknown deep waters

deep sea creatures with transparent, illuminating body structure

fluid, weightless movement

following light, sometimes generating their own light, being their own light source

THE PROGRESS

Chronokinematograph by Heilig András and Novák Bence

Chronokinematograph

András Heilig & Bence Novák

INSPIRATION

The visual inspiration for this project came from chronophotography, the practice of capturing a moving subject at regular time intervals using long exposure and a strobe. This creates a multi-layered image that shows motion in its phases. One notable artist using this technique was Étienne-Jules Marey, whose works had the most impact on ours.

Our goal was to somehow visualize the complex relation between time and movement. This problem has interested many philosophers throughout history, such as Zeno, Henri Bergson, and Gilles Deleuze.

We followed the idea that the common denominator between motion and time is change itself. Our robot looks through its camera eyes and captures the changes it sees in the video feed. When the motion stops, the robot starts to draw. For the spectator, making a gesture is a momentary experience, but for the robot, drawing the representing figure takes much longer. With this we create tension between two timelines and two different kinds of movement.
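
The change-detection step can be illustrated with plain frame differencing. A simplified Python sketch (our actual analysis runs in TouchDesigner; the flat-list frame format and the thresholds here are assumptions):

```python
def mean_abs_diff(prev, curr):
    """Mean absolute pixel difference between two grayscale frames,
    represented here as flat lists of 0-255 values."""
    return sum(abs(a - b) for a, b in zip(prev, curr)) / len(curr)

def should_draw(frames, still_threshold=2.0, still_frames=3):
    """Trigger drawing once the last few frame-to-frame differences fall
    below the threshold, i.e. the gesture has ended. Values are illustrative."""
    if len(frames) <= still_frames:
        return False
    diffs = [mean_abs_diff(frames[i], frames[i + 1])
             for i in range(len(frames) - 1)]
    return all(d < still_threshold for d in diffs[-still_frames:])
```

Requiring several consecutive still frames keeps the robot from starting to draw during a brief pause in the middle of a gesture.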

INTEREST

We had limited experience with using an Arduino board and even less experience with 3D printing, so these new technologies were very exciting to explore.

REFLECTION

We have learned to integrate the Arduino board into TouchDesigner and to use the 3D printers in the lab, as well as some basic 3D design principles that help when developing products for 3D printing.

CURRENT STATUS

We are currently waiting for some crucial parts to arrive from Amazon so we can finish assembling the drawing machine part of the project. This design is an open-source project by DIY Machines that we modified a bit. The TouchDesigner programming part of the project is ready to use, but it still has to be tested with the assembled drawing machine itself. The 3D printed parts for the build are done; the only things stopping us from finishing this project are the missing parts that did not arrive in time.

FUTURE DEVELOPMENT

Doing the final assembly of the parts and testing the timings for the optimal performance.

somasphere

The aim of this project was to make a machine that is controlled by human movements and, through this, to facilitate healthy and ergonomic body postures. The theoretical background for this was Shusterman’s (1999) somaesthetics, the “Somatic acknowledgment that there is no separation between mind and body and connects the self with all of these processes” (Höök 2018:xvii). In this respect, controlling the robot encourages practicing ergonomic body postures, like strengthening the back or lifting the upper arm above the shoulder.

The Somasphere is driven by a wireless Arduino, two DC motors, and one servo motor. The DC motors provide the speed, while the servo motor adjusts the iron ballast in the middle, which is responsible for the direction. The outer shell and the motor holder box were produced by additive manufacturing from PLA filament.

The spirit of the device is the TouchDesigner software and the computer-vision-centered machine-learning framework plugin MediaPipe, which estimates a digital skeleton with an AI-based algorithm. With this method, the distance of the user’s mouth from the shoulder is estimated from a live video input, and this distance controls the speed. The relative difference between the heights of the right and left elbows is responsible for the direction.
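
The landmark-to-control mapping reduces to two numbers per frame. A minimal Python sketch of that step (the gains are made-up illustrative values; MediaPipe reports normalized coordinates where y grows downward):

```python
import math

def drive_commands(mouth, shoulder, left_elbow_y, right_elbow_y,
                   gain_speed=2.0, gain_turn=1.5):
    """Translate MediaPipe-style normalized landmarks into (speed, turn).
    mouth and shoulder are (x, y) pairs; both gains are illustrative."""
    # an upright posture increases the mouth-to-shoulder distance, raising speed
    speed = gain_speed * math.dist(mouth, shoulder)
    # y grows downward, so raising the left elbow makes turn negative (one side)
    turn = gain_turn * (left_elbow_y - right_elbow_y)
    return speed, turn
```

In practice these values would be smoothed over a few frames before being sent to the DC motors and the ballast servo, so noisy pose estimates do not jerk the sphere around.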

The first prototype had difficulty turning the outer shell because the shell was too heavy and the DC motors used did not have enough torque. To test the software and the estimation method, the shell was replaced by lightweight wheels made from reused plastic. This solved the problem but limited the shape and the movements of the device.

To keep the spherical shape, the weight of the shell should be reduced by manufacturing it with a random Voronoi cellular structure (see Rokicki & Gawell 2016, Feng et al. 2018, Zhang et al. 2021); stronger motors, even brushless (BLDC) motors, would also solve the speed and torque problems.

In the long term, two overlapping directions are seen for further work: 1) developing the human-body-driven control mechanisms by researching ergonomic movements and postures and finding the markers that best capture the desired gestures; 2) shape-driven: experimenting with the shape and the controlling hardware mechanisms in a less gravity-dominated space, such as floating in water or levitating in the air. This direction would also broaden the possibility of translating human movements into different controls.

The hypothesis was that these kinds of movements, although they initially require high self-reflection and body consciousness, may help incorporate the movements once they have become habitual, the same way cars are driven without focusing on the control mechanism.

Balint Ligeti’s Drawing Robot for Children with Physical and Intellectual Disabilities

Drawing Robot for Children with Physical and Intellectual Disabilities

IDEA

During the brainstorming phase of my project, I had two main objectives: to design a tool that brings joy to children and also enhances their skills. I reached out to the Csillagház Primary School and Foundation, where children with physical and severe intellectual disabilities study. They explained that these children have creativity just like any other healthy child, but they struggle to draw properly because they cannot coordinate arm movements or hold a pencil steadily.

Recognizing this issue, I identified the problem: there are children who are deprived of the opportunity to draw, even though drawing develops several important skills such as fine motor skills, hand-eye coordination, problem-solving ability, and creativity. Moreover, it can improve self-confidence and have stress-relieving effects.

The aim of the project was to develop a device that enables these children to draw on paper. Through my research, I found that a physical end product (i.e., a drawing on paper) can be much more motivating for them than just seeing their creation on a screen.

I divided the task into three parts: creating a drawing robot capable of drawing on paper, designing an interactive controller for direct communication with the robot, and developing software to control the robot.

The Drawing Robot

The design of the drawing robot was inspired by the operation of the “Etch a Sketch” drawing toy. In this toy, aluminum powder is spread inside, allowing drawing on the screen. The toy uses two knobs to move the drawing stylus: one knob controls horizontal movements, and the other controls vertical movements.

This principle guided the design of the drawing robot, which moves along two axes (X, Y). Each axis is controlled by a small motor. The X-axis is responsible for moving the paper left and right, while the Y-axis moves the drawing tool up and down.

I performed the design process using 3D design software. Afterward, I printed the designs using a 3D printer, assembled them, and incorporated the small stepper motors.

The Controller

A significant emphasis during the project development was on designing the controller, considering that children within the target group have varying abilities compared to average users. It was important to create a device that these children could handle and use relatively easily, even if they face different physical or intellectual challenges. To achieve this, I held continuous consultations with educators from Csillagház and tested prototypes with the children studying there.

The biggest challenge was addressing the diverse physical and cognitive issues of the children. Therefore, I aimed to create a controller that was as universal and widely applicable as possible.

I considered various options, including voice control, head-mounted motion sensors, eye-tracking cameras, pressure sensors, and button-based controllers. Ultimately, after multiple iterations, I decided on button controls, as I believed this would be the most straightforward solution for most children.

When designing the six buttons of the controller, I considered the functions of the drawing robot. The buttons needed to facilitate moving right, left, up, and down, as well as drawing circles and squares. The buttons were sized at 8×8 cm to be easily accessible and usable by children struggling with motor coordination. They also feature a safety edge around them to provide additional protection against slipping hands.

It was crucial that the button icons were appropriately colored, considering that many children have visual impairments affecting their eye function. Therefore, I used a color combination of yellow, red, and blue against a black background for the icons on the buttons. These icon buttons are standalone units that can be attached to each other in any order.

The Software

I programmed the software using Arduino Sketch, which accepts input from the buttons (up, down, left, right, circle, square) and manages two outputs that control the X and Y axis motors.
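
The input-to-motion mapping can be sketched without the hardware. A hedged Python illustration of how each button might decompose into (x, y) motor steps (the step size and the 12-chord circle approximation are assumptions, not the actual Arduino sketch):

```python
import math

def button_to_steps(button: str, unit: int = 10):
    """Map a controller button to a list of (x_steps, y_steps) motor moves.
    The step size and shape decomposition are illustrative assumptions."""
    straight = {
        "right": [(unit, 0)],
        "left":  [(-unit, 0)],
        "up":    [(0, unit)],
        "down":  [(0, -unit)],
        # a square is four sides drawn in sequence, ending where it began
        "square": [(unit, 0), (0, unit), (-unit, 0), (0, -unit)],
    }
    if button == "circle":
        # approximate the circle with 12 short chords around the circumference
        n, r = 12, unit
        pts = [(r * math.cos(2 * math.pi * i / n),
                r * math.sin(2 * math.pi * i / n)) for i in range(n + 1)]
        return [(round(x1 - x0), round(y1 - y0))
                for (x0, y0), (x1, y1) in zip(pts, pts[1:])]
    return straight[button]
```

Decomposing shapes into straight segments suits the Etch-a-Sketch-style mechanics, where each axis motor only ever moves the paper or the pen along one line at a time.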

Innovation of the Solution

Unique Controller System

The button controller was specifically designed for children with physical and intellectual disabilities, considering their physical and cognitive limitations. The large-sized, brightly colored buttons are easy to press and recognize, while the buttons’ flexible arrangement allows for versatile use. This makes the controller intuitive and easy to use, facilitating the children’s interaction with the device.

Simple yet Effective Software

The Drawing Robot’s software is user-friendly and intuitively controls the robot arm’s movements. The simplified programming enables efficient handling of six inputs and two outputs, ensuring precise and reliable control of the robot arm. This makes it easy for children to learn how the robot operates and enjoy the drawing process.

Opportunity for Creative Self-Expression

With the Drawing Robot, children with physical and intellectual disabilities can freely draw and create. This fosters their creativity, self-confidence, and self-esteem while developing fine motor skills, hand-eye coordination, and cognitive functions. Through the opportunities provided by the robot, children can experience development and express themselves through the unique combination of art and technology.

Drawing is a versatile activity that supports various developmental areas for children, including problem-solving skills, creativity, self-confidence, and stress relief.

VILMa – Voice Led Interactive Machine

By Amália Gerstenkorn, Marton Hunyadi and Sidse Rebien

Our soft robot VILMa is a Voice Led Interactive Machine that reacts to the surrounding sounds. When VILMa is moving around, it leaves a trail of paint to visualize its movement. The two DC motors react independently of each other. The motors create vibration, and the robot moves on its shaking bristles. We chose an audio channel for the communication act, but instead of giving direct instructions, our robot reacts to noise frequencies. One motor reacts to high frequencies while the other reacts to low frequencies, which makes it possible to control VILMa’s movement. The initial concept of this reflective robot was to make an empathic machine that mirrors the emotions and noise of the surrounding humans.
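
The frequency split can be approximated with a single one-pole low-pass filter: the filtered signal drives the low-frequency motor and the residual drives the high-frequency one. A Python sketch of that idea (our actual analysis happens in TouchDesigner; the smoothing factor here is an arbitrary illustrative value):

```python
def band_drive(samples, alpha=0.2):
    """Split an audio window into low- and high-frequency energy with a
    one-pole low-pass filter, returning (low_motor, high_motor) drive levels
    normalized to the louder band. alpha is an illustrative smoothing factor."""
    low, low_energy, high_energy = 0.0, 0.0, 0.0
    for s in samples:
        low += alpha * (s - low)        # low-pass tracks the slow changes
        high = s - low                  # the residual holds the fast changes
        low_energy += low * low
        high_energy += high * high
    peak = max(low_energy, high_energy) or 1.0
    return low_energy / peak, high_energy / peak
```

A sustained hum would spin mostly the low-frequency motor, while a hiss or clap would spin the other, so the mix of sounds in the room steers the robot.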


VILMa consists of 3D printed parts and an Arduino that is connected to a computer running our TouchDesigner program. The parts are modular and can be switched around if needed. In our process, we experimented with different shapes for the bridge between the elements. We have also studied how weight and balance can affect the motion of our robot. In the future we would like to make the robot completely independent from other devices, so it can move around wirelessly, with only an Arduino, a microphone, and a battery.