Towards Spatially Guided, Vision-Based Assistive Robotics Using Edge Impulse and LEGO Mindstorms
The ability to feed oneself is a crucial aspect of daily living, and losing this ability can have a significant impact on a person’s life. Robotics has the potential to assist with this task.
The development of a successful robotic assistive feeding system depends on its ability to acquire bites of food accurately, time the bites appropriately, and transfer the bites easily in both individual and group dining settings. However, the task of automating bite acquisition is challenging due to the vast variety of foods, utensils, and human strategies involved, as well as the need for robust manipulation of a deformable and hard-to-model target.
Bite timing, especially in social dining settings, requires a delicate balance of multiple forms of communication, such as gaze, facial expressions, gestures, and speech, as well as action and sometimes coercion. Bite transfer is a unique form of robot-human handover that involves the use of the mouth.
Gesture- and speech-controlled robot arms are being developed to assist in feeding individuals with upper-extremity mobility limitations.
These robot arms use sensors and cameras to detect and interpret hand gestures or voice commands, allowing the user to control the arm in a more natural way. Gesture-controlled robot arms can pick up food, bring it to the user's mouth, and adjust the position of the food based on the user's gestures.
Speech-controlled robot arms can interpret voice commands such as "open mouth" or "move food closer". These robot arms have the potential to improve the independence and quality of life of individuals with upper-extremity mobility limitations, and can also be used in healthcare and elderly-care settings to improve the efficiency and safety of feeding assistance.
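To make the command-interpretation step concrete, here is a minimal sketch of how classifier outputs (for example, from an Edge Impulse keyword-spotting or gesture model) might be mapped to arm actions. The labels, action names, threshold, and the `interpret_command` function are all illustrative assumptions, not part of any specific model or API:

```python
# Minimal sketch: mapping classifier scores to robot-arm actions.
# All labels, action names, and thresholds here are illustrative
# assumptions, not outputs of any specific Edge Impulse model.

ACTIONS = {
    "open_mouth": "pause_and_hold",    # hold the utensil near the mouth
    "move_closer": "advance_utensil",  # bring the food closer
    "move_away": "retract_utensil",    # back the utensil off
}

def interpret_command(scores: dict, threshold: float = 0.6):
    """Return an arm action for the highest-scoring label, or None
    if no label clears the confidence threshold."""
    label, score = max(scores.items(), key=lambda kv: kv[1])
    if score < threshold:
        return None  # ambiguous input: safer to do nothing than to move
    return ACTIONS.get(label)

# Example: a confident "move_closer" classification
print(interpret_command({"open_mouth": 0.10, "move_closer": 0.85, "noise": 0.05}))
# → advance_utensil
```

Refusing to act below the confidence threshold is a deliberate safety choice: near the user's face, an unintended motion is worse than a missed command.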
In this project, we will focus on face tracking, gesture recognition, robot-assisted feeding, and object detection in cluttered environments.
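As an example of the face-tracking component, the sketch below turns a detected face bounding box into pan/tilt motor speeds with a simple proportional controller. The frame size, gain, speed range, and function name are assumptions for illustration; the detection itself (e.g. from a camera model running on-device) is taken as given:

```python
# Minimal sketch: converting a face bounding box into pan/tilt motor
# speeds via a proportional controller. Frame resolution, gain, and
# the [-100, 100] speed range are illustrative assumptions.

FRAME_W, FRAME_H = 320, 240  # assumed camera resolution
KP = 0.5                     # proportional gain, tuned by hand

def track_face(bbox):
    """bbox = (x, y, w, h) in pixels; returns (pan, tilt) speeds in
    [-100, 100], e.g. for driving two LEGO Mindstorms motors."""
    x, y, w, h = bbox
    cx, cy = x + w / 2, y + h / 2
    # Error = offset of the face centre from the image centre,
    # normalised to [-1, 1] on each axis.
    err_x = (cx - FRAME_W / 2) / (FRAME_W / 2)
    err_y = (cy - FRAME_H / 2) / (FRAME_H / 2)
    pan = max(-100.0, min(100.0, KP * 100 * err_x))
    tilt = max(-100.0, min(100.0, KP * 100 * err_y))
    return pan, tilt

# A face centred in the frame needs no correction:
print(track_face((140, 100, 40, 40)))  # → (0.0, 0.0)
```

A proportional term alone is usually enough here, since the motors only need to keep the face roughly centred rather than follow a precise trajectory.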