According to Caleb Champion, one of the group members responsible for the robot’s construction, two small cameras are attached to the claw. These cameras enable the robot to use facial recognition and tracking software to “see” a person’s face and spoon food into their mouth.
Champion explained that he used a Bluetooth PlayStation 4 controller as a workaround until the team could build its own remote to tell the OMAR which bowl to scoop candy from.
He stated, “The idea is for the OMAR to only detect a face and a mouth. We clearly don’t have any desire to simply put a spoon in someone’s mouth if their mouth isn’t open — so we need to guarantee that the mouth is open and prepared for that next bite.”
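One common way to implement the open-mouth check Champion describes is a mouth aspect ratio: the vertical gap between the lips divided by the width of the mouth. The sketch below is a minimal illustration of that idea, not the OMAR team’s actual code; the landmark names, the 0.5 threshold, and the example coordinates are all assumptions for demonstration, and in practice the landmark points would come from a face-tracking library.

```python
def mouth_aspect_ratio(top_lip, bottom_lip, left_corner, right_corner):
    """Ratio of vertical lip gap to mouth width; larger means more open."""
    vertical = abs(bottom_lip[1] - top_lip[1])
    horizontal = abs(right_corner[0] - left_corner[0])
    return vertical / horizontal if horizontal else 0.0

def mouth_is_open(landmarks, threshold=0.5):
    """landmarks: dict of (x, y) pixel points from a face tracker (assumed layout)."""
    mar = mouth_aspect_ratio(landmarks["top_lip"], landmarks["bottom_lip"],
                             landmarks["left_corner"], landmarks["right_corner"])
    return mar > threshold

# Illustrative coordinates: a wide-open mouth (30 px gap over a 50 px width)
# versus a nearly closed one (5 px gap over the same width).
open_face = {"top_lip": (50, 100), "bottom_lip": (50, 130),
             "left_corner": (25, 115), "right_corner": (75, 115)}
closed_face = {"top_lip": (50, 100), "bottom_lip": (50, 105),
               "left_corner": (25, 102), "right_corner": (75, 102)}
print(mouth_is_open(open_face))    # MAR = 30/50 = 0.6 -> True
print(mouth_is_open(closed_face))  # MAR = 5/50 = 0.1 -> False
```

A robot like the OMAR would run a check of this kind on every camera frame and pause the spoon whenever the ratio drops below the threshold.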
Champion said an emergency stop button lets the person eating halt the machine if it malfunctions or does something unintended. Another button makes the robot buzz loudly to summon a caregiver if one is needed.
Torres, who largely designed and implemented the phone app, demonstrated how it also provides information for the user or their caregiver, such as video tutorials on troubleshooting the OMAR and answers to frequently asked questions.
Champion stated that the OMAR project proposal caught his attention right away and he was eager to assist in its construction.