Moxie – A Social Robot with Artificial Intelligence and Machine Learning that can Understand and Express Emotions

Published May 7, 2020

Embodied Inc., in collaboration with designer Yves Béhar, has come up with a social robot built on machine learning technology. Named Moxie, the robot helps promote social, emotional, and cognitive development in children through everyday play-based learning and captivating content. It is the result of teamwork among experts in child development, engineering, technology, game design, and entertainment.

Moxie, as described by Embodied's CEO, can understand and express emotions through emotive speech, believable facial expressions, and body language, tapping into human psychology and neurology to create deeper bonds. The robot's hardware and software help enhance human capabilities in a safe environment. Notably, the FDA has granted its first approval for therapeutic software that can be prescribed.

The robot's face is fully projected and rounded, with naturally curved edges. This makes interacting with Moxie feel more lifelike and believable. The 3D appearance of the face makes it possible for Moxie to hold actual eye contact with the child. In this way, Moxie's face not only protects children from excessive screen time but also makes the interaction feel all the more real.

Moxie’s multimodal sensory fusion makes it aware of its environment and its users. Its computer vision and eye-tracking technology help it maintain eye contact as the child moves. Machine learning helps Moxie learn user preferences and needs, and recognize people, places, and things. Strategically placed microphones let Moxie detect the direction a voice came from and turn toward its source. Touch sensors allow Moxie to recognize hugs and handshakes. The screen is curved to reinforce the sense of embodiment, and detachable arms help the robot gesture.
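The article doesn't say how Moxie localizes a voice, but a common technique for a small microphone array is to estimate the time difference of arrival (TDOA) between two mics via cross-correlation and convert it to a bearing. Here is a minimal Python sketch of that idea; all names are illustrative, not Embodied's actual code:

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s in air at room temperature


def estimate_direction(sig_left, sig_right, mic_distance, sample_rate):
    """Estimate a sound source's bearing from two microphone signals.

    Finds the time difference of arrival (TDOA) between the mics via
    cross-correlation, then converts it to an angle in degrees:
    0 = straight ahead, positive = toward the right mic (the left mic
    hears the sound later), negative = toward the left mic.
    """
    corr = np.correlate(sig_left, sig_right, mode="full")
    lag = np.argmax(corr) - (len(sig_right) - 1)  # delay in samples
    tau = lag / sample_rate                       # delay in seconds
    # Clamp to the physically possible range before taking arcsin.
    ratio = np.clip(SPEED_OF_SOUND * tau / mic_distance, -1.0, 1.0)
    return np.degrees(np.arcsin(ratio))
```

For example, with two mics 10 cm apart sampling at 16 kHz, a signal that reaches the left mic 3 samples after the right one yields a bearing of roughly 40 degrees toward the right mic.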

Unlike other social robots, Embodied relies not on the cloud but largely on onboard computing, gathering data to help tune Moxie’s artificial intelligence. The main exception is automatic speech recognition (ASR), which uses Google's service. Between 95% and 99% of Moxie’s programming runs onboard to ensure security and responsiveness. Moxie uses onboard machine learning to recognize facial features, build models for conversations, and adapt interactions to individual children.
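As a rough illustration of that split, the sketch below keeps perception and conversation logic on the device and makes a single cloud hop for speech-to-text. All class and function names are hypothetical; this is a pattern sketch, not Embodied's actual architecture:

```python
def cloud_asr(audio_bytes):
    """Stand-in for a cloud speech-to-text call (e.g. a Google ASR service).
    In a real system this would be a network request; here it just returns
    a placeholder transcript."""
    return "hello moxie"


class OnboardPipeline:
    """Runs perception and dialogue locally; only audio leaves the device."""

    def __init__(self, asr=cloud_asr):
        self.asr = asr         # the single off-device dependency
        self.known_faces = {}  # onboard ML state, kept local

    def process_frame(self, frame):
        # Face recognition and eye tracking would run here, fully onboard.
        return {"faces_known": len(self.known_faces)}

    def process_utterance(self, audio_bytes):
        transcript = self.asr(audio_bytes)  # the one cloud hop
        return self.respond(transcript)     # back onboard immediately

    def respond(self, transcript):
        # Conversation models run locally, so transcripts stay on-device.
        return f"You said: {transcript}"
```

The design choice this illustrates is that responsiveness and privacy both improve when only the narrowest necessary payload (raw speech audio) crosses the network, while everything stateful stays local.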

As of now, Embodied is developing Moxie’s skills in-house, but it may eventually release a software developer’s kit for non-therapeutic applications such as entertainment. Moxie is available for pre-order in the US for $50 down, with a one-year subscription that includes content and Moxie Mission Packs, the Global Robotics Laboratory, and behavioral analytics in the Embodied Moxie parent app. Shipping is expected to start in September.