Building the Eva Face

Authors: Yunyang Di (diyy@shanghaitech.edu.cn), Jingwei Peng (pengjw@shanghaitech.edu.cn)

Affiliation: ShanghaiTech University

1. Abstract

This project aims to reproduce human facial expressions on a robotic face. Inspired by Columbia University's open-source Eva project, we constructed a robot head whose face is driven by servos. We sample facial features and abstract them into servo-controlled points, which we call Actuated Neuromuscular Units (ANUs), that mimic facial muscles. Drawing on the literature, we translated the six basic human expressions into data matrices for the ANUs. Through adaptive learning, the robot can imitate a broader range of expressions.

2. Introduction

Facial expressions are crucial for non-verbal communication: they convey our feelings and help us interpret the emotions of others. Facial mimicry also plays an important role in infant social development. A robot capable of mimicking human expressions can support more natural social behaviors and enhance human-robot interaction.

3. State of the Art

Our project references several papers covering adaptive learning algorithms, imitation evaluation, robot head design, artificial skin, expression classification, servo manipulation, and robot head construction. We draw our overall design from the Eva project and incorporate expression-processing methods from other research. Human expressions are converted into data interpretable by servo motors via the ANU points. The six basic expressions plus a neutral baseline (Neutral, Surprise, Fear, Disgust, Anger, Happiness, Sadness) form the basis; facial features are described by 68 standard landmark points, which we merge into 24 control points (a sketch of this merging step follows below). Adaptive learning algorithms then optimize the robot's imitation. We also use open-source code for servo control and eye movement.
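
As an illustration of the landmark-merging step, the following Python sketch collapses the standard 68-point facial landmark layout (as produced by dlib or similar detectors) into 24 control points by averaging groups of neighboring landmarks. The specific grouping below is a hypothetical example, not the exact mapping used in our build.

    import numpy as np

    # Index groups follow the standard 68-point landmark convention
    # (iBUG 300-W / dlib). Each group is averaged into one control point.
    # This particular grouping is illustrative, not our final mapping.
    LANDMARK_GROUPS = [
        [17, 18], [19], [20, 21],              # right brow: outer, mid, inner
        [22, 23], [24], [25, 26],              # left brow: inner, mid, outer
        [36, 37, 38, 39, 40, 41],              # right eye centre
        [42, 43, 44, 45, 46, 47],              # left eye centre
        [27, 28, 29, 30],                      # nose bridge
        [31, 32], [33], [34, 35],              # nostrils and nose tip
        [48], [49, 50], [51], [52, 53], [54],  # upper lip and mouth corners
        [55, 56], [57], [58, 59],              # lower lip
        [2, 3, 4], [12, 13, 14], [7, 8, 9],    # cheeks and chin
        [60, 61, 62, 63, 64, 65, 66, 67],      # inner lip (mouth opening)
    ]
    assert len(LANDMARK_GROUPS) == 24

    def merge_landmarks(pts68: np.ndarray) -> np.ndarray:
        """Collapse a (68, 2) landmark array into a (24, 2) array of
        control points by averaging each landmark group."""
        return np.stack([pts68[g].mean(axis=0) for g in LANDMARK_GROUPS])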

4. System Description

We are testing the algorithm using the 24 basic ANU points. Challenges include verifying the facial effect of each servo's deflection and selecting sample test points; a sweep procedure for this is sketched below. We followed the Eva construction guidelines but encountered mismatched 3D-printed parts, and our silicone face material differs from the original. We are also considering replacing the eye assembly with one from another open-source project, although size compatibility has not yet been verified.
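
The sketch below shows one way to verify a servo's deflection effect: sweep the servo through a range of angles and record how far each control point moves from the rest pose. It assumes a PCA9685-style driver via the Adafruit ServoKit library; capture_control_points() is a hypothetical stand-in for camera capture plus landmark detection and merging.

    import time
    import numpy as np
    from adafruit_servokit import ServoKit  # PCA9685 16-channel driver

    kit = ServoKit(channels=16)

    def sweep_servo(channel, capture_control_points, angles=range(60, 121, 10)):
        """Sweep one servo and record the displacement of every control
        point relative to the rest pose, to find which points it drives."""
        kit.servo[channel].angle = 90          # rest pose (assumed neutral)
        time.sleep(0.5)
        rest = capture_control_points()        # (24, 2) array, user-supplied
        displacements = {}
        for angle in angles:
            kit.servo[channel].angle = angle
            time.sleep(0.5)                    # let the silicone skin settle
            pts = capture_control_points()
            displacements[angle] = np.linalg.norm(pts - rest, axis=1)
        kit.servo[channel].angle = 90          # return to rest
        return displacements                   # angle -> per-point L2 shift

Control points whose displacement stays near zero across the sweep are unaffected by that servo and need not be sampled when testing it.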

5. System Evaluation

Evaluation criteria include the similarity between the robot's expression and the original human expression, while avoiding the uncanny valley effect. Similarity is quantified by the L2 distance between corresponding facial feature points (see the sketch below); subjective assessment via questionnaires based on the basic emotions will complement this. An initial replication using 9 ANU points revealed problems: attaching the drive wires directly to the skin indented the face, while a layer of gauze simulating muscle tissue distributed the force better. Connecting the ANUs to the servos with cotton threads produced satisfactory results only in the eye socket area.
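
A minimal sketch of the similarity metric: the mean L2 distance between corresponding control points of the human and robot faces. Normalizing by interocular distance is our assumption (a common convention in landmark evaluation) and makes the score independent of image scale.

    import numpy as np

    # Indices of the eye-centre control points in the merged 24-point
    # layout sketched above (7th and 8th groups); hypothetical but consistent.
    RIGHT_EYE, LEFT_EYE = 6, 7

    def expression_distance(human_pts: np.ndarray, robot_pts: np.ndarray) -> float:
        """Mean L2 distance between corresponding (24, 2) control points,
        normalised by the human interocular distance. Lower is better."""
        interocular = np.linalg.norm(human_pts[RIGHT_EYE] - human_pts[LEFT_EYE])
        per_point = np.linalg.norm(human_pts - robot_pts, axis=1)
        return float(per_point.mean() / interocular)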

6. Our Work

6.1 What we have finished

We reproduced parts of the Eva project, using 3D printing and laser cutting for the head structure and the face mould, and assembled the head after casting the silicone. We also had a valuable exchange with the author of the Eva project.

6.2 What we want to improve

The author suggested using Y-slots and conduits to route the threads for better results. We also realized that reproducing the project is complex, especially the training of parameters: our silicone material and manipulation method differ from the original, so the model must be retrained from scratch. Training is sensitive to instabilities in the cotton-thread and servo setup, which makes runs hard to replicate and slow to converge.

6.3 Future work

Because training is expensive and poorly repeatable, we plan a partitioned approach: divide the face into five blocks, each controlled by one or two motors, so that each section can be trained independently (see the sketch below). We are also considering hybrid actuation (e.g., pneumatic drives), as suggested by the Eva author, so that parameters can be inherited when a component is replaced, reducing training costs.
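
A sketch of the partitioned scheme, with hypothetical region names, servo channels, and control-point indices. Each region is trained independently by searching its own servo angles to minimize the distance metric restricted to that region's control points; a simple random search stands in here for our adaptive learning algorithm.

    import numpy as np

    # Five hypothetical face blocks; servo channels and control-point
    # indices are illustrative placeholders, not our final assignment.
    FACE_REGIONS = {
        "brows":     {"servos": [0, 1], "points": [0, 1, 2, 3, 4, 5]},
        "eyes":      {"servos": [2],    "points": [6, 7]},
        "cheeks":    {"servos": [3],    "points": [8, 9, 10, 11, 20, 21]},
        "upper_lip": {"servos": [4, 5], "points": [12, 13, 14, 15, 16]},
        "jaw":       {"servos": [6, 7], "points": [17, 18, 19, 22, 23]},
    }

    def train_region(region, target_pts, set_angles, capture, iters=50, rng=None):
        """Random-search the angles of one region's servos to minimise
        the L2 error on that region's control points only."""
        rng = rng or np.random.default_rng()
        cfg = FACE_REGIONS[region]
        best_angles, best_err = None, np.inf
        for _ in range(iters):
            angles = rng.uniform(60, 120, size=len(cfg["servos"]))
            set_angles(cfg["servos"], angles)   # user-supplied actuation
            pts = capture()                     # (24, 2) control points
            err = np.linalg.norm(
                pts[cfg["points"]] - target_pts[cfg["points"]], axis=1).mean()
            if err < best_err:
                best_angles, best_err = angles, err
        return best_angles, best_err

Because each block depends on only one or two servos, swapping an actuator invalidates only that block's parameters rather than the whole model.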

7. Demo Video

Below is a demonstration video of our project:

8. Conclusion

Our project aims to build a robot that mimics human facial expressions. We have made progress on the physical construction and are incorporating adaptive learning, reusing open-source code for servo and eye control.