
Facial Expression Rendering in Medical Training Simulators: Current Status and Future Directions

Accepted version





Recent technological advances in robotic sensing and actuation methods have prompted the development of a range of new medical training simulators with multiple feedback modalities. Learning to interpret the facial expressions of a patient during medical examinations or procedures has been one of the key focus areas in medical training. This paper reviews the facial expression rendering systems in medical training simulators that have been reported to date. Facial expression rendering approaches in other domains are also summarized so that knowledge from those works can inform the development of systems for medical training simulators. Classifications and comparisons of medical training simulators with facial expression rendering are presented, and important design features, merits and limitations are outlined. Medical educators, students and developers are identified as the three key stakeholders involved with these systems, and their considerations and needs are presented. Physical-virtual (hybrid) approaches provide multimodal feedback, render facial expressions accurately, and can simulate patients of different ages, genders and ethnic groups, making them more versatile than purely virtual or purely physical systems. The overall findings of this review and the proposed future directions are beneficial to researchers interested in initiating or developing facial expression rendering systems for medical training simulators.



Keywords

Training, Rendering (computer graphics), Medical diagnostic imaging, Haptic interfaces, Visualization, Pain, Bibliographies, Facial expressions, facial expression rendering, medical simulators, medical training, robotic patients, human-machine interaction

Journal Title

IEEE Access


Publisher

Institute of Electrical and Electronics Engineers (IEEE)


All rights reserved
EPSRC (EP/T00519X/1)
Engineering and Physical Sciences Research Council (EP/N029003/1)
This work was supported by the Robopatient project, funded by EPSRC Grant No. EP/T00519X/1.