Jingying Wang

Hi, I am Jingying Wang, a fourth-year Ph.D. candidate in the Computer Science and Engineering department at the University of Michigan. I am fortunate to be co-advised by Prof. Xu Wang from UM CSE and Prof. Vitaliy Popov from UM Medicine.

My research lies at the intersection of Human-Computer Interaction (HCI) and medical education, with a focus on enhancing visual understanding in medical training. Medical procedures are inherently visual tasks that require learners to know where to look, how to interpret complex visual cues, and how to translate those perceptions into precise actions. Because access to experts is limited, trainees often learn from videos and simulations that lack personalized, interactive feedback. My work bridges this gap by designing human–AI systems that enable interactive learning experiences grounded in multimodal data, including video, gaze, speech, and hand gestures.

[Google Scholar][CV]

News


Mar/2025

Honored to be selected for the Barbour Scholarship! [Article]

Apr/2024

Our paper “Looking Together ≠ Seeing the Same Thing: Understanding Surgeons' Visual Needs During Intra-operative Coordination and Instruction” received an Honorable Mention Award at CHI 2024.

Publications


SurgGaze is an implicit calibration method that uses tool–tissue interactions as natural cues to improve surgeons’ gaze tracking. It reduces gaze error by 40.6% compared to standard calibration, enabling more reliable attention analysis in both simulated and real operating rooms. (Paper Under Review)

Jingying Wang

SurgGraph is a scene-graph pipeline for understanding laparoscopic videos by encoding expert-defined surgical relationships into LLM-generated programs grounded in segmentation and depth maps. It enables quantitative analysis of surgical processes and outperforms standard scene-graph and vision-language models in clip retrieval and question answering, providing more accurate and educationally valuable insights. (Paper Under Review)

Jingying Wang

eXplainMR: Generating Real-time Textual and Visual eXplanations to Facilitate UltraSonography Learning in MR. (CHI '25)

Jingying Wang, Jingjing Zhang, Juana Nicoll Capizzano, Matthew Sigakis, Xu Wang*, Vitaliy Popov*

[Paper][Video]

Surgment: Segmentation-enabled Semantic Search and Creation of Visual Question and Feedback to Support Video-Based Surgery Learning. (CHI '24)

Jingying Wang, Haoran Tang, Taylor Kantor, Tandis Soltani, Vitaliy Popov, Xu Wang

[Paper][Video]

Looking Together ≠ Seeing the Same Thing: Understanding Surgeons' Visual Needs During Intra-operative Coordination and Instruction. (CHI '24)

Xinyue Chen*, Vitaliy Popov*, Jingying Wang, Michael Kemp, Gurjit Sandhu, Taylor Kantor, Natalie Mateju, and Xu Wang

[Paper]

SketchSearch: Fine-tuning Reference Maps to Create Exercises In Support of Video-based Learning for Surgeons. (UIST '23 Demo)

Jingying Wang, Vitaliy Popov, and Xu Wang

[Paper][Video]

Real-time Facial Animation for 3D Stylized Character with Emotion Dynamics. (MM '23)

Ye Pan, Ruisi Zhang, Jingying Wang, Yu Ding, and Kenny Mitchell

[Paper]

Fully Automatic Blendshape Generation for Stylized Characters. (VR '23)

Jingying Wang, Yilin Qiu, Keyu Chen, Yu Ding, and Ye Pan

[Paper][Video]