Automated skill assessment for individualized training in robotic surgery

Funding Round: 1 (2013-2015)

Research Question: Our goal was to develop automated methods to assess surgical technical skill and to provide individualized feedback to surgical trainees.

Interdisciplinary Approach: Our project is a unique interdisciplinary collaboration between investigators based at the Laboratory for Computational Sensing and Robotics, WSE, and the Johns Hopkins Minimally Invasive Surgery Training and Innovation Center, SOM. The project builds on our prior research within the “Language of Surgery” project, in which we developed statistical models to identify surgical gestures and to assess surgical skill.

Potential Implications of Research: Training to become a surgeon involves the acquisition of a family of technical skills. In current practice, trainees acquire surgical skills by practicing in the laboratory and on simulators, or by operating on patients under a senior surgeon’s supervision. The current approach to training surgeons in technical skills is “proximate,” i.e., it requires the presence of a teacher. Although trainees may practice on their own, appropriate feedback is essential to acquire the specific skills needed to perform a task like an expert. We envision that our discoveries will inform the development of efficient and individualized programs, based on objective and informative feedback, to train surgeons to provide safe and effective patient care, and will support further interdisciplinary research to understand the acquisition, retention, and attrition of surgical skills.

Project Description: We studied a cohort of 27 trainee surgeons as they performed a suturing and knot-tying task on an inanimate model using the da Vinci Surgical System (Intuitive Surgical, Inc., Sunnyvale, CA). An instructor assigned a skill score for each performance. To automate the assessment of trainees’ surgical skill, we first needed a method for detecting the particular surgical activity being performed (e.g., suturing). In this project we developed a novel method to extract information from the motion of the surgical tool. The method is based on the hypothesis that the tool motion signal contains repeated patterns, or motifs, which correspond to specific activity segments. Our method automatically identifies motifs in the tool motion signal, which enables recognition of the corresponding activities. Figure 1 shows the motifs corresponding to a suture throw that are detected using our method.

Automated detection of activity segments enables several applications, such as 1) targeted assessment and feedback for trainees in a scorecard (Figure 2 shows an example); 2) assessment of learning over time; and 3) indexed catalogs that allow efficient retrieval of skillful performances for trainees to study. The technology developed in this study automates skill assessment, provides targeted feedback for segments within surgical tasks, and supports other applications to evaluate and inform the learning of technical skill among surgical trainees.
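A segment-level scorecard of the kind mentioned above can be sketched as follows. The segment names, per-segment scores, and the benchmark threshold here are hypothetical placeholders, not values from the study; the point is only how detected segments turn into targeted feedback.

```python
# Hypothetical per-segment scores produced by an automated assessor;
# none of these names or numbers come from the study itself.
segments = [
    ("needle insertion", 3.0),
    ("suture throw",     4.5),
    ("knot tying",       2.5),
]
BENCHMARK = 4.0  # illustrative expert-level target score

def scorecard(segments, benchmark):
    # flag segments scoring below the benchmark as targets for practice
    lines = []
    for name, score in segments:
        flag = "practice" if score < benchmark else "ok"
        lines.append(f"{name:16s} {score:.1f}  [{flag}]")
    overall = sum(score for _, score in segments) / len(segments)
    lines.append(f"{'overall':16s} {overall:.1f}")
    return "\n".join(lines)

print(scorecard(segments, BENCHMARK))
```

Because the feedback is indexed by segment, a trainee sees not just an overall grade but which step of the task (here, knot tying) needs the most practice.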