MP10-17: Development and Validation of the End-To-End Assessment of Suturing Expertise (EASE)
Friday, May 13, 2022
1:00 PM – 2:15 PM
Location: Room 228
Taseen F Haque, Alvin Hui, Jonathan You, Runzhuo Ma*, Steven Cen, Xiaomeng Li, Monish Aron, Los Angeles, CA, Justin W Collins, London, United Kingdom, Hooman Djaladat, Los Angeles, CA, Ahmed Ghazi, Rochester, NY, Kenneth A Yates, Andre L Abreu, Siamak Daneshmand, Mihir M Desai, Los Angeles, CA, Alvin C Goh, Jim C Hu, New York, NY, Amir H Lebastchi, Los Angeles, CA, Thomas S Lendvay, James Porter, Seattle, WA, Anne K Schuckman, Rene Sotelo, Los Angeles, CA, Chandru P Sundaram, Indianapolis, IN, Jessica H Nguyen, Inderbir Gill, Andrew J Hung, Los Angeles, CA
Introduction: Current skills assessment tools do not encompass all aspects of suturing and therefore may omit key insights that could help trainees improve. This study aimed to create a global suturing skills assessment tool that comprehensively defines criteria for the relevant sub-skills of suturing, and to evaluate its validity.
Methods: In Stage 1 (Development), 4 expert surgeons and an educational psychologist participated in a cognitive task analysis (CTA) to deconstruct robotic suturing into its most basic maneuvers, describe the accompanying “sub-skills”, and define differing proficiency levels on a scale of 1-3. Using the Delphi method, each CTA element was then systematically revised by a multi-institutional panel of 16 leading surgical educators. Sub-skill descriptions that reached a content validity index (CVI) ≥0.80 were included in the final product. In Stage 2 (Validation), 3 blinded reviewers independently scored 8 training videos and 39 vesicourethral anastomoses (VUA) using EASE. Inter-rater reliability was measured with the intra-class correlation coefficient (ICC) for normally distributed values and prevalence-adjusted bias-adjusted kappa (PABAK) for skewed distributions. Expert (≥100 prior robotic cases) and trainee (<100 cases) EASE scores from the non-training cases were compared using a generalized linear mixed model to adjust for nesting of data within surgeons.
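For reference, the agreement statistics used above have standard closed forms; the following sketch assumes the conventional item-level CVI, the usual multi-category PABAK, and a random-intercept mixed model, since the abstract does not specify the exact variants employed:

\[
\mathrm{CVI}_{\text{item}} = \frac{n_{\text{relevant}}}{N},
\qquad
\mathrm{PABAK} = \frac{k\,P_o - 1}{k - 1},
\]

where \(n_{\text{relevant}}\) is the number of panelists rating an item as relevant, \(N\) is the panel size, \(P_o\) is the observed proportion of rater agreement, and \(k\) is the number of rating categories (for binary ratings, PABAK reduces to \(2P_o - 1\)). The expert-versus-trainee comparison would typically take a form such as \(g(E[y_{ij}]) = \beta_0 + \beta_1\,\mathrm{expert}_i + u_i\), where \(y_{ij}\) is the EASE score for case \(j\) of surgeon \(i\), \(g\) is the link function, and the random intercept \(u_i\) accounts for multiple cases nested within each surgeon.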
Results: Stage 1: The 16 surgeon panelists for the Delphi process had a median H-index of 23 (range 11-107). In Round 1, 60/64 (94%) proposed sub-skill descriptions met the CVI threshold. In Round 2, the number of sub-skill descriptions decreased to 61 after panelists suggested combining two sub-skill categories; all remaining descriptions met the CVI threshold. In total, panelists agreed on 7 domains and 18 sub-skills (Table). Stage 2: Inter-rater reliability was moderately high (ICC range: 0.51-0.97; PABAK: 0.62-0.97). Multiple EASE sub-skill scores distinguished expert from trainee surgeons.
Conclusions: Through a rigorous CTA and Delphi process, we have developed EASE, whose granular suturing sub-skills can distinguish levels of surgeon experience while maintaining rater reliability. Future applications of EASE may include automated technical skills assessment, in which explicit formative feedback can benefit training surgeons.
Source of Funding: This study was supported in part by the National Cancer Institute under Award Number 1R01CA251579-01A1.