TY - JOUR
T1 - Exploring Endoscopic Competence in Gastroenterology Training
T2 - A Simulation-Based Comparative Analysis of GAGES, DOPS, and ACE Assessment Tools
AU - Ismail, Faisal Wasim
AU - Afzal, Azam
AU - Durrani, Rafia
AU - Qureshi, Rayyan
AU - Awan, Safia
AU - Brown, Michelle R.
N1 - Publisher Copyright:
© 2024 Ismail et al. This work is published and licensed by Dove Medical Press Limited.
PY - 2024
Y1 - 2024
N2 - Purpose: Accurate and convenient evaluation tools are essential to document endoscopic competence in gastroenterology training programs. The Direct Observation of Procedural Skills (DOPS), Global Assessment of Gastrointestinal Endoscopic Skills (GAGES), and Assessment of Endoscopic Competency (ACE) are widely used, validated competency assessment tools for gastrointestinal endoscopy. However, studies comparing these three tools are lacking, leading to a lack of standardization in this assessment. Through simulation, this study seeks to determine the most reliable, comprehensive, and user-friendly tool for standardizing endoscopy competency assessment. Methods: A mixed-methods quantitative-qualitative approach was utilized with a sequential deductive design. All nine trainees in a gastroenterology training program were assessed on endoscopic procedural competence using the Simbionix GI-BRONCH Mentor high-fidelity simulator, with two faculty raters independently completing the three assessment forms of DOPS, GAGES, and ACE. Psychometric analysis was used to evaluate the tools’ reliability. Additionally, faculty trainers participated in a focus group discussion (FGD) to investigate their experience of using the tools. Results: For upper GI endoscopy, Cronbach’s alpha values for internal consistency were 0.53, 0.8, and 0.87 for ACE, DOPS, and GAGES, respectively. Inter-rater reliability (IRR) scores were 0.79 (0.43–0.92) for ACE, 0.75 (−0.13–0.82) for DOPS, and 0.59 (−0.90–0.84) for GAGES. For colonoscopy, Cronbach’s alpha values for internal consistency were 0.53, 0.82, and 0.85 for ACE, DOPS, and GAGES, respectively. IRR scores were 0.72 (0.39–0.96) for ACE, 0.78 (−0.12–0.86) for DOPS, and 0.53 (−0.91–0.78) for GAGES. The FGD yielded three key themes: the ideal tool should be scientifically sound, comprehensive, and user-friendly.
Conclusion: The DOPS tool performed favourably in both the psychometric evaluation and the qualitative assessment, making it the most balanced of the three assessment tools. We propose that the DOPS tool be used for endoscopic skill assessment in gastroenterology training programs. However, gastroenterology training programs need to match their learning outcomes with the available assessment tools to determine the most appropriate one in their context.
KW - competence
KW - endoscopy
KW - gastroenterology
KW - simulation
KW - training
UR - http://www.scopus.com/inward/record.url?scp=85184418219&partnerID=8YFLogxK
U2 - 10.2147/AMEP.S427076
DO - 10.2147/AMEP.S427076
M3 - Article
AN - SCOPUS:85184418219
SN - 1179-7258
VL - 15
SP - 75
EP - 84
JO - Advances in Medical Education and Practice
JF - Advances in Medical Education and Practice
ER -