TY - JOUR
T1 - The Revival of Essay-Type Questions in Medical Education
T2 - Harnessing Artificial Intelligence and Machine Learning
AU - Shamim, Muhammad Shahid
AU - Zaidi, Syed Jaffar Abbas
AU - Rehman, Abdur
PY - 2024/5/1
Y1 - 2024/5/1
N2 - OBJECTIVE: To analyse and compare the assessment and grading of human-written and machine-written formative essays. STUDY DESIGN: Quasi-experimental, qualitative cross-sectional study. PLACE AND DURATION OF STUDY: Department of Science of Dental Materials, Hamdard College of Medicine & Dentistry, Hamdard University, Karachi, from February to April 2023. METHODOLOGY: Ten short formative essays written by final-year dental students were manually assessed and graded. The same essays were then graded using ChatGPT version 3.5. The chatbot's responses and the prompts used were recorded and matched against the manually graded essays, and the responses were then analysed qualitatively. RESULTS: Four different prompts were given to the artificial intelligence (AI)-driven ChatGPT platform to grade the formative essays, eliciting the chatbot's initial response without grading, its response when asked to grade against criteria, its criterion-wise grading, and its response to questions about differences in grading. Based on these results, four ways of using AI and machine learning (ML) are proposed for medical educators: automated grading, content analysis, plagiarism detection, and formative assessment. Unlike manual grading, ChatGPT provided a comprehensive report with feedback on writing skills. CONCLUSION: The chatbot's responses were fascinating and thought-provoking. AI and ML technologies can potentially supplement human grading in the assessment of essays. Medical educators need to embrace AI and ML technology to enhance the standards and quality of medical education, particularly when assessing long and short essay-type questions. Further empirical research and evaluation are needed to confirm their effectiveness. KEY WORDS: Machine learning, Artificial intelligence, Essays, ChatGPT, Formative assessment.
UR - https://www.scopus.com/pages/publications/85192607279
U2 - 10.29271/jcpsp.2024.05.595
DO - 10.29271/jcpsp.2024.05.595
M3 - Article
C2 - 38720222
AN - SCOPUS:85192607279
SN - 1022-386X
VL - 34
SP - 595
EP - 599
JO - Journal of the College of Physicians and Surgeons--Pakistan : JCPSP
JF - Journal of the College of Physicians and Surgeons--Pakistan : JCPSP
IS - 5
ER -