TY - JOUR
T1 - Clinical Manifestations
AU - Kipkoech, Kevin
AU - Karanja, Wambui
AU - Smith, Cynthia Isabel
AU - Maina, Rachel W.
AU - Shah, Jasmit
AU - Tsoy, Elena
AU - McDonagh, Sarah
AU - Javandel, Shireen
AU - Valcour, Victor
AU - Udeh-Momoh, Chinedu
AU - Blackmon, Karen
N1 - Publisher Copyright:
© 2025 The Alzheimer's Association. Alzheimer's & Dementia published by Wiley Periodicals LLC on behalf of Alzheimer's Association.
PY - 2025/12/1
Y1 - 2025/12/1
N2 - BACKGROUND: Digital cognitive assessments have emerged as promising tools in resource-limited settings, with the potential for adaptability and scalability in Alzheimer's disease and related dementias research. However, their practical usability relative to traditional paper-based tests must be demonstrated before widespread adoption. This study compares the feasibility of both modalities in a healthy, multilingual older adult Kenyan population. METHOD: We enrolled 135 community-dwelling Kenyan adults [mean (SD) age in years = 55.2 (8.6), min = 45, max = 79; 64.4% female], ranging in education from primary to doctoral level [median education = 11 years]. Participants completed a cognitive test battery that included the Tablet-based Cognitive Assessment Tool [TabCAT] and paper-based cognitive tests assessing memory, attention, and executive function. All tests were available in English and culturally adapted to Swahili. Feasibility metrics included error types, language-switching (as a potential indicator of increased cognitive load), valid task completion, learning curves (as a reflection of user engagement), and practice trial success. RESULT: Digital assessments had a 12.9% error rate, mainly due to technical issues (10.6%), whereas paper-based tests had an 8.2% error rate, mainly due to examinee-related issues (4.2%). Language-switching was common in paper-based tests, particularly in tasks involving months (88%) and numbers (97%), but was less common in digital tests, ranging from 5% to 12%. Completion rates were high for digital tasks such as Birdwatch (100%), Match (100%), Line Orientation (100%), and Flanker (98%), with slightly lower rates for Set-Shifting (90%). Completion rates were 100% for paper-based tests, except for the Trail Making Test A (3% of participants timed out). Positive learning curves were apparent across paper-based and digital learning trials on memory tests, indicating effective user engagement. Digital test practice trials had high success rates prior to actual test trials (100% for Flanker and Match, 97% for Set-Shifting, 87% for Line Orientation), indicating effective task design and user engagement. CONCLUSION: Culturally and linguistically adapted digital tools have the potential for scalable, user-friendly, and adaptable cognitive testing in resource-limited settings. Technical issues with digital tests and the additional cognitive load of language-switching on working memory tasks should be addressed prior to scaling.
AB - BACKGROUND: Digital cognitive assessments have emerged as promising tools in resource-limited settings, with the potential for adaptability and scalability in Alzheimer's disease and related dementias research. However, their practical usability relative to traditional paper-based tests must be demonstrated before widespread adoption. This study compares the feasibility of both modalities in a healthy, multilingual older adult Kenyan population. METHOD: We enrolled 135 community-dwelling Kenyan adults [mean (SD) age in years = 55.2 (8.6), min = 45, max = 79; 64.4% female], ranging in education from primary to doctoral level [median education = 11 years]. Participants completed a cognitive test battery that included the Tablet-based Cognitive Assessment Tool [TabCAT] and paper-based cognitive tests assessing memory, attention, and executive function. All tests were available in English and culturally adapted to Swahili. Feasibility metrics included error types, language-switching (as a potential indicator of increased cognitive load), valid task completion, learning curves (as a reflection of user engagement), and practice trial success. RESULT: Digital assessments had a 12.9% error rate, mainly due to technical issues (10.6%), whereas paper-based tests had an 8.2% error rate, mainly due to examinee-related issues (4.2%). Language-switching was common in paper-based tests, particularly in tasks involving months (88%) and numbers (97%), but was less common in digital tests, ranging from 5% to 12%. Completion rates were high for digital tasks such as Birdwatch (100%), Match (100%), Line Orientation (100%), and Flanker (98%), with slightly lower rates for Set-Shifting (90%). Completion rates were 100% for paper-based tests, except for the Trail Making Test A (3% of participants timed out). Positive learning curves were apparent across paper-based and digital learning trials on memory tests, indicating effective user engagement. Digital test practice trials had high success rates prior to actual test trials (100% for Flanker and Match, 97% for Set-Shifting, 87% for Line Orientation), indicating effective task design and user engagement. CONCLUSION: Culturally and linguistically adapted digital tools have the potential for scalable, user-friendly, and adaptable cognitive testing in resource-limited settings. Technical issues with digital tests and the additional cognitive load of language-switching on working memory tasks should be addressed prior to scaling.
UR - https://www.scopus.com/pages/publications/105025853402
U2 - 10.1002/alz70857_105987
DO - 10.1002/alz70857_105987
M3 - Article
C2 - 41449637
AN - SCOPUS:105025853402
SN - 1552-5260
VL - 21
SP - e105987
JO - Alzheimer's & Dementia
JF - Alzheimer's & Dementia
ER -