Cross-Cultural Assessments
When neuropsychologists are asked which area they find most challenging, many of us quietly give the same answer: CALD (Culturally and Linguistically Diverse) assessments.
Not because we lack care, but because our tools were never designed with non-Western populations in mind, including First Nations communities.
In one assessment with a refugee client from a conflict region*, schooling had stopped around primary level. She spoke multiple languages informally, had survived displacement, and was understandably mistrustful of authority when asked to “perform” under testing conditions.
With an interpreter, the assessment took twice as long, and yet I found myself asking:
Was I assessing cognition — or the cognitive load of trauma, translation and cultural power dynamics?
On paper, her scores suggested dementia, yet her functional life told another story — navigating services, raising children, making complex decisions in multiple languages.
This is a tension many clinicians recognise:
➡️ Norms come from Western, English-speaking, formally educated samples. Even new tools like the WAIS-5 were normed only on English-speaking populations. While some tools are translated or marketed as multicultural, very few are truly normed on non–English-speaking or low-literacy groups; most still assume Western schooling and test familiarity.
➡️ Interpreters help, but they also change pacing, working memory demands and sometimes the instructions themselves.
➡️ Trauma, interrupted schooling and cultural deference can mimic cognitive impairment.
➡️ And all of this occurs within systems that expect efficiency and neat diagnostic labels.
Sometimes, functional history reveals more about cognitive resilience than a test score.
There are emerging responses:
➡️ Non-verbal or less language-bound tasks, used cautiously (even “culture-fair” tests assume familiarity with abstract testing)
➡️ Dynamic assessment, observing how someone approaches a task
➡️ Interpreter orientation, to reduce added cognitive load
➡️ Tools developed for cross-cultural use, like the RUDAS (Rowland Universal Dementia Assessment Scale) and the KICA (Kimberley Indigenous Cognitive Assessment)
➡️ And importantly — naming these limits openly, rather than implying certainty
As Fernández & Abe (2018) noted, the issue is not only bias, but the illusion that our tools are culturally neutral.
I don’t claim to have the full solution.
But I do believe fairness begins with humility and context awareness, especially when our findings influence healthcare and legal outcomes.
I’d value hearing how others navigate this space — what helps, and what still feels uneasy.
*Case details are blended and anonymised across multiple cases to protect confidentiality.
Fernández, A.L., & Abe, J. (2018). Bias in cross-cultural neuropsychological testing: problems and possible solutions. Culture and Brain, 6, 1–35.