Understanding reasons for variation by ethnicity in performance of general practice specialty trainees in the Membership of the Royal College of General Practitioners’ Applied Knowledge Test: cognitive interview study
Significant differences exist between black and minority ethnic (BME) and white British doctors in performance in high stakes medical licensing examinations, such as the Membership of the Royal College of General Practitioners (MRCGP). This is a cause for concern to examination bodies, training programmes, candidates and professional bodies. Examiner bias, suggested as a potential source of differential attainment in licensing assessments, is less plausible for written, computer-marked multiple-choice examinations such as the MRCGP Applied Knowledge Test (AKT). It is not known why candidates from different ethnic groups and training backgrounds vary in their performance in such computer-marked multiple-choice examinations. This study aimed to investigate causes of differential attainment in the AKT by candidate ethnicity.
We used a qualitative design employing cognitive interviews with a purposive sample of 21 GP specialty trainees (GPSTs) from three candidate subgroups: white British/Irish UK trained, BME UK trained, and BME overseas trained doctors. 'Think aloud' techniques were used to explore the cognitive processes of GPSTs as they answered 15 AKT questions. We selected questions covering clinical medicine, data interpretation and administration, including questions on which BME candidates consistently performed less well, or consistently better, than white British UK trained doctors. We analysed the data using grounded theory, supported by NVivo 10.
Four overarching themes were identified from the data. 'Cultural barriers' included cultural differences in the UK NHS, language barriers and differences in national guidelines (NICE), which GPSTs who trained abroad highlighted as affecting their responses to AKT questions. 'Theoretical versus real life clinical experience': all participants reported that it was difficult to recall information and respond to AKT questions drawn solely from theoretical learning (classroom, textbook, undergraduate theory) compared with questions on areas covered within GP clinical practice. 'Opportunity and frequency': all participants reported difficulties answering AKT questions on areas they had not encountered in their country of undergraduate training, or on topics they had not recently encountered or studied. Issues such as "not in my rotation", "[not] appropriate to speciality training" and "scared looking at numbers" were observed to undermine candidates' confidence in answering specific questions. The final theme, 'A comparative analysis', enabled us to understand and identify key processes describing how candidates across the three subgroups answered the range of AKT questions used, generating hypotheses about why candidates with different ethnic and educational backgrounds might vary in test performance.
We have generated theory on causes of differential attainment in computer-based assessments. Our findings provide insights into how BME and white British candidates, whether trained in the UK or abroad, may differ in their approach to understanding and answering questions in the AKT; these insights could inform future design of the AKT and of specialty training.