Rating communication in GP consultations: do patients and experienced trained raters agree?

Talk Code: 
2A.2

The problem

Good doctor-patient communication is central to a good patient experience. Doctor-patient communication is routinely rated by patients in England through the GP Patient Survey. However, the extent to which patient ratings reflect the communication skills of the doctor is not well understood. In addition, direct comparisons have not been made between patient ratings of communication and accepted professional standards for this domain of care.

The approach

We obtained consent to video-record 555 consultations with 48 GPs in 14 practices. Immediately after the consultation, patients rated the doctor-patient communication they had just experienced using seven items from the GP Patient Survey. Subsequently, 56 consultations were selected for rating by trained raters so as to span the range of patient ratings: 28 consultations in which patients rated all communication domains as good or very good, and 28 in which at least one domain was rated as less than good. No more than two consultations were included for any one GP. Each of these video-recorded consultations was rated by four experienced trained raters (all GPs) using the Global Consultation Rating Scale. Patient and trained-rater ratings were compared using a scatterplot and a correlation coefficient.
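The rank-correlation comparison described above can be sketched as follows. This is a minimal illustration, not the study's analysis code: the function names and the way scores are paired per consultation are assumptions, and the implementation simply computes Spearman's rho as the Pearson correlation of average ranks.

```python
def rank(values):
    """Assign 1-based average ranks, handling ties."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(values):
        j = i
        # Extend j over a run of tied values.
        while j + 1 < len(values) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average 1-based rank for the tie group
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman_rho(x, y):
    """Spearman's rho: the Pearson correlation of the ranks of x and y."""
    rx, ry = rank(x), rank(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx) ** 2 for a in rx) *
           sum((b - my) ** 2 for b in ry)) ** 0.5
    return num / den

# Hypothetical per-consultation scores (one pair per consultation);
# not study data.
patient_scores = [4.8, 4.9, 5.0, 3.2, 4.1, 4.7, 2.9, 4.5]
rater_scores = [3.9, 4.2, 4.5, 2.1, 3.0, 4.0, 3.5, 3.8]
rho = spearman_rho(patient_scores, rater_scores)
```

In practice a library routine such as `scipy.stats.spearmanr` would be used; the hand-rolled version above just makes the ranking step explicit.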

Findings

There was evidence of a modest positive correlation between patient ratings and those of trained raters (rho=0.29, rising to 0.34 after accounting for measurement error/reliability; p=0.029). When trained raters scored a consultation among the higher-scoring ones, patients tended to do the same. However, when trained raters scored a consultation among the poorer-scoring ones, patient scores spanned a much wider range.
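The abstract does not state how measurement error was accounted for; one standard approach, shown here as an assumption, is Spearman's correction for attenuation, which divides the observed correlation by the geometric mean of the two measures' reliabilities:

```latex
\rho_{\text{corrected}} = \frac{\rho_{\text{observed}}}{\sqrt{R_{\text{patient}}\, R_{\text{rater}}}}
```

With the figures reported here, $0.34 = 0.29 / \sqrt{R_{\text{patient}} R_{\text{rater}}}$ implies $\sqrt{R_{\text{patient}} R_{\text{rater}}} \approx 0.85$, i.e. the correction is modest because both measures were fairly reliable.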

Consequences

We observed a positive association between patients' and trained raters' ratings of the same real-life consultations, but close agreement occurred only when trained raters rated a consultation highly. We propose that this may arise either from ceiling effects imposed by the survey instrument or because patients are less critical than trained raters when using the questionnaire. In either case, patient ratings of individual consultations appear to be specific but not sensitive measures of poor communication skills.

Credits

  • Gary Abel, University of Exeter, Exeter, UK
  • Jenni Burt, University of Exeter, Exeter, UK
  • Marc Elliott
  • Jenny Newbold, University of Exeter, Exeter, UK
  • Natasha Elmore, University of Exeter, Exeter, UK
  • Nadia Llanwarne, University of Exeter, Exeter, UK
  • Antoinette Davey, RAND Corporation, Santa Monica, USA
  • Inocencio Maramba, RAND Corporation, Santa Monica, USA
  • Charlotte Paddison, University of Exeter, Exeter, UK
  • John Benson, University of Exeter, Exeter, UK
  • Jonathan Silverman, University of Exeter, Exeter, UK
  • John Campbell, RAND Corporation, Santa Monica, USA
  • Martin Roland, University of Exeter, Exeter, UK