Thursday 7 January 2016

Spotting other people's mistakes - and saving lives in the NHS

How good are you at spotting mistakes? You may not notice your own but you're probably good at spotting other people's.

In jobs where mistakes can cause harm, it's vital to find ways of avoiding them or lessening their impact. This is what I spent last summer researching for my MSc in human computer interaction (HCI).

Accidental overdoses

The workplaces I looked at were NHS hospitals, and the mistakes were accidental drug overdoses. Specifically, accidental overdoses where staff were using machines (infusion devices - see pictures) to give patients a steady dose of drugs, blood, hormones or food over a period of time - anything from 20 minutes to 12 hours.

I also looked at 'underdoses' which can be equally harmful: imagine you need a steady dose of insulin, liquid food or painkillers, and you don't receive it.

Sharp end, blunt end

I say 'mistakes', but in HCI you learn that the user is never to blame. When mistakes happen, it's down to bad equipment design or a perfect storm of events.

James Reason, an expert in human error, distinguishes between the 'sharp end' - the frontline staff who come into contact with patients and are often blamed for errors - and the 'blunt end' - the senior managers and policies that create the conditions for mistakes to happen. He sets out the two terms in his 1995 article Understanding Adverse Events.

7 years of incidents

I was lucky enough to get my hands on real NHS data - 7 years' worth of incidents from NHS hospitals and care homes that my supervisor, UCL's Ann Blandford, was guarding closely. Ann gave me a password-protected data stick with an Excel spreadsheet of 8,000 incident reports on infusion devices. The reports were written by medical staff, from healthcare assistants to nurses to anaesthetists.

I couldn't read all 8,000, so Ann showed me how to sample the reports systematically, to avoid bias and keep my study as objective as possible.
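For the curious, here's a minimal sketch of what that kind of systematic sampling can look like - a hypothetical Python example, not my actual procedure; the file name, column name and sampling interval are all made up for illustration.

```python
import pandas as pd

# Hypothetical file and column names - for illustration only
reports = pd.read_excel("incident_reports.xlsx")

# Systematic sampling: fix an ordering, then take every k-th report,
# so the choice of which reports to read doesn't depend on my own judgement
sample_size = 400
k = len(reports) // sample_size      # e.g. 8000 // 400 = 20
sample = reports.sort_values("incident_date").iloc[::k]

print(f"Sampled {len(sample)} of {len(reports)} reports")
```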

We are detectives

In June 2015 I started reading reports and dived into a world of busy hospital wards, formal procedures, bleeping medical equipment and stressed staff. It felt like playing detective: What happened here? Was the patient OK? Who did what? Is this person trying to point the finger of blame at a colleague?

I read, re-read and made notes, waiting for ideas to leap out at me. This method is called 'grounded theory', described by Professor Kathy Charmaz in her helpful book Constructing Grounded Theory.

Many of the incident reports were disappointingly brief, missing out vital details. I had to be careful not to jump to conclusions and see things that weren't actually there. My supervisor kept me on track, constantly asking for evidence to back up my hunches.

Clock on, spot error

Pretty soon, a pattern emerged. Nurses were coming on shift and noticing that machines had been programmed with the wrong dose of pain relief or antibiotics. In other words, nurses were spotting the previous shift's 'programming errors'. James Reason has a name for this, unsurprisingly: 'fresh eyes'.


Three Mile Island

In his 1990 book Human Error, Reason cites the 1979 accident at the Three Mile Island nuclear power plant in the US, where the fresh eyes of a supervisor on an incoming shift diagnosed the problem after colleagues on the previous shift were unable to diagnose it correctly.

Tip of the iceberg

To find more evidence of the incoming shift spotting the previous shift's errors, I sampled around 400 reports. One challenge was that many reports were irrelevant: issues such as a lack of available equipment, or dirty or broken machines.

But I found enough reports to provide evidence for my theory: nurses spotting errors while carrying out routine checks as part of their normal duties, or while checking on patients off their own bat. And when you consider that hospital incidents are vastly underreported (Billings, 1998, 'Incident reporting systems in medicine'), this could be the tip of the iceberg.

Design recommendations

So if nurses are good at spotting each other's errors when they walk round the wards, why not encourage them to do it more often? Why not increase staffing levels so that nurses can make ward rounds every hour or so? In the context of today's NHS trust budget deficits, this suggestion would probably not go down well.

Another suggestion would be to make it easier to spot errors by making information more 'in your face'. Nurses diagnose errors by comparing a patient's prescription (on a chart or in notes) with the electronic display on an infusion device. Could you make these 2 things more obvious so any mismatch stands out?
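To make that concrete, here's a toy sketch of the comparison a nurse is effectively doing - a hypothetical Python example; the numbers, field names and tolerance are my own inventions, not anything taken from the incident reports.

```python
# Hypothetical example: flag a mismatch between the prescribed rate
# and the rate actually programmed into the infusion device
prescribed_rate_ml_per_hr = 5.0    # from the patient's drug chart
programmed_rate_ml_per_hr = 50.0   # read from the pump's display

tolerance = 0.01  # allow for small rounding differences

if abs(prescribed_rate_ml_per_hr - programmed_rate_ml_per_hr) > tolerance:
    print("MISMATCH: check the pump against the prescription")
else:
    print("Rates match")
```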

Spotting programming errors - noticing an ongoing drug overdose and fixing it - is obviously not as good as preventing the error in the first place. But to prevent errors, you need to understand why they happen - and that's something you can't tell from these brief incident reports; you'd need to be on the wards, shadowing staff and interviewing them.

Robot drug dispensers

If the NHS had the time, money and good project management, it could automate the dispensing of drugs and get all its medical systems talking to each other - patient notes, prescription, hospital pharmacy, barcode on the medicine, infusion device - so that, in theory, there would be less room for error. It's not foolproof - automation might turn out to have bad knock-on effects of its own - but it might be worth a try.

Distilled dissertation

So this is my 16,000-word dissertation - a fascinating 3-month investigation last summer - distilled down. I've written it in as plain English as I could muster after a year of using HCI jargon.

I've missed out many stages of the research - it wasn't as quick and easy as I've made out. So here's the full dissertation: The detection of errors in infusion rates on infusion devices: an analysis of incident reports from the National Reporting and Learning System (NRLS).

You spent a whole 3 months proving that?

And you may be thinking: "This 'fresh eyes' theory and nurses spotting each other's errors - it's all common sense, isn't it? I could have told you that and saved you 3 months' work." Well, you're kind of right! But HCI research is often about proving one tiny thing that everybody takes for granted but no one has actually proved. So that's why I did it.
