Doctors carry a patient infected with Ebola
Rony Brauman

Medical doctor, specialized in tropical medicine and epidemiology. Involved in humanitarian action since 1977, he has been on numerous missions, mainly in contexts of armed conflicts and IDP situations. President of Médecins sans Frontières from 1982 to 1994, he also teaches at the Humanitarian and Conflict Response Institute (HCRI) and is a regular contributor to Alternatives Economiques. He has published several books and articles, including "Guerres humanitaires ? Mensonges et Intox" (Textuel, 2018), "La Médecine Humanitaire" (PUF, 2010), "Penser dans l'urgence" (Editions du Seuil, 2006) and "Utopies Sanitaires" (Editions Le Pommier, 2000).

Michèle Beck

Nurse

Since 2006, Michèle Beck has worked with MSF in Niger, Chad, Jordan, Syria, Libya, Ivory Coast and Haiti. In 2014, she was MSF medical team leader in Gaza.


III. Possible directions for quality policy at MSF

Michèle BECK

We are getting to the final part of the day, which aims to identify the possible directions that have emerged from the multitude of points addressed. In the three discussions, we identified three angles from which to approach medical quality. The first is the patient level – how do we take patient satisfaction into account? The second is the collective level, in relation to standards: what is the relationship between what is “regulated” and what is “managed”? What spaces of discussion are currently available within MSF? Finally, again at the collective level: what are the implications for how work is organised? How can a strategy of decentralising some decisions to the field incorporate the relationship with quality?

Brigitte VASSET - Deputy Medical Director

One mechanism that would help us to improve the level of quality we deliver would be to ensure better reporting of medical errors up to head office. We need to avoid the idea that medical errors are immediately synonymous with “sanctions”, as is the case today. They shouldn’t fall within the jurisdiction of the HR or legal departments, as they currently do. These errors would help us improve our processes, procedures and working methods so that we can avoid making the same mistakes in the future. Being able to talk about them would be advantageous for the field teams, but also for other missions, by sharing information and improving our organisation. It isn’t easy, because we need to move away from the idea of sanctions, unless the error is deliberate. But in that case, it’s called a “fault”. We all make errors, so it’s essential that they help us to learn, so that we don’t make them again. It’s the same approach as the morbidity-mortality reviews we hold in our surgical programmes.

Rony BRAUMAN

Indeed, we all make errors, every day. The best way of avoiding them in the future is not sanctions, but talking about them. And I would add that one of the ways of ensuring an organisation functions properly is making time for discussions, whether that’s at head office or in the field. Collectively and institutionally, it should be a normal part of the way any group of people working together operates. The discussion groups should be an opportunity to talk about problems, obstacles, queries and questions. They are not necessarily followed up with practical actions, but can be if necessary. It seems to me to be a simple way of working that would help us adjust our actions and address problems more effectively.

Michèle BECK

To build on that, I would add that the right to make a mistake, and above all analysing what happened through discussions within the team, would help people to identify all the factors that came into play in the mechanism that led to the error. We also need to look at this more broadly and not just from a medical point of view. A process that doesn’t work, or that doesn’t achieve the expected results, has an impact right across the board. Xavier Lassalle gave me the example of a mortality review in the operating theatre. The team realised that mortality was high because the patients who were sent to them were already dying. The problem was not how they were treated in the operating theatre, but the triage system in the emergency room.

Omari BETH - Field coordinator

There’s one question we haven’t answered today, namely how do we measure quality and what are the indicators that should be systematically used in our programmes? Whatever project you are setting up, you have to set yourself objectives and decide on indicators so that you can evaluate and monitor the programme. If the indicators are not the right ones, you can change them. But we don’t have a more formalised process for managing missions.

Carine TESSE - Field coordinator

A lot of people already complain about the paperwork and I’m not keen on the idea of more indicators. Above all, we need visibility. We produce an enormous amount of data, but they are never discussed in the team and we never know what the purpose of collecting all this information is. Even discussing the data with the coordination teams and the desks would help us explain what’s happening on our project.

Alfatih OSMAN SULIMAN - Medical coordinator

The issue is knowing whether they’re the right indicators or not. They should be aligned with our objectives so that after a period of time, we know whether they’ve been achieved or not. Do we have an external quality control mechanism? If you look at the size of MSF, I think it’s time to have a dedicated quality department, which would supervise and assess the quality of our programmes. Some organisations do it with quality control teams that carry out surveys.

Rony BRAUMAN

I don’t think it’s a good idea. One of the reasons is that, compared with a bank or the automotive industry, we don’t have any clear products to offer. There are different assessments, different ideas about what one can expect from a quality assessment: the quality of a programme, care or processes, patient satisfaction, our capacity to improve over time or adjust to unforeseen events, or assessing the unplanned side effects of projects. All of that is part of what we have to call “quality”. It is more or less pertinent depending on the timing of a project, or the part of a project one is examining. The idea of this workshop is not to come up with ideas for quality indicators. The bureaucratic burden is already quite significant, so adding more summary or partial indicators is not the right solution. Conversely, the aim is to encourage more reflection among the operational staff who are involved in taking decisions on a day-to-day basis, by shedding light on the problems associated with quality and the different levels at which we understand quality.

Maya FEHLING - OCG and OCA quality adviser

Indicators are useful only if we share them with the whole team, i.e. logistics, the hygiene committee, etc. We all want to improve the quality of care for patients and the whole team is involved. It must therefore get a return for its efforts. I also don’t believe one can assess oneself. It needs an external perspective with an objective examination of methods. An MSF operational centre could call on another centre. It could be done between sections, provided it didn’t involve pointing the finger at our failings: we all have them and we all face the same difficulties in the challenging conditions we operate in. It would be another opportunity to learn from each other and save us from making the same mistakes.

Fabrice WEISSMAN

One of the problems of assessing quality is knowing which benchmark to use. And the question of benchmarks is essential, because an audit measures deviation from the norm. The whole problem we face is which standard to follow and what degree of deviation can be tolerated. An assessment-based approach is inappropriate for responding to the quality problems MSF faces. Conversely, an approach based on acting in uncertainty, as we saw in the radiotherapy groups and other professions, seems to me much more pertinent. The Swedish air-traffic controllers talk about putting under scrutiny events people can learn from. That immediately broadens the field and means there’s no need to wait for an accident before we ask ourselves questions. Moreover, who can judge the quality of a project? The medical department? The operations department, bearing in mind that they each have different benchmarks? Or the patient too? We have completely ignored the patient’s point of view in our discussions. We ourselves make a judgement, on the patient’s behalf, as to whether the caring relationship is good, whether the waiting time is acceptable or whether the therapeutic aims are appropriate. It’s an area that still has a lot of room for improvement.

David OLSON - Deputy Medical Director

The best point of view for assessing quality is out in the field. It’s the people in the field who can see whether things are going well or not. They’re the ones who should be given the tools so that they can assess quality and make improvements. And it’s down to us to make sure we offer them the support they need to gather the right data, so that they can draw the right conclusions and act accordingly.

Michèle BECK

To conclude, and to pick up on what David has just said, in all the literature on quality, the conclusion is that the best people for assessing quality are those closest to the action. Today, that effectively means the people in the field. It’s probably at that level that this whole system of continuous quality improvement should be taking place.

In the MSF library we have, in PDF format, G. Maguerez’s L’amélioration rapide de la qualité dans les établissements sanitaires et médico-sociaux (Presses de l’EHESP, 2005), a book that turns all top-down assessments and indicators on their head. It advocates continuous quality improvement based on a fast, time-limited method, in which the main actors are the people in the field. Problems are identified and indicators – often subjective indicators – are defined by the group. Improvement initiatives are then implemented. Moreover, the author comments that the simple fact of monitoring a situation often led to spontaneous improvement. But it does imply very little standardisation between different areas and little control from head office.