
II. Current practice and knowledge about the approach to quality in other areas of activity

Rony Brauman

Medical doctor, specialized in tropical medicine and epidemiology. Involved in humanitarian action since 1977, he has been on numerous missions, mainly in contexts of armed conflicts and IDP situations. President of Médecins sans Frontières from 1982 to 1994, he also teaches at the Humanitarian and Conflict Response Institute (HCRI) and is a regular contributor to Alternatives Economiques. He has published several books and articles, including "Guerres humanitaires ? Mensonges et intox" (Textuel, 2018), "La Médecine Humanitaire" (PUF, 2010), "Penser dans l'urgence" (Editions du Seuil, 2006) and "Utopies Sanitaires" (Editions Le Pommier, 2000).



Since 2006, Michèle Beck has worked with MSF in Niger, Chad, Jordan, Syria, Libya, Ivory Coast and Haiti. In 2014, she was MSF medical team leader in Gaza.


In her research, Adélaïde Nascimento has examined the area of patient safety, of which quality is a part.

Adélaïde NASCIMENTO - Ergonomics lecturer at the CNAM

The presentation (see attached PowerPoint slides) is entitled Taking care of quality. “Taking care” in the sense that quality needs to be looked after, discussed and debated. Quality criteria vary between individuals, between senior management and those in the field, and depending on the context and country.


My presentation will discuss the link between what we call “regulated quality”, meaning everything that is prescribed or driven by rules or standardisation, and so-called “managed quality”, which is more about what we’re doing, here and now, with a particular patient or in a particular country.

This relationship is on the agenda of research into patient safety, but it appears in other contexts as well. It also relates to human and organisational reliability, as we call it in the jargon. Beyond the human aspects, it relates to everything associated with work situations in either industry or services, because it’s the point at which two worlds meet. These two different worlds could be described as “cold” and “hot”. The “cold” world is the one where we try to develop general knowledge based on actual experience.

This type of knowledge is based on general phenomena, which are used to manage day-to-day activities and create a framework for practice, thus avoiding any undesirable deviations. All of this, which we’ll call “regulated quality” is fundamental and important. I don’t think anyone would disagree with that, based on the discussions we have just been having. Then we might want to ask ourselves the following questions: how is regulated quality actually used in practice? Is it effective, particularly for people in the field? And what is the relationship with another kind of quality, which we might call quality in action or managed quality?

This notion of quality belongs to the “hot” world. It is about arbitrating, based on people’s know-how. It encompasses the skills of those actually taking action, who have know-how, experience, a life story, and who take into account all these aspects when acting. This type of reflection is already well established in organisations such as hospitals, whether you look at an individual unit, a hospital or a clinic. It becomes more complex when you consider the realities of the areas in which you operate, in different countries, with different specialisms, and so on.

Today, the search for quality is based on quality standards and pervades all areas of society, from public policy to business, particularly but not only in the industrial contexts of mass production. It was in those contexts that standards first appeared, in the aftermath of World War II.

Quality processes involve a preparation phase followed by an implementation phase, with controls to improve quality. These are called quality circles. They began in industry but are no longer limited to industrial contexts. The logic of the quality process has been implemented everywhere there are high-risk situations, and medicine has certainly not been spared. It arrived in hospitals through accreditations and certifications.

These processes aim to define occupational standards, at both the international and local level. Their aim is to produce prescribed ways of working that are deemed acceptable by the people who develop and design them. All ways of working have their own logic.

Problems arise when they interact with each other. For example, if a WHO recommendation runs counter to social norms, i.e. the deontology that one believes needs to be followed as a professional and as a subject, it can lead to problems. We often find this kind of interference in various work-related situations.

The dream of any organiser and prescriber is that the worker will follow their instructions. That is to say, the prescriber has an interest in seeing all the rules, guidelines and requirements they have established followed. This is a rather naïve position, however, because it rests on the idea that both the contexts and the models of action for which the requirements were designed remain stable. Possible changes in context and individual variations are forgotten; consequently, such requirements cannot apply everywhere and at all times. People change: they can be tired or disillusioned, or they may have personal problems that influence the way they do their work.

This leads us to ergonomics and one of the field’s strong postulates: the existence of an irreducible gap between the work specified and the work actually carried out, i.e. a gap between what is prescribed by more senior figures and established rules, and actual work. Indeed, what is prescribed – everything that comes from standards, rules and protocols – does not cover every real situation. It does not take account of hazards. Things will happen that were not anticipated and for which the prescribed is a dead end. So local solutions have to be developed in response.

The prescribed can be counterproductive, or even more dangerous than if it had not been applied. It is in these situations that we rely on people’s intelligence to understand how to adapt the rule to the local context. One example is the work-to-rule: if everyone tries to apply the rules strictly, the system is paralysed, as has been demonstrated in various areas. The prescribed can also clash with the subject’s values. The latter is not a simple operative carrying out tasks, but a practitioner engaging in activities. In ergonomics, we draw a distinction between the task requested – the formal requirement – and the activity, which is what people actually do in their working environment, using their know-how.

Subjects therefore make a whole range of choices, based on the prescribed, to decide on the activity they actually need to carry out. They decide what is essential and what is less so. They prioritise which criteria should be considered, criteria that involve not only the patient, but also protecting themselves, their values and their personal professional ethics. We need to recognise that behind every example of care, behind the quality of care, there are people. It is men and women at work who deliver care. We mustn’t overlook this. There are aspects related to people’s skills, but also to their subjectivity.

For ergonomics specialists, activity includes both the actual work, i.e. what people really do in practice, which is not necessarily what they have been asked to do, and the reality of work. The latter includes what they have done and everything they have been unable to do. Day-to-day work activities therefore include multiple obstacles, which can be organisational or the result of conscious choices. I cannot do everything, so I make compromises and leave some things to one side. It is important to be aware that leaving things to one side can sometimes be to the detriment of other, very important things. This needs to be taken into account.

So, what are the links between these notions of activity and quality in the strict sense of the term? How do these notions affect the notion of quality? In ergonomics, we believe that men and women aim for quality, as has been demonstrated by research in the field of social sciences. When they do a piece of work, they try to make it a high-quality piece of work according to their way of working. They therefore have their own criteria, either in the way they work or the result they are trying to achieve. We saw the same thing in the different notions of quality that emerged from the interviews. Not everything that is taken into account is explicit in quality, but everyone has their own criteria.


Studies done about work in various areas show that difficulties at work often arise from conflicts between quality criteria. What is being asked for contradicts what the individual concerned believes should be done.

Michèle’s report included an example that illustrates the difficulty associated with this conflict. Talking about an Ebola intervention, the interviewee said, “I was annoyed when I took part in a discussion about monitoring a woman with diabetes where the doctors were discussing the fact that they were forbidden from taking blood samples from her and giving her insulin. Either she had to do it herself or nobody would. It was out of the question to give injections with a risk of AEB (Accidental Exposure to Blood), which was absurd, because we’ve always done transfusions in Donka. I never had the impression that we were restricted when it came to transfusions. It was the project coordinator who said no blood samples could be taken or insulin given. It really annoyed the two expatriate doctors, who decided to ignore it.”

We are therefore getting right to the heart of differences in quality criteria. Some are focused on the safety of doctors, while others, and the doctors themselves, are focused on caring for the patient. What that means is that they look at the patient as a whole. It’s important for them to offer care and they don’t believe there are that many risks, since they’ve already dealt with them elsewhere. The other elements that come into play are issues of competence, which strengthen the doctors’ position.

There are two possible outcomes in these situations, where prescribed norms conflict with personal or professional values. Either the person acts in accordance with the prescribed norms but experiences ethical distress. The term ‘distress’ is very strong, but it implies that people will hold themselves back from taking action because the organisation’s ideology or line management will not allow them to do otherwise. Or they breach the prescribed norms, which implies running certain risks. Some of these risks are personal: will I be criticised for doing this? Am I going to lose my position or my job? But there are also risks for other people. We heard the most extreme example a little while ago: “You’re going to kill the patient if you do that.” In that situation, it will inevitably be more difficult to make a decision.

It is therefore understandable that the choice will depend on the dynamic and the organisational set-up in which the person concerned is working. Depending on the context, they may or may not be able to break or adapt rules, or have the flexibility to discuss what can and cannot be breached. Organisational questions and quality are therefore inextricably linked. Operating in isolated silos is impossible, because they have an impact on people’s practice.

‘Care’ therefore has a role to play here: it is studied in the human sciences as a set of material, technical and interpersonal activities that involves offering a tangible response to the needs of others. I think that is pretty much the core of what you do at MSF.

There is therefore a multitude of quality standards. The criticism we could level at the definitions in these standards is precisely that they do not take into account the role of people in their relationship to work. Quality, whether it is expressed in AFNOR or ISO standards, is about ensuring customer and patient satisfaction. The WHO definition of quality of care is guaranteeing the patient the best treatment at the lowest cost, to ensure their satisfaction in terms of procedures, results and human contact. That seems to me to be associated with ‘care’ rather than simply focusing on ‘cure’. From an ergonomics point of view, what is lacking in these definitions is the place occupied by the quality of work according to the person doing it. In other words, what is valued as a job well done in the eyes of workers as well as customers and patients. It is what might be defended by the person doing the work; completing it contributes to making it meaningful. People produce or try to produce high-quality work, which does not necessarily run counter to pre-established norms. What I want you to realise is that the act of caring can be viewed as a collective process, which is ultimately aimed at the patient.

We are seeing more and more research encouraging us to take patients’ participation into account when looking at the quality of care. Care quality processes can be approached not only from the perspective of the results obtained and resources mobilised, but also by looking at the dynamics of the actors involved in the action.

From my point of view, this is the question that needs to be asked and which you are already asking thanks to days like this one: what does high-quality work at MSF represent for different groups of people?


I am going to use the example of radiotherapy, which I studied for my PhD, to illustrate the points we have just examined. We are therefore in the context of a state-of-the-art French hospital.

In this example, people deal with hazards and manage conflicting standards as part of a “cross-disciplinary” community. There are several professionals involved in radiotherapy, from the radiotherapist who prescribes the dose of radiation, to the various professionals involved in preparing the treatment and finally, to the technicians who administer the dose and are in contact with patients. On average, a patient attends 15 treatment sessions.

My interest in the field was prompted by a request following the accident in Epinal, also known as the “radiation overdoses” case. We wanted to gain a better understanding of what was happening during these treatments. Are people working in radiotherapy simply careless and lacking in professionalism? Do they have a safety culture?

My research examined the points of view of a range of professionals but today I want to concentrate on the example of the technicians, who work at the end of the line. My starting point was understanding what they meant by the term ‘quality’. Two key objectives emerged from their view of quality.

On the one hand, it was about delivering day-to-day care by not cancelling sessions. This was to ensure that the treatments – in the context of cancer patients – were effective. They wanted to avoid any cancellations because each session is important.

On the other, the ‘care’ was important to them. As patients had to travel to a specific appointment, they couldn’t upset them by simply saying, “Your session isn’t going to run today.”

They therefore take the patient’s point of view into account, while ensuring that the treatment will not hurt them. The treatment needs to meet safety criteria, i.e. providing the right treatment, in the right place, at the right time. Normally, these objectives should not contradict each other.

In a perfect world that meets the prescribed norms, they would be able to satisfy both objectives at the same time. But because, in reality, there are all kinds of hazards, we will see that they cannot always do both and that they have to make choices and trade-offs.

During the observations carried out at two hospitals in the Paris region, unforeseen events occurred at treatment stations either as a result of external factors, or of errors made by the technicians themselves. One of the first insights was that, although accidents take place at the end of the chain, they can also be the result of organisational failures all along the treatment chain. As a result, the simplistic approach of only looking at the technicians, because they are the people who press the button and deliver the radiation, makes no sense. On the contrary, because they are at the end of the line, they regularly find themselves in situations where they have to manage conflicts between the two objectives they have established as defining quality.

a) Making judgements between conflicting objectives

The situation I saw most commonly was one of conflict between regulations and practice. The regulations state that radiotherapy may only be delivered after the radiotherapist and the medical physicist who determined the dose have approved it in the patient’s file. This obligation is also supported by good professional practice guidelines.

Yet the file often arrives at the treatment station without having been validated either by the radiotherapist or the physicist. The patient has arrived for his appointment and is waiting. The technicians have two choices: either continue with the session in the interest of patient care and the effectiveness of their cancer treatment, which has to take place daily; or cancel the session because they’re not certain they have the right treatment, which could put the patient at risk. Presenting the situation in this way may make it seem like a binary choice, but that is how they experience these conflicts and how they resolve them based on their own quality criteria and experience.

In practice, they will try to get the doctors’ signatures. But in cases where they are unable to correct the situation, they will have to make a decision, even a medical decision, and may be criticised for it because it falls outside their remit.

These judgements will be situated in time, i.e. each technician will make her decision based on her own experience and knowledge. This knowledge relates to patients: the phase of treatment they have reached and their behaviour during the treatment, if it is their first session. But they also have knowledge about the behaviours of radiotherapists and doctors, for example: radiotherapist X usually trusts doctor Y; if the doctor has signed but the radiotherapist hasn’t, the treatment can go ahead. Together, they create a set of meta-rules to determine what compromises to make in respect of risk.

These rules vary from one hospital to another, because the expected task can also vary between them. The expected task is the behaviour expected of the technicians when hazards arise, even if it is not set out in writing. In one institution it might be to carry on with the session regardless; in another, quite the opposite. Practices can therefore vary widely. It is also often said that these judgements are accepted as long as there are no accidents. It works, but if there is an accident, there will be an inspection. This will highlight the fact that the technician took the decision to carry out the treatment without a doctor’s signature. But no one will question the fact that the doctors don’t sign the files.

In light of these observations, we asked ourselves about regulation. If some things are not acceptable, what are the regulatory spaces where the decisions about flexibility are made? What is acceptable and what isn’t? What falls under practices that should be embedded for the long term, and what should not?

Again, as part of my research, I gave scenarios involving hazards and judgements to 14 professionals – radiotherapists, physicists, dosimetrists and technicians – and asked them for their opinion on the trade-offs made.

The first thing to emerge was that not all situations are unacceptable. There is a degree of flexibility that means some judgements will be accepted, even if the rule has been broken.

The second thing to note is that acceptance will vary depending on the actor’s position in the organisation. As a result, a technician who has direct, daily contact with patients will be less likely to accept certain practices than doctors, who will find it easier to accept a degree of deviation.

Diverse practices and diverse assessments of practices therefore co-exist. Consequently, it is important to discuss quality criteria and the conflicts between criteria. A multi-disciplinary approach is even more important when these criteria vary depending on the trades. The aim is then to delimit what is acceptable and what is not.

b) Workplace spaces of discussion

What can we do to establish such spaces? In the radiotherapy context, we brought together different professions and used real case studies as a starting point.

The first rule of the discussion is that it should be equipped so that it can focus on work and its reality. It must not be abstract, but grounded in reality with an account, photos, video or a situation. For example, we do not accept statements such as “Normally that’s how it should be done.” We want our starting point to be the reality of people’s experience and find out how they decided to approach a particular issue. It needs to be outcome-focused: otherwise the approach will remain at the discussion stage, with no tangible effects.

Another important rule is that the approach must be participatory. Different people at different levels of line management need to be given the opportunity to speak. That implies a conversation based on respect, without passing judgement, which must allow everyone the chance to express themselves. It is important to avoid those who have power within the organisation systematically taking the floor. A positive atmosphere is needed so that people also feel able to talk about errors, failures and situations where they would have liked to act differently. Talking about these situations provides an opportunity to assess whether the actions taken are in line with the organisation and peers.

A set of rules for the discussions therefore needs to be established, so that they maintain an operational focus. They need to take place frequently as part of a long-term approach. By operational focus, we mean arriving not at a consensus, but a majority position that will focus on action. The role of the facilitator will be both to guarantee the principle of discussion within the group, but also to decide whether the proposal is applicable or not.

To illustrate what we have just seen, we will take the example of multidisciplinary medical concertation meetings. These meetings occur when evidence-based medicine does not apply in certain cases and a medical protocol needs to be developed. Each doctor will bring their own ideas for the protocol depending on their area of specialisation. The meeting provides an opportunity to share individual practices, to expand the field of possibilities through collective practice.

Not everything within the field of possibilities is necessarily acceptable. The discussion can therefore be used to dismiss certain practices that it would not be desirable to perpetuate within the group or the organisation. Conversely, an outline of acceptable practices will be established. Positions will therefore be debated and discussed by what is deemed to be a relevant group of people. The latter will have the authority to make decisions and enable flexible practices within the scope of “acceptable” actions. It therefore leaves room for individual actions and retaining a personal “style” of practice, which is still endorsed by the peer group and the organisation.


To end this presentation, I would like to return to the topic of responsibility for decision-making: who takes the decision? What is the decision about? Who is responsible for it?

In ergonomics and management disciplines, we suggest using the principle of subsidiarity to respond to these questions. This principle defines the distribution of powers within a community. It comes from public policy and the idea is to identify the most pertinent level for action. This means not burdening the upper levels of the hierarchy with tasks that can be resolved by people working further down the organisation.

It assumes that participants have the power to act to resolve the situations under discussion and that, if local resources are inadequate, the discussion groups provide a way of communicating with other decision-making areas. A discussion group decides what can and cannot be agreed at its level.

Subsidiarity relies on compliance with three principles:
- the principle of competence, which prevents a higher level from carrying out any task that could be done by a lower level;
- the principle of support: conversely, the higher level is obliged to take on tasks that the lower level is unable to do;
- the principle of substitution, which prohibits the upper level from offloading the tasks for which it is responsible.

In itself, it is rather theoretical but I think it is open for discussion. It works in public policy and we are starting to see it emerge in some business contexts. The aim is to free up management, at different levels, from tasks and decision-making that are seen as time-consuming, and which could easily be the responsibility of people who are closer to the reality on the ground.

To conclude, let’s look at the example of a colleague’s intervention on risk management, in a large organisation in the electricity sector.

He began by setting up spaces of discussion in the workplace to break what is known as “organisational silence”. This is a frequent situation in high-risk organisations, where workers tend to hide errors so that they are not forwarded up the line to the highest levels. Practices of this kind prevent feedback, which is why organisational silence needs to be addressed. On the contrary, it is by talking about real life that people become aware of constraints and try to find solutions that reflect reality, rather than being modelled on an ideal world. He set up discussion groups in which electricity workers discussed, with a local manager, high-risk situations they had experienced in the field. He also implemented this approach at different levels of the organisation.

The workers themselves brought problematic situations to the discussion groups, using photos they had taken in the field. They discussed them between themselves and with managers in practical terms, to try to resolve problems at their own level. If they were unable to do so, the problem was escalated to the next level of management above the local managers.


Laurent SURY - Emergency Desk Manager

Where prescribed norms conflict with reality, the person concerned has two choices: either they obey and are ill at ease, or they object and do what they think is right, but run a risk. You said that this choice was dependent on the people around them, the association and the organisation. Are these the only factors, or do the individual’s personal convictions also come into play? I’m thinking about their conviction that the decision is taken in good faith and therefore justifiable, or that the person will have others to back them up, and can rally other people to support their position.

Stéphane Roques

The principles of “managed quality” and “regulated quality” echo all the questions we have been asking ourselves, not only in terms of medical quality but also in operational terms. I think there is a benefit in regulating a number of simple processes, so that the majority of situations can be handled as simply as possible, without asking for additional efforts from the teams. As far as scope for acceptable practice is concerned, how did you pursue the discussion and research? If we consider the practice is acceptable, why does it not, after a period of time, enter into the realm of regulated quality? In your example, it becomes acceptable only to have one signature, so one could imagine that the next time there will be no signatures. How do you manage this tension between acceptable practice and regulated quality?


Adélaïde Nascimento

Of course. From our perspective, it is impossible to dissociate the organisation from individual acts. Even if there are subjective aspects, associated with how people react, they still operate within social environments where the organisation of work will be important. For example, if an organisation works on a more punitive basis, that will have an influence on how people act and the distress they feel in acting this way. Conversely, if the organisation is more open, and gives people room for manoeuvre, with discussions and permission to make mistakes, it will support learning and adaptation to the reality of work. Everyone makes errors at every level of line management, in all areas of work: there’s absolutely no doubt about that. The question is: how do we view errors and failures and how can we talk about them? The organisation has an important role to play in responding to this question.

Fabrice WEISSMAN - Member of CRASH

Are there organisational conditions that favour one or the other?


Adélaïde Nascimento

This topic has been discussed in hospitals but it comes up in the literature too. If it’s acceptable and if it’s what people do, why doesn’t it become a rule? The whole point of this reflection is to say that rules need to come from the field, from how people actually work. That is to say, everything begins with spaces known in sociology as “hot regulation spaces”, i.e. how the person reacts here and now. Then the observation moves into a “cold” regulatory space, which brings together people who think through and design the rules. So, the process should start with actual constraints and practices, and the rules should come from there. The disadvantage is the risk of inflation of the body of regulations. Everything ends up being written down, planned and turned into rules, with the negative effects we have seen in a number of situations. “People know; we don’t need to create an indicator to feed the information back,” particularly if it is already recognised as good practice or an inherent part of the job.

These inherent elements don’t need to be written down: people who do the job are aware of its history and know the rules. As a result, they don’t necessarily have to be prescribed. It’s a debate with advantages and disadvantages, but it’s important to be vigilant so that prescribed standards don’t leach into areas that are unnecessary. In the hospital example, the connection was made by “normalising” the deviation in some sense. They accepted that the doctors didn’t have the time to sign and worked from the principle that if the doctor could speak to the radiotherapist on the phone to confirm the dose of radiation, a box could be ticked on the file. That meant the technicians were no longer responsible. As far as the relationship between “regulated” and “managed” is concerned, we are still asking ourselves all kinds of questions: how do you join them up, and at what doses? I don’t think there is a fixed, universal relationship. It’s a social construction, based on the context in which you are operating. The relationship in the nuclear sector won’t be the same as what we might expect in MSF.

Maurice Nègre – Field doctor

I’d like to return to the human aspect. There are different ways of knowing how to disobey, depending on the individual’s personality and experience. It’s also why we created the support departments: to provide help and support, not to make rules. Perhaps that’s what we should be reflecting on today: how to give confidence to people, staff whose competence is based on their qualifications, and have support departments that are capable of saying, “If ever you’re not sure, or afraid that you’re not complying with evidence-based medicine or the ‘rule’, we’re there to help you. We’re not there to control you.” I think it’s important for us to work on how we help people in the field to feel more confident.

Léon Salumu – Programme Manager

We’d love to give people the autonomy to adapt the rules we put in place. But another question arises about the higher levels of the hierarchy: do they always tend to monitor, control and evaluate on the basis of compliance with prescribed standards?

Adélaïde Nascimento

What comes out a lot from the evaluation questions is that it’s the results that are evaluated, without taking the resources into consideration. People are more interested in the final result than in knowing how people have approached it and what constraints they’ve encountered. At the Ministry of Public Finance, for example, they have a performance indicator which states that staff must be able to answer a call from a customer within three rings. So, people don’t let the phone ring more than three times. But the quality of the response offered during the conversation isn’t taken into account. Avoiding this kind of absurdity means taking into account not only the results but also the means of achieving them. It’s about looking for the reasons why a member of staff doesn’t manage to achieve the results their organisation expects.

Emmanuella – Anaesthetist

In your experience, are there mechanisms that help workers themselves to become aware of unacceptable practices and apply self-assessment mechanisms to tackle their own problems?

Adélaïde Nascimento
In practice, we all make mistakes, several times a day, in different contexts. And we pick up errors all the time, either consciously or unconsciously. The rate at which errors are picked up by the person who has actually made them is very significant. The other link in picking up errors is the immediate community. A member of staff may have made an error on a particular aspect without realising it, but their colleague notices it and corrects it when they pick up the file. So, a lot of errors are “rescued” by the people who are directly involved, without people at other levels of the hierarchy being aware of it. Errors that are not picked up can result in an accident. If that happens, people will try to understand the reasons why through experiential feedback. At that point, external experts will judge whether the practices concerned are adequate or not.

Michèle Beck

I would like to add that the tool we now use on our surgical projects for experiential feedback after an accident is the mortality review. It’s used to identify errors, evaluate the situation and review the factors that led to the accident, in this case the “non-natural” death of a patient. That’s what encourages us to review our practices and improve them.



A sociology researcher at the Institute for Radiological Protection and Nuclear Safety, Christine Fassert began her training as an ergonomics specialist. She then worked for several years in the area of incident sharing and sharing information on the relationship between what is “regulated” and what is “managed”. She subsequently wrote her dissertation in sociology on the concept of transparency in high-risk organisations and the notion of trust. She is now a sociologist at the University of Paris I Panthéon-Sorbonne.

Christine FASSERT

To begin with, I’d like to return to the idea mentioned earlier, that an organisation cannot function without there being a relationship between what is “regulated” and what is “managed”. This is entirely accepted and documented in numerous research papers and articles. If an organisation decides that everything has to be regulated, it doesn’t work. In official communications for the general public, however, these organisations – whether they are in the aeronautical, nuclear or even the medical sector – cannot convey this message. It is extremely difficult to say anything other than “Everything is supervised, everything is regulated, trust us, because we have everything under control.” Indeed, while we recognise that people working in the field have room for manoeuvre, it is not easy to explain what, why and how.


To start off, I’m going to talk to you about air-traffic control. I studied this field for my PhD, by comparing several air-traffic control centres in different European countries. The point that this environment and MSF have in common is that until the 1990s, this was a relatively informal area. That may seem contradictory compared with the image of a highly regulated organisation that one might have of the aeronautical sector.

Regulation was important for aircraft, but not for air-traffic controllers. The air-traffic controller’s job was to give instructions to aircraft using radar in order to avoid in-flight collisions, but it was not a particularly formal system: the controller analysed the situation and made a decision. There was such a wide variety of contexts, in terms of aircraft performance and route configurations, for example, that there were some broad principles but no procedures as such. Learning was based on peer-to-peer support, with no hierarchical structure. The novice controller began on a simulator but quickly moved into the control room and spoke directly to the aircraft. They remained under the supervision of a senior controller and learned through real situations. After about three years, the control team decided when the novice could take their qualification. Once again, there was no hierarchical control of the process.

Then, around the 2000s, the European agency decided to introduce more formalised, standardised practices with a view to creating a “single European sky”. The diversity of practices in different countries was brought sharply into focus. To show what’s at stake when we talk about compliance and non-compliance in relatively informal contexts, let’s take the example of Italy. For a long time, controllers worked with paper strips, on which they recorded the situation and noted down the instructions given, based on flight plans and radar data. But this information was not recorded in the system. As a result, the decision was taken to move to a fully computerised system, and the strips became electronic. When I arrived in the control centre, the computer system had been in place for some months. But the written procedure required the controllers to continue filling in their paper strips for some time, as there was no certainty that the computer system was entirely reliable. The problem for the controllers was that they could not do both things at the same time, because of their workload. They tried to explain this to their line management, but the latter thought it was largely a problem of unwillingness.

My observations led me to the same conclusion as the controllers: filling in both systems was simply not realistic. This conclusion created feelings of bitterness towards their line managers among the controllers: “They’re allowing the dichotomy to persist between the official procedure and what we’re actually doing. They know how we work, but they’re closing their eyes to it because they think they can cover their backs if there’s an accident or radar breakdown by saying we didn’t follow the procedure.” In the second example, we’re going to look at non-compliance with the standard for separation between aircraft. The standard for separation between two aircraft depends on the quality of radar systems. In European airspace, it is five nautical miles horizontally and 1,000 feet vertically. Instances of non-compliance with the standard obviously occur. The controller may interpret the situation wrongly, and the aircraft pass just below the normal standard. It is important to realise that the controllers’ relationship with the standards varies from one country to another and depends largely on the volume of traffic. The more aircraft a controller has to control, the more they risk making an error that will result in a loss of separation. Obviously, this can lead to an in-flight collision.
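As an aside, the separation minima described above can be expressed as a simple check. The sketch below is purely illustrative (the function and constant names are invented, not real air-traffic-control software); its one substantive point, which the transcript implies, is that a loss of separation occurs only when both minima are infringed at the same time.

```python
# Illustrative sketch of the European radar separation minima described
# above: 5 nautical miles horizontally, 1,000 feet vertically.
# Names are invented for illustration; this is not real ATC software.

H_SEP_NM = 5.0      # minimum horizontal separation, in nautical miles
V_SEP_FT = 1000.0   # minimum vertical separation, in feet

def loss_of_separation(horizontal_nm: float, vertical_ft: float) -> bool:
    """Return True if BOTH minima are infringed simultaneously.

    Two aircraft may legitimately be closer than 5 NM horizontally,
    provided they are at least 1,000 ft apart vertically, and vice versa.
    """
    return horizontal_nm < H_SEP_NM and vertical_ft < V_SEP_FT

# 3 NM apart but 2,000 ft of vertical spacing: still compliant.
print(loss_of_separation(3.0, 2000.0))   # False
# 4 NM apart with only 500 ft of vertical spacing: loss of separation.
print(loss_of_separation(4.0, 500.0))    # True
```

Note that the check is binary, which is precisely what the controllers quoted later in this discussion object to: it says nothing about whether the controller was still in control of the situation when the minima were briefly infringed.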

There have only been a few in-flight collisions in the history of air-traffic control, but it is the most feared accident because there are no survivors. This is why the standards are so precise. Yet instances of non-compliance occur, either because of workload or for a variety of other reasons. In a single centre, there are also people described as ‘cowboys’, who are less obedient, take more risks and are often older than the rest. We’re back with the question of competence, self-confidence and so on. It is also interesting to see the relationship between standards and incidents.

The International Air Transport Association defines an incident in very vague terms: “An incident is anything that could have led to an accident”. Typically, a loss of separation is an incident, and a controller is supposed to declare it. It is also important to realise that only the controller and their colleagues in the room are aware of the event. Some countries have automatic surveillance systems, but in most cases it is the controller who decides whether or not to declare the incident. As a consequence, notification practices vary enormously. In some countries, following an incident, the controller is suspended while an investigation is carried out. Not only do controllers experience this as particularly humiliating, they also lose a bonus based on the number of times they speak on the radio. Eurocontrol wanted to end this type of practice to ensure better notification, but was hampered by national legal systems, which took the view that controllers would no longer do their work correctly if sanctions were removed. During my research, many controllers described official incident notification as a binary system: either the standard was complied with and safety was guaranteed, or it was not. In their view, the reality was much more complex. The separation standard might be lost for a few seconds while the controller remained in control of the situation, because they were monitoring the aircraft closely and traffic was limited.

In other situations, there was compliance with the standard, but the controller realised they had forgotten an aircraft and lost control of the situation more generally. Their workload kept increasing and they realised that their voice was trembling. In their view, that was an incident. In Sweden, the official definition of an incident has been replaced by “events we can learn from”. These events are discussed in groups with no line managers present. The situations are described in the form of an account, without any taboos, to make visible the full variability of the situations controllers might encounter. The aim is to avoid being caught out by a situation, by learning from a colleague’s experience.

The following example comes from France. A feature of air-traffic control is that radar images are saved for 24 hours, in case an accident occurs. Similarly, images can be saved if there is an incident. A local safety committee was set up to review certain situations. The radar images are projected onto a large screen to relive the loss of separation, and the group then analyses them in a second stage. The controller explains openly why they lost control of the situation. The reasons may not be flattering, for example if a trainee was left alone. But when it comes to stating the reason in a written report, it is often standardised so that it fits a particular category, and all the valuable details of the account are lost. Those further up the hierarchy, who are concerned with flight safety across the whole of France, only receive a list of causes that do not describe the situation in detail, a problem they complain about.

Another interesting aspect of local safety committees is viewing the incident as a whole. The effect produced is a sort of ritual, in which the group frightens itself by reliving the situation. The important part is the reflection, asking oneself the question, “Why didn’t I follow the standard? What happened? Is it defensible?” It includes the idea of accountability, i.e. being able to account for one’s actions to the team and the group. I will conclude this illustration of air-traffic control by saying that the link between standards, risks and incidents varies widely from one place to another. This is why I feel quite conflicted when people talk about transferring good practices from one area to another. Practices are highly context-dependent, both culturally and organisationally, which is why something that may work very well in one place won’t be appropriate in another. The Swedish example works in Sweden, because it aligns with a culture of transparency. But it wouldn’t work in Italy, where there is still a very punitive model.


Moving to an entirely different area, we are now going to look at formal risk analysis in the nuclear sector. Specifically, we will be putting ourselves in the shoes of the technician, who carries out a risk analysis before starting work on a pump or valve, etc.

The request made to the IRSN (Institute for Radiological Protection and Nuclear Safety) was to investigate the reasons why technicians, in certain situations and despite a risk having been identified, decide to continue with the task after all, causing an incident. I began my research by meeting the central authority, which manages the national infrastructure.

Their position was that risk analysis should be based on asking a set of questions before entering certain parts of the plant. Individuals should ask themselves these questions to contextualise the task: what are the risks, what is the condition of the plant, what impact will the task have on the state of the valves, etc. For the next stage, I went into the field. In all the plants I visited, risk analysis was formalised in writing, in a dedicated file. The approach recommended has become a written procedure, which involves answering various questions.

Although the document is not obligatory, the teams complete it systematically: they want to be able to prove that they have asked themselves the right questions in case of an internal audit. When I met the people who actually do the job on the ground, they admitted that they sometimes completed the document only for the sake of having done it. Some would like to dare to leave it blank for no-risk tasks, but that is difficult; only people with more experience dare to do so. In some cases, we even saw people copying and pasting. Yet a risk analysis should be driven by the context: even if work was done on the same valve a month earlier, the new task is not necessarily identical.

The context of the plant is not the same. Central management wanted to avoid bureaucratic behaviour, but this is what had happened. It is also important to take into account that risk analysis is part of a broader system. This means there are lots of other documents to complete, for example the job sheet and a whole heap of administrative paperwork from various sources. As one project manager told me, “There are so many things to do, files, filling out paperwork, that sometimes you could forget to do the actual job once you get to the room.”

To conclude my presentation: in an organisation, I think it is important to understand, examine and assess everything that is “managed” and not “regulated”. But the “managed” aspect is becoming increasingly difficult to study in high-risk industries, which are gradually closing their doors to external scrutiny, even by researchers. It is not always easy to admit to what is “managed”, and it remains intrinsically rather opaque, because it is close to the reality on the ground. If we are going to understand it, people in the field need to trust us, at least to some extent. But the IRSN remains the armed wing of the nuclear safety authority, what journalists sometimes call the “nuclear policeman”. Transparency is therefore a difficult stance to maintain.

To conclude, we need to reflect on the possibility of talking publicly about the mechanisms for managing more complex and less easily handled risks than the usual discourse of command and control. It is what we call “sayability”. For how long can we maintain the viability of these managed aspects, when a control system or regulatory authority will struggle to take them into account in its assessment? One of the characteristics of MSF is that it only has an internal control body, which makes it easier to talk about these “managed” aspects.


Pierre MENDIHARAT - Deputy Operations Director

You said that collisions were extremely rare in the history of aviation, which would suggest that the method seems to work, whether it’s Swedish or Italian?
It’s worth being clear that the results are as good as they are because practices are analysed on the basis of feedback and discussions, even if these don’t take place officially.

Christine FASSERT

I wouldn’t say that all methods work, but nor would I say the opposite. What I mean is that it can be complicated to assess how an incident might lead to an accident. We used to think that assessing safety meant counting incidents. But that became counter-productive: the more incident figures were held against an organisation, the less inclined controllers were to report them. We might have thought that safety was guaranteed, when in fact it was just the opposite. You are quite right to emphasise the impact of feedback. In Italy in particular, I noticed that the coffee break was typically the moment when informal conversations took place. Of course, for Eurocontrol, this unofficial aspect was not acceptable. That’s where the limitations and distortions of audits become apparent: some things are not admissible, because the audit won’t be satisfied with them. Sweden found a good compromise in its semi-formal approach. It didn’t take place during the coffee break, but there was no written record and line management was not present. A consultant in human and organisational factors was nevertheless present, to help the controllers express what they had experienced, analyse it and build an organisational picture, but without putting anything in writing or attempting to categorise it.

David OLSON - Deputy Medical Director

I’m very pleased that you talked about “good practices”. Invoking them is an authoritative way of positioning oneself in relation to the teams: these are best practices, and therefore not debatable. In our field, the same applies to what we call evidence-based medicine, which always needs to be analysed in a specific context. In all our sections, we have a notification system for medical errors that is supposed to go up to the Deputy Operations Director. I think the examples you have given are excellent, because there, errors are discussed in the field and decisions are taken in the field, and everyone acknowledges that there are human errors at every level. In the end, they went a lot further than we have managed with our system of reporting medical errors.

Christine FASSERT

Indeed, but for certain errors or incidents, it’s important not to stop at group-level discussions. If the situation involves problems at a systemic or organisational level, the discussions need to reach the next level in the hierarchy so that the necessary changes can be made. This is typically the type of situation where it’s important to learn from an incident by taking action: one might realise, for example, that an incident ultimately occurred because a radar was not properly adjusted. But for most incidents and experiences, simply talking about them and describing them in a group is very effective.

A participant

What struck me in the presentation was the example about the difficulty of standardising what is “managed” in the Italian line management system. They know which aspects are managed, but won’t standardise them because of a lack of confidence. And that’s not dissimilar to Stéphane’s question just now, i.e. how does one move from “managed” to “regulated”? The answer was that people didn’t want too many standards. But here we can see that there’s also a difficulty of not feeling ready to standardise. Ultimately, in the same way that a relatively inexperienced person is not ready to move from regulated to managed, the organisation, if it’s less well-established or working in a new environment, doesn’t feel ready to regulate what it has previously managed.

Christine FASSERT

It’s fair to say that there needs to be a minimum level of experience to be able to standardise, because the standard’s legitimacy is derived from a base that doesn’t just come from nowhere. In the Italian example, the lack of confidence in the new computer system meant that managers were reluctant to abandon the use of paper strips. That created a contradiction for the teams, which caused them distress at work.


I was very struck by the fairly remote relationship between behaviours and consequences in the Italian/Swedish comparison. Ultimately, it’s safety behaviours, equipment, how that equipment is adjusted and the frequency of flights that make air travel safe. It’s therefore very difficult to measure safety on the basis of accidents that, although absolutely catastrophic, are extremely rare.

As a consequence, I was wondering whether understanding behaviours should include some kind of aspiration towards evaluation. Should we not try to understand the social and human costs that come with a particular level of safety?

We know, for example, that time off work for illness correlates directly with frustration, tension and stress in the workplace. We also know that the frequency with which people change jobs is an indicator of how they feel about their work. Looking beyond the mega-incident of an in-flight collision and everything that follows, there are also less dramatic ways of seeing what the effects of good practice might be.

Christine FASSERT

Are you talking about a way of assessing organisation, not in terms of results, i.e. incidents and accidents, but looking further upstream, at the atmosphere at work?


Indeed, because the limitation of the analogy with the question of air safety is both the extreme rarity and enormity of the risk incurred. In what we do, for example, it is not always a question of life or death, but about being around patients more, and paying more attention to results. These things are more nuanced than a black-or-white result. It also relates to how comfortable people feel at work and how much pleasure they feel in what they do. It’s what I call the social or human cost of practices.

Christine FASSERT

Certainly, those are elements that need a more qualitative assessment. The human cost or well-being at work are relatively subjective notions, which it is difficult to translate into more tangible or quantitative costs without losing a great deal.

The problem is that, at the moment, the trend is towards transforming these assessments into more tangible, more quantitative elements, and the consequence is losing sight of things that should be assessed.