If it weren’t for the people…

A humanist approach to automation.

Editor’s note: At some point, we’ve all read the accounts in newspapers or on blogs that “human error” was responsible for a Twitter outage, or worse, a horrible accident. Automation is often hailed as the heroic answer, poised to eliminate the specter of human error. This guest post from Steven Shorrock, who will be delivering a keynote speech at Velocity in Barcelona, exposes “human error” as dangerous shorthand. The more nuanced way through involves systems thinking, marrying the complex fabric of humans and the machines we work with every day.

In Kurt Vonnegut’s dystopian novel ‘Player Piano’, automation has replaced most human labour. Anything that can be automated is automated. Ordinary people have been robbed of their work, and with it purpose, meaning and satisfaction, leaving the managers, scientists and engineers to run the show. Dr Paul Proteus is a top manager-engineer at the head of the Ilium Works. But Proteus, aware of the unfairness of the situation for the people on the other side of the river, becomes disillusioned with society and has a moral awakening. In the penultimate chapter, Paul and his best friend Finnerty, a brilliant young engineer turned rogue-rebel, reminisce sardonically: “If only it weren’t for the people, the goddamned people,” said Finnerty, “always getting tangled up in the machinery. If it weren’t for them, earth would be an engineer’s paradise.”

While the quote may seem to caricature the technophile engineer, it does contain a certain truth about our collective mindsets when it comes to people and systems. Our view is often that the system is basically safe, so long as the human works as imagined. When things go wrong, we have a seemingly innate human tendency to blame the person at the sharp end. We don’t seem to think of that someone – pilot, controller, train driver or surgeon – as a human being who goes to work to ensure things go right in a messy, complex, demanding and uncertain environment.

Our mindset seems to inform our attitude to automation, but that attitude – if it was ever valid – will be less valid in the future.

Human as Hazard and Human as Resource

The view of ‘human as hazard’ seems to be embedded in our traditional approach to safety management (see EUROCONTROL, 2013; Hollnagel, 2014), which Erik Hollnagel has characterized as Safety-I. It is not that this is necessarily a (conscious) mindset of those of us in safety management. Rather, it is how the human contribution is predominantly treated in our language and methods – as a source of failure (and, in fairness, as a source of recovery from failures, though this is much less prominent). Most of our safety vocabulary with regard to people is negative. In our narratives and methods, we talk of human error, violations, non-compliance and human hazard, among other terms. We routinely investigate things that go wrong, but almost never investigate things that go right.

This situation has emerged from a paradigm that defines safety in terms of avoiding things going wrong. It is also partly a by-product of the translation of hard engineering methods to sociotechnical systems and situations. As the American humanistic psychologist Abraham Maslow famously remarked in his book The Psychology of Science, “I suppose it is tempting, if the only tool you have is a hammer, to treat everything as if it were a nail.” If we only have words and tools to describe and analyze human failures, then human failures are all we will see. Yet this way of seeing is also a way of not seeing. What we do not see so clearly is when and how things go right.

It is not just the safety profession. It is, to an extent, management and all of society. At a societal level, we seem to accept a narrative that systems are basically safe as designed, but that people don’t use them as designed, and these blunders cause accidents. Hence the ubiquitous “Human error blamed for…” in newspaper headlines. From a ‘human as hazard’ perspective, it seems logical to automate humans out wherever possible. Where this is not possible, hard constraints would seem to make sense, limiting the degrees of freedom as much as possible and suppressing the opportunity to vary from work-as-designed.

An alternative view is that humans are a resource (or, for those who object to the term’s connotations, are resourceful). In this view, people are the only flexible part of the system and a source of system resilience. People give the system purpose and form interconnections to allow this purpose to be achieved. They have unique strengths, including creativity, a capacity to innovate, and an ability to adapt. As it is impossible to completely specify a sociotechnical system, it is humans – not automation – who must make the system work, anticipating, recognizing and responding to developments.

This view of the human in a safety management context seems to resonate with a more fundamental view of the human in management thinking more generally. Over 50 years ago, Douglas McGregor identified two mindsets regarding human motivation that shape management thinking: Theory X and Theory Y. Theory X holds that employees are inherently lazy, selfish and dislike work. The logical response to this mindset is command-and-control management, requiring conformity and obedience to processes designed by management, and a desire to automate whatever can be automated, because this removes a source of trouble.

The Theory Y mindset is that people need and want to work; they are ambitious and actively seek out responsibility. Given the right conditions, there is joy in work, and so work and play are not two distinct things. Rather than needing to be ‘motivated’ by managers, people are motivated by the work itself and the meaning, satisfaction and joy they get out of it. Importantly, humans are creative problem solvers.

Toward a humanistic and systems perspective

Two things seem to be certain for the future. The first is obvious: we will see more automation. The second is less obvious, but equally certain: Whatever mindset motivates the decision to automate, it will be necessary to move toward a more humanistic view of people that incorporates Hollnagel’s Human as Resource and McGregor’s Theory Y. For this view to prevail, we will need to reform our ideas about work away from command-and-control and towards a more humanistic and systems perspective.

It is inevitable that work with automation will not always be as designed or imagined. While part of the design philosophy may have sought to suppress human performance variability, humans must remain variable in operation. As well as the rare high-risk scenarios, there will be disturbances and surprises, and even routine situations will require human flexibility, creativity and adaptation. This does not call for technophobia, but humanistic and systems thinking. People will be key to making the system as a whole work.

We, the people

Finnerty’s exclamation raises an important question: who are the people? It seems that he was talking about people on the front line. But they are not the only people. We might think of four roles for the people in the system: system actors (e.g. front-line employees, customers), system experts/designers (e.g. engineers, human factors and human resources specialists), system decision makers (e.g. managers and purchasers), and system influencers (e.g. the public, regulators) (Dul et al., 2012). When automation goes wrong, it tangles up people in all roles; the system actors (front-line staff and customers) just pay the highest price. The responsibility for automation in the context of the system must therefore be shared among all of us, because automation does not exist just within the boundary of a ‘human-automation interaction’ between the controller or pilot and the machinery. Automation exists within a wider system. So how can we make sense of this?

Making sense of human work with automation

Our experiences with automation present us with some puzzling situations, and we often struggle to make sense of these from our different perspectives. For example, we might wonder why someone ‘ignored’ an alarm that seemed quite clear to us, or why they did not respond in the way that (we think) we would have responded. We might also wonder why someone would have purchased a particular system, or made a particular design decision, or trained users in a certain way. To make sense of these sorts of situations, and to ensure that things go right, we need to consider the overall system and all of our interactions with and influences on automation – not isolated individuals, parts, events or outcomes.

There are a variety of systems methods that can help us do this. The following are some tips from a recently published EUROCONTROL white paper, Systems Thinking for Safety: Ten Principles (EUROCONTROL, 2014).

  1. Involve the right people. The people who do the work are the specialists in their work and are critical for system improvement. When trying to make sense of situations and systems, who do we need to involve as co-investigators, co-designers, co-decision makers and co-learners?
  2. Listen to people’s stories and experiences. People do things that make sense to them given their goals, understanding of the situation and focus of attention at that time. How will we understand others’ (multiple) experiences with automation from their local perspectives?
  3. Reflect on your mindset, assumptions and language. People usually set out to do their best and achieve a good outcome. How can we move toward a mindset of openness, trust and fairness, understanding actions in context using non-judgmental and non-blaming language?
  4. Consider the demand on the system and the pressure this imposes. Demands and pressures relating to efficiency and capacity have a fundamental effect on performance. How can we understand demand and pressure over time from the perspectives of the relevant field experts, and how this affects their expectations and the system’s ability to respond?
  5. Investigate the adequacy of resources and the appropriateness of constraints. Success depends on adequate resources and appropriate constraints. How can we make sense of the effects of resources and constraints, on people and the system, including the ability to meet demand, the flow of work and system performance as a whole?
  6. Look at the flows of work, not isolated snapshots. Work progresses in flows of inter-related and interacting activities. How can we map the flows of work from end to end through the system, and the interactions between the human, technical, information, social, political, economic and organizational elements?
  7. Understand trade-offs. People have to apply trade-offs in order to resolve goal conflicts and to cope with the complexity of the system and the uncertainty of the environment. How can we best understand the trade-offs that all system stakeholders make when it comes to automation as demands, pressures, resources and constraints change – during design, development, operation and maintenance?
  8. Understand necessary adjustments and variability. Continual adjustments are necessary to cope with variability in demands and conditions, and performance of the same task or activity will vary. How can we get an understanding of performance adjustments and variability in normal operations as well as in unusual situations, over the short and longer term?
  9. Consider cascades and surprises. System behavior in complex systems is often emergent; it cannot be reduced to the behavior of components and is often not as expected. How can we get a picture of how our systems operate and interact in ways not expected or planned for during design and implementation, including surprises related to automation in use and how disturbances cascade through the system?
  10. Understand everyday work. Success and failure come from the same source – ordinary work. How can we best observe and discuss how ordinary work is actually done?

Conclusion

If it weren’t for the people, it is true that there would be no-one to get tangled up in the machinery. But if it weren’t for the people, there would be no system at all: no purpose, no demand, no performance. We need to reflect, then, on our mindsets about us, the people, about the systems we work with and within, and about how we will ensure that things go right.

References

Dul, J., Bruder, R., Buckle, P., Carayon, P., Falzon, P., Marras, W.S., Wilson, J.R., & van der Doelen, B. (2012). A strategy for human factors/ergonomics: Developing the discipline and profession. Ergonomics, 55(4), 377-395.

EUROCONTROL (2013). From Safety-I to Safety-II (A white paper). EUROCONTROL.

EUROCONTROL (2014). Systems Thinking for Safety: Ten Principles (A white paper). EUROCONTROL.

Hollnagel, E. (2014). Safety-I and Safety-II: The Past and Future of Safety Management. Ashgate.

Maslow, A. H. (1966). The Psychology of Science: A Reconnaissance. Gateway Editions.

McGregor, D. (1960). The Human Side of Enterprise. New York: McGraw-Hill.

Vonnegut, K. (1999). Player Piano. The Dial Press.


This originally appeared on Humanistic Systems as “If it weren’t for the people…” and has been published with permission. Photo: Pascal https://flic.kr/p/8M9DHN CC BY 2.0
