The aviation industry has always been at the forefront of crisis management, and for good reason: when things go wrong, the impact can be devastating. Standards in the industry are so high compared to others that, over the past thirty years, the response to critical events has been codified.
But the crash of Ethiopian Airlines flight ET302, bound from Addis Ababa to Nairobi, on March 10th suddenly created a crisis for two other industry players: Boeing, the manufacturer of the B737 Max, and the US Federal Aviation Administration, which certified the plane.
The background on the birth of the B737 Max, what it means to Boeing, and the events that followed the crash are well explained in this article published by the Financial Times.
And as we have seen too often, highly visible crises give crisis management and communications consultants a great opportunity to express their views and pass judgement on the actual handling of the crisis, often criticizing decisions taken while enjoying the enviable comfort of their seats and benefitting from 48 to 72 hours of hindsight – at times with little understanding of the industry involved.
I personally have no interest in this kind of exercise. As a crisis management advisor who has worked on a major airline crash (SAS flight SK686 from Milan to Copenhagen, October 8, 2001) and with the aviation industry for the past 20 years, I am interested in trying to understand – with the limited information I have – the context that nurtured the crisis management decision-making process in Seattle and what, if anything, we can learn from it. Because as Thom Fladung, managing partner of Hennes Communications in Cleveland, Ohio, put it to me:
“effective crisis communications and crisis management best practices and learnings apply across industries, disciplines and situations”.
In this article I attempt to explore some of the issues as they relate to this specific event. You will find more questions than answers and hopefully they will stimulate reflection.
The problem of “codified responses” and “established culture”
As I mentioned, over the past 30 years the aviation industry has codified its responses to incidents and accidents. As John Bailey, a long-term crisis management consultant to the aviation industry, pointed out to me in an email exchange:
“the industry is permeated by a «let’s wait for the facts first» culture, particularly in engineering-led companies like Boeing. While the industry has learned the hard way about the need to adapt crisis response plans to the «always on» media environment, the role of the manufacturer after an accident remains the same: to offer technical expertise and support to the investigation and ensure that any safety-related recommendations are communicated quickly to customers and operators”.
John’s comments are interesting for two reasons: first, they highlight the role “culture” plays in crisis management, a line of thinking long preached by French crisis expert Patrick Lagadec. Second, they point to the concept of “role” in the context of what I referred to as “codified responses”.
To what degree, then, did the industry “culture” and Boeing’s own “corporate culture” influence the crisis management decision-making process in Seattle and the associated communications? And to what degree did “codified responses” blind the team and create cognitive biases that ultimately led the company to evaluate the scenario the way it did?
Real-time unfolding events, real-time answers
This «let’s wait for the facts first» culture is not only perfectly understandable but born out of both experience and the engineering nature of the aviation industry (mind you, Apple is no different). Yet the fact that an unfolding crisis can be followed online in real time by a global audience creates a huge challenge and a disconnect. Because, as John puts it, “it has created an expectation that the parties directly involved will be able to share information and offer solutions almost instantaneously”.
Crises, as we know, are further compounded by a company’s inability (or unwillingness) to respond in a timely manner to stakeholders’ expectations for both information and action. This quickly plays into the “perception” dimension of a crisis, which in turn shapes a “new” reality.
This dynamic clashes with another well-established industry practice: the investigation. Investigations are also codified, extremely complex and time-consuming – completing even a preliminary report can take close to six months, and sometimes years pass before a definitive report is issued. “By then”, says John,
“the damage to brand and reputation may already have been done, regardless of the outcome”.
Yet things have been changing: under the pressure of public opinion and stakeholders, it took a little over one week – long before any technical report was filed – for French prosecutors to determine that the 2015 crash of Germanwings flight 9525 in the French Alps had been a deliberate act by the co-pilot. It should therefore come as no surprise that Ethiopian investigators have announced that a preliminary report is to be published within a month.
The dynamic of real-time audiences demanding real-time answers presents exceptional challenges regardless of the industry. Could it therefore be argued that initial statements today play an even more crucial role in maintaining trust and credibility in the long run? And, if so, that the assumption of responsibility and caution (in words, not in timing) are both elements of an effective response?
Access to technology and data has added complexity to contemporary crisis management and adds risk to the «let’s wait for the facts first» culture, which until yesterday was actually a safeguard for the aviation and other industries. “Data is readily available”, says Barbara Kracht, a former director of communications at Airbus and now a partner in BHK Crisis Communications, an aviation crisis communications consultancy.
“Dedicated websites such as FlightRadar24, FlightAware, AeroInside, AircraftFleets, to name just a few, lay bare information like aircraft age and history, and even flight data such as altitude, speed, vertical speed, etc., which becomes an incredible source for speculation. While giving an initial clue, even to investigators, they represent a new and tremendous challenge for all parties involved”.
The data provided by these websites is available to the public. On the one hand, this means “amateur investigators” and “citizen journalists” are in a position to analyse the available data and draw conclusions from it, either identifying real facts or putting forth theories that contribute to “information smog” and potentially feed “fake news” made viral by competitive 24/7 traditional media outlets. On the other, it greatly increases the speed at which information is collated from multiple sources and analysed.
One more thing to consider here. According to various media outlets, satellite data gathered from the Ethiopian Airlines flight and evidence from the crash site showed similarities with Lion Air’s JT610 accident of October 29, 2018. Just a few years ago this kind of information would not have been available.
Technology and electronic surveillance add a new dimension, and a new set of challenges, to both crisis management and crisis communications. More information is available faster. The timeframe for decision making gets ever more compressed. Information smog is sticky and difficult to address.
How then do we manage such complexity? Which tools and resources are available to crisis management teams and crisis communications professionals?
Framing the crisis
“Leaders have one crucial task at the start of a disaster in the making, and that is to use the art of framing to describe the nature of the problem the organization is facing. Frames shape the way we think about problems (and also opportunities). They tell us what category of problem we are dealing with, and because they identify a type of problem, they also contain the seeds of action and response”.
Although framing is a useful “technique”, and one that certainly should be looked at, it remains a tool, and as such it has one shortcoming: its effectiveness. That, I would argue, depends on the cognitive biases, corporate culture, levels of automatism and emotional stress present in the crisis management team at the moment the “frame” is created. The risk I see is that these factors may very well lead the team, or its leader, to frame a crisis in a specific way – which does not necessarily mean it is framing it in the “correct” way.
But the discussion about framing helps explore another dimension: crisis preparedness. Is the final objective of crisis preparedness to set up organizational structures, response mechanisms and procedures, and to ensure they can be activated promptly? Or is it more important to instill in crisis management teams and top managers a “crisis sensitive” culture that leads them to ask the right questions at the right time? To execute plans discussed in times of peace, or to question existing notions in an effort to “think out of a box” that no longer exists, as Patrick Lagadec would put it?
“in reality a crisis situation is characterized by a fundamental uncertainty: a lack of information associated with cognitive biases in a universe that is highly dynamic and uncertain”.
Cognitive biases, they write, alter our reasoning, and different kinds of biases play a role in the crisis management decision-making process. Some may be related to past experience, to industry codified responses, or to trust in and reliance on available information; others to excessive focus, an ego-centric culture (it cannot happen to us), a sense of infallibility (we cannot be wrong), excessive confidence, subordinate syndrome, or management team dynamics that lead certain “personalities” to overshadow or undermine others.
How can we prepare for cognitive bias and train crisis management teams effectively? And to what degree did cognitive biases affect Boeing’s crisis management team’s ability to see the crisis in its full dimension and navigate it effectively?
Crisis Decision Check-Points
The early stages of a crisis are, as we all know, critical. But, argues Sebastien Hogan, a crisis consultant based in Oslo, in this very interesting article, “crisis teams usually get into a focused and automatic mode that consistently leads to errors. Behavioral research clearly shows that designing decision-checkpoints into a process can effectively nudge us out of automatic mode and that 2-3 min is often enough to engage our critical thinking”.
Sebastien has developed a simple yet, I believe, extremely effective “decision making compass”: a six-point matrix against which the crisis management team regularly tests its decisions.
“Getting people out of automatic mode is the main objective of the checkpoint, but making sure they speak up is also important. So a good degree of psychological safety on the team is also important for this to work” he argues.
This is an interesting idea. Can checkpoints be developed to both get the team out of “automatic mode” and safeguard against “cognitive bias”?
Defining moments: the “what if” question
In crisis management, decisions must constantly be taken as scenarios are projected, unfold and are re-examined. However, it is one pivotal decision, a single defining moment in the early stages of the crisis, that sets the course. Once that decision is taken, the direction of the crisis is set and the clock cannot be turned back.
In a long Facebook post Patrick Lagadec, who has followed the Boeing crisis since the onset, suggests that
“in the case of the Ethiopian Airlines catastrophe, the first crisis management priority should have been to realize this was not an «aviation crisis». The aircraft, both in terms of its aerodynamics and its piloting, was sufficiently «new» to make the standard fundamentals inapplicable. The crash came a short time after another crash and developed in similar circumstances: these apparent elements lead us to think along the lines of a «series of events» rather than of unrelated events”.
This, he argues, meant the event should not have been managed according to the “codified response” model but should have been approached at a strategic level. I couldn’t agree with him more. Except that, by the time the second plane crashed, it was already too late.
The defining moment for Boeing came (and passed) with the Lion Air crash near Jakarta. It is at this time that Boeing should have asked itself the “what if” question, and that question should have been: “what if we have another crash with the same aircraft?”
How would the crisis have unfolded if Boeing had asked that question on October 29th, 2018 and approached the crisis at a strategic level from the outset? What prevented the company (culture, procedures, personalities, cognitive biases…) from seeing the potential risk ahead?
To sum it up…
What do you think?