Air Disasters and Problem Solving

I don’t watch a lot of television, but one of my guilty pleasures is the Smithsonian Channel’s Air Disasters, a series of 45-minute documentaries about airline crash investigations. I know, as a regular flyer of commercial aviation, I probably shouldn’t do this to myself, but so far it hasn’t interfered with my enjoyment of flying. What’s interested me most has been the problem-solving process. There is a technical communication connection, I promise, but if even thinking about plane crashes bothers you, feel free to read another post.

The Show Format

Using a combination of live-action actors and realistic animations or graphics, the show usually starts with an airline disaster (or near-disaster), teasing what might have happened before going into the full narrative. The stories are based on official reports and interviews with some of the participants, including investigators from the U.S. National Transportation Safety Board (NTSB) or similar agencies (if the incident occurred outside the United States), as well as pilots, passengers, or engineers. You usually have a pretty good idea whether the plane and its occupants survived based on whether the interviewees were on the plane or not.

Starting with the incident itself, the show then follows the investigators as they are brought in, examining what they found and the challenges they encountered in trying to determine what happened. The show always concludes on a positive note, explaining what was learned and how air safety improved after the featured incident.

The Process

What intrigues me the most is the process that NTSB and other related agencies go through in investigating a crash (or near-crash). They start with the evidence at hand, which might be obvious, such as video of an engine flameout captured on the nightly news, or it might be baffling, such as a plane falling out of the sky on a clear day and crashing into the ocean with no discernible cause.

Usually, if they have an obvious situation on their hands, the investigators will start there. Whether or not this actually happens in the real world, the show depicts them sharing a central office with a dry-erase board, where they write down their list of theories about what happened. In any case, before any blame is assigned, the investigators' first task is to determine the what before they delve into the why: the root cause.

An example might be: The plane crashed because the engine flamed out. Why did the engine flame out? Because it ingested a flock of birds near the end of the runway. Why were birds in the area? Because there were no deterrents, such as sirens or sound effects, to keep them away. And so forth. They keep digging into the "why" until they have a full sequence of events. In the Six Sigma world, this is known as the "Five Whys" technique: you keep asking why, typically up to five times, until you reach the root cause.
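If it helps to see the technique laid out mechanically, here is a minimal sketch (the event wording and helper function are my own illustration, not anything from an official investigation) that records the bird-strike example above as a chain of cause-and-effect pairs:

```python
# Hypothetical "Five Whys" chain from the bird-strike example:
# each entry pairs an observation with the answer to "why did that happen?"
whys = [
    ("The plane crashed", "the engine flamed out"),
    ("The engine flamed out", "it ingested a flock of birds near the runway"),
    ("Birds were near the runway", "there were no deterrents to keep them away"),
]

def root_cause(chain):
    """The final 'because' in the chain is the candidate root cause."""
    return chain[-1][1]

for observation, because in whys:
    print(f"{observation} -- why? Because {because}.")
print("Root cause:", root_cause(whys))
```

The point of stopping after a handful of whys is practical: eventually the answers move from fixable process gaps (no bird deterrents) to things outside anyone's control.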

Aviation is very much a systems-oriented activity. You’ve got aircraft design and manufacturing (Boeing, Airbus, etc.); legal and regulatory regimes; flight operations (pilots, the planes they fly, and the training they receive); ground operations (fuel and baggage handling, aircraft maintenance); airport systems (ground controllers, signage, lighting, emergency vehicles); air traffic control; weather forecasting; customer service and safety (flight attendants); and human/personal dynamics among these different disciplines, just to name a few. Sometimes all of these individual factors can play a role in an airline mishap or crash.

The investigators will go through their process, looking at each contributing factor, and try to determine which element(s) played the greatest role. In the engineering world, participants often use a "fishbone" (Ishikawa) diagram, which lays out the possible paths to the root cause.

Once the investigators settle on the how and why, they make an effort to identify the "holes" in the system that allowed the incident to occur, and they make recommendations to the industry to prevent similar incidents in the future. This open sharing of causes and effects benefits the entire industry. In this way, the aviation business is much like the scientific community, where information and insight are shared freely for the benefit of all.

Applications for the Technical Communicator

Not everyone gets involved in engineering failure investigations. However, sometimes there will be "debriefs" if a business process goes awry, such as a proposal going out the door late or hundreds of brochures being printed with incorrect information on them. In many of these cases, the "Five Whys" approach is a great way to address underlying problems that lead to other, seemingly unconnected failures. The challenge, of course, is to focus on the process and the activities that didn't work rather than on the personalities or blame involved. That will happen, of course; but for the sake of preventing future problems, it's worth taking a factual approach first and seeing which processes can be made better. Bottom line: when a major failure does occur, you and your organization have an excellent opportunity to dig deeply into your processes and prevent future occurrences.

Copyright secured by Digiprove © 2019 Bart Leahy

About Bart Leahy

Freelance Technical Writer, Science Cheerleader Event & Membership Director, and an all-around nice guy. Here to help.
