Working Wider

Unexpected Expectations: Leadership Lessons from BP

On Christmas Day, the New York Times ran its most riveting story of the year. "Deepwater Horizon's Final Hours" has all the drama of a Hollywood action movie as it recounts the final nine minutes on BP's ill-fated rig.

In contrast to the coverage of the efforts to stop the ecological damage from the BP oil spill, where public outrage, political posturing and blame dominated the press, the story of what happened on the rig illustrates how crisis exposes the latent vulnerabilities that exist in all business operations. Those lessons can inform any business. By amplifying risk and compressing time, crisis draws out the best in people while highlighting our basic assumptions, particularly when it comes to our use of technology.

The Horizon crew devoted hundreds of hours to preparing for expected exceptions. They trained for blowouts coming up through the drill pipe because that is the known danger, but were caught flat-footed when the blowout erupted from the entire well opening. As one of the operators recounted, "I had no idea it could do what it did."

Surfacing Unexpected Exceptions

When I ask CEOs what they are worried about, most respond with specific issues and are quick to share their coping strategies. There are always a select few who tell me they're worried about what they don't see. It's a healthy paranoia that former Intel CEO Andy Grove captured in his watchful phrase, "only the paranoid survive."

Learning to hold a healthy sense of paranoia while simultaneously operating in the normal world is the leadership lesson at the heart of the Deepwater Horizon story. Three lessons stand out.

There are no foolproof systems – This is the recurring lesson of humanity and technology, from the Titanic and the space shuttle Challenger to the Wall Street collateralized debt implosion and now Deepwater Horizon. By design, systems are most capable of handling the exceptions we know or can imagine, which generally limits them to those we have already experienced.

The military teaches its leaders to go beyond their experience and assumptions. It uses war games to replicate "the fog of war." Rather than preparing leaders for every possible situation, war games create chaos, which soldiers later study in after-action reviews to see how they actually responded. Multiple iterations engrain the ability to create coping strategies that extend beyond known causes or situations.

Artificial intelligence is heading in the same direction.  Rather than trying to figure out how the brain works, AI research now focuses on machine learning:

AI researchers began to devise a raft of new techniques that were decidedly not modeled on human intelligence. By using probability-based algorithms to derive meaning from huge amounts of data, researchers discovered that they didn’t need to teach a computer how to accomplish a task; they could just show it what people did and let the machine figure out how to emulate that behavior under similar circumstances.  (Wired, December 2010)
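
To make the contrast concrete, here is a toy sketch of learning by example rather than by rule. It is not drawn from the Wired piece, and the situations, responses and the emulate function are invented for illustration: instead of hand-coding a rule for every case, the program tallies what operators actually did in similar past situations and emulates the most probable response.

from collections import Counter, defaultdict

# Hypothetical log of past situations and the responses operators chose.
# A rule-based system would hand-write "if pressure is high then..." logic;
# here the behavior is derived from the examples themselves.
history = [
    ({"pressure": "high", "flow": "rising"}, "shut_in_well"),
    ({"pressure": "high", "flow": "rising"}, "shut_in_well"),
    ({"pressure": "high", "flow": "steady"}, "run_second_test"),
    ({"pressure": "normal", "flow": "steady"}, "continue_operations"),
    ({"pressure": "normal", "flow": "rising"}, "run_second_test"),
]

# Count how often each response followed each situation.
counts = defaultdict(Counter)
for situation, response in history:
    counts[tuple(sorted(situation.items()))][response] += 1

def emulate(situation):
    """Return the most frequent past response to this situation, if any."""
    tallies = counts.get(tuple(sorted(situation.items())))
    if not tallies:
        return "no precedent: escalate to a human"
    return tallies.most_common(1)[0][0]

print(emulate({"pressure": "high", "flow": "rising"}))  # shut_in_well
print(emulate({"pressure": "low", "flow": "falling"}))  # no precedent: escalate to a human

The division of labor is the point: the programmer never enumerates the dangerous cases, the data does, which also means such a system is only as good as the situations it has already seen.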

Objectives Easily Corrupt Analysis – In preparing to plug the well, the crew of the Horizon conducted a "negative pressure" test. Designing and interpreting such a test is a combination of art and engineering; there are no industry standards. After the first test, BP's executives read the results in line with business objectives focused on saving money and time. The rig's engineers interpreted the results using technical criteria, were skeptical and wanted more data. Where one sits determines what one sees, more than we'd like to think.

They agreed to run a second test, after which they collectively concluded the well was stable. Following the disaster, outside investigators determined that this interpretation was badly mistaken. A lawyer for the presidential commission investigating the disaster said, "The question is why these experienced men out on that rig talked themselves into believing that this was a good test."

In my experience, it is the rare person who continues to hold out when facing a second or even third round of ambiguous data combined with the pressure to move a business forward. One can easily envision the Horizon team reviewing the second test's results with an eye to finding an answer that BP leaders could endorse rather than holding tight to what the data said. Letting business objectives color the interpretation is exactly what happened in the space shuttle Challenger O-ring decision.

Systems Designed for Human Use – Control systems, like government regulations, grow ever more complex with learning and experience. Each new mishap or threat adds a new alarm. Soon what was a reasonably simple system becomes cumbersome to use and overly intrusive.

As the din of individual alarms escalated on the Horizon, operators eventually silenced all of them just so they could think.  Similarly, completing some of the Horizon’s prescribed safety procedures required punching thirty buttons.

Intrusive complexity also impacts normal operations and maintenance. Before the BP oil spill began, the automatic general alarm had been set to manual to keep false alarms from waking sleeping crew members.

After the disaster, the chief counsel for the presidential commission described the well operators' handbook as "a safety expert's dream" yet struggled to answer a simple question: "How would you know it's bad enough to act fast?" This assumes that if one knew it was bad, one would act. The facts suggest this may not be so.

The business consequences of deploying the safety equipment were understandably high. Severing the platform from the well or activating the blowout shears carried such high economic and environmental consequences that people resisted engaging them. To a degree that is the nature of last-resort mechanisms, yet it was the delay and failure of those very mechanisms that made the situation worse. Contrast this with the actions of Chesley "Sully" Sullenberger, the pilot of the US Airways plane that landed in the Hudson River. He concerned himself singularly with landing the plane safely, not with minimizing damage to the aircraft.

Apply the Horizon Leadership Lessons

Motorcycle riders are taught one maxim: it's not whether you will go down, but when it will happen and whether you will be prepared.

Just before the holidays, Skype's phone service went down for twenty-four hours. As in the Horizon case, the outage was caused by a confluence of events: overloaded servers combined with a software bug, exacerbated by rebooting issues unique to peer-to-peer networks. In hindsight, the root cause could be clearly described; in real time, it was never anticipated.

Just as artificial intelligence has shifted from prediction to active learning, so should leaders. It's unrealistic to think that, in a world as complex and intertwined as ours, anyone can foresee every situation. Follow the military's lead: stage a strategic war game, conduct a disaster simulation or stress-test existing systems beyond their capacity. Teach leaders through interactive action learning and debriefs, and then design the best support systems you can.

As the old baseball pitcher Satchel Paige said, "It's not what you know that gets you, it's what you know that just ain't so."
