The protective security industry receives more scrutiny – and heaps more criticism – than many others. Go ahead and Google “bodyguard” and look for the news. Sure, we can say we don’t like the bodyguard word and that what Google finds doesn’t represent what we really do. We can complain that it’s one-sided and biased. Whatever. It’s still what most people outside of the industry see. And guess what? Not much of the news is good. Most of it’s pretty bad. Some of it makes us look like idiots.
Perhaps because of this, we in the protection industry are quick to give each other a pat on the back whenever a security person does something well. Or at least seems to. Maybe we’re so used to being looked down on that we’ll grab any opportunity to compliment each other when things are looking up – like when protective security folks occasionally get praise for what we do and some positive press.
Now, don’t get us wrong. We’re all for giving credit where credit is due. But the facts of the matter (and some of the dirty little secrets of the security industry) are these:
- Protective security programs and personnel are rarely in focus unless something goes wrong.
- Most days, nothing much goes wrong – at least nothing that people outside the industry, or even many of us inside it, ever see.
- It’s easy to conclude that because nothing goes wrong, security is good enough.
- Just because it’s easy to conclude something doesn’t make it right.
- A lot of security has never been tested, so we don’t actually know if it’s effective or not.
Correlation does not imply causation – not even in the security industry
Any scientist will tell you that correlation does not imply causation. Just because two things seem to be related, it doesn’t mean that one of them caused the other. This sloppy thinking won’t get you far in a research lab, but it can work for a while in the security industry. It can also get some laughs on the internet.
Sorry, but the correlation between “nothing happened to the principal” and “security was good enough” is just as spurious.
It might be true that security prevented threats from reaching the principal. It might also be true that the principal remained safe despite security, not because of it. You can’t really know unless you test it.
“Good enough” security is simply security that hasn’t yet been tested. It might look good, but most people – including the people who pay for our services – can’t distinguish “good enough” from “poor” when it comes to protective security.
Effective security teams constantly challenge themselves to remain one step ahead of what we must assume to be a capable and adaptive foe. Effective practical security is what we strive to attain and maintain. “Good enough” really isn’t good enough.
Way to go! I think…
The success of a security program should not be judged solely by the apparent outcome of an incident. We must also evaluate the paths taken to reach that outcome. This understanding is extremely valuable: it’s what differentiates stagnancy from progression, good enough from effective, and cookie-cutter from best-in-class.
To illustrate this, let’s examine a case from our own world. Some years back, we investigated an incident where a Principal was being driven by a security driver and a car accident was narrowly avoided.
At first glance, we saw this: another car blew a red light, and our security driver’s lightning-fast reaction (and ABS brakes) prevented a serious collision. Our driver was duly praised and deserved it: without his quick reaction, an accident would surely have occurred. But should that be the end of the story?
We investigated, as we do when there is any serious incident involving the Principal’s safety. It’s when programs and people get stress-tested that we can learn, after all, not when everything is running on autopilot. Here’s what we found:
- Both the driver and the Agent In Charge (AIC) were surprised when the car ran the light and was suddenly in front of them. A collision seemed imminent.
- Neither the driver nor the AIC considered that the incident might be the first stage of an Attack on the Principal (AOP). Instead, they reacted as if the incident was “just” a potential traffic accident.
- No emergency procedures were initiated once the accident was avoided.
At this stage, it started to look as if there might be a gap in the team’s security procedures, so we kept digging.
The key person in this investigation was the driver, not the AIC. He was the one who first saw the oncoming car, and he was the one who prevented the accident. When asked why he did not think the incident was an AOP, he answered that in retrospect and upon reflection, it might have been. But in real time, as the incident unfolded, he noticed that the driver of the other car was looking straight ahead and NOT at the Principal’s vehicle. In addition, the driver of the third-party vehicle appeared shaken and genuinely shocked at his error and narrow escape from a serious accident.
There were many valuable lessons to be learned here by asking more questions:
- Why did the AIC not transition into emergency mode? It was, after all, a known route to a known and frequently visited location.
- Why did the driver not think this was an AOP? What would have made him think it was an AOP?
Our conclusion was that the driver assessed correctly but that the AIC froze up and didn’t even consider the possibility of an AOP. Not good. We needed to ask more questions:
- Were such situations covered in any of the team’s training sessions? If so, where and when?
- Did we have sufficient SOPs for such incidents? Were our drivers and agents familiar with them and trained to perform accordingly?
- How could we disseminate this valuable lesson learned to other security drivers and protective agents within the organization?
We concluded that the team’s basic training had not thoroughly covered all relevant possible methods of attack. They weren’t getting enough tactical drills or ongoing training, either. The team’s emergency procedures were not robust enough, hence the lack of transition to emergency mode. And although the AIC did not react, he was actually not at fault: he had not been provided the tools, know-how, or support to enable success in such situations.
These gaps were immediately rectified across the team. But they would never have been identified without the after-incident process.
Lesson learned from the lessons learned
We believe there are several takeaways here for protective security teams and for all of us in the industry.
For one thing, we can’t pat ourselves on the back just because nothing happens to the principal. To keep complacency at bay we need to test ourselves, challenge ourselves, red team ourselves. Good enough security is only good enough until it isn’t. Unless we’re proactively evaluating tactical readiness, we just don’t know how ready we are.
For another, we need to dedicate ourselves to continual improvement and career-long learning. As we’ve said many times before, in executive protection, you’re only as good as your last detail. And if you don’t learn as you go, that detail might be your last. Like it or not, we simply have to learn from every incident that disrupts the status quo.
Whether we’re protecting guests at an event, people at a fixed site, or individuals on the move, every incident should be investigated rapidly and professionally. Every incident must be taken seriously and treated as a learning opportunity – for individuals, teams and all of us in the industry. Otherwise we don’t really know the difference between operational readiness and dumb luck.