The Human Layer: Why Your Biggest Security Risk Still Has a Pulse

I sat on a panel at SecureWorld Boston this week called “The Human Layer: Insider Risk, Social Engineering & Behavioral Analytics.” The questions were sharp, the audience was engaged, and a few of the conversations stuck with me enough that I wanted to put them down here in longer form.

We’re Measuring the Wrong Thing

The panel opened with a big question: Are we losing the war on social engineering? My answer was that we’re asking the wrong question entirely. The real problem is that we’re measuring success against a threat that no longer exists.

Phishing simulation click rates have gone down. That’s real. But we trained people to spot generic lures, and now we’re patting ourselves on the back while AI-generated spear phishing (personalized, contextually aware, delivered across multiple channels at once) looks nothing like those simulations. Our metrics show improvement. Our actual exposure is increasing. That gap should terrify us.

Training still raises the floor. I’m not saying burn it all down. But if your security posture depends on every employee making the right call every time, you’ve already accepted a level of risk you probably haven’t disclosed to your board.

And here’s the part that makes people uncomfortable: if we update our simulations to reflect the real sophistication of current attacks, failure rates will go back up. That’s a hard conversation to have with leadership. It’s also a necessary one.

Behavioral Analytics: Useful Tool or Expensive Shelfware?

Behavioral analytics came up next, and I think this is a space where the gap between “technically deployed” and “operationally useful” is enormous.

The concept is sound. You establish what “normal” looks like for a user (what systems they access, when, from where, in what sequence) and then surface meaningful deviations from that baseline. Behavior often changes before any technical indicator fires, making this one of the best early-warning tools we have for compromised accounts and insider risk.
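
To make that concrete, here's a minimal sketch of the baselining idea in Python. The event fields, the notion of an "unusual hour," and the scoring are illustrative assumptions on my part, not any vendor's API; real UEBA platforms use far richer statistical models, but the shape is the same.

```python
from collections import defaultdict
from dataclasses import dataclass, field

@dataclass
class Baseline:
    systems: set = field(default_factory=set)  # systems this user normally touches
    hours: set = field(default_factory=set)    # hours of day this user is normally active

def build_baselines(events):
    """events: iterable of dicts like {'user': ..., 'system': ..., 'hour': ...}."""
    baselines = defaultdict(Baseline)
    for e in events:
        b = baselines[e["user"]]
        b.systems.add(e["system"])
        b.hours.add(e["hour"])
    return baselines

def deviation_score(event, baseline):
    """Crude score: +1 for a never-before-seen system, +1 for an unusual hour."""
    score = 0
    if event["system"] not in baseline.systems:
        score += 1
    if event["hour"] not in baseline.hours:
        score += 1
    return score

history = [
    {"user": "alice", "system": "crm", "hour": 10},
    {"user": "alice", "system": "crm", "hour": 14},
]
baselines = build_baselines(history)

# A 2 a.m. touch on a system alice has never accessed scores 2: worth a look.
print(deviation_score({"user": "alice", "system": "payroll-db", "hour": 2},
                      baselines["alice"]))
```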

But the failure mode I’ve seen over and over again looks like this: an organization deploys a User and Entity Behavior Analytics (UEBA) platform, generates hundreds of alerts a week, and has no defined response process for what happens when one fires. The tool becomes shelfware. The team drowns in signals with no playbook. That’s surveillance theater: technically deployed, operationally useless.

The success pattern is almost the opposite. Narrow scope. Two or three high-risk scenarios: mass data exfiltration before a resignation, privileged access outside business hours, and access to data a user has never touched before. Fewer alerts, higher fidelity, a defined response chain. The team actually acts on what the tool surfaces. Scope is everything.
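
Here's roughly what that narrow pattern can look like expressed as code. Every field name, threshold, and response step below is hypothetical; the point is the shape: a few explicit rules, each paired with a defined response, instead of an open-ended anomaly feed.

```python
# A sketch of the narrow-scope pattern. Assumes each event dict carries
# all the fields the rules reference; names and thresholds are illustrative.

RULES = [
    {
        "name": "mass_exfil_before_resignation",
        "match": lambda e: e["bytes_out"] > 5_000_000_000 and e["resignation_filed"],
        "respond": "Page insider-risk on-call; preserve the session for review.",
    },
    {
        "name": "privileged_access_off_hours",
        "match": lambda e: e["is_privileged"] and not 8 <= e["hour"] <= 18,
        "respond": "Require step-up authentication; notify the account owner.",
    },
    {
        "name": "first_touch_of_sensitive_data",
        "match": lambda e: e["dataset_sensitive"] and e["first_time_access"],
        "respond": "Open an access-review ticket within 24 hours.",
    },
]

def evaluate(event):
    """Return the (rule, response) pairs an event trips -- usually zero or one."""
    return [(r["name"], r["respond"]) for r in RULES if r["match"](event)]
```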

On the ethics question, I keep it to three things: purpose, transparency, and human review.

Purpose means the monitoring is scoped to specific security use cases, not open-ended profiling. Transparency means employees know, at a category level, what behaviors are being baselined and why. Human review means no automated system takes adverse action against an employee based solely on a behavioral flag. A person has to be in that loop.

If your security team can answer “who can access this data, for what purpose, and for how long,” you’re on the right side of the line. If those answers don’t exist, you’re not.
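
One way to stay on the right side of that line is to make the wrong answer unrepresentable: a grant simply can't exist without a purpose and an expiry. A toy sketch, with function and field names of my own invention:

```python
from datetime import datetime, timedelta, timezone

def grant_access(user, dataset, purpose, days):
    """Every grant carries who, what, why, and how long -- by construction."""
    if not purpose:
        raise ValueError("no purpose, no grant")
    return {
        "user": user,
        "dataset": dataset,
        "purpose": purpose,
        "expires": datetime.now(timezone.utc) + timedelta(days=days),
    }

def is_active(grant):
    """Expiry is enforced at check time, not left to a cleanup job."""
    return datetime.now(timezone.utc) < grant["expires"]
```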

Deepfakes: Stop Trying to Win on Perception

The deepfake conversation always generates energy in a room, and I get it. Voice cloning, fake exec calls, and AI-generated spear phishing are costing companies millions. It’s dramatic stuff.

But the arms race on “can a human tell this is fake” is one we’ve already lost for a meaningful percentage of attacks. The quality is good enough today, and it will only get better. So the defense can’t be “train people to spot it.”

It has to be: design your processes so that even a perfect impersonation can’t authorize a high-risk action.

That means out-of-band verification through a pre-established channel (not a callback to the number the caller gave you). It means dual-approval requirements for wire transfers, credential resets, and access provisioning. It means building deliberate friction at the exact points attackers are targeting, so a convincing fake isn’t sufficient on its own.
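
In rough pseudocode terms, the control looks something like this. The action names and function shape are mine, but the invariant is the one from the paragraph above: no out-of-band confirmation, no second human, no action.

```python
# A sketch of "a perfect impersonation can't authorize a high-risk action."
# Action names and the function signature are illustrative.

HIGH_RISK = {"wire_transfer", "credential_reset", "access_provisioning"}

def authorize(action, requester, oob_confirmed, approvers):
    """oob_confirmed must mean: confirmed via the channel on file,
    never via a callback number the caller supplied."""
    if action not in HIGH_RISK:
        return True
    if not oob_confirmed:
        return False  # a convincing voice on its own is not sufficient
    independent = set(approvers) - {requester}
    if not independent:
        return False  # dual approval: someone other than the requester signs off
    return True

# A flawless deepfake of the CFO still fails here: it can't produce an
# out-of-band confirmation on the pre-registered channel.
assert authorize("wire_transfer", "cfo", oob_confirmed=False, approvers=["cfo"]) is False
```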

Can AI at the help desk help detect AI-generated attacks? Yes, with clear eyes about limitations. Voice cadence analysis, liveness indicators, metadata inconsistencies in synthesized audio: that’s real capability. But a help desk agent who has a clear verification protocol and organizational permission to slow down and challenge a suspicious caller is more valuable than any AI detection layer sitting behind a broken process.

Someone asked if we’re in an endless arms race. Yes. And the grown-up answer is to make peace with that and build for resilience rather than resolution. The goal was never to reach a point where attackers stop trying. The goal is to make attacking your organization expensive enough, slow enough, and uncertain enough that the economics favor targeting someone else, or that you detect and contain the attempt before it becomes a loss event.

Organizations pulling ahead aren’t the ones with the best AI detection. They’re the ones that have accepted that breaches will happen, designed their architecture to limit blast radius, and invested in response capabilities that compress the time between compromise and containment.

The Insider Who Was Never Really an Employee

This is the conversation that got the most heads nodding in the room. The DPRK IT worker campaigns have demonstrated something most organizations weren’t built to handle: the hiring pipeline itself is an attack surface.

The response has to span three phases:

Before hire: Identity verification during onboarding that goes beyond document checks. Video-verified interviews with behavioral indicators, geolocation consistency checks, and device fingerprinting from day one.

After hire: Behavioral monitoring from the moment of onboarding, not after some probationary period. A threat actor isn’t going to wait 90 days to get comfortable before acting.

Access architecture: A newly hired contractor should have the minimum access needed to do their job, with elevation requiring justification and session activity logged.
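
A sketch of what that access posture can look like, with role names and helpers that are mine, not any IAM product's:

```python
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("access")

# Default-deny: a new contractor starts with the minimum set, nothing more.
BASELINE = {"contractor": {"ticketing", "code-repo"}}

def can_access(role, resource, grants=frozenset()):
    """Every decision is logged, allowed or not -- that's the session trail."""
    allowed = resource in BASELINE.get(role, set()) or resource in grants
    log.info("role=%s resource=%s allowed=%s", role, resource, allowed)
    return allowed

def elevate(grants, resource, justification):
    """Elevation is possible, but never silent and never unjustified."""
    if not justification:
        raise ValueError("elevation requires a recorded justification")
    log.info("elevate resource=%s justification=%r", resource, justification)
    return set(grants) | {resource}
```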

The uncomfortable truth is that “we hired them through our normal process” is no longer sufficient assurance of trust. The hiring process itself has to be part of your identity verification program.

One Thing to Do Right Now (and One Thing to Stop Paying For)

The panel ended with a lightning round. My answers:

Do this now: Harden your help desk identity verification with mandatory out-of-band confirmation before any credential reset or access change. It’s one of the most exploited gaps in enterprise security today and one of the cheapest to close.

Stop paying for this: Phishing simulation platforms deployed without a defined response program. If a user fails a simulation and nothing changes in how they’re supported or trained, you’ve paid for measurement without remediation and called it a security program.

Disclaimer
The views and opinions expressed in this article are solely my own and do not necessarily reflect the views, opinions, or policies of my current or any previous employer, organization, or any other entity I may be associated with.
