A three-part exploration of how AI is reshaping cybersecurity, from the front line of defence to the boardroom
Part 2 – Human + Machine: rethinking the balance of trust and control
AI now analyses behaviour, identifies anomalies and, increasingly, takes action on its own. It’s faster, tireless and, in many cases, more consistent than any analyst could be.
But with that progress comes a harder question: how much decision-making are we prepared to give away?
The challenge now isn’t whether AI works in cyber – it clearly does. The challenge is understanding where it fits, and how to keep people thinking critically in a system that no longer depends on them for every move.
In many organisations, AI is already making micro-decisions that shape outcomes – what gets prioritised, what gets ignored, when to isolate a system, when to escalate an alert. It’s efficient, but it’s also opaque.
We typically trust the outputs because they’re fast and, as far as we can tell, accurate. Yet few teams can see why a model acts the way it does. When AI becomes the invisible decision-maker, oversight and peer review become a governance issue, not a technical one.
The most mature SOCs are now building in confidence thresholds: if a model’s certainty falls below a defined level, the decision flows back to a human analyst. When confidence is high, it can act automatically – but with audit trails for every decision.
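As a rough sketch of that pattern, the routine below routes a model’s verdict either to automated action or to an analyst queue, and writes an audit record either way. The threshold value, the verdict fields and the handler names are illustrative assumptions, not a reference to any particular platform.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# Illustrative threshold: below this, the decision goes back to a human analyst.
CONFIDENCE_THRESHOLD = 0.85

@dataclass
class Verdict:
    alert_id: str
    action: str        # e.g. "isolate_host", "escalate", "dismiss"
    confidence: float  # model's self-reported certainty, 0.0 to 1.0

def audit(verdict: Verdict, route: str) -> None:
    """Record every decision, automated or not, with a timestamp."""
    print(f"{datetime.now(timezone.utc).isoformat()} "
          f"alert={verdict.alert_id} action={verdict.action} "
          f"confidence={verdict.confidence:.2f} route={route}")

def route_decision(verdict: Verdict) -> str:
    """Act automatically only when confidence clears the bar;
    otherwise hand the decision to an analyst."""
    if verdict.confidence >= CONFIDENCE_THRESHOLD:
        audit(verdict, route="automated")
        return "auto_execute"
    audit(verdict, route="analyst_review")
    return "analyst_queue"

# Example: a low-confidence isolation request lands in the analyst queue.
print(route_decision(Verdict("A-1042", "isolate_host", 0.62)))
```

The design choice that matters is where the threshold sits: it is the explicit hand-off point between machine speed and human judgement, and the audit trail is what makes that hand-off reviewable afterwards.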
That’s not just a safeguard; it’s a model for partnership. It recognises that AI is part of the decision chain, not the end of it. In this model AI is an analyst’s peer, not their successor.
AI has transformed the speed of investigation. As we said in Part 1, for a Level 1 analyst, having an LLM-powered assistant is like working beside an experienced mentor.
But there’s a risk hidden in that convenience. If everything arrives pre-analysed, we stop asking questions. The “why” behind each event – the root cause, the subtle pattern, the anomaly that doesn’t fit – starts to disappear from the learning process.
Cybersecurity depends on curiosity. The best analysts are those who challenge the obvious and dig beyond what the system shows them. When that curiosity fades, so does resilience.
Some teams are tackling this head-on, using scenario labs, red team exercises and post-incident reviews to deliberately put the human back into the analysis loop. They’re ensuring critical thought remains a skill to be practised.
AI can process information; it can’t develop instinct (yet). Instinct still comes from time spent investigating the edge cases: the moments when the data doesn’t tell the full story.
For AI to strengthen cybersecurity, not dilute it, organisations need to design the relationship consciously. That means more than just “human in the loop.” It’s about defining who owns the judgement at each stage of detection and response.
In practice, that means creating a new kind of operating model: one that spells out which decisions the machine can take on its own, which need human sign-off, and who reviews the outcomes.
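One way to make that ownership explicit is to write it down as policy rather than leave it buried in tooling. The sketch below is purely hypothetical: the stage names, owners and sign-off rules are assumptions chosen to show the shape of such a model, not a prescribed standard.

```python
# Hypothetical decision-ownership policy: which stages the model may act on
# alone, and which require a named human owner. Stage names are illustrative.
OPERATING_MODEL = {
    "triage":          {"owner": "ai",      "human_signoff": False, "audited": True},
    "containment":     {"owner": "ai",      "human_signoff": True,  "audited": True},
    "eradication":     {"owner": "analyst", "human_signoff": True,  "audited": True},
    "recovery":        {"owner": "analyst", "human_signoff": True,  "audited": True},
    "lessons_learned": {"owner": "analyst", "human_signoff": False, "audited": True},
}

def requires_human(stage: str) -> bool:
    """A decision needs a human when the stage is analyst-owned
    or explicitly requires sign-off."""
    rule = OPERATING_MODEL[stage]
    return rule["owner"] == "analyst" or rule["human_signoff"]

print(requires_human("triage"))       # False: the model acts alone, but is audited
print(requires_human("containment"))  # True: automated action needs sign-off
```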
This isn’t about slowing AI down; it’s about keeping it accountable. The worst outcome would be to hand over control too early and discover, too late, that the model has been confidently wrong for weeks.
Over time, as AI systems prove their reliability and transparency improves, autonomy will naturally increase. But that trust has to be earned, not assumed, just as it would be for any team member beginning their cyber career in a new organisation, or working on behalf of a new customer.
Inevitably, the role of the human in cybersecurity is changing. As AI takes over repetitive investigation, the value of human analysts will lie in interpretation, strategy and oversight.
That requires a shift in mindset. We need to stop viewing AI as a replacement for capability and start treating it as a force multiplier – one that extends human reasoning, not replaces it.
Because in the end, the hardest problems in cyber aren’t technical; they’re cognitive.
When the system thinks faster than we do, staying sharp becomes the real defence.