
Five Eyes Guts Agentic Rollouts, Bleeds the Productivity Pitch
CISA and NCSC told you to deploy fast last cycle. Now they want resilience over efficiency, and your 2026 agent roadmap just inherited a 23-risk checklist.
Until security practices, evaluation methods and standards mature, organisations should assume that agentic AI systems may behave unexpectedly and plan deployments accordingly, prioritizing resilience, reversibility and risk containment over efficiency gains.
— Five Eyes joint guidance, Careful adoption of agentic AI services
- CISA, NCSC, ASD, Cyber Centre and NCSC-NZ all signed the same paragraph: assume the agent will misbehave.
- The procurement-agent example names exactly the workflow most mid-market firms were planning to ship in Q3.
- Fail-safe-by-default lands on vendors. Read it as a future RFP clause, not a suggestion.
- Pilot the boring, reversible tasks first. The 23-risk list is the new gating doc your CISO will print out.
The CISO at a mid-sized British insurer told me last quarter that her board had approved an agentic procurement pilot for Q3, "subject to whatever NCSC says when they finally say something." NCSC has now said something. So has CISA, so has the Australian Signals Directorate, so have the Canadians and the New Zealanders. They said it together, in one PDF, on a Friday, and the message was: slow down. The pilot, I'm told, is now scoped to a single supplier category and a read-only retrieval mode. The agent will not be approving anything in Q3.
The Deployment
The five information-security agencies of the Five Eyes alliance (CISA in the US, the UK's NCSC, Australia's ASD, the Canadian Centre for Cyber Security, and New Zealand's NCSC) co-authored a joint guide titled Careful adoption of agentic AI services. The Register wrote it up on Monday, citing the document directly.
The thrust is unambiguous. Agentic systems compose tools, external data, and downstream services into what the agencies call "an interconnected attack surface that malicious actors can exploit." Every component widens that surface. The document lays out 23 distinct risk categories and over 100 best practices to mitigate them, and closes with a sentence that procurement teams will be highlighting all week: "prioritizing resilience, reversibility and risk containment over efficiency gains."
Two scenarios in the guide are doing most of the persuasion. The first: an agent given write access for patching is asked, in a single innocuous prompt, to apply a security patch and clean up the firewall logs. The agent dutifully does both. The second: a procurement agent gradually accumulates trust from other agents in a workflow, a low-risk integrated tool gets compromised, and the attacker uses the inherited permissions to modify contracts, approve unauthorised payments, and falsify audit logs that don't trip alerts.
Neither story is exotic. Both are recognisable as the exact deployment shapes that vendor demos have been pushing since the start of the year.
Why It Matters
Joint Five Eyes guidance is a signal, not a regulation. But operators who lived through the 2024-2025 cycle of "deploy fast, govern later" know how this goes. CISA and NCSC publications have a habit of becoming RFP language inside twelve months. The fail-safe-by-default phrasing, under which vendors must build agents that "stop and escalate issues to human reviewers in uncertain scenarios", is the kind of sentence that ends up bolded inside a public-sector procurement template by Q4, and inside a Mittelstand insurer's vendor questionnaire shortly after.
The vendor-pattern echo here is the cloud guidance cycle of the late 2010s. The agencies didn't ban cloud; they wrote a list of preconditions for using it safely. Three years later those preconditions were the floor in every regulated-sector contract. Agentic AI is now at the same starting line. The agencies have not said don't. They have said: assume it will misbehave, and design for that.
There's also a quieter point in the document worth dwelling on. It notes that resources like OWASP and MITRE ATLAS still mostly cover LLMs, not agentic systems, and that "some attack vectors unique to agentic AI may not be fully captured or addressed." Translation: your existing red-team playbook is incomplete, and the people who write the playbooks know it. Every CISO who relied on "we follow OWASP" as procurement cover has just been told that defence is a step behind the deployment shape they were about to greenlight.
Who loses from this guidance? Not the agencies, and not the buyers; buyers get top cover to slow-walk. The losers are the agentic-AI vendors whose Q2 pipelines were built on enterprise pilots that assumed the regulatory weather would stay neutral. The weather changed on Friday. Anyone selling autonomous agents into critical infrastructure or defence-adjacent buyers will spend the next two quarters answering the same question: how do you fail safely?
What Other Businesses Can Learn
If you are a mid-market or SMB operator anywhere in the Five Eyes footprint with an agentic deployment in flight or under planning, here is the practical read.
1. Re-scope your pilot to "low-risk and reversible." The guidance is explicit: deploy incrementally, beginning with clearly defined low-risk tasks. Read-only retrieval, draft-only document generation, suggestion-mode procurement support: these still pass. An agent that autonomously approves invoices, modifies firewall rules, or sends contracts to counterparties does not pass the new vibe check. If your roadmap had any of those in the first six months, push them right.
2. Re-audit permissions before you re-audit anything else. Both worst-case scenarios in the guide turn on permission scope, not model behaviour. The patching agent had write access it should not have had outside the privileged IT group. The procurement agent inherited trust from other agents nobody had re-permissioned. Walk every agent in your environment through one question: if a malicious prompt arrived through the lowest-trust input path, what could this agent actually do? If the answer is "more than the user who triggered it," that's the bug.
3. Make fail-safe-by-default a vendor requirement, in writing. The agencies put this on vendors directly. Your next agentic-AI procurement should ask, on the contract: does the agent stop and escalate to a human in uncertain scenarios, by default, with no opt-out for the deploying tenant? A vendor that cannot answer yes is a vendor whose product is going to fail an audit clause that does not yet exist but will.
4. Stop treating audit logs as tamper-proof. The procurement scenario in the guide ends with the attacker creating faked audit logs that don't trip alerts. The agencies just publicly acknowledged that an over-permissioned agent can falsify its own evidence. If your current SOC playbook assumes agent-generated logs are ground truth, fix that this week.
5. Budget for the catch-up year. The agencies note that threat intelligence for agentic systems is still evolving and that frameworks like OWASP and MITRE ATLAS haven't fully caught up. Expect twelve to eighteen months before standards mature. Plan deployments against that horizon, not against your vendor's quarterly roadmap.
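The permission question in point 2 can be made mechanical: diff what each agent is allowed to do against what its lowest-trust triggering user is allowed to do, and flag the excess. A minimal sketch, with hypothetical permission names and an illustrative inventory (none of this is from the guidance):

```python
# Hypothetical least-privilege check: flag agents whose effective
# permissions exceed those of the lowest-trust user who can invoke them.

def excess_permissions(agent_perms: set[str], user_perms: set[str]) -> set[str]:
    """Return the actions the agent can take that its triggering user cannot."""
    return agent_perms - user_perms

# Illustrative inventory, not real product scopes.
agents = {
    "procurement-agent": {"read:supplier_catalog", "draft:purchase_order", "approve:payment"},
    "retrieval-agent": {"read:knowledge_base"},
}
lowest_trust_user = {"read:supplier_catalog", "read:knowledge_base", "draft:purchase_order"}

for name, perms in agents.items():
    excess = excess_permissions(perms, lowest_trust_user)
    if excess:
        print(f"{name}: over-permissioned -> {sorted(excess)}")
# prints: procurement-agent: over-permissioned -> ['approve:payment']
```

Any non-empty result is the "more than the user who triggered it" bug from point 2, found before a malicious prompt finds it for you.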
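The fail-safe-by-default requirement in point 3 reduces to a default branch that halts rather than acts. A hedged sketch of what that contract clause implies in code; the confidence score and the 0.9 floor are invented for illustration, not taken from the guidance or any vendor API:

```python
from dataclasses import dataclass

@dataclass
class AgentDecision:
    action: str
    confidence: float  # 0.0-1.0, however the vendor scores its own certainty

CONFIDENCE_FLOOR = 0.9  # illustrative threshold, set by policy, not by the tenant

def execute_or_escalate(decision: AgentDecision) -> str:
    # Fail-safe default: anything uncertain stops and goes to a human
    # reviewer. Note there is no opt-out parameter for the deploying tenant.
    if decision.confidence < CONFIDENCE_FLOOR:
        return f"ESCALATED to human review: {decision.action}"
    return f"EXECUTED: {decision.action}"

print(execute_or_escalate(AgentDecision("apply security patch", 0.97)))
print(execute_or_escalate(AgentDecision("clean up firewall logs", 0.55)))
```

The contractual question from point 3 is whether the escalation branch exists, fires by default, and cannot be disabled per tenant; the exact scoring mechanism is the vendor's problem.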
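Point 4's fix is tamper evidence rather than trust: chain each log entry to the hash of the one before it, so an after-the-fact edit by an over-permissioned agent breaks the chain and becomes detectable. A minimal sketch using the standard library (the entry format is hypothetical):

```python
import hashlib
import json

def append_entry(chain: list[dict], event: str) -> None:
    """Append a log entry whose hash covers the previous entry's hash."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    body = {"event": event, "prev_hash": prev_hash}
    body["hash"] = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    chain.append(body)

def verify(chain: list[dict]) -> bool:
    """Recompute every hash; any in-place edit breaks the chain."""
    prev_hash = "0" * 64
    for entry in chain:
        if entry["prev_hash"] != prev_hash:
            return False
        body = {"event": entry["event"], "prev_hash": prev_hash}
        if entry["hash"] != hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest():
            return False
        prev_hash = entry["hash"]
    return True

log: list[dict] = []
append_entry(log, "patch applied")
append_entry(log, "payment approved")
assert verify(log)
log[1]["event"] = "routine maintenance"  # simulated falsification
assert not verify(log)
```

This does not stop an agent from writing false entries in the first place; it stops the procurement-scenario move of quietly rewriting history after the fact, which is the specific failure the guide describes.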
Every individual component in an agentic AI system widens the attack surface, exposing the system to additional avenues of exploitation.
That sentence is the one to take to your board. It is not from a vendor whitepaper or a sceptical analyst. It is from CISA, NCSC, ASD, the Cyber Centre, and NCSC-NZ, all five signing the same line.
Looking Ahead
The next thing to watch is whether any Five Eyes regulator turns this guidance into a binding requirement for critical-infrastructure operators, and how fast. The UK's NCSC has form for moving from advisory to expected-baseline inside a year. CISA tends to lead by example with federal procurement language and let private sector follow. Australia's ASD has a track record of formal mandates inside the defence-supplier supply chain. Watch the September advisories.
The CISO I started with sent a follow-up over the weekend. The board, she said, has now asked a different question than the one they asked in February. February's question was when the agent goes live. May's question is who signs the incident report when it doesn't.
Sources
- Five Eyes spook shops warn agentic is too wonky for rapid rollout, accessed 2026-05-04