MICKAI
Article · 3 May 2026

The 2026 sovereign-AI manifesto. Seven properties any sovereign AI must have. Where commercial AI fails each one.

2026 is the year commercial AI ran out of perimeter. Five named-victim incidents in five months, hundreds of contaminated agents in public marketplaces, telemetry pipelines no operator can audit. This is the structural definition of sovereign AI, the seven properties that separate sovereign from sovereign-themed, and the exact place each major commercial stack fails the test.

Author: Micky Irons
Published: 3 May 2026
Tags: Sovereign AI · AI Manifesto · AI Governance · Mickai · AI Sovereignty

The word sovereign is doing too much work in 2026. Cloud vendors put it on regional inference endpoints. Hyperscalers attach it to data-residency promises. Government framework documents borrow it without defining it. By the end of the year, sovereign-AI as a marketing label means almost anything, which means it means almost nothing.

This article restores the definition. Sovereign AI is not a region setting on a hosted endpoint. It is a structural set of properties that, when composed, give the operator the legal, cryptographic, and physical position to refuse any external party that asks the system to behave differently. Anything less is not sovereign. It is sovereign-themed.

Why now

Five named-victim incidents in five months told the market what the perimeter actually was. PocketOS lost a production database and every backup in nine seconds to a Cursor session running Claude Opus 4.6 (Tom's Hardware, 26 April 2026). The OpenAI Codex Desktop client deleted approximately 328,000 files outside the project root on 18 April (openai/codex issue 18509). Codex's apply_patch silently deleted target files before recreating them on 23 April (issue 19202). A Cursor agent destroyed system credentials and untracked secrets via git stash --include-untracked on 22 March (Cursor forum 146865). Claude Code ran a flag literally named --accept-data-loss against a production database without confirmation in December 2025 (DataTalks.Club). The pattern is not random. It is the natural endpoint of running an autonomous agent against a host with no perimeter between model intent and the host filesystem.

Separately, the public MCP marketplaces shipped contaminated agents. The previous Mickai article walked through one Living-Off-the-Land Binaries chain found inside a marketplace agent that the marketplace's review queue had cleared. The contamination is not a one-off; the failure pattern is structural. The marketplaces have no perimeter on what they certify, and users have no certificate they can verify themselves.

These are not incidents that better disclaimers fix. They are incidents that change the definition of sovereign for everyone downstream of them. The seven properties below are what the new definition has to satisfy.

The seven properties

1. Physical locality

The model, the weights, the inference, the routing, the audit ledger, and the long-term memory must run on hardware the operator physically possesses. Not a region setting on a hosted endpoint. Not a 'compliant cloud zone'. Hardware in a building the operator controls, with cabling the operator's facilities team installed, with power the operator pays for. The test is simple: if the operator pulls the network cable, does the system continue to function for its declared purpose? If the answer is no, the system is not sovereign. It is hosted, with sovereign-themed branding.

2. Operator-side audit

The audit ledger lives on operator hardware, is written under operator-controlled keys, and is queryable by operator-side staff without any vendor in the loop. A vendor that promises to provide audit logs on request is not satisfying this property. A vendor that streams audit data to a vendor-side service which the operator then queries is not satisfying this property either. The ledger has to be local, the keys have to be local, and the verification has to be local. Anything else and the audit is the vendor's, not the operator's.

3. Hardware-bound identity

Every actor the system recognises (user, agent, tool, downstream service) is bound to a hardware-attested identity. Not a username. Not an API key in a vault. A key whose private half lives in a TPM, a secure enclave, an HSM, or a hardware token, with attestation that the key has never left. The implication is operational: an actor identity stolen from the file system is useless without the hardware that minted it. This is the only structural answer to the credential-exfiltration class of failure that has dominated 2026.

4. Cryptographic tenant isolation

Multi-tenant operation (clinical and personal on one device, classified and unclassified on one workstation, departmental separations on one shared platform) is enforced by per-tenant encryption with no shared decryption surface. The kernel cannot, by inspection, see across the boundary. The vendor cannot, by support tooling, see across the boundary. The only path across the boundary is an explicit operator-confirmed tenant switch, gated by hardware-bound identity (property 3) and recorded in operator-side audit (property 2). 'Multi-tenant' implemented as 'soft isolation in the application layer' fails this property entirely.
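The "no shared decryption surface" requirement can be illustrated with a minimal sketch. Everything here is hypothetical: the class name, the use of HMAC sealing as a stand-in for real per-tenant authenticated encryption, and the derivation of each tenant's key from an independent secret. The point the toy makes is structural: there is no master key, so a record sealed under one tenant's key simply does not open under another's.

```python
import hashlib
import hmac
import json

class TenantVault:
    """Toy model of per-tenant isolation: each tenant seals records under a
    key derived from that tenant's own secret. No key spans tenants.
    (HMAC sealing stands in for a real AEAD cipher with hardware-held keys.)"""

    def __init__(self, tenant_id, secret):
        self.tenant_id = tenant_id
        # Independent per-tenant key; in a real system this would live in
        # hardware (property 3) rather than in process memory.
        self.key = hashlib.pbkdf2_hmac("sha256", secret, tenant_id.encode(), 100_000)

    def seal(self, record):
        payload = json.dumps(record, sort_keys=True)
        tag = hmac.new(self.key, payload.encode(), hashlib.sha256).hexdigest()
        return {"tenant": self.tenant_id, "payload": payload, "tag": tag}

    def open(self, sealed):
        expected = hmac.new(self.key, sealed["payload"].encode(), hashlib.sha256).hexdigest()
        if not hmac.compare_digest(expected, sealed["tag"]):
            return None  # wrong tenant's key: the boundary holds
        return json.loads(sealed["payload"])

clinical = TenantVault("clinical", b"clinical-secret")
personal = TenantVault("personal", b"personal-secret")
rec = clinical.seal({"note": "dosage reviewed"})
assert clinical.open(rec) is not None
assert personal.open(rec) is None  # no shared decryption surface
```

An explicit tenant switch in this model would mean constructing a new vault from a different secret under operator confirmation, with the switch itself written to the audit ledger of property 2.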

5. Post-quantum signed memory

Every long-term memory record (every decision, every action, every tool invocation, every retrieved chunk that influenced an output) is signed under a post-quantum signature scheme (FIPS 204 ML-DSA-65 is the current correct answer) and chained into a tamper-evident DAG. This is forward-protection. A cryptographically relevant quantum computer arriving in 2032 must not be able to retroactively forge audit records from 2026. The signing has to be post-quantum from the beginning; retrofitting after the fact loses the forward-protection property.
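The tamper-evident chaining can be sketched in a few lines. This is a simplified illustration, not the described implementation: HMAC stands in for an ML-DSA-65 signature (stdlib Python has no post-quantum scheme), and a linear hash chain stands in for the full DAG. The structural property it demonstrates is the one that matters: altering any past record breaks verification of everything downstream.

```python
import hashlib
import hmac
import json

# Stand-in operator key; a real ledger would sign with ML-DSA-65 (FIPS 204).
SIGNING_KEY = b"operator-held-key"

def sign(data):
    # HMAC is used here purely as a placeholder for a PQ signature.
    return hmac.new(SIGNING_KEY, data, hashlib.sha256).hexdigest()

def append(ledger, event):
    prev = ledger[-1]["hash"] if ledger else "genesis"
    body = json.dumps({"event": event, "prev": prev}, sort_keys=True)
    digest = hashlib.sha256(body.encode()).hexdigest()
    ledger.append({"body": body, "hash": digest, "sig": sign(digest.encode())})

def verify(ledger):
    prev = "genesis"
    for rec in ledger:
        body = json.loads(rec["body"])
        if body["prev"] != prev:  # chain link intact?
            return False
        digest = hashlib.sha256(rec["body"].encode()).hexdigest()
        if digest != rec["hash"]:  # record unmodified?
            return False
        if not hmac.compare_digest(rec["sig"], sign(digest.encode())):
            return False  # signature valid?
        prev = rec["hash"]
    return True

ledger = []
append(ledger, {"action": "tool_call", "tool": "db.query"})
append(ledger, {"action": "file_write", "path": "/notes/today.md"})
assert verify(ledger)
ledger[0]["body"] = ledger[0]["body"].replace("db.query", "db.drop")
assert not verify(ledger)  # tampering with history breaks the chain
```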

6. Action-level rollback

Every action the system performs must be reversible by construction. Each action declares its inverse at definition time and that inverse is stored alongside the action's signed record. When the operator (or a regulator) issues a retroactive undo, the system constructs the inverse chain and reverts the side effects. The PocketOS database deletion and the Codex 328k-file deletion would both have been recoverable if the system had this property; they were not, because the deleting layer (the AI agent) had no obligation to declare an inverse.
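The "inverse declared at definition time" discipline can be sketched as a journal that refuses to execute any action that arrives without an undo. The names and shape below are hypothetical; the point is that the inverse closure captures pre-state *before* the destructive step runs, which is exactly what the deleting layers in the cited incidents never did.

```python
class Journal:
    """Toy journal: an action is only executable if its inverse is declared
    up front; inverses replay in reverse order on rollback."""

    def __init__(self):
        self._undo_stack = []

    def execute(self, name, do, undo_factory):
        # The factory runs BEFORE the action, so it can snapshot pre-state.
        inverse = undo_factory()
        do()
        self._undo_stack.append((name, inverse))

    def rollback(self):
        while self._undo_stack:
            _name, inverse = self._undo_stack.pop()
            inverse()

# A dict stands in for a filesystem or database.
fs = {"prod.db": "rows..."}
journal = Journal()
journal.execute(
    "delete prod.db",
    do=lambda: fs.pop("prod.db"),
    # Snapshot captured at declaration time, restored on undo.
    undo_factory=lambda: (lambda snapshot=fs["prod.db"]: fs.__setitem__("prod.db", snapshot)),
)
assert "prod.db" not in fs   # the destructive action really ran
journal.rollback()
assert fs["prod.db"] == "rows..."  # and it was reversible by construction
```

In the architecture described above, each `(action, inverse)` pair would also be written to the signed ledger of property 5, so the undo chain itself is auditable.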

7. Runtime perimeter on every agent

Every AI agent operating inside the perimeter is mediated at the syscall level. File writes, file deletions, shell commands, network requests, prompts to remote LLMs. All of it gated, all of it classified, all of it snapshotted, all of it signed before it touches the disk or the wire. This is the property that separates 'we run AI agents safely' from 'we run AI agents'. The first sentence is supportable; the second is the 2026 incident pattern.
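The gate-classify-snapshot loop can be sketched as a policy-driven interceptor. This is a toy model, not syscall-level mediation: the policy table, the class name, and the confirmation callback are all illustrative assumptions. What it shows is the default-deny posture and the rule that destructive operations never proceed without operator confirmation and a pre-action snapshot.

```python
# Hypothetical policy: unknown operation kinds fall through to deny.
POLICY = {
    "file_read": "allow",
    "file_write": "allow",
    "file_delete": "confirm",  # destructive: needs operator confirmation
    "shell": "deny",
    "network": "confirm",
}

class Interceptor:
    def __init__(self, confirm):
        self.confirm = confirm   # operator confirmation callback
        self.snapshots = []      # pre-action state, kept for rollback
        self.audit = []          # every decision, allowed or refused

    def gate(self, kind, target, pre_state=None):
        verdict = POLICY.get(kind, "deny")  # default-deny for unknown ops
        if verdict == "confirm":
            verdict = "allow" if self.confirm(kind, target) else "deny"
        if verdict == "allow" and pre_state is not None:
            self.snapshots.append((kind, target, pre_state))
        self.audit.append((kind, target, verdict))
        return verdict == "allow"

guard = Interceptor(confirm=lambda kind, target: False)  # operator refuses
assert guard.gate("file_read", "README.md")              # benign op passes
assert not guard.gate("file_delete", "prod.db", pre_state="rows...")
assert not guard.gate("shell", "rm -rf /")               # denied outright
```

A real perimeter would sit below the agent process rather than inside it, and every audit entry here would be a signed record in the property-5 ledger.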

Where commercial AI fails each one

The major commercial stacks satisfy at most two of the seven. The breakdown:

  • Hyperscaler hosted models (Bedrock, Vertex, Azure OpenAI). Property 1 (locality) fails by definition (hosted). Property 2 (operator audit) fails (audit lives in vendor surface). Property 3 (hardware identity) partial (some HSM integration). Property 4 (tenant isolation) partial (region-level only). Property 5 (post-quantum memory) fails (current logging is not post-quantum signed). Property 6 (rollback) fails (no inverse-action ontology). Property 7 (runtime perimeter on agents) fails (agent operations are not gated at syscall level).
  • Self-hosted open-weight stacks (vLLM, llama.cpp, Mickai Lama on operator hardware). Property 1 passes. Property 2 partial (logs are local but typically not signed). Property 3 fails (no hardware-bound actor identity layer). Property 4 fails (no cryptographic multi-tenant isolation). Property 5 fails (memory is not post-quantum signed). Property 6 fails (no inverse-action ontology). Property 7 fails (no runtime agent perimeter). Score: one and a half out of seven.
  • Cursor / Claude Code / Codex / Aider / Cline / Windsurf in their default configuration. Properties 1 and 2 vary by deployment. Properties 3 through 6 fail. Property 7 fails by demonstration: the 2026 incidents are the proof. Operating any of these against a production environment without a Sentinel-class perimeter is the failure mode the manifesto exists to name.
  • Public MCP marketplaces. Property 1 fails (vendor-hosted). Property 2 fails (no operator-side audit on the marketplace's certification process). Property 3 fails. Property 4 fails. Property 5 fails (no signed memory). Property 6 fails. Property 7 fails (no runtime perimeter on installed agents). Score: zero out of seven. The previous Mickai article (mickai.co.uk/articles/inside-the-trust-agent-certificate-27-check-audit) covers what a marketplace that satisfied even a subset of these properties would look like in practice.

What sovereign AI looks like when all seven hold

Mickai is the sovereign AI operating system designed against this manifesto. Each of the seven properties maps to a filed UK patent claim or claim block. Locality is the design assumption. Operator-side audit is Patent 16 (Decision Lineage and PQ-Signed Audit Ledger). Hardware-bound identity is Patent 12 (Typed-Action Ontology with hardware-attested actor binding). Cryptographic tenant isolation is Patent 04 (Adaptive Multi-Tenant OS). Post-quantum signed memory is Patent 08 (Quantum-Safe Attestation, ML-DSA-65). Action-level rollback is Patent 14 (First-Class Actions with Compensating Rollback). Runtime perimeter on every agent is Patent 21 (Sentinel: Universal AI-Agent Action Interceptor). Trust Agent extends the perimeter outward to cover third-party agents the operator wants to install.

The composition is the product. Each property in isolation is a feature; together they are sovereign. Thirty-one filed UK patent applications, nine hundred and fourteen cryptographically signed claims, named inventor Micky Irons, UK IPO public register, GB2607309.8 to GB2610422.4. Mickai is held privately by its founder; the engagement model is direct.

What this means for procurement

If you write procurement specifications for sovereign AI inside any UK government, NHS, defence, financial-regulator, or critical-infrastructure context, the seven properties are a usable acceptance test. Mark each property as explicitly required, with the structural justification given here. Score every shortlisted vendor against the test honestly. Vendors that fail four or more should be removed from the shortlist; vendors that fail one or two should be required to demonstrate the structural answer in writing before commercial discussion proceeds. The acceptance test is small enough to fit on a single page; the cost of operating sovereign AI without it is the cost the 2026 incidents have already proved.
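The scoring rule is mechanical enough to write down. The sketch below is illustrative (property names and the partial-credit weights are assumptions, not a published rubric): score a pass as 1, a partial as 0.5, and drop any vendor with four or more outright failures, as the paragraph above prescribes.

```python
PROPERTIES = [
    "physical locality", "operator-side audit", "hardware-bound identity",
    "tenant isolation", "pq-signed memory", "action rollback", "runtime perimeter",
]

def score(vendor_passes):
    # 1.0 for a pass, 0.5 for a partial, 0 for a fail or an unlisted property.
    return sum(vendor_passes.get(p, 0) for p in PROPERTIES)

def shortlist(vendors):
    kept = {}
    for name, passes in vendors.items():
        failures = sum(1 for p in PROPERTIES if passes.get(p, 0) == 0)
        if failures < 4:  # fail four or more: removed from the shortlist
            kept[name] = score(passes)
    return kept

# Scores drawn from the breakdown in this article's own vendor section.
vendors = {
    "hyperscaler-hosted": {"hardware-bound identity": 0.5, "tenant isolation": 0.5},
    "self-hosted-open-weights": {"physical locality": 1.0, "operator-side audit": 0.5},
}
assert shortlist(vendors) == {}  # both fail four or more properties
```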

Where this sits

Mickai is the sovereign AI operating system. Thirty-one filed UK patent applications. Nine hundred and fourteen cryptographically signed claims. The portfolio is the legal spine of an architecture that satisfies all seven properties of the manifesto by construction. The licensing conversation, the joint-architecture conversation, and the deployment conversation are all direct. The contact is press@mickai.co.uk.

Sovereign means the operator can refuse. Refuse a vendor, refuse a region, refuse an update, refuse an audit-side query. The seven properties are the structural answer to who can be refused.

Sources

  • Tom's Hardware, 26 April 2026: Claude-powered AI coding agent deletes entire company database in 9 seconds (PocketOS).
  • GitHub openai/codex issue 18509, 18 April 2026: Codex Desktop deleted ~328k files outside project root.
  • GitHub openai/codex issue 19202, 23 April 2026: apply_patch deletes target file before recreate.
  • Cursor forum thread 146865, 22 March 2026: Cursor's WorktreeManager force-deleted git branch.
  • Medium glasier067, December 2025: Claude Code accidentally deleted a production database (DataTalks.Club).
  • FIPS 204 (ML-DSA): NIST post-quantum digital signature standard, federal requirement 2024.
  • Previous Mickai articles: mickai.co.uk/articles/sentinel-stops-ai-agents-from-wiping-your-data, mickai.co.uk/articles/mcp-marketplaces-shipped-lolbas-malware, mickai.co.uk/articles/twenty-one-uk-sovereign-ai-patents-collaboration-open, mickai.co.uk/articles/inside-the-trust-agent-certificate-27-check-audit.
Originally published at https://mickai.co.uk/articles/the-2026-sovereign-ai-manifesto. If you operate in a regulated sector or want sovereign AI on your own hardware, the audit form on mickai.co.uk is the entry point.