Intelligence Isn't Data. Stop Architecting It Like Data.
The Ephemeral Intelligence Layer
Post 1 of 4
Enterprise Architecture Archetype

Every previous architecture era answered "where does our data live?" The intelligence layer answers a different question: where does your organisation think, and who governs what thinks for you?

Shannon Moir
Director of AI, Fusion5 · April 2026
Core Thesis

Your AI strategy should start with one question: where does our organisation think, and who's responsible for making sure it thinks well?

Every era of enterprise architecture answered the same question: where does our data live?

Databases answered with rows and tables. Warehouses with consolidated reporting. Data lakes with "keep everything, sort it later." Lakehouses with "keep everything, but make it queryable." Each layer was persistent: build it once, maintain it, query it.

The intelligence layer breaks the pattern. It doesn't store things. It reasons about things. And reasoning is ephemeral.

The five eras of enterprise architecture
- Database: rows & tables (persistent)
- Warehouse: consolidated reporting (persistent)
- Data lake: keep everything (persistent)
- Lakehouse: queryable lake (persistent)
- Intelligence: reasons, not stores (EPHEMERAL)

Every previous layer was persistent. Intelligence is not.

A data warehouse holds last quarter's revenue figures. Those figures don't change. They sit in a table, indexed and immutable, waiting to be queried.

Intelligence doesn't work that way. The model that generated last week's customer recommendation may have been retrained. The regulatory environment that shaped a compliance check shifted when new legislation landed. The agent that routed a service ticket learned from 10,000 new interactions since Tuesday.

Intelligence is ephemeral. Models change. Data shifts. Regulations evolve. Agents retrain. Context expires. And yet every enterprise conversation about AI architecture treats it like another data layer: something to be built, deployed, and left running. That assumption will cost organisations years of rework.

The sandwich model

The intelligence layer sits between two bookends.

TRUST · RISK · COMPLIANCE ENVELOPE — wraps all three layers

Consumption Layer — where value exits: decisions, automated actions, artefacts, dashboards, recommendations
  Components: Decisions · Automated Actions · Artefacts · Human-in-the-Loop
    ▲ consumes from
Intelligence Layer (EPHEMERAL) — where the organisation reasons: models, agents, knowledge synthesis, context, memory. Computed, not stored.
  Components: Model Router · Agent Orchestration · Skills · Memory & Context · AOC
    ▲ feeds into
Feed Layer — everything the organisation ingests: applications, data, events, knowledge, consumption signals, BYOAI
  Components: App Connectivity (MCP) · Structured Data · Event Streams · RL Feed · BYOAI

The Feed Layer is everything the organisation ingests: application data via APIs and MCP connectors, event streams, document repositories, knowledge bases, third-party research, regulatory updates. It also includes the signals flowing back from how AI outputs are consumed (more on that in Post 4).

The Consumption Layer is where value exits: decisions made, actions automated, artefacts generated, dashboards rendered, recommendations surfaced. Every board paper, every automated workflow, every customer-facing recommendation passes through here.

The Intelligence Layer sits between them. It houses your agent workforce, your semantic index, your model routing, your prompt orchestration, your skills catalogue, and your organisational memory.
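To make the model-routing component concrete, here is a minimal sketch of a router that resolves a task to whichever model is currently approved and stamps the call with the version that answered. The registry, model names, and `RoutedCall` fields are all hypothetical illustrations, not a prescribed design; the point is that the same task can route to a different model tomorrow, which is exactly the ephemerality the layer has to track.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# Hypothetical registry: task type -> currently approved model.
# In a real intelligence layer this would be backed by a model
# catalogue that changes as models are retrained or retired.
MODEL_REGISTRY = {
    "summarise": "summariser-v7",
    "classify": "risk-classifier-v3",
}

@dataclass
class RoutedCall:
    task: str
    model: str      # which model actually answered
    routed_at: str  # when — the same task may route differently tomorrow

def route(task: str) -> RoutedCall:
    """Resolve a task to today's approved model and stamp the call."""
    model = MODEL_REGISTRY[task]
    return RoutedCall(task, model, datetime.now(timezone.utc).isoformat())

call = route("summarise")
print(call.model)  # summariser-v7 — until the registry changes
```

Because every call carries the model version and timestamp, a recommendation generated last week remains explainable even after the model behind it has been retrained.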

Unlike every previous layer, governance wraps the intelligence layer structurally. Trust, Risk, and Compliance aren't bolted on after deployment. They're the architectural reason you build a layer at all.

Governance is the feature, not the constraint

You built the warehouse, then added access controls. You built the lake, then scrambled to classify what was in it. Every previous data layer treated governance as an afterthought.

The intelligence layer inverts this.

Why governance is structural

When an agent recommends a pricing decision, you need to know: which model generated it, what data informed it, whether it complied with your pricing policy, whether it's been audited, and who's accountable if it's wrong. These aren't "nice to have" controls. They're the architectural reason you build a layer rather than letting every team deploy models independently.
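Those five questions can be sketched as a provenance record that travels with each decision. Everything below, from the field names to the `is_releasable` gate, is a hypothetical illustration of the idea, not a prescribed schema.

```python
from dataclasses import dataclass

@dataclass
class DecisionRecord:
    """Provenance for one AI-assisted decision (illustrative fields only)."""
    decision_id: str
    model_version: str           # which model generated it
    data_sources: list[str]      # what data informed it
    policy_checks: dict[str, bool]  # e.g. {"pricing-policy-2026": True}
    audited: bool                # has it been reviewed?
    accountable_owner: str       # who answers for it if it's wrong

def is_releasable(rec: DecisionRecord) -> bool:
    """A decision leaves the layer only with full provenance:
    every policy check passed, audit done, owner named."""
    return (all(rec.policy_checks.values())
            and rec.audited
            and bool(rec.accountable_owner))

rec = DecisionRecord(
    decision_id="px-0192",
    model_version="pricing-model-v12",
    data_sources=["sales.q3", "competitor-feed"],
    policy_checks={"pricing-policy-2026": True},
    audited=True,
    accountable_owner="head-of-pricing",
)
print(is_releasable(rec))  # True
```

The design choice worth noting: the gate is structural. A decision without a named owner or a passing policy check simply cannot exit the layer, rather than being caught (or missed) by a downstream review.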

When a consultant uses an AI skill to draft a customer proposal, you need to know: is that skill registered, has it been assessed against your standards, and does it have access to the right (and only the right) enterprise data?

The layer exists to give AI capabilities a governed home.
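A governed home can be as simple as a registry that answers those three questions before a skill runs: is it registered, has it been assessed, and is it scoped to the right data? The registry shape, skill name, and data-set labels below are hypothetical.

```python
# Hypothetical skill registry: every AI skill a consultant can invoke
# must be registered, assessed against standards, and scoped to named
# data sets before it touches enterprise data.
SKILL_REGISTRY = {
    "draft-proposal": {
        "assessed": True,
        "allowed_data": {"crm.accounts", "pricing.approved"},
    },
}

def authorise(skill: str, requested_data: set[str]) -> bool:
    """Allow a skill only if it is registered, assessed, and asking for
    the right — and only the right — enterprise data."""
    entry = SKILL_REGISTRY.get(skill)
    if entry is None or not entry["assessed"]:
        return False  # unregistered or unassessed skills never run
    return requested_data <= entry["allowed_data"]

print(authorise("draft-proposal", {"crm.accounts"}))  # True
print(authorise("draft-proposal", {"hr.salaries"}))   # False: out of scope
```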

What this means for your next architecture conversation

If your AI strategy starts with "which models should we deploy?" you're solving the wrong problem. Models are components. The intelligence layer is the system that governs how they reason, what they access, and who's accountable for their outputs.

You can't build it once and walk away. It's ephemeral by design, requiring continuous governance rather than periodic review.
