Minimum Agentic Requirements. The End of Hype
Last year, I was still putting the pieces together. Not dismissing AI — but testing where it genuinely excels and where it falls flat. Finding the real capabilities behind the noise, and figuring out how to retrofit and transform processes to map to them. This year, one person ships production software 24/7 — no dev team, no sprint ceremonies, no handoffs. Here is the architecture that makes it real.
The shift nobody prepared for
Twelve months ago, I was not dismissing AI. I was pulling it apart. The landscape was a wall of noise, so I did what any engineer does: I tested it. I found what AI is genuinely amazing at and started retrofitting and transforming real processes to map to those strengths.
Then something clicked.
Not a prototype. Not a hackathon toy. A production application — web-hosted, multi-user, handling complex government and enterprise bids worth millions of dollars. It processes hundreds of pages of RFQ documents, runs AI analysis, drafts full bid responses, fills compliance matrices, and exports submission-ready documents.
The development team? One person. Me. An ex-developer turned Director of AI. I understood cloud — I was an AWS architect. I understood security, integration, what good software looks like. But my modern dev skills had rusted, and I did not have the time or the team to turn my ideas into something others could use.
The AI agent writes the code. It fixes the bugs. It implements the enhancements. It deploys to production. I review, approve, and steer. That is the entire workflow.
There is no hype anymore. There is a production system processing real bids, with real users, delivering real commercial outcomes — built and maintained by one person coordinating AI agents. The question is no longer if this works. It is whether your organisation is ready for what it means.
From handoff to orchestration
The traditional model is familiar: a functional expert understands the business problem, writes requirements, and hands them to a development team. Weeks pass. Meetings happen. Things get lost in translation. The delivered product is close, but not quite right. Another sprint. Another handoff.
| Aspect | Traditional | Agentic |
|---|---|---|
| Who builds | Dev team (3–8 people) | AI agent + 1 human |
| Who decides | Product owner → dev lead → developer | Human decides, agent executes |
| Feedback loop | Sprint review (2–4 weeks) | Minutes. Literally. |
| Build hours | Business hours, minus meetings | 24/7. Agent does not sleep. |
| Bug resolution | Ticket → triage → assign → fix → PR → review → deploy | Auto-detected → auto-fixed → human approves → deployed |
| Domain fidelity | Lost in translation between roles | Zero translation loss. Builder IS the domain expert. |
The functional expert does not hand off anymore. They orchestrate. They have a 24/7 build capability that never misinterprets a requirement because they are the requirement. The person who understands the problem is the person directing the solution — in real time, with no intermediary.
Minimum Agentic Requirements
Reinforcement learning gets the headlines. But production agentic systems need more than model capability. They need infrastructure — a closed loop where humans provide direction, agents execute, errors self-report, and improvements flow back automatically.
I call this the Minimum Agentic Requirement (MAR) — the baseline infrastructure that turns an AI-assisted workflow into a genuinely autonomous development lifecycle.
- **Self-Reporting Error Detection.** The application must detect its own failures and log them as actionable items. Not buried in log files. Structured, contextual, auto-triaged. When a 500 error hits, the system writes its own bug report — endpoint, payload, stack trace, timestamp — ready for an agent to fix.
- **Human-Accessible Enhancement Pipeline.** Users must be able to request changes from inside the application. Not Jira. Not email. A feedback button that writes directly to the enhancement backlog. The people using the system are the people shaping it — with zero friction.
- **Automated Triage & Impact Assessment.** Incoming bugs and enhancements are automatically assessed by multiple AI models. Risk level, affected areas, implementation complexity — evaluated before a human even looks at it. Multi-model consensus removes single-model blind spots.
- **Agent-Driven Implementation.** The AI agent reads the enhancement, explores the codebase, designs an approach, and writes the code. Not autocomplete. Not suggestions. Full implementation — models, API routes, frontend, prompts, tests. The agent understands the architecture because it built the architecture.
- **Human-in-the-Loop Approval.** The human reviews the plan before execution. Chooses between approaches. Rejects what does not fit. This is not optional — it is the governance layer. The agent proposes, the human disposes. Every deployment has a conscious decision behind it.
- **Automated Build, Deploy & Verify.** Once approved, the pipeline handles the rest. Container build. Registry push. Zero-downtime deployment. Automated health checks. The human does not SSH into servers. The human does not run deploy scripts. The pipeline is the last mile.
- **Continuous Feedback Digest.** A scheduled process reviews the entire backlog daily, summarises status, and delivers it to the human. Not a dashboard to check. A push notification. The system keeps the human informed so the human can keep the system moving.
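The first requirement is the easiest to make concrete. A minimal sketch of a self-written bug report, using only the standard library — the function name and field layout here are my own illustration, not the production implementation:

```python
import json
import traceback
from datetime import datetime, timezone

def build_bug_report(endpoint: str, payload: dict, exc: Exception) -> dict:
    """Turn an unhandled failure into a structured, agent-readable bug report."""
    return {
        "type": "bug",
        "status": "new",                  # ready for automated triage, no human filing needed
        "endpoint": endpoint,
        "payload": payload,               # request context an agent needs to reproduce the failure
        "error": repr(exc),
        "stack_trace": traceback.format_exc(),
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }

# In a web framework this would hang off the global error handler;
# here we simulate a 500 by catching a failure directly.
try:
    raise ValueError("compliance matrix row missing")
except ValueError as exc:
    report = build_bug_report("/api/bids/export", {"bid_id": 42}, exc)

print(json.dumps(report, indent=2))
```

The point is the shape, not the framework: every failure leaves behind a record complete enough that an agent can reproduce and fix it without a human writing a ticket.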
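Multi-model consensus triage can also be sketched briefly. The model functions below are stand-ins — in production each would call a different LLM API — and the two-field assessment is a deliberate simplification:

```python
from collections import Counter

# Stand-ins for real model calls; each would hit a different LLM in production.
def model_a(item): return {"risk": "low", "complexity": "small"}
def model_b(item): return {"risk": "low", "complexity": "medium"}
def model_c(item): return {"risk": "medium", "complexity": "small"}

def triage(item: dict, assessors=(model_a, model_b, model_c)) -> dict:
    """Ask several models independently, then take the majority view per field."""
    votes = [fn(item) for fn in assessors]
    consensus = {}
    for field in ("risk", "complexity"):
        counts = Counter(v[field] for v in votes)
        consensus[field] = counts.most_common(1)[0][0]
    return consensus

assessment = triage({"title": "Export fails on empty pricing table"})
print(assessment)  # majority vote per field: risk=low, complexity=small
```

Because each model votes independently, one model's blind spot on risk or complexity is outvoted rather than propagated.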
The proof: BidAssist
This is not theory. Every MAR above is running in production today on a project called BidAssist — a bid management portal built for Fusion5.
BidAssist processes complex government and enterprise RFQs. It ingests hundreds of pages, extracts requirements, runs strategic analysis, and generates complete bid responses — including XLSX compliance matrices, DOCX response documents, and pricing narratives.
But here is what makes it truly different: the project is completely self-aware. It understands its own architecture, its own codebase, its own conventions. It is its own archetype. Ask it to produce a sales brochure for itself — it can. A technical training guide for new users — it can. An implementation guide for deployment — it can. This blog post? Written with full knowledge of every line of code, every enhancement shipped, every design decision made. The system does not just run — it knows what it is.
The agentic lifecycle runs in seven steps with three actors: human, agent, automation. The human touches it twice: once to approve the plan, once to verify the result. Everything else is autonomous.
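The lifecycle described above reduces to a short control loop. Here is a skeletal version with stub actors — the step names follow the MAR list, but every interface shown is illustrative, not BidAssist's actual code:

```python
def run_lifecycle(item, agent, human, pipeline):
    """One pass of the agentic loop: triage, plan, approve, implement, deploy, verify."""
    assessment = agent.triage(item)            # automated triage & impact assessment
    plan = agent.plan(item, assessment)        # agent designs an approach
    if not human.approve(plan):                # human touch #1: approve or reject the plan
        return "rejected"
    change = agent.implement(plan)             # agent writes the full implementation
    pipeline.build_and_deploy(change)          # build, push, zero-downtime deploy
    return "shipped" if human.verify(change) else "rolled-back"  # human touch #2

# Stubs so the loop can run end-to-end here.
class StubAgent:
    def triage(self, item): return {"risk": "low"}
    def plan(self, item, assessment): return f"plan for {item}"
    def implement(self, plan): return f"diff implementing {plan}"

class StubHuman:
    touches = 0
    def approve(self, plan): StubHuman.touches += 1; return True
    def verify(self, change): StubHuman.touches += 1; return True

class StubPipeline:
    def build_and_deploy(self, change): pass

result = run_lifecycle("feedback #101", StubAgent(), StubHuman(), StubPipeline())
print(result, "- human touched the loop", StubHuman.touches, "times")
```

Notice where the human appears: only at the approval gate and the final verification. Everything between those two calls can run unattended, around the clock.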
What this means — the uncomfortable truth
The system I built — with an AI agent as my entire development team — is in production, processing real bids, with real users, delivering real outcomes. It handles multi-document intake, strategic analysis, section-by-section response generation, XLSX template filling, hierarchical section management with team assignment, and automated deployment.
This is the uncomfortable truth:
The bottleneck was never the code. It was the distance between the person who understands the problem and the person who builds the solution. Agentic development eliminates that distance entirely.
This does not mean developers are redundant. It means the role is changing. The functional expert who can coordinate agents will outperform a team that cannot. The developer who understands agentic architecture will build systems that a solo developer never could. The skill that matters now is orchestration.
What the functional expert gains
- 24/7 build capability — The agent works when you sleep. Literally.
- Zero translation loss — No requirements documents. No misinterpretation. You describe what you want and the agent builds it.
- Instant iteration — See a problem, describe it, watch it get fixed. Feedback loops measured in minutes, not sprints.
- Focus on what matters — The rewarding cognitive work — strategy, customer conversations, solution design — stays with the human. The mechanical work moves to the agent.
What the organisation gains
- Speed — 17 enhancements shipped in weeks, not quarters.
- Cost — One person instead of a team. No sprints, no standups, no velocity tracking.
- Domain fidelity — The builder is the domain expert. The solution fits because the person who knows the problem directs every decision.
- Morale — People doing the interesting work. Not writing status reports. Not sitting in refinement sessions. Actually solving problems.
Start here
You do not need to rebuild everything. You need to build the loop. Start with these three things:
- Put a feedback button in your application. Not a ticketing system. A button that writes structured data to an API your agents can read. Make it frictionless for humans to request change.
- Auto-report every failure. When your application errors, it should write its own bug report. Context, stack trace, timestamp. An agent can fix what an agent can read.
- Close the loop with a pipeline. Approved changes should flow to production without manual steps. Build, deploy, verify. The human approves the what. The pipeline handles the how.
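The feedback button's backend can be smaller than it sounds. A minimal sketch, assuming a JSON Lines file as the backlog store (the function name, fields, and file path are all illustrative):

```python
import json
from datetime import datetime, timezone
from pathlib import Path

BACKLOG = Path("backlog.jsonl")   # one JSON object per line; trivial for an agent to parse

def submit_feedback(user: str, text: str, kind: str = "enhancement") -> dict:
    """What the feedback button's endpoint would do: no ticket form, just structured data."""
    item = {
        "kind": kind,
        "user": user,
        "text": text,
        "status": "new",
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }
    with BACKLOG.open("a", encoding="utf-8") as f:
        f.write(json.dumps(item) + "\n")
    return item

submit_feedback("j.doe", "Export the compliance matrix as CSV as well as XLSX")
items = [json.loads(line) for line in BACKLOG.read_text(encoding="utf-8").splitlines()]
print(len(items), "item(s) in the backlog")
```

The format matters more than the storage: as long as every request lands as structured, timestamped data, any agent can read the backlog and start working it.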
Everything else — multi-model assessment, daily digests, hierarchical planning — is optimisation. The loop is the foundation. Once it exists, the system improves itself faster than you can plan improvements.
Twelve months ago I was still figuring out where AI truly fits. Today it is the single most significant shift I have experienced in twenty years of technology. Not because of what the models can do — but because of what one person can do when they stop handing off and start orchestrating.
The scary reality? It is just one person. And that is enough.
