AI business coach, or When Your Sat-Nav Has Feelings: Choosing Between AI Assistants and AI Agents
My prompt
Write a blog entry as if you are Shannon Moir, who is the Director of AI for Fusion5. At a high level, the blog entry is about using AI to make decisions (as an agent) or to assist the decision-making process. The blog should make some attempts at being humorous, but not to the detriment of content. It should use Australian spelling. This should be a thought-provoking piece. It needs to define assistant vs agent, and the situations where either of these is beneficial. More basic decisions, those based on precedent, should be more the arena of the agent, whereas assistants still need some human intervention. An example to use: imagine facing some tough decisions at work; they get complex because there are features of your decision-making process that should not be part of it. We use emotions, for example, and we try to predict people's responses - AI does not do that. It acts by simply predicting the next most probable action based upon the information that it has been given. We are humans and we are incredibly complex. Here's the thing, we are more complex than AI - yes, that's right.
Could you benefit from dumbing down the decision process by asking AI what it thinks? Yes, you could, because it might make you list all of the important things that you should be factoring into the decision... That is handy, and then the AI makes the decision based upon the inputs. And, in awesome, predictable AI fashion, it just tries to guess what comes next...
My analogy is the use of Google Maps for directions. You could use your history, your local knowledge, the people in the back seat - everyone - to choose the best way of getting from A to B, but let's be honest, Google Maps is going to be better most of the time. Unless there is a parameter that Google does not know about (e.g. you are driving a semi-trailer), in which case Google will not give you the correct answer... Funny thing is, the use of AI is the same. If there is a massive fact that will change the decision and you do not reveal it to the AI, it will not make the right decision for you.
Close out the blog entry by acknowledging that AI is trained on historical events and will act in a predictable manner (which should be auditable too). There is still a huge space for humans making decisions, because there is not enough fast data for all situational and immediate decision-making. AI can definitely help with preparation for making the decision, scaffolding the decision, and documenting it.
The following was written by ChatGPT o2, and it's way better than I expected!
When Your Sat-Nav Has Feelings: Choosing Between AI Assistants and AI Agents
By Shannon Moir, Director of AI, Fusion5

The existential dread of Tuesday-morning decisions
Picture it: the inbox is a dumpster fire, the coffee machine’s gone on strike, and you’ve got to decide whether to fast-track a project or risk an irate board meeting. Our brains, marvellous lumps of grey matter that they are, want to juggle logic and feelings and Bob-from-accounts’ passive-aggressive comments. Meanwhile, our friendly neighbourhood AI just wants to predict the next most probable token in a sentence.
Surprisingly, that difference is gold.
Assistants vs Agents (no, they’re not the same Pokémon evolution)
| | AI Assistant | AI Agent |
|---|---|---|
| Role | Co-pilot, sounding board, shoulder to (virtually) cry on | Autopilot for repeatable tasks |
| Decision Style | “Here’s my take—do you agree?” | “Done. Next?” |
| Human Involvement | High—final call is yours | Low—monitor, tweak, audit |
| Best For | Nuanced, novel, political, people-heavy choices | Precedent-rich, rules-based, speed-critical choices |
Think of the assistant as the nerdy mate who brings spreadsheets to a pub trivia night. Great insight, but you still choose the schnitzel special. The agent is the self-service checkout: tap-tap-beep, job done—unless your watermelon won’t scan.
Why “dumbing it down” can make you smarter
Humans are gloriously irrational. We wrestle with gut feelings, legacy politics, and that nagging worry that Karen in marketing will roll her eyes. Getting an AI assistant to walk you through a big call forces you to list every factor explicitly—no muttering “it’s complicated” and hoping for the best.
Once the problem is framed clearly, a well-trained agent can crunch the precedent and spit out a recommendation faster than you can say “expense approval workflow.” Yes, the model is “just guessing” the next best move, but guess what? So are we—only with bias, caffeine, and limited RAM.
The Google Maps litmus test (and why trucks matter)
We all trust Google Maps to guide us across town because its rules are obvious: distance, traffic, maybe a sneaky toll road. Enter the secretly-driving-a-semi-trailer scenario. If you don’t tell Maps you’re 20 metres long, it’ll send you under a 3-metre bridge and into tomorrow’s newspaper.
AI decision tools are no different. Leave out a crucial constraint—regulatory nuance, a flaky supplier, or the warehouse floor’s tendency to flood on Thursdays—and the agent will cheerfully optimise you straight into disaster. Garbage in, jack-knifed B-double out.
When to hand over the steering wheel
- **High volume, low drama:** Purchase-order triage, credit-risk thresholds, routine policy compliance. Let the agent churn.
- **Grey zones and grapevines:** M&A rumours, talent-retention plans, brand-new markets? Keep the assistant handy and your sleeves rolled up.
- **Hybrid heaven:** Let an agent shortlist options, then use an assistant to debate the trade-offs before you, the carbon-based life-form, sign on the dotted line.
Predictable, auditable… and still not omniscient
Remember: every model is a historian, not a clairvoyant. It trains on yesterday, behaves predictably today, and can be audited tomorrow. That’s comforting—but it also means the bleeding-edge, never-seen-before dilemma will still need your uniquely human cocktail of judgement, context, and occasional wild hunch.
So give AI the prep work: collate facts, weigh precedents, draft the decision log. Then step in, add the missing human variables (politics, ethics, the CEO’s pet project), and own the call.
Final thought (preferably over a flat white)
AI isn’t here to steal your job; it’s here to steal your least favourite parts of the job. Use assistants to clarify and agents to automate, and keep that last metre of judgement firmly in human hands. After all, if the route turns out dodgy, it shouldn’t be the algorithm apologising in front of the board—it should be me, coffee in hand, saying “Well, that’s one for the post-incident playbook.”
Happy navigating!