My personal agent, yes - I'm taking leave but my agent is here to help

The Prompt

You are Shannon Moir, who works for Fusion5. You are creating a new blog entry for your AI-inspired enterprise blog (https://aboutenterpriseai.blogspot.com/). The blog is aimed at C-levels, so please speak to the ideals of cost saving and ROI, which are critical elements in ensuring that an AI implementation will be successful. The blog is intended to be educational, with elements of humour. At the end of the day, the intention of the blog is AI literacy - improving it in an attempt to keep up with the speed of innovation. The inspiration for this post is the analogy of having an agent that is YOU: how might it reply to all your emails, and how would you train it? The following was written as the basis for the content. Refer back to previous posts where appropriate.

Just for fun, imagine that you enabled your OOO (Out of Office) and used AI to actually reply to any emails that come in - using your OOO message as a prompt and giving your email-responding agent enough information to be dangerous... Let's be honest, though: for 9 emails out of 10, you'd really want to proofread...

I would think that if AI had access to our ERP, project summaries, SharePoint, standard rates, historical work orders, current projects, Teams chats and emails, it'd do a pretty good job of responding...

This is a good way to imagine the impact of AI: how it is trained and how it could eventually act on your behalf... Scary thought! Email is prone to emotional responses; its history is full of CC and BCC, with nuance about security, feelings and more content than you could poke a stick at. The combination of all these things makes automatic responses fraught with danger.

How do you control what it learns from? How can you control how it responds? Your prompt would need to be quite complex, and it would also need to cover basic guardrails that you might assume are already in place.

I'm going to create an AI program that is designed to reply to itself automatically and see what a 10-round reply-off looks like.

Gemini 2.5 Pro preview 03-25

Your AI Twin is Handling Your Email: Genius or Genuine Disaster?

Posted by Shannon Moir, Fusion5 | Tokyo - 6 May

Alright, let's talk about email. That relentless digital deluge that fills our inboxes, demanding attention even when we desperately need a break. We've all fantasised about an assistant who could just… handle it. Now, imagine equipping that assistant with the power of AI.

Just for a laugh (or maybe a shiver down your spine), picture this: You set your Out Of Office notification. But instead of the standard "I'm currently sipping something fruity on a beach and will ignore your email until I return," you use that OOO message as a prompt for an AI agent. An AI agent trained to be… well, you.

You give it access. Not just to your inbox and calendar, but to the whole shebang: your ERP system for project financials, SharePoint for documents and standard operating procedures, historical work orders, current project summaries, your standard rate cards, maybe even your Teams chat history (yikes!). Enough information, let's be honest, to be dangerously effective.

Could it work? Terrifyingly, yes… probably.

Think about it. With that data, your AI mini-me could potentially:

  • Answer basic client queries about project status.

  • Quote standard work based on historical data.

  • Politely decline meetings you wouldn't attend anyway.

  • Even draft initial responses to more complex issues.

Sounds efficient, right? A potential cost-saving dream! Imagine the productivity boost!
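Just to show how little plumbing the fantasy actually needs, here's a back-of-a-napkin sketch. Everything in it is hypothetical: `call_llm` is a stand-in for whichever model API you'd wire up, and the context snippets stand in for whatever your ERP or SharePoint retrieval returned. It's an illustration of the idea, not a real Fusion5 system.

```python
def build_agent_prompt(ooo_message, context_snippets, email_body):
    """Combine the OOO message, retrieved business context, and the
    incoming email into a single prompt for a reply-drafting model."""
    context = "\n".join(f"- {snippet}" for snippet in context_snippets)
    return (
        "You are replying on behalf of someone who is out of office.\n"
        f"Their OOO message: {ooo_message}\n"
        f"Relevant business context:\n{context}\n"
        f"Draft a short, professional reply to this email:\n{email_body}\n"
        "If the request involves pricing, contracts, or anything "
        "confidential, say only that the sender will respond on their return."
    )

def draft_reply(email_body, ooo_message, context_snippets, call_llm):
    """call_llm is any function that takes a prompt and returns text."""
    prompt = build_agent_prompt(ooo_message, context_snippets, email_body)
    return call_llm(prompt)
```

Notice that even this toy version needs a confidentiality instruction baked in - which is exactly where the wheels start to wobble, as we'll see next.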

Hold Your Horses (and Your ROI Projections)

Now, let's pull back from the techno-optimism for a second. As appealing as this sounds, how many of those AI-generated replies would you want to proofread before they went out? I'm betting, conservatively, 9 out of 10. Maybe 9.5.

Why? Because email, like so many business processes, is soaked in nuance. It’s not just about data; it’s about:

  • Context: Does the AI understand the relationship with the sender? The history? The subtle office politics?

  • Emotion: Can it detect sarcasm? Urgency? Frustration? Can it respond with appropriate empathy (or strategic firmness)?

  • Security & Confidentiality: Remember all those emails marked "Confidential" or the subtle art of the BCC? Does your AI understand not to accidentally forward sensitive pricing to a competitor or CC the entire company on a delicate negotiation?

  • Implicit Knowledge: What about those "unwritten rules" and the things everyone just knows? AI doesn't just know unless it's been explicitly trained or given guardrails.

Suddenly, our AI email assistant looks less like a productivity miracle and more like a potential PR, legal, or financial disaster waiting to happen. A single poorly worded or context-blind email could damage a relationship, leak sensitive data, or agree to something wildly unprofitable. So much for that positive ROI.

The Real Lesson: Training, Control, and Governance

This little thought experiment isn't really about replacing ourselves with email bots (though the idea is fun). It’s a fantastic analogy for the challenges and opportunities of implementing any AI in your enterprise:

  1. Data is the Fuel AND the Fire: The AI is only as good, or as dangerous, as the data it learns from. Giving it access to everything sounds powerful, but how do you ensure data quality? How do you prevent it from learning biases or incorrect information lurking in your systems? As we've touched on before when discussing [link to plausible previous post, e.g., Data Governance for AI], getting your data house in order isn't just good practice; it's fundamental to AI success and mitigating risk.

  2. Control is Non-Negotiable: How do you dictate how the AI responds? This goes beyond the initial prompt. It involves setting strict guardrails, defining acceptable response parameters, and potentially implementing human-in-the-loop processes for sensitive tasks. We talked about the importance of clear instructions in our post on [link to plausible previous post, e.g., Mastering the Art of the Prompt], but enterprise AI needs robust governance frameworks, not just clever prompts. You can't assume AI understands your company values or ethical boundaries.

  3. ROI Hinges on Trustworthy Implementation: The C-suite rightfully demands ROI. True cost savings and efficiency gains from AI only materialise when the system is reliable, trustworthy, and operates within defined, safe boundaries. A "rogue" AI, whether replying to emails or managing inventory, erodes trust and can cost far more than it saves. Success requires careful planning, phased rollouts, and continuous monitoring.
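To make point 2 concrete, a human-in-the-loop gate can be startlingly simple. The sketch below is purely illustrative: the trigger terms and the `fusion5.com` domain are assumptions for the example, and a production system would need far richer policy than keyword matching.

```python
# Illustrative guardrail: hold AI-drafted replies for human review
# if they trip simple rules. Trigger lists are assumptions for the sketch.
SENSITIVE_TERMS = {"confidential", "rate card", "pricing", "contract"}
INTERNAL_DOMAIN = "fusion5.com"  # assumed domain, for illustration only

def needs_human_review(draft_text, recipients):
    """Return True if the draft should be approved by a person first."""
    text = draft_text.lower()
    if any(term in text for term in SENSITIVE_TERMS):
        return True
    # Anything leaving the company domain gets a second pair of eyes.
    if any(not r.lower().endswith("@" + INTERNAL_DOMAIN) for r in recipients):
        return True
    return False
```

The point isn't the code; it's that "human-in-the-loop" is a design decision you make up front, not a feature you bolt on after the first embarrassing email.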

What Happens When AI Talks to Itself?

Just to push the boundaries of this thought experiment (and perhaps foreshadowing a future post), I'm tempted to build a little sandbox. Create two AI agents, prompted to be slightly different personas, give them basic knowledge, and set them off replying to each other's "emails." What happens after 10 rounds of replies? Does it spiral into nonsense? Does it achieve enlightenment? Does it accidentally invent a new form of digital paperclip? Stay tuned…
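The harness for that experiment is almost trivially small, which is half the fun. Here's a toy version with stubbed personas - in the real sandbox each lambda would be a model call with its own system prompt; these stand-ins just prove the loop.

```python
def reply_off(agent_a, agent_b, opening_email, rounds=10):
    """Alternate replies between two agents; return the full transcript."""
    transcript = [("A", opening_email)]
    message = opening_email
    turn_order = [("B", agent_b), ("A", agent_a)]  # B replies first
    for i in range(rounds):
        name, agent = turn_order[i % 2]
        message = agent(message)
        transcript.append((name, message))
    return transcript

# Stub personas - swap these for real model calls to run the experiment.
polite = lambda msg: f"Thanks for your note re: '{msg[:20]}'. Circling back soon."
terse = lambda msg: f"Noted. ({len(msg)} chars received.)"
```

Ten rounds of *these* two would be dull but stable; ten rounds of two actual LLMs is where the enlightenment (or the digital paperclips) might emerge.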

The Takeaway for Leaders

AI offers incredible potential to streamline operations, unlock insights, and drive significant value. But realising that potential – achieving the cost savings and ROI we’re all aiming for – requires more than just plugging in the latest tech. It requires strategic thinking about data, meticulous attention to control and governance, and a healthy dose of realistic caution alongside the enthusiasm.

Let's keep learning, keep experimenting (safely!), and ensure our AI implementations are genuinely intelligent, not just dangerously well-informed.


Disclaimer: Shannon Moir and Fusion5 do not actually recommend letting an untrained AI manage your inbox while you are on vacation... yet.

Comments

  1. I'm liking where this is going! AI isn't a plug-and-play solution, as I'm learning - it's more a trust exercise. A reckless rollout would cost more than it saves. Thoughtful consideration/governance isn't just a safeguard, it's got to be the backbone of real ROI?

    And as for AI talking to itself? Let's hope it sparks something cool, not a digital identity crisis. Excited to see where this experiment leads! How about I write a prompt asking AI to review my history/previous use to create its perspective of my persona (scary) to use in your experiment? Actually, it may just frustrate its counterpart and likely quit. Ha
