- Perspectives on Power Platform
- Posts
- Fine. Let's call them agents then.
Fine. Let's call them agents then.
The rebranding of LLM based business software products will continue until adoption improves.
“My AI is better than your AI!!!”
It sends a strong “my dad is stronger than your dad” vibe when the CEO of Salesforce decides to post a rant about how Microsoft has rebranded Copilots as agents, in an alleged panic response to their AI based products failing to deliver an impact:
It’s a bit rich coming from yet another enterprise software company that’s high on it’s own AI. Like everyone else, Salesforce is desperately trying to combine large language models with structured business data, to produce something customers would be willing to foot the growing GPU bill for.
We know that Salesforce has always been more about the marketing show than any of its competitors. If they can’t find an angle where Salesforce can claim to be revolutionary and ahead of everyone else - you gotta fabricate something controversial. It does indeed smell a bit like trying to take control of a lost narrative.
Having said that, Marc Benioff does have a point with his poking at Microsoft AI branding. Because even many professionals who keep a close eye on the Microsoft product strategy find it exceptionally challenging to follow what’s happening on MS AI branding front. Referring to one comment I received when explaining the latest turn of events:
Let’s talk about agents then, from the perspective of Microsoft’s AI offering. Today I’ll discuss:
Branding
Capabilities
Security
Branding
In a move that surprised absolutely no one with experience on Microsoft product naming, the Copilot Wave 2 event back in September revealed the rebranding of a few AI products. Copilot for Microsoft 365 became Microsoft 365 Copilot. No big announcement from MS, yet we can see loads of updates pushed to MS Learn documentation as a result of this move:
Another subtle yet more wide ranging update took place beneath the product name layer. Whereas previously the message had been that with Microsoft Copilot Studio users can build their own custom copilots, that was now replaced with the term “agents” across the board. Again, MS doesn’t like to draw attention to what the earlier names have been, so we need to dive into the GitHub repo of MS Learn documentation to examine the impact:
So, now we’ll all be building “declarative agents”. Or “custom engine agents”. Soon, presumably, the much hyped “autonomous agents”, too. But will this autonomy be a new agent type alongside these other two, or yet another dimension that will introduce further matrix options to achieve productivity improvements in the AI wonderland?
I bet most information workers are already anxiously waiting when they’ll get the chance to “open Microsoft 365 Copilot BizChat and use declarative autonomous agents that leverage Graph connectors and Power Platform connectors to interact with business systems via plugin actions in real-time to deliver exceptional workday experiences”. It’s a spicy soup of technical concepts that combines many novel ingredients in a way that says absolutely nothing to most people. Don’t you miss the good ol’ days when we just had “an app for that”?
You’ll have to forgive me for not immediately realizing why do I need agents in my life? I do understand what business apps are for. I certainly know how automation can help get things done. We’ve had both of those in Power Platform - and already back in XRM. Yet the further this agent story evolves without me clearly seeing in real-life what’s the added benefit over apps and flows - the less excited I get about Copilot.
The amazing thing about the term “agent” is that Microsoft already had a product called Power Virtual Agents available before the AI agent boom began. Then, a year ago, it was rebranded into Copilot Studio - to support the extensibility story for Microsoft built Copilots. Now, we’re back into the land of agents. Same, but different.
I spend a lot of time chatting with ChatGPT every day. I know its limitations and I’ve discovered many valuable ways in which it can help me get work done. But then, once I switch to the Microsoft 365 Copilot app and start thinking about how to gain value from AI extensibility here - my mind goes blank. I just can’t identify those cases where I wish there was a copilot agent for that. I think I understand the potential of LLMs fairly well. What I don’t yet understand is what Microsoft will be able to deliver me when using LLM based agents.
Some people are criticizing AI features in today’s software as being a solution in search of a problem. The further we go along the road where enterprise software companies inject LLMs into everything they sell, the more I keep wondering: how many customers asked for this? This classic tweet about how Twitter back in the days was ignoring the real asks from users and instead rolling out meaningless updates captures the current situation so well:
“Likes are now florps.” “Copilots are now agents.” Okay, fine. Let’s move on.
Capabilities
This week we saw Microsoft announce ten autonomous agents inside Dynamics 365 products. What we in practice got at this point were marketing videos of how the features may look like once the preview versions land. I watched them all. They’re not bad. There’s potential to create helpful suggestions to users, based on the business data retrieved from the systems of record.
What confuses me is: what part of these features are handled by “agents”? Each video appears to contain custom user interfaces tailored for the specific Dynamics 365 app in question. Are these now a separate breed of Dynamics 365 agents that are not the same as our Copilot agents? From where can you use them? How do you customize and govern them? Are they the same for each user, or indeed a personal AI assistant, as Copilot is advertised to be?
During the past year, we’ve seen Copilots added to every MS business app product, as demanded by Microsoft senior leadership. This latest announcement still says that Copilot would be the interaction pattern for agents:
“We envision organizations will have a constellation of agents—ranging from simple prompt-and-response to fully autonomous. They will work on behalf of an individual, team, or function to execute and orchestrate business process ranging from lead generation, to sales order processing, to confirming order deliveries. Copilot is how you’ll interact with these agents.”
It bet Microsoft would love to claim that there’s “one Copilot, many agents”, in order to deliver a simple message about their AI strategy for the end users. Unfortunately, it doesn’t seem like we’re getting any closer to that kind of experience. Instead, we’ll get more & more visible Copilot features inside apps. Then, there are those secret agents hidden behind the UI, working on their given tasks autonomously.
Let’s look at the Customer Intent and Customer Knowledge Management Agents, available for Microsoft Dynamics 365 Customer Service and Microsoft Dynamics 365 Contact Center. In the video we see a perfectly rational story about the ways in which LLM can try to figure out what problem the customer has, search for a suitable solution from internal data sources, and generate lots and lots of text for both the customer, the service rep and the screens of the CRM system.
Dynamics 365 Customer Service workspace, now with agents.
Count the number of Copilot icons in the screenshot above. I can spot it at least 4 times.
Is there a single Copilot that’s aware of each of those features and the data context? How does the Copilot on the left side with its customer facing chat deal with the user facing Copilot chat on the right side? Which Copilot generated the new case record from the conversation as said in the top banner? Where’s the Copilot that summarizes the activity records in the Timeline control at the bottom? Does this all get logged somewhere?
In practice, it appears that the Copilot icon today means largely the same thing as the “sparkling magic” button that can be found across most applications in the year 2024. The universal solution - that you simply have to trust.
Applications like Dynamics 365 Customer Service used to be built for human agents. Now, every field on the forms and sidebars could potentially be filled with AI generated text. We have the same core CRM systems in the background, built for tracking customer communication and managing processes related to things like support tickets or sales leads. Only now, it’s the AI agents that are primarily reading and writing the data. The human agent will only be requested to sign off on what the machine is doing. “By clicking on the Resolve Now button, I hereby confirm that whatever the LLM wrote into the case resolution dialog looks credible enough.”
I’ve written about the copilotization of CRM half a year ago. At that point, the Copilot form fill assistance was generating mostly distracting garbage. It was enabled by default by Microsoft, which lead to annoyed customers sharing tips on social media on how to turn it off. In my own small CRM system I left it on, just to see if it improves over time. Today, whenever I create new accounts and contacts, I still get annoyed that Copilot suggests everyone’s country to be Finland. Luckily, it no longer suggests random phone numbers for them.
6 months later, we are now asked to believe that this technology has evolved at an amazing speed. We hear it is time to let Copilot / Dynamics 365 agents take control of tedious manual processes. To let them work autonomously for us, in real customer facing processes. After all, this is what Marc from Salesforce is claiming the true meaning of AI to be for businesses that will get to use Agentforce. Generally available tomorrow.
At this point it’s only fair to ask a question: is it likely that there’s been a technological breakthrough in the area of generative AI? Did both Microsoft and Salesforce discover the secret recipe for turning the ultimate bullshit generator into a virtual employee that can be trusted to do the work of human employees? No, the answer is that this is not at all likely.
LLMs do next token prediction. No one knows yet why they are so good at many things as a result of this. We can ask them to do all sorts of wild things, beyond just producing text into a chat window. Like completely taking over your computer screen and trying to order a pizza for you:
Agent.exe is a 6 hour hobby project shared on GitHub. Copilot agents and Agentforce, on the other hand, are the all-in bet from the worlds largest business software corporations. Developers are obviously warning people not to let something like Agent.exe run unattended, let alone allow it to access your user credentials or payment information. Microsoft and Salesforce, however, are asking companies to use their credential sharing as a service low-code platforms to apply these new agents in processes related to financial transactions.
Sometimes when reading the tech announcements around exciting new AI capabilities, I get the feeling as if I am being shown the flashing neuralyzer. The futuristic device from Men in Black movies that is used for erasing specific memories of people who witness alien activity or other classified events that the MiB agents must protect the public from. Only this time it’s about forgetting what I’ve seen GenAI do before.
Security
Even if the marketing claims for AI agents wouldn’t yet meet the real capabilities or readiness of this technology, there’s no point in trying to stop AI from happening. It is already deeply integrated into everyday tools we use at work and at home. As a result, many of the AI policies designed by organizations in isolation from these tools are already obsolete. The decision of whether or not to use AI in some task is not going to be based on the individual user navigating to chatgpt.com anymore. It’s built into our phones, office applications, everything.
What we then should rather focus on is how to make sense of it all - and how to reduce the risks. Just like no one knows exactly how LLMs produce the results they do, there’s no established practices yet on how to govern the AI tools at work. This is a journey full of experimentation and reflection. The more broadly it is shared out in the open, the faster we as a community can learn.
For Microsoft customers taking control of the new AI tools, there are going to be a lot of similarities to how the usage of low-code/no-code solutions built with Power Platform has been secured and governed in a scalable way. MS envisions every business user creating their own agents in the future, just like Power Apps and Power Automate empowered citizen developers to build better tools for their everyday needs. Many technical elements, like connectors and environments, will be the same, whether you are building low-code apps or Copilot agents.
There are brand new threats to analyze. Processing large amounts of unstructured text is clearly an area where LLMs excel in. However, they suck at being able to tell apart which text is an instruction from the legitimate user - and which part has possibly been injected by a malicious actor. Prompt injection allows attackers to take over your Copilot. Given how many internal tools we will allow AI agents to access in the future, the implications are very similar to remote code execution (RCE).
Autonomy is a huge step for AI agents to take. The more we apply them into business scenarios that involve taking in text from various external sources, the more options we give to the bad guys for capturing our agent in the field of duty. I get the feeling that we’re going to need an army of agents focused on especially fighting these forces of evil, in this rapidly evolving digital landscape (pun intended).
Agents can only ever learn to become more reliable if they get enough realistic training data. Where does this data come from? That is the interesting dilemma for the likes of Microsoft and Salesforce. On one hand, they are sitting on top of massive amounts of business data - owned by their customers. On the other hand, the reason why customers would rather use a service by MS than by OpenAI is their expectation of trust. That the data will not be used for training purposes.
We’ve recently seen MS introduce optional data sharing for Copilot AI features in Dynamics 365 and Power Platform. Today, it is off by default. There aren’t direct benefits for customers to enable this data sharing.
What will happen in the future? It is entirely possible that the providers of the coming agentic workforce, like Microsoft and Salesforce, will need to have deeper visibility into what data their agents are working with. Training the future models that can truly meet the expectations built by AI product marketing today is going to need data at a massive scale.
This is one reason why the tech providers must be so eager to get the products out there as early as possible. So that they can start capturing these raw materials of the AI age into their own systems - and not that of a competing platform vendor.
Reply