After the TADSummit panel discussion, “Are LLMs about to disrupt enterprise SaaS (Software as a Service)?”, Paul Sweeney wanted to have a deeper discussion on Agentic AI.
To recap the conclusions from the SaaS/LLM discussion:
- SaaS will evolve to include AI/LLMs
- LLMs are about to make enterprise SaaS even easier to set up and use.
- We’re entering a massive wave of automation through LLMs used in SaaS
- X does not replace Y; rather, together they enable us to do more.
- AI models are going to change. We’re in a period of constant innovation, every month there’s something new. AI is a constant journey, not a product release, which is a challenge for the whole industry.
- Props to Mark Zuckerberg in open sourcing LLMs, which has changed the industry. We can avoid OpenAI ruling over industries and not worry about them going bust.
- Control/rationale and orchestration will get wrapped up into LLMs; it’s going to get even easier.
- There’s no point specializing, as the technology is developing too fast and we’re only a couple of years in. The exception is where you can take advantage of language issues and local politics, e.g. in France.
This podcast brought together Paul Sweeney of Webio, Kane Simms of VUX World (here’s the link to the vux.world podcast), and Steve Tannock of Fuel iX (Telus Digital) to discuss Agentic AI. I think of Fuel iX as CPaaSAI, an agent that runs at scale on telecom infrastructure. We finish the podcast on what I initially thought was a stretch of reasoning in Agentic AI, but by the end of the discussion it most definitely is on the horizon.
We begin by defining what Agentic AI is: simply software that uses generative AI to solve problems and carry out tasks free from human intervention; AI agents. Terms like autonomous AI also fall into this category. It’s software that gets work done, based on its design (learning, prompts, and guard-rails). Steve had a nice analogy to gaming: the AI in a game operates within closely guarded rails, while agentic AI has much more freedom to act within a relatively open environment.
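Steve’s gaming analogy can be sketched in code. The contrast below is a minimal illustration, not any product’s implementation: a rule-based bot maps inputs to fixed responses, while an agent lets a model choose its next action within a capped loop. The `llm` callable, `tools` dictionary, and action format are all hypothetical assumptions.

```python
def rule_based_bot(message: str) -> str:
    """Guard-railed: every input maps to a predefined response."""
    rules = {
        "hours": "We are open 9-5.",
        "refund": "Refunds take 5-10 business days.",
    }
    for keyword, reply in rules.items():
        if keyword in message.lower():
            return reply
    return "Sorry, I can't help with that."

def agent(goal: str, tools: dict, llm) -> str:
    """Agentic: the model decides which tool to call until the goal is met."""
    history = [f"Goal: {goal}"]
    for _ in range(5):  # guard rail: cap the number of steps
        action = llm(history)  # model picks the next action
        if action["name"] == "finish":
            return action["result"]
        result = tools[action["name"]](**action["args"])
        history.append(f"{action['name']} -> {result}")
    return "Gave up after 5 steps."
```

The step cap is itself a guard rail: even an open-ended agent needs some bound on its freedom to act.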
We then move onto use cases. Paul gave an example of summarization, which we’ve discussed previously at TADSummit, in particular the importance of purpose: why are you summarizing? Paul gave an example of using summarization to produce a label for a conversation, putting it in a category. He then highlighted that a summary can go beyond what is said to include all the metadata, for example emotion, reading age, stage in a process, etc. And that brought us to the importance of context.
Kane then explains that Otter.ai provides a generic summarization tool, whereas if it were tuned to a specific scenario, say project management in the construction industry, the summary would be much more useful. Steve also explains that interpretation is important; just take the word Vancouver, we’ll all say it in a different way. Getting such a label tagged correctly matters, because it’s PII (Personally Identifiable Information).
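The purpose-driven labeling Paul and Kane describe can be sketched as a prompt contract: rather than asking for a generic summary, we ask the model for a category plus the metadata the business cares about. The `llm` callable, category list, and JSON keys below are illustrative assumptions, not any vendor’s API.

```python
import json

# Hypothetical category set for a support dashboard.
CATEGORIES = ["billing", "delivery", "cancellation", "other"]

def label_conversation(transcript: str, llm) -> dict:
    """Ask the model for a label plus metadata, not just a summary."""
    prompt = (
        "Summarize the conversation for a support dashboard.\n"
        f"Pick one category from {CATEGORIES}.\n"
        "Return JSON with keys 'category', 'emotion', 'stage', 'summary'.\n\n"
        + transcript
    )
    return json.loads(llm(prompt))
```

The point of the sketch is the contract, not the model: tuning the prompt to a specific scenario is what turns a generic summary into a useful one.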
We then moved onto speech to speech and how it could impact Agentic AI. Kane gave tomato.ai as a simple example of accent softening, while respeecher.com is used in the movie industry to make a 68-year-old Mark Hamill sound like a 20-year-old Luke Skywalker. Steve is quite bullish on the transition to speech to speech, but financial costs are currently prohibitive (they will come down). Energy costs also matter to Telus, so the compute costs are a further challenge for speech to speech.
Steve then explains some of the elements that define an agentic experience. He uses the example of a call center agent being asked a question, with relevant information presented to help that agent. A customer asks about problem X; the customer had a similar problem last month, so the agent can better understand the customer’s situation. It’s a current work item for Steve, and progress in helping human agents understand the caller’s context is exciting.
We then move onto co-pilots, a hot topic given the recent negative comments. Kane quips, GIGO (garbage in, garbage out). But the core issue is what problem is being solved; building a co-pilot is not a solution in itself. Steve then moves the conversation onto the importance of focus: building an assistant for specific problems rather than more generalist assistants. Kane brings up orchestration between multiple agents, where some of those ‘agents’ could be simple rule-based bots. Orchestration is a hot area for the next few years, and increasingly integrated within LLM platforms.
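Kane’s point about heterogeneous agents can be sketched as a router: the orchestrator dispatches each request to a registered specialist, which might be an LLM-backed agent or a plain rule-based bot, with a fallback when no specialist matches. All names below are illustrative assumptions.

```python
def faq_bot(query: str) -> str:
    """A plain rule-based 'agent' used as the fallback specialist."""
    return "See our FAQ page."

class Orchestrator:
    """Routes each request to a registered specialist agent."""
    def __init__(self):
        self.agents = {}

    def register(self, topic: str, agent) -> None:
        self.agents[topic] = agent

    def route(self, topic: str, query: str) -> str:
        # An agent is just a callable here, so LLM-backed and
        # rule-based specialists are interchangeable.
        agent = self.agents.get(topic, faq_bot)
        return agent(query)
```

Treating every specialist as a callable is the design choice that lets rule-based bots and LLM agents sit behind the same orchestrator.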
Steve brings up an interesting point on the differentiation between a public and a private/personal agent, particularly across the orchestration and guardrails required. A public agent must be solid, as it represents the brand; a private/personal agent is a handy tool you can ignore or reset.
Paul highlights that orchestration is still early in its development and will get better. Three years ago, transformers were a preoccupation for Paul, and they were behind the great leap forward the market has experienced. His current preoccupation is reasoning, as it’s a make-or-break issue for the industry.
Paul reviews the process where the assistant is asked a question, builds a set of sub-questions, then researches information and answers to those questions across a number of sources to build a rationale for its answer. The architecture is an LLM mashing up research from other LLMs, a “neural network of LLMs operating almost instantaneously,” to quote Paul.
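The pipeline Paul describes can be sketched in three stages: decompose the question, research each sub-question (potentially via other models or sources), and synthesize the findings into a rationale. The three callables are hypothetical stand-ins for whatever models or sources do each job.

```python
def answer_with_rationale(question: str, decompose, research, synthesize) -> str:
    """Decompose -> research each sub-question -> synthesize a rationale."""
    sub_questions = decompose(question)            # one model breaks the question down
    findings = {q: research(q) for q in sub_questions}  # others gather answers
    return synthesize(question, findings)          # a final pass builds the rationale
```

In Paul’s framing, each stage could itself be an LLM, which is what makes the whole thing a “neural network of LLMs.”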
Kane brings Paul’s vision to a near-term example. You move house, and a problem occurs with one of the services not being activated as you arrive in the new home. Six months later you call about another issue. The agent has a priori knowledge, as well as the total customer value, and is able to make an offer as part of resolving the problem and maintaining goodwill. It’s a good example of Agentic AI, as the agent has access to information that today’s agents do not, and builds a fuller understanding of the customer’s experience.
Steve uses the task of publishing an article based on a podcast, with all the tasks required across research, linking to prior work, speech to text, summarization, and graphics. The key is being able to chain together many tasks. Steve positions it as the value of removing minor hassles, and in writing this article I definitely identify with that.
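Steve’s publishing example can be sketched as a simple chain, where each step’s output feeds the next. The step functions below are hypothetical placeholders for the real speech-to-text, summarization, and drafting stages.

```python
# Placeholder stages; each would be a real tool or model in practice.
def transcribe(audio: str) -> str:
    return f"transcript of {audio}"

def summarize(text: str) -> str:
    return f"summary({text})"

def draft_post(summary: str) -> str:
    return f"post based on {summary}"

PIPELINE = [transcribe, summarize, draft_post]

def chain(steps, data):
    """Run each step on the previous step's output."""
    for step in steps:
        data = step(data)
    return data
```

The minor hassles live in the glue between stages, which is exactly what the chain removes.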
This is a fun discussion that heralds the emergence of reasoning in LLMs, with implications at scale as lots of LLMs and simpler ML (Machine Learning) agents remove many minor hassles to deliver vast performance improvements on what were once complex tasks.