I found this really helpful, discussing enterprise software and AI. The call was hosted by Mark Leonard, founder and President of CSU. You very rarely get to hear ML talk, so it speaks to how important CSU thinks this subject is. CSU is one of the largest software companies, specifically in vertical market software (VMS).
As an observation, I think the AI story is developing very differently from the internet story at the turn of the century. Back then, the incumbents had huge brands, retail networks, factories, etc., and the path to monetisation was not clear. This opened up the opportunity for counter-positioning and ultimately trouble for many of the incumbents. With AI, I can see a very different path developing at this stage. The incumbents are leaning into the technology hard. Many are technology companies and understand the significance. As well, the monetisation path is much clearer. Therefore, we have the incumbents in a much stronger position, imo. Of course, as the text below describes, we are early, and it depends on how far AI agents develop over time and how the incumbents respond.
I downloaded this from Seeking Alpha, couldn't find it on the CSU website.
I may have to listen to this again for it to sink in, lol… it's 1.5 hours.
Constellation Software Inc. (CSU:CA) Discusses AI's Impact on Software Businesses Call (Transcript)
Constellation Software Inc. (TSX:CSU:CA) AI’s Impact on Software Businesses Conference September 22, 2025 09:00 AM EDT
Company Participants
Mark Leonard - Founder, President & Director
Paul McFeeters - Board Director and Chair of Audit Committee, LightSpeed HQ
Conference Call Participants
Thanos Moschopoulos - BMO Capital Markets Equity Research
Stephanie Price - CIBC Capital Markets, Research Division
Paul Treiber - RBC Capital Markets, Research Division
David Kwan - TD Cowen, Research Division
Samad Samana - Jefferies LLC, Research Division
Presentation
Operator
Good day, and welcome to the Constellation Software Inc. Conference Call and Webcast. [Operator Instructions]. Please note this event is being recorded. I would now like to turn the conference over to Mark Leonard, President of Constellation Software.
Mark Leonard
Founder, President & Director
Good morning. Thank you for joining us this morning. First of all, I'd like to introduce our panel. On the line, we have [ Cam, Chris, Hakan ] and Paul. We're going to stick with first names so these brave volunteers don't get overwhelmed by outreach from our shareholders or competitors. I'm going to address certain questions to them individually, particularly if there's an anecdotal data point that I think they can address well. But I'll also throw open questions to any of them on the panel as well. And I may [ weigh in ] myself from time to time.
Before we get going with the Q&A, I'd like to tell a story. It's a true story, and it illustrates a useful but unsatisfying lesson, which I think we should all understand. In 2016, Geoff Hinton made a long-term forecast. For those of you who don't know, Geoff is known as the godfather of AI and is a Nobel Prize winner for his work in the field. And long-term forecasting is very difficult. I've talked about this before and am happy to send you some source information if you'd like to delve into that further.
Geoff's forecast in 2016 was that radiologists were going to be rapidly replaced by AI. Specifically, he said people should stop training radiologists now. And in the intervening 9 years since he made that forecast, the number of radiologists in the U.S. -- these are U.S. Board-certified radiologists -- has increased from 26,000 to 30,500, a 17% increase. That's outpaced population growth in that period, so the number of radiologists per capita is up from 7.9 to 8.5.
Now Geoff wasn't wrong about the applicability of AI to radiology. Where he was wrong was in predicting that the technology would replace people; instead, it has augmented them. The quality of care delivered by radiologists has improved and the number of practicing radiologists has increased. So I told you the story to make 2 points.
Firstly, you and I will never know a tiny fraction as much about AI as Geoff does. And secondly, despite his deep knowledge of AI, he was unable to predict how it would change the structure of the radiology profession. So I think we're at a similar point today with the programming profession. It's difficult to say whether programming is facing a renaissance or a recession.
Programmers could experience massive demand for their services if their efficiency improves tenfold. You can imagine not having to put up with software that does 80% of what you want; you'll be able to get software that does 100% of what you want, customized to your needs, and the lower cost of programming will drive that increased customization. What a wonderful outcome that would be.
Equally, you can imagine a tenfold increase in programmer productivity driving a massive oversupply of programmers, particularly if demand for their services remains static. Similarly, if the 10x efficiency doesn't happen -- if it's a 10% efficiency gain -- you can imagine very modest changes to the current status quo. So we don't know which way this is going to go. We're monitoring the situation closely. We're going to tell you stories from both Constellation and from third parties that we talk to that support many possible outcomes in the software development world.
With that, I'm going to ask the operator to open the lines for questions from the listeners. And I'll intersperse some questions that I've received by mail from a number of our institutional investors in the course of the day. So it's not just driven by the telephone lines. So Dave, if you could introduce the first question now.
Question-and-Answer Session
Operator
[Operator Instructions] Our first question comes from Thanos Moschopoulos with BMO Capital Markets.
Thanos Moschopoulos
BMO Capital Markets Equity Research
Mark, maybe to start off with, can you just provide some context in terms of the 4 individuals? Are they like an internal AI [indiscernible], just generally speaking, what their roles are? And then secondly, recognizing that you're a decentralized organization, but that AI is going to impact all of your businesses in some shape or form:
Have you thought about implementing some metrics and/or incentives to ensure that your businesses are actively leveraging AI and not falling behind the curve? Or is your current framework sufficient to ensure that businesses will do what they need to do in that regard and not be laggards?
Mark Leonard
Founder, President & Director
So by way of background, the 4 panelists are people who are working on AI full time and generally have been with us for a considerable time. They often have strongly technical backgrounds, but they're not data scientists programming in CUDA on NVIDIA machines. So they are application specialists for the most part, but obviously among the best and brightest.
In terms of metrics that are standardized across the operating groups: some of the operating groups are very detailed in terms of the metrics they're following, and I'm going to report on one of the operating groups that sent me its data and is not represented among the speakers.
Others have taken a deeper, more nuanced approach to reporting progress on AI that is less "3% of business units have replaced people with AI" and more individual case studies and individual projects, looking at the maturity of the use of AI tools and things of that nature. So everyone's following AI, but there is no Constellation-level metric that you can look at right now.
I don't doubt that we'll end up with one that is sort of a subset of the rolled-up metrics at some stage. But right now, you're going to hear about individual use cases and individual results from both business units and operating groups rather than a nice easy answer at the Constellation level.
You'll also hear -- sorry, you'll also hear some funny tales of what happens with radical decentralization, where you end up with duplication of effort. But it's not like we're duplicating many millions of dollars of effort. We're just duplicating modest amounts of effort, and sometimes there are lessons in that.
Sorry, I was going to say did you have a follow-up question, yes?
Thanos Moschopoulos
BMO Capital Markets Equity Research
Yes, I did. Sorry. My follow-up is just that a couple of the main use cases we're hearing about from other businesses are around programming efficiency and customer support efficiency. I presume that's some of what you're hearing; just wondering if there are any other use cases you'd call out that seem to be broadly applicable across a lot of your businesses?
Mark Leonard
Founder, President & Director
So I think your question was, you're assuming we're trying for programming efficiency? And is there anything else we're doing with AI? Is that the question? Or did I miss it?
Thanos Moschopoulos
BMO Capital Markets Equity Research
Yes, just whether programming efficiency and customer support efficiency are two of the key use cases that you're already seeing some of your businesses leverage, and whether there are any other use cases beyond those you would point to that are broadly applicable?
Mark Leonard
Founder, President & Director
Yes. I mean, I think we'll get there, and those are sort of very 20,000-foot kind of questions. Maybe we'll move on, and I'll recapture this question at some later stage after we've talked about a bunch of individual triumphs and failures.
Operator
Next question comes from [ Jerome Debreu with Desjardins].
Unknown Analyst
I have 2, but I'll just start with one here. It seems like a lot of the conversation about AI and programming is all about what AI can do well. But I'm wondering if there are things that AI isn't good at for coding. For instance, we've heard that debugging can be an issue with AI-generated code. So I'm wondering about that, and whether it could be a major hindrance for the implementation of AI in VMS.
Mark Leonard
Founder, President & Director
Yes. I think the challenge in answering a question like that, of course, is that the state of AI is changing rapidly, and so one has no real sense of what it will be capable of a year from now. But why don't I throw it open to the panel, if any of them have views on -- perhaps the other way around -- where we're getting benefit from programming with AI and where we're not. So why don't we do it alphabetically? Let's start with [ Cam ]. Anything you wanted to talk about there?
Unknown Executive
Yes, generally speaking, there are actually some strengths that any type of automated AI engine or AI coding assistant can bring to the [indiscernible]. Most of the efficiencies that we've seen thus far are really around automating things like aspects of unit testing, unit coding, test plan creation, stored procedure optimization, vulnerability trapping, code commenting. So really niche types of specialization.
And the reason why, I think, when we take a step back and really look at the code, is that most VMS products are written in, sort of, tens of millions of lines of code. So the context windows are simply not large enough to troubleshoot by taking all of it into account. So that would be the weakness part.
Mark Leonard
Founder, President & Director
But you know, the context windows are getting bigger every day. Yes, go ahead.
Unknown Executive
Yes. For example, if you have a clean sheet and you can just build your software from scratch, then you will be really productive. But as Mark just said, the limit of the context space is currently the limit for broad support of software developers in all kinds of activities.
So maintenance and support, bug solving, all those kinds of things are limited by the context space. But currently there's a rat race going on where the big vendors are setting up multi-agent architectures, where every agent can cope with some scope which is allocated to that agent. So there can be an agent which makes, for example, a refactoring plan, and the master agent can give assignments to client agents to do the job -- all kinds of jobs which will fit in the context space -- and it's going on right now. So within a couple of months, you can do a lot more jobs within a given context space. So that looks very promising from a productivity kind of [ view ].
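To make the multi-agent pattern described above a little more concrete, here is a minimal illustrative sketch, not Constellation's implementation: it assumes a generic OpenAI-style chat-completions client, and the model name, prompts and helper functions are hypothetical.

```python
# Minimal sketch of a planner/worker agent loop: a "master" agent splits a large
# refactoring job into scoped sub-tasks small enough to fit a model's context
# window, then hands each one to a worker agent with only the code it needs.
from openai import OpenAI

client = OpenAI()          # reads the API key from the environment
MODEL = "gpt-4o-mini"      # hypothetical model choice

def ask(system: str, user: str) -> str:
    """One chat-completion call with a system instruction and a user message."""
    resp = client.chat.completions.create(
        model=MODEL,
        messages=[{"role": "system", "content": system},
                  {"role": "user", "content": user}],
    )
    return resp.choices[0].message.content

def plan_refactor(code_summary: str) -> list[str]:
    """Master agent: turn a high-level summary into independent scoped sub-tasks."""
    plan = ask(
        "You are a planning agent. Split the refactoring goal into independent "
        "sub-tasks, one per line, each small enough to fit in one context window.",
        code_summary,
    )
    return [line.strip() for line in plan.splitlines() if line.strip()]

def run_worker(task: str, relevant_code: str) -> str:
    """Worker agent: perform one scoped sub-task against only the code it is given."""
    return ask(
        "You are a coding agent. Apply the requested change and return a patch.",
        f"Task: {task}\n\nCode:\n{relevant_code}",
    )

if __name__ == "__main__":
    subtasks = plan_refactor("Module X, ~50k lines: replace deprecated logging calls.")
    for task in subtasks:
        print(run_worker(task, relevant_code="..."))  # code retrieval omitted
```

The point is simply that the planner never sees the whole code base; each worker receives only the slice relevant to its sub-task, which is how the pattern works around the context-window limit discussed above.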
Mark Leonard
Founder, President & Director
Yes. Maybe following on from that -- we're taking sort of 2 approaches. I think one thing that is very, very important is that the tools are fast evolving, so any statement about the tools today is inevitably going to evolve and change. But we're actually seeing progress with a dedicated team that we're working with on almost every area of the software development life cycle.
Now not everything is always as impressive as you hope, but you are certainly seeing sometimes substantial gains -- things like multi-agent development -- sometimes looking at areas you can push across, [ no matter ] whether it's testing, documentation, or the coding itself. Even, interestingly, things that maybe take a bigger shift in people's mindset.
So again, it isn't so much about the tools themselves; it's sometimes about how we approach them. One thing we've started to see -- and it is only early, and it is only through people who've become much more comfortable with the tool sets -- is, even in architecture and architecture design, a willingness to maybe spend more time at the front of a project asking: how could you do this? How could you change the approach? Could this be done in 5 different ways that you'd never really been able to evaluate?
And that actually can lead to unintended and sometimes quite substantial benefit further down the cycle. So it's not in every business, it's not in every environment, but it's something we're pushing and trying to broaden, to get that thinking into all the business units while allowing them to develop their own thinking in that same time frame as well.
So I think today, we can point to benefits across the life cycle. But we're also very aware that the tools will continue to evolve and change, and I'd certainly say they're going to keep giving us more opportunities to keep driving that progress.
Unknown Executive
May I just give a little bit of background on Paul? Paul gets to see what we're doing across a number of business units; he works on projects with a number of our businesses on a for-profit basis -- sort of, think internal consulting for which you pay -- and so obviously, people really want his and his business unit's input. Otherwise, they wouldn't be writing checks. And he specializes in the AI sector. So do you want to answer the question, Paul?
Paul McFeeters
Yes, just to add something to what was already said. Indeed, the models tend to perform better when it comes to writing new code than troubleshooting or finding issues in existing code bases, and that's mostly because of how these systems have been trained. Keep in mind that these large language models have been trained on publicly available data, which for coding means open-source repositories.
So the systems have become quite good at generating new code, but there aren't a lot of data sets online when it comes to defects -- examples from which the LLM can easily understand "this is a defect" in an existing code base. Indeed, the context window is a limitation, but we're seeing practices emerge around context engineering, and best practices for developers who learn what to put into the context window in order to achieve good results.
I agree with what Chris is saying, that we're seeing velocity gains and improvements across the development life cycle -- obviously with different gains at different stages. But coming back to the original question, which was: should we consider rebuilding software rather than extending or maintaining what we already have, considering that these LLMs are better at writing code than maintaining code?
Well, regarding this question, it's important to consider that even if we rebuild with AI, we're still going to have to maintain it. So I think, in some cases, AI will enable us to modernize and rebuild our solutions more effectively than we were able to before. But at the same time, how we maintain, troubleshoot and bug-fix our solutions -- we're still going to have to do that whether the code has been written by AI or by humans 10 years ago.
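As a side note on the "context engineering" practice mentioned above, a minimal sketch of the idea -- selecting only the most relevant files and packing them into a fixed token budget instead of sending the whole code base -- might look like this; the scoring heuristic, budget and file layout are illustrative assumptions, not an actual Constellation practice.

```python
# Minimal sketch of context engineering for a bug report: rank source files by a
# naive keyword-overlap score and pack the best ones into a fixed token budget.
from pathlib import Path

TOKEN_BUDGET = 12_000     # rough prompt budget; model dependent
CHARS_PER_TOKEN = 4       # crude heuristic for estimating tokens from characters

def relevance(report: str, text: str) -> int:
    """Count how often words from the bug report appear in a file."""
    words = {w.lower() for w in report.split() if len(w) > 3}
    return sum(text.lower().count(w) for w in words)

def build_context(report: str, repo_root: str) -> str:
    files = sorted(
        Path(repo_root).rglob("*.py"),
        key=lambda p: relevance(report, p.read_text(errors="ignore")),
        reverse=True,
    )
    parts, used = [], 0
    for path in files:
        text = path.read_text(errors="ignore")
        cost = len(text) // CHARS_PER_TOKEN
        if used + cost > TOKEN_BUDGET:
            continue            # skip files that would blow the budget
        parts.append(f"# file: {path}\n{text}")
        used += cost
    return "\n\n".join(parts)

# The returned string would be prepended to the bug report in the prompt sent
# to the coding model, instead of the entire multi-million-line code base.
```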
Mark Leonard
Founder, President & Director
So just to jump in and sort of drive home that point: it's really easy to get excited about 10x improvements in programmer productivity as you generate that new application. But if that new application goes out into the field, generates scads of bugs reported by clients, and is fundamentally difficult to change and improve, you may give up on the roundabouts what you made on the swings. You may end up with a higher lifetime cost of the code base. And similarly, you have to take into account the efficiency of the code that the AI produces. And I think we're in very early days. We know there are some wonderful advances in programmer efficiency on the front end, and we just don't know the answers on the back end yet because we haven't lived with it for long enough. Yes, Jerome, did you have a second question?
Unknown Analyst
I did. More on the financial aspect, on the M&A side: not knowing what's going to happen in the future sometimes comes with a higher discount rate affecting terminal value in some instances. So I'm wondering if the organization is now using a different discount rate and if there's an impact in terms of meeting your hurdle rates?
Mark Leonard
Founder, President & Director
So the advantage of already using a high discount rate is that it minimizes the weight of the terminal value in your overall assessment of the attractiveness of the investment. So we've got that going for us inherently. We're already discounting the future a lot.
However, I haven't yet seen, except in a few instances, our acquisition teams coming along and saying, yes, AI is a real threat on this one; we've got to increase the probability of a wipeout or a modest win in our distributions as we look at the probabilities of the outcomes. That doesn't fundamentally change a discount rate, by the way. It just changes the probabilities of the downside scenarios, which is important.
That's really the essence of what you're asking. I'd say it's rare right now, but people are aware of it. And in those sectors where it matters, such as customer support software, they are factoring it into their thinking. But in many of the verticals, it has not yet influenced the prices that are being paid. That's my sense. I don't think any of the panelists are really in a position to talk to M&A specifically. So let's move on to the next question.
Operator
The next question comes from Stephanie Price with CIBC.
Stephanie Price
CIBC Capital Markets, Research Division
Curious what Constellation would do if a competitor came out with an AI-embedded feature that was driving higher customer attrition at one of your VMS businesses. Can you walk through kind of the thought process on what the competitive response would be from Constellation at a high level?
Mark Leonard
Founder, President & Director
I hope that we'd respond quickly and be a fast follower. But maybe there are individual use cases or instances that some of the panelists have encountered that speak to your question. Let's start with [ Cam ]. Cam, have you seen anything where we're chasing a competitor with an AI solution?
Unknown Executive
Not deeply, yet. A lot of the industries that the divisions I'm with tend to focus on are really slow followers, to use a market expression. They tend to be in, let's say, the utility space, telco space and so on and so forth. So I think, if anything, we're often finding ourselves driving the innovation piece forward, oftentimes based on just the sheer volume of data that we have -- running new models on it, we're all of a sudden able to look at optimizations of workflows or optimization of water consumption and so on and so forth that historically we may not have been able to.
But as we look around us at the competitive landscape, the market consultants that ultimately help the cities, municipalities, utilities and so forth acquire new vendors are usually pretty good. That gives us a bit of an indication, in a postmortem fashion, of how we've fared, and we actually always score quite well in the AI space.
Mark Leonard
Founder, President & Director
And Chris, any thoughts on how you respond to AI-enhanced competition?
Unknown Executive
I think at a fundamental level, it doesn't change any of the business principles we all apply. We're fundamentally passionate about the customer journey, being very, very close. And it's one of the huge benefits of the decentralized model that we are very, very focused on our markets and our sectors. So frankly, it's something that's been around forever. There could be a competitor bringing a new feature, new capability, and we'll address that.
I think specific to AI, it is sector dependent. There are some sectors -- obviously the well-funded, larger segments -- that are probably always going to be the ones where those opportunities arise. But what I can talk to is that we're seeing examples where our businesses -- early stages, nothing yet that we could give you absolute definitives on -- are already starting to look through a different lens, looking at how they can actually provide new opportunity either in the existing space or even starting to look slightly wider.
And I think it does go back to an earlier point in a way: even with older technologies and legacy systems, there is a drag, and I think AI across the life cycle can actually help reduce that drag. Again, we're still yet to see whether that's 5%, 20%, 100%, whatever. But that does potentially allow people to get back to being close to the customer and start to innovate.
So we have one project running at the moment with a customer that, I suppose, in a sense was getting frustrated. Something they'd been hoping for for, yes, 12 months hadn't been delivered, but we've completely changed the way it's being delivered, developed and tested, the way use cases are being built -- everything in terms of the life cycle -- and we've seen a significant shift in velocity.
So something that was already behind is actually now back on track and working through. It's more of a recovery, if you like, than a new area, but it is the kind of opportunity we keep seeing. It's patchy, it's still evolving, but again it comes back to our decentralized nature.
The other thing we do is try to share the good and the bad. So we're trying to make sure people understand the best practice and the things that don't work, and that's giving real confidence: as each business starts to see other people's, let's say, social proofs, it's something they can then take forward and move forward with. So I think there are lots of good examples moving forward.
Mark Leonard
Founder, President & Director
And [ Hakan, ] are you deep enough in the weeds of the individual business units that you are aware of one where we are having to respond because of a competitor's deployment of AI?
Unknown Executive
Well, for sure, competitors will come with initiatives based on AI, but we're not waiting for them. We really love the context richness of our VMS products, and it gives a lot of opportunity to enter fields of functionality that benefit our customers -- for example, something which can save a lot of hours in the field of education.
We will release functionality which will enable mentors at a high school to save a lot of hours, because we have all the data on students, which we can convert into a holistic view of the students involved. So the mentor can give better advice to a student, and it saves them a lot of hours. So we see a lot of opportunities. For sure, our competitors will also develop those kinds of functionalities, but we are very well positioned to profit from all the possibilities that AI technology gives us.
Mark Leonard
Founder, President & Director
So Paul, are you actively involved in any projects where we're responding to competitors' AI initiatives?
Paul McFeeters
Yes, there is one example that comes to my mind. And the response approach was as follows. First, distinguish between actual value being brought by an AI feature and what's called AI washing -- a term which refers to companies putting the AI label on their products for sales and marketing purposes.
So a good approach is to ask: is this competitor targeting some real value, actually making a meaningful difference in our market? Or are they just adding the AI label to a feature to make it more attractive?
And after that, the response that we're recommending is: make sure that you really understand the problem that you're aiming to solve with AI. We're seeing so many AI solutions which are actually solutions in search of a problem -- starting with the technology rather than starting with the customer journey or with the customer problem that we're trying to solve.
So again, in CSI, as all of you know, we're highly pragmatic in our approach, and taking the same level-headed approach when it comes to responding to an AI threat, I think, is the way to go. There is a risk of overreacting and over-investing in things that, again, are just solutions in search of problems.
Mark Leonard
Founder, President & Director
Yes. I know of an instance in one of our largest business units where we launched an AI-based business intelligence solution aimed at senior executives that would enable them to query the entire ERP and gather information from it. And it's been successful in that it works, but utilization is flat. Basically, it was a solution in search of a problem as opposed to something driven by real customer need. And so, as you say, you can use AI, but it doesn't mean you necessarily deliver value. Next question.
Operator
And the next question comes from Paul Treiber with RBC Capital Markets.
Paul Treiber
RBC Capital Markets, Research Division
One of the panelists mentioned data that was proprietary to Constellation or one of the business units. Can you speak to what you see as the characteristics of vertical market software that make it an attractive market structure, and specifically the proprietary things -- the insights that Constellation's companies have accrued over time -- that would make it difficult for new entrants to try to emulate the underlying attributes or functions of that software?
Mark Leonard
Founder, President & Director
Before we pass that on to the panelists to respond to, I'd like to draw a distinction when we talk about data: between the data that is the customers' data and lives in their system, and the data that is our data about the customer and how we interact with the customer, which lives in our system. The customers' data frequently is highly available to them. Even if it lives in a proprietary database, they frequently have some sort of data repository where the data gets dumped in near real time, which they can access if they want to generate queries and reports and interfaces and things of that nature.
So the customers' data, I don't think, is a barrier to anything or anyone, whether it's AI or other vendors. But our data about our interactions with our customers is very proprietary and has the potential to be a pretty exciting tool.
Why don't we do the old alphabetical approach? [ Cam ], any thoughts on data as a barrier or asset when you're using AI?
Unknown Executive
So point one, as Mark said, is that fundamentally the customers have access to [ their data set and ] whatnot. And I think the power that we're able to coalesce is really the aggregation of these various data sets -- layered geographically, layered based on volumes and seasonal changes, layering in sort of weather patterns and so on and so forth.
So taking that history and running AI models on it to try to find novel correlations that may not have been visible all of a sudden makes way for net new insights and offerings -- in a predictive fashion, you're able to detect certain events, or leaks, or collections events, or what have you, purely off the back catalog of our data sets.
So we have a reasonably good idea, based on the specific set of circumstances that a given city operates by, of how closely it correlates to another given customer. So I think having a model reside on top of that data set and be able to provide us with insights that could then be productized is one key area that we've focused on; we're making reasonable headway, and we're optimistic about what's ahead there.
But there are also some dead ends that we've hit in doing some of these investigations. A complexity of the VMS space, and of the utilities/government sector, is that every town, every village, every county may be a bit of a unique snowflake -- there's often no real rhyme or reason why they've adopted a particular billing methodology and such. So that certainly adds some complexity to correlating the data. But despite that, there have been some good ones identified so far.
Mark Leonard
Founder, President & Director
So I think the main point there was the fact that across customers, you can extract insight that you might not be able to extract within a particular client's data. And I think that, for sure, is powerful for most of our businesses because, of course, we have multiple customers in a particular vertical. Chris, any thoughts? Feel free to pass if it isn't your cup of tea.
Unknown Executive
No, I agree with everything [ Cam ] said. I also feel that aggregation of understanding -- which I think is more than just data; it's process, understanding how customers drive value in their markets -- actually layers on top of all the opportunities that you can drive forward. And I also feel there's a whole range of opportunities coming with things like predictive capability. Clearly still to be proven, but I do think there's capability in all of those areas.
Mark Leonard
Founder, President & Director
And [ Hakan], any thoughts?
Unknown Executive
Yes. Because, of course, the data of the customer is from the customer -- no discussion about that. But I think it's all about the dynamics of the data. A lot of functionality only works if it's almost real time -- the real-time status of the data. And of course, customers have invested a lot of time in their way of working. And because we can influence the dynamics of the data and use it with the possibilities given by the AI tooling, we can maximize the value for our customers: all the context richness we have, we can serve to the AI tooling in order to make great functionality, great value for our customers. It also makes the scope we connect in broader and deeper. And I think a lot of new value can be unlocked for our customers. So I'm looking very positively, in that way, at the future.
And we have a lot of examples which prove that AI tooling can give a lot of value. For example, in health care, people can just ask their questions in natural language. And because the AI understands the semantics of the question, it can give the answers they really want in terms of their information needs, just because natural language is understood by our systems. That's also what I mean by the dynamics of the data: if you have a question about some patients, you can get a very precise and correct answer because the system understands the semantics of the question. So a lot of possibilities from this perspective, I should say.
Mark Leonard
Founder, President & Director
Cam, I wouldn't mind coming back to you. [ Hakan's ] comment made me think of the demo that I circulated to the Board last quarter of a customer service interaction between an AI-powered support agent and a customer inside one of your utilities, and it was really remarkable.
It wasn't a chatbot; it was a telephone call, the voice recognition was working in real time, and the interaction was very sophisticated, lengthy and nuanced -- you didn't at all feel like you were being held hostage by a low-intelligence chatbot. So can you talk a little bit about what you were developing there? How close are we to being able to roll it out? Are we still limited by the quality of the AIs that are out there, or by the expense of deploying these kinds of solutions?
Unknown Executive
No. It's a very good question. Sorry.
Mark Leonard
Founder, President & Director
Go ahead, Cam.
Unknown Executive
Yes. So I think for us, the catalyst was that we have all of the workflows already established -- all of the various click-throughs that an agent would have to do. Instead of a regular agent having to do it by hand, we essentially looked to tie those processes, those actual workflows, into a natural language structure and an LLM that has that as its class of specialty.
And the complexity that we had to overcome is that oftentimes the same LLM isn't the correct LLM to tackle the various challenges that Mark and I have alluded to, based on the weaknesses of the actual LLMs. So we had to embed a router structure in that architecture, for example, which is able to decide which model to pick in order to tackle mathematical computation versus requests that are purely word-based, or multilingual, and so on and so forth.
So the capability of doing this in real time was really a critical component of making that successful. And the neat thing for us is the reliance on our own systems and not a third party, which would be far more difficult to retrofit. So -- yes, I'm not sure if that answers your question, Mark, or if there's anything else you want me to zero in on, but fundamentally, I think the key thing is proprietary applications tied into our actual workflows, natural language interaction with the customers, and then being able to automate flows in a much, much more intuitive way than the historical way of doing things.
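A rough illustration of the router structure Cam describes -- picking a different model depending on the kind of request -- is sketched below; the keyword rules and model names are hypothetical stand-ins rather than the actual system.

```python
# Minimal sketch of an LLM router: choose a model per request based on the type
# of task (numeric computation vs. multilingual vs. general conversation).
import re

MODEL_FOR_TASK = {
    "math": "math-tuned-model",            # hypothetical model good at computation
    "multilingual": "multilingual-model",  # hypothetical model good at translation
    "general": "general-chat-model",       # default conversational model
}

def classify(request: str) -> str:
    """Very naive keyword/regex classifier standing in for a real intent model."""
    if re.search(r"\d+\s*[-+*/%]\s*\d+|calculate|balance|total", request, re.I):
        return "math"
    if re.search(r"translate|en français|auf deutsch|en español", request, re.I):
        return "multilingual"
    return "general"

def route(request: str) -> str:
    """Return the name of the model that should handle this request."""
    return MODEL_FOR_TASK[classify(request)]

if __name__ == "__main__":
    for q in ["What is my outstanding balance if I pay 120 of the 310 due?",
              "Translate my last bill summary en français",
              "When is my next meter reading?"]:
        print(q, "->", route(q))
```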
Mark Leonard
Founder, President & Director
So I think, to Paul's point about access to proprietary data on the workflows of that particular client, you've answered that nicely. I guess what I am asking is: is it ready for prime time? Are we going to install this at every utility next month? Or is it keep a human in the loop 95% of the time, just in case?
Unknown Executive
Okay. Got you. So right now it is out in pilot with customers as we speak. Should they green-light it and be comfortable, then we're off to the races. The biggest complexity -- an understandable byproduct of the industries we tend to operate in, where getting anything erroneous could have catastrophic impacts -- is that the cities, the utilities, the health districts or what have you really want to ensure that their CSRs can ultimately rubber-stamp the thought process and how the AI came up with its reasoning prior to pushing it through.
So that user-in-the-loop component will change over time: as we get enough data and they build enough comfort with the statistical proof that the AI's output was not overridden -- this has now been run 5,000 times and so on and so forth -- we will start decoupling things from user-in-the-loop to fully automated end-to-end.
So I think that type of partnership is going to be key for us to push it forward. And quite frankly, it's just something we encourage our customers to do as opposed to let it be as is from right off the rip. So it also helps us gain greater and greater comfort, certainly.
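The user-in-the-loop arrangement Cam describes could be sketched, purely illustratively, as an approval gate that only fully automates an action once enough AI proposals have been accepted without override; the threshold and names below are assumptions, not the actual product.

```python
# Minimal sketch of a human-in-the-loop gate: an AI-proposed action is applied
# automatically only after a CSR has approved enough proposals without overriding.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Proposal:
    customer_id: str
    action: str        # e.g. "adjust_billing_cycle"
    reasoning: str     # the AI's explanation, shown to the CSR

def apply_action(p: Proposal) -> None:
    print(f"Applying {p.action} for customer {p.customer_id}")

class ApprovalGate:
    def __init__(self, auto_after: int = 5000):
        self.auto_after = auto_after          # approvals needed before automation
        self.approved_without_override = 0

    def handle(self, p: Proposal, csr_approves: Callable[[Proposal], bool]) -> bool:
        """Apply the proposal; return True if it went through automatically."""
        if self.approved_without_override >= self.auto_after:
            apply_action(p)                   # trust established: fully automated
            return True
        if csr_approves(p):                   # human reviews the reasoning first
            self.approved_without_override += 1
            apply_action(p)
        else:
            self.approved_without_override = 0  # an override resets confidence
        return False
```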
Mark Leonard
Founder, President & Director
Thanks, Cam. So Paul, you always get stuck at the end of the list. Do you recall the original question about data and whether it was a barrier or an asset and any thoughts?
Paul McFeeters
Yes. Just a quick thought. From my perspective, there is a lot of focus on data as being valuable and on the fact that it will act as a moat. But in my view, what might be more important than data will actually be processes and workflows. Our businesses have incredible knowledge of the end users' processes and workflows, often better than the end users themselves.
And I believe this will be the big opportunity for us: looking at those processes, trying to reimagine some of them by embedding AI in certain areas, and going from systems of record -- systems that capture data and allow you to edit and retrieve it -- to systems of action, in which, in some cases, the AI agent or AI solution will do certain steps and automate more of the human work.
So one of the things that we're recommending to our businesses is that, again, looking at data is an important first step. But as has been discussed, the customers already have access to their data. In some cases, the end customers are experimenting with other AI solutions on top of that data. But the processes, the business rules, the workflows -- that's something that we build into the systems, and I think we're going to be able to leverage it quite nicely.
Mark Leonard
Founder, President & Director
I agree with Paul entirely, I believe that vertical market software is the distillation of a conversation between the vendor and the customer that has gone on frequently for a couple of decades. And you distill those work practices down into algorithms and software and data and reports and it captures so much about the business. And being able to examine that in a new way because of AI, creates new opportunity to modify and change and suggest new approaches. So yes, I'm hopeful that, that unique and proprietary information will be of value.
And that was a call from likely spam. Why don't we do one more call from the lines, and then I'm going to ask some of the questions that I was sent beforehand. The neat thing about e-mailing in questions is you can be less pleasant in e-mails and you can ask tougher questions. And so let's take another one of these nice questions from the line, and then I'll pose some of the tough ones from the e-mails.
Operator
Our next question comes from David Kwan with TD Cowen.
David Kwan
TD Cowen, Research Division
I was wondering, Mark, if you're seeing many of your probably larger customers using Gen AI to internally build solutions that could displace, or maybe are displacing, some of your VMS solutions?
Mark Leonard
Founder, President & Director
So maybe a response that helps put that question into context, because I think it's a great question, but it's one that we have been confronting forever. So I see vertical market software sitting somewhere between horizontal applications that are cheap and cheerful and do 50% of what you want and highly customized systems that do exactly what you want. Now obviously, you have to hit a certain price point to live in the middle where most vertical market software companies live.
You frequently have professional services to provide some customization, but those professional services, whether it be custom-programming or otherwise, are expensive. And hence, only a certain class of client can afford them. So those who graduate from horizontal point solutions laced together with Excel to vertical market software at the low-end are going to have very little in the way of professional services and customization and customer interfaces and customer reports.
And at the high end, they are going to have highly customized systems where we are willing to do whatever they want as long as they have budget. And our people aren't cheap, so they're going to have to pay for that. Well, that has always been the case, and the very largest clients frequently see their software as strategic. It isn't just a tool to do business; it's a way that they differentiate themselves from their competitors.
And if you are dealing with a highly differentiated large client, they're going to want that to be proprietary to them. And they're going to try to capture as much of that information technology advantage within their own realm as they can and control it. And so we frequently do lose large clients to an SAP implementation or a proprietary implementation, and that has always been the case.
We capture the small companies as they graduate from horizontals. We take them and some of them grow enormously and become very successful large companies. And then they graduate to no longer using our systems but to using a much more proprietary system that they have a much stronger hand in driving.
Now AI has the potential to allow us to do way more work on making the client happy and customizing our solutions but it also allows the client to potentially do that and so there's a natural tension there. We obviously would love to capture that. Our clients, if they don't have a list of 5 years' worth of IT projects to get to, would obviously love to capture that as well.
And so I think to some extent, whenever we go see a large client's IT director, we're in a negotiation regarding what we'll do and what they'll do. And it's not going to be an easy answer. It's going to be somewhere in between. AI makes it potentially way more exciting for us to provide customization, but it also makes it much more likely that the client will do it themselves.
Why don't we do the reverse alphabetical and Paul, any thoughts on this? I know you and I have talked about it a little bit.
Paul McFeeters
Yes. Not a lot to add to what you've mentioned there, Mark. I think that natural tension will still remain. There will be big customers, maybe with a new CTO, being tempted to say something like, hey, let's give this a go, let's try to write it ourselves or build it ourselves with the help of AI. I think some will be successful. I think most will underestimate the effort and the complexity of replicating some of our offerings.
But to me, what this really means is that this should be a trigger for us to spend more time with the customers, to refocus on customer intimacy, to make sure that we understand whenever they are considering these types of approaches and to make sure that we remain as competitive as possible.
Mark Leonard
Founder, President & Director
[ Hakan, ] any views?
Unknown Executive
Yes, what we're seeing is that AI add-ons are proposed by clients -- okay, that's fine with us. But of course, we want to be proactive with those kinds of new functionalities. So we see both: working closely together with clients in order to connect AI tooling and, of course, providing it to our customers ourselves.
But mostly, that kind of tooling is just scratching the surface. The real functionality, which needs deep integration with our solutions, is what we are working on, of course in close interaction with the customers. And those are not the areas which will be claimed by customers, because there you have to have deep knowledge of the workflows and of all the functionality within our software itself. So both are happening.
Mark Leonard
Founder, President & Director
Yes. Chris, any views?
Unknown Executive
I think also, on the original question of customers building their own, effectively, VMS: don't underestimate the complexity, even with AI. It still requires high skill and high capability, and yes, there are easy headlines saying systems are able to be rebuilt. And yes, we're seeing efficiency and a huge change in what can be achieved.
But reiterating Paul's point and what we've been saying all along: if you maintain the customer intimacy, we see ourselves as continuing to move forward, being able to offer more capability through AI, and probably actually helping a lot of those customers. Again, many of our customers are in mature, less dynamic marketplaces, and are really just interested in their own business and making money. So sure, some large customers may try.
But I think overall, if we've got good relationships and good connectivity to what they're trying to achieve, it may shift exactly what we're doing and how we're doing it, but I feel we will still be needed to drive new opportunities and create the capability. It's only if they've got a particular dogma -- which, as Mark said, has happened many, many times over the last 30-odd years of software -- that they make a decision to shift to a different product set or build it themselves, and I don't think that dynamic will shift from anything we've seen in history or going forward.
Mark Leonard
Founder, President & Director
Tempting to hang you by your own [indiscernible]. You've got a couple of use cases where you've seen like 50x and 10x productivity improvements. If our large clients get those same sorts of improvements in their development, are they going to chew into our value-added?
Unknown Executive
I feel this was more of a trend in the past -- I've been in the space for, give or take, 20 years, and so I would have seen it well before now. The key reason customers have largely been increasingly dissuaded from wanting to do this is often regulatory change that all of a sudden hits their mandates; they are now all of a sudden in a time crunch and having to withstand that governmental pressure independently.
And so there is often a sizable benefit to them hitching their wagon to a best-of-breed solution like ours, where if there is, let's say, a water conservation concept that they're all of a sudden having to abide by governmentally, there's a really good chance that we would have already built such functionality in different zones or for different cities, different states, and so on and so forth.
So we're able to re-leverage much of the architecture, much of the structure of what had been built there, to greatly accelerate their piece of it. And so we don't anticipate AI changing much of that. Ultimately, when the customization that's required governmentally comes with a penalty or a time crunch, it's not something they have a great deal of appetite for doing themselves.
Mark Leonard
Founder, President & Director
Yes. And just to sort of attack the underlying thesis of David's question: we have a large business unit where we've had AI programming tools in place for a year. And we've certainly seen significant increases in the number of lines of code suggested by the AI over that period of time, and the percentage of lines adopted has stayed pretty stable. But if we look at actual programming efficiency, it's almost entirely flat. So it isn't a panacea; we aren't always going to get enormous increases in programmer productivity, and this large business unit is one example.
Now we haven't used programming agents in this particular instance. We've been using fairly simple tools, and we are moving to the next stage and trying out agents that are much more sophisticated, and we'll see. There are a couple of other instances where we're seeing overall efficiencies in the 10%, 12% region, and as you may have heard, Google has reported something similar. But there are also instances where we do see some very significant improvements. Whether those will be maintained through the full life cycle of maintenance and support is yet to be seen.
Here's a tough question that came in, which was that AI will eat the software budget. So if clients have a budget and they view software and AI as one particular pocket that they're going to spend from in the coming year, and they are being approached by a host of horizontal AI vendors doing voice recognition and OCR and a variety of other sexy things, are they going to siphon off a bunch of money that they would otherwise spend with us to those other AI vendors, who are incredibly good at promoting their solutions? So why don't we start with Cam and go alphabetically?
Unknown Executive
Yes. So is it going to eat the budget? I think if we are in front of the customer, proactively speaking with them, talking with them, partnering up with them the way that we tend to really engage with our customers, then part of our job is to not give them a tremendous number of reasons to want to sink their teeth into other sort of AI one-offs, if you will.
Ultimately, this is not a new phenomenon: third parties of any kind, at any conference we may share, are ultimately selling their assets, and there is the complexity of stitching it all together, making sure that it all works adequately well, and hidden fees that they may not have factored in and such. So I think there are all of these components to really think of.
As it relates to the AI piece itself, there is a perception, depending on a person's viewpoint and the use cases, that it could be relatively inexpensive -- the actual token cost of an LLM, as an example. But if you factor in, as we have, crunching an actual invoice that invokes the LLM numerous times to do analysis or anything computational, that can become expensive very, very quickly.
So one of the ways we've gone about combating that -- and it speaks to pricing thoughtfulness, if you will, and making sure we're conscientious about not eroding margins and whatnot -- is to use some of the pre-existing assets that we have sitting proprietarily, so hardware and servers and so on and so forth, and then run some of our own models, our own LLMs, which we end up morphing into ones trained with our data sets, with our specialty, with our additional intelligence. In most cases, those LLMs are independently able to address a given AI need, and therefore there is no incremental token cost because it's really all within our own 4 walls, if you will.
So I think that's one of the approaches that we've taken to minimize the cost. So I'll pass it on to the next speaker.
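A minimal sketch of the in-house model approach Cam outlines -- pointing the same client library at a locally hosted open-weight model so routine calls incur no per-token vendor cost -- might look like this; it assumes an OpenAI-compatible local server (for example vLLM or llama.cpp), and the endpoint, model name and prompt are placeholders.

```python
# Minimal sketch: route a routine task (summarization) to a locally hosted
# open-weight model served behind an OpenAI-compatible endpoint, so there is no
# incremental per-token cost paid to an external provider.
from openai import OpenAI

local = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")

def summarize(text: str) -> str:
    resp = local.chat.completions.create(
        model="local-open-weight-model",   # whatever model the local server hosts
        messages=[{"role": "system", "content": "Summarize the text in 3 bullets."},
                  {"role": "user", "content": text}],
    )
    return resp.choices[0].message.content

if __name__ == "__main__":
    print(summarize("Water usage rose 12% in July across the northern district..."))
```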
Mark Leonard
Founder, President & Director
Chris, any thoughts?
Unknown Executive
Yes. Very, very similar to what Cam is saying. But I just generally think that if you continue to talk to customers, you talk about relevant pain points. I think one of the responsibilities we have is to understand the potential -- not to get carried away with the hype, to be sort of healthily skeptical -- but to drive where we can see potential opportunity. And we're always going to work with customers in their specific markets and show that evidence.
Let me give you one example in the education sector, where a business has been talking to its customers and always had an ambition to try to be something very different, very focused on that particular segment, and it's really now allowed them to explore that. And they actually feel that it's going to drive revenue growth and real opportunity -- still to be proven; it will be over the next couple of years that that will come through.
But it has allowed them to change the conversation, and I think that's probably the most relevant part. If you keep talking to people about things that were relevant a few years ago, or just in the traditional environment -- yes, sure, in some circumstances that's relevant. But it's really also about being able to talk in a modified language, with the cautious skeptic in your mind as well, to actually ensure that you're still driving those new opportunities. And I think we do that, and I think the budget at worst will stay the same and could even increase.
Mark Leonard
Founder, President & Director
[Hakan]?
Unknown Executive
Yes, I think the thesis assumes that the world will be steady. Well, the opposite is, of course, true. There are a lot of dynamics around this question, I should say. We saw it with low-code. Low-code was the promise of automating all the processes, so there wouldn't be very sophisticated software developers needed anymore and anyone could contribute to low-code systems -- and the reality was completely different.
And I think because the scope will broaden and deepen, very interesting business cases will evolve. If you talk about the software budget, it's only cost compared with the very interesting business cases which will evolve. So I should say the software budget could become much bigger because of the broadened and deepened scope. I think that will be the case, because there are a lot more possibilities to optimize businesses. And of course, all the businesses using our software will also face stronger competition.
So they have to respond to their competitors, and of course AI and IT will be the means to compete with those competitors. It will become even more important for the businesses using our VMS to have a unique selling proposition. So I would say our budgets will certainly not be eaten by AI; they will be leveraged by AI. But I can't see into the future, of course. That's my personal opinion.
Mark Leonard
Founder, President & Director
Yes, yes. My personal belief is that none of us can see into the future. Paul, I'm going to pose the next tough question to you. Specifically, it's that AI introduces a new COGS element not historically present in software. Does this change the economics of the business model? I think, specifically, what we're seeing here is that the adoption of AI is being massively subsidized by the AI model companies. At some stage, they will want to recapture that investment.
And are they going to be in a position where we face very large switching costs, and they are able to capture a large chunk of our value-added through what they charge us for their systems, whether it be on a per-token or per-whatever basis? So Paul, any speculation?
Paul McFeeters
Yes, maybe it's worth briefly outlining what we know right now before we look into what might happen in the future. So as of right now, these model providers are charging anything between $1 and $3 for 1 million tokens. You can think of tokens roughly as words. Now there are many studies that show that AI platform users consume, on average, per month between 50,000 tokens -- these are the light users -- and 1 million tokens -- these are the heavy users.
So based on the data that we have right now, we can infer that if CSI businesses start to embed AI features into their products, there's going to be an estimated COGS per user of anything between, like, $1 and $8. So I think we can easily cover this and maintain our margins by having premium add-ons, where our end customers, if they want to leverage these features, can buy those premium add-ons.
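As a back-of-the-envelope check on that estimate, using only the ranges quoted above rather than any Constellation data:

```python
# Rough COGS-per-user arithmetic from the quoted ranges: $1-$3 per 1M tokens,
# and 50,000 (light) to 1,000,000 (heavy) tokens per user per month.
def monthly_cogs_per_user(tokens_per_month: int, usd_per_million_tokens: float) -> float:
    return tokens_per_month / 1_000_000 * usd_per_million_tokens

for label, tokens in [("light user", 50_000), ("heavy user", 1_000_000)]:
    for price in (1.0, 3.0):
        cost = monthly_cogs_per_user(tokens, price)
        print(f"{label} at ${price:.0f}/M tokens: ${cost:.2f}/month")
# Prints a range of roughly $0.05 to $3.00 per user per month; the $8 upper
# bound quoted above presumably allows headroom for multiple model calls per
# task, retries and larger prompts.
```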
Now indeed, it's hard to predict how this will pan out in the future. But the good news is that currently these large language models don't have a very large moat around them. Some of you might have read that when GPT-5 came out, there were some doubts regarding its performance; they had some issues with their deployment. And overnight, there was a huge switch from OpenAI models to different providers, with virtually one line of code being changed in the consumers of these LLMs.
So this tells me that there will be high competition between these model providers. And this will keep the pressure on the cost. And I think in the long run, the price per token will go down. Now again, we can speculate about how these companies will start to build up their moat so that it's harder to switch from one LLM to another.
But so far, again, we don't have clear data or indicators toward that. I believe that, as of right now, if CSI companies adopt AI features, we will be able to maintain our margins. And if one of the big providers starts to hike up its prices, we always have the option to use things like model routing -- using smaller models for different tasks -- or even on-premise LLM inference by leveraging open-weight LLMs.
Mark Leonard
Founder, President & Director
And Cam, you have thought about this probably more than anyone else inside of Constellation. You've architected much of your efforts around AI around the threat of third-party LLM providers preying upon their customers. Do you want to talk a little bit about what you've done?
Unknown Executive
Sure. Yes. So we've essentially created our own centralized platform that removes the factional dynamics going on right now, where, to a certain extent, you largely have to be within a given cloud provider to have native access to a given LLM. There are these turf wars being created across the various cloud providers.
Our strategy has been to play a very neutral, Switzerland-type role. By centralizing things through strategic relationships, either directly with the model providers or with the platform providers, we've managed to negotiate, I think, some really aggressive deals and remove that factional element; they're all willing to play nice with us in the sandbox.
So that puts us in a very unique position where, technically, we have access to 15,000 unique models, because we're essentially coalescing models that otherwise couldn't coexist within other platforms. The other piece, which I touched on very briefly and Paul alluded to as well, is using on-prem-based assets where and when possible.
So to the extent that the AI model needs to be hyper-specific, or a specifically trained one that resides with a pre-existing best-of-breed provider, then sure, it may make sense to tap into that one. But for a basic translation service, a summarization service and a myriad of other functionality, the on-prem one is plenty sufficient and capable of doing the job on its own.
And I think there ought to be some thoughtfulness about the whole build-versus-partner question; there's a website, I think it's called something like "there's an AI for anything." So really, at the end of the day, going through these repositories, do we build or do we partner? And in many cases, it's been fairly painless to augment the functionality to natively create, let's say, slides or Excel files or presentations.
So the functionality is becoming richer and richer. And in terms of adoption internally from our business units, our narrative is that it's the closest thing to a viral application we've ever had. Adoption has grown month-over-month by something like 450%, give or take, continuously. The lion's share of our staff are using it for a myriad of things, and we're excited by that. So I think it's about being smart about the way we architect things and not reinventing the wheel 10x over if we don't have to.
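As a rough illustration of the routing idea described above, routine work kept on an on-prem open-weight model and specialized work escalated to a best-of-breed provider, here is a minimal sketch; the task categories and tier names are hypothetical, not the actual platform's.

```python
# Hypothetical sketch of task-based routing in a centralized AI gateway:
# routine tasks stay on inexpensive on-prem open-weight models, and only
# specialized tasks are escalated to an external best-of-breed provider.
from enum import Enum

class Tier(Enum):
    ON_PREM = "on_prem_open_weight"      # translation, summarization, etc.
    EXTERNAL = "hosted_specialist"       # domain-specific, specially trained models

ROUTINE_TASKS = {"translate", "summarize", "classify", "draft_email"}

def route(task: str) -> Tier:
    """Pick the cheapest tier that is good enough for the task."""
    return Tier.ON_PREM if task in ROUTINE_TASKS else Tier.EXTERNAL

assert route("summarize") is Tier.ON_PREM
assert route("domain_specific_contract_review") is Tier.EXTERNAL
```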
Mark Leonard
Founder, President & Director
Dave, why don't we take another call from the lines?
Operator
And the next question comes from Samad Samana with Jefferies.
Samad Samana
Jefferies LLC, Research Division
I'll echo the sentiment of others, I appreciate you guys doing this. Maybe just -- I know we've talked about the implications for Constellation's portfolio, but how should we think about how this changes -- the nature of M&A you may pursue, whether that's changing the size of the company you may look at, whether that's targeting different verticals or maybe where they sit in the software stack?
How does AI change your M&A strategy? And then how does it actually change your appetite for M&A volume, meaning is it better to be in more of a holding pattern right now, or is now the time to really lean in? We've seen some large M&A deals announced by private equity, so I'm just curious what your philosophy there is.
Mark Leonard
Founder, President & Director
Yes. I think the underlying assumption is that we're not opportunity constrained, but we are opportunity constrained. And so narrowing the aperture is a bad idea; we'd end up sitting on a whole pile of cash. And when you're striving to generate very high rates of return on your investments, sitting on cash is not a good plan.
So we already are working hard to look at things outside of strictly vertical market software. We've done some horizontal stuff, some hybrid hardware-software, some hybrid data-software. So I would say that AI is not reducing what we're looking at. It may influence the pricing on certain things where we see it having a current impact. But no, it isn't changing much in the M&A world.
Yes. I'm going to pose a question to myself, which is: tell me more about the operating group where you have stats on their business units. And then I'll ask the other folks on the line, none of whom is from this particular operating group, whether they have similar stats or an impression of how they stack up versus this particular operating group.
So first off, 27% of the business units in this operating group are developing an AI product for their customers. Why don't you start, Cam: in your operating group, what percentage of the BUs do you think are developing an AI product for their customers?
Unknown Executive
For us, we mandated that all of them experiment with creating a solution. So technically, the answer is really all of them, based on the footprint I'm able to capture using our centralized tool set, where whatever LLM they wanted to use from whichever vendor would have been captured. They're all at various degrees of progress toward having a solution on offer.
Now, I think there's some low-hanging fruit that most of our business units ought to have as a table-stakes starting point. So I don't think they're straining for ideas they don't already have. Some already have products out there and in full sales mode, whereas a lot of them are at varying degrees of the build, if you will.
Now, the nuance of the question, and I don't want to misrepresent this piece, is that there's a consideration of whether the AI is internally focused versus externally focused. The customers will often see benefits by way of better quality and support, for example, but not necessarily in a new front-end screen or widget right off the bat. So these projects vary between internal focus and external focus, to the tune of roughly 55% customer-facing and 45% process-related.
Mark Leonard
Founder, President & Director
So what you were saying there is that, of the products being developed for the customer, roughly 50% are aimed at the customer's customer and 50% are aimed at making our customer more efficient?
Unknown Executive
Perfect. Yes, you bet.
Mark Leonard
Founder, President & Director
Okay. Okay. And the 100% mandate, I mean, one can order people to do all kinds of things, and you can get lip service as opposed to strict and enthusiastic compliance. So sometimes it works and sometimes it doesn't. I suspect the 27% reported by this operating group is people who are seriously implementing development of an AI product for their customers, as opposed to paying lip service. Chris, any comments?
Unknown Executive
I'd probably say we're trying to look at it on 3 levels. If you ask whether people are using AI in some way, either internally or even just in engaging with a customer, I would say adoption is very high. But what we've really been trying to focus on is the sort of 10x principle.
So where are things being done that genuinely shift thinking, approach or even the product for the customer? I would say that drops back to probably the 20% range. What we're also trying to do, differently from the 100% mandate route, and again the joy of decentralization, is take a much more educational, proof-point approach, trying to show best practice. And I think we can now see that really coming through, with a lot more people trying to embrace that mentality. So yes, today probably in the 20% region for something that is meaningful, but I think we'll see that accelerate through the next 18 months.
Mark Leonard
Founder, President & Director
Yes. And I think that meaningfulness criterion is clearly CEO-level thinking; managers love that stuff. But I also love AI tools, and I like some of ours that are aimed at relatively small, demonstration-type activities. We have, for instance, inside of our operating groups, 3 separate initiatives all designed to ingest the contracts of businesses we're looking to acquire and analyze those contracts.
And all of them have been relatively modest efforts, quite successful, and easy to benchmark both against not using AI and against commercially available AI. And I think we're all feeling quite pleased with ourselves about those developments, and that's wonderful. From a morale and proof-point perspective, it's terrific, but maybe we're optimizing 100 people here out of our 75,000 with those 3 solutions.
But good on you. I'm glad people are doing the experiments and are learning from them and applying them. So not everything needs to be order of magnitude meaningful for it to be useful from both the morale and customer enthusiasm point of view.
Let's move on to the next category in the reporting that I got here. It says, "this BU is currently using AI for customer service," and the result was 29%. And I guess, [ Hakan ], any sense of whether, inside the operating group you're associated with, AI is being used for customer service?
Unknown Executive
I think it's more than 50%, and it's a very logical area to have AI support, because you have all the data, all the questions that were ever asked, and then, of course, it's very easy to let AI support you in giving a correct and good answer to end customers. I think there's still room to gain, but more than 50% is supported by AI in my group.
Mark Leonard
Founder, President & Director
Yes. I had expected this to be higher, and I'd also expected it to have better results. In a couple of instances where I've gotten data about our AI customer service agents, I'm seeing call diversion of 10% to 20%, not 50% to 60%, which left me disappointed, because we already had comprehensive knowledge bases for our support personnel from which the AI could be trained. So I'd sort of hoped it would be a lot more effective. Obviously, with time it will get better, at least I'm hoping it will. But yes, I had thought this would be the most prevalent area for us to apply AI.
Let me move on to the next category: "this BU is currently using AI for sales and marketing." 50% of the BUs reported that they were. I've also heard some very nice individual anecdotes about AI being used for sales and marketing that has led to significant new business we hadn't seen previously. Anyone have any comments on that, or any sense of what the penetration of AI is in sales and marketing within their operating group? Paul, go ahead.
Paul McFeeters
Okay. So we've definitely seen higher adoption of AI tools in sales and marketing compared to any other bucket. That's also in line with what organizations outside CSI are reporting. But at the same time, I think what's important to remember is that sales and marketing is really about differentiation. I think we will see gains initially, while we're maybe among the first using AI in our marketing. But once everyone uses AI in sales and marketing, that in itself will not be a differentiator anymore.
If everyone is generating LinkedIn posts with AI and they all start to sound generic and kind of the same, I don't think that in itself will drive better sales and marketing results. So one of the things we're discussing with the businesses is: how can you use AI to enhance your edge and enhance the relationships you already have? Because otherwise, in a couple of months, when everyone is using AI, it's going to be quite hard to stand out just because you are an AI user.
Mark Leonard
Founder, President & Director
Yes. Don't disagree. Any other comments? Or should we move on?
Unknown Executive
From a sales and marketing perspective, for us, we often have a better success rate selling assets when we're able to present a more comprehensive solution set. And as we continue to acquire more and more assets, that has evolved into thousands of products.
So a novel way we've been able to make some reasonably good progress on the sales and marketing front is by actually putting what the customer is asking for through an LLM-based asset knowledge base that we created. It essentially has information about every single one of our collective assets, [indiscernible] other partners and so on, in order to be able to coalesce, if you will, the right solution set the customer could want.
And that has led to successes where we didn't feel the stand-alone product had a really good chance. I think that's a neat way of using it in the sales and marketing function. The other piece, if we factor RFPs into the sales function, is another neat area: given the sectors we operate in, I think there are some really good opportunities for us to whittle that effort down, based on the hundreds of RFPs we've historically done, answering thousands of questions in some cases.
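A toy sketch of that knowledge-base lookup: match what the customer asks for against product blurbs and surface the closest solution set. A real system would use LLM embeddings and the actual asset catalogue; the bag-of-words similarity and product names below are invented stand-ins to keep the example self-contained.

```python
# Toy sketch of an "asset knowledge base" lookup: match a customer's stated
# need against product descriptions and surface the closest solution set.
# A production version would use LLM embeddings; plain bag-of-words cosine
# similarity is used here only to keep the sketch self-contained.
from collections import Counter
from math import sqrt

CATALOGUE = {                      # invented product names and blurbs
    "PermitTrack": "municipal permitting workflow and inspection scheduling",
    "FleetSense": "fleet maintenance scheduling and parts inventory",
    "ClinicBoard": "clinic appointment scheduling and patient messaging",
}

def _vec(text: str) -> Counter:
    return Counter(text.lower().split())

def _cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def suggest(need: str, top_n: int = 2) -> list[str]:
    scores = {name: _cosine(_vec(need), _vec(blurb)) for name, blurb in CATALOGUE.items()}
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

print(suggest("we need scheduling for vehicle maintenance and inventory of parts"))
# -> ['FleetSense', ...]
```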
Mark Leonard
Founder, President & Director
There were 2 more data points in the survey. The next one was "this BU is currently using AI tools in R&D," and this operating group reported that 61% of their business units were. Cam, any sense of the percentage in your operating group for R&D?
Unknown Executive
Yes, it would be really close to that. I think there are a number of tool sets that have been very broadly embraced. The interesting piece is that the output, the percentage efficacy, really ebbs and flows. Some are seeing abnormally high gains because of the way they're using it or because of their tech stack, and Paul has touched on this, where the AI is natively better trained on specific circumstances, a programming language in this case, versus some that are more niche-based assets, if you will. So the adoption, yes, is about the same, but the efficacy of the output is the biggest delta.
Mark Leonard
Founder, President & Director
And Chris?
Unknown Executive
Yes, I'd say very similar. I think there are 2 metrics here we would be looking at. One is very similar: certainly more than half in terms of adopting and using the tools and starting to see the progress. But then there's also enablement, the mind-shift to use it to get the maximum delivery. I think that's probably at a lower percentage at this point, but we're seeing that drive up as well. So yes, very similar to [ Cam ].
Mark Leonard
Founder, President & Director
[ Hankan ]?
Unknown Executive
More than 70%, but there's still a long gap in the level of AI use, because we see [indiscernible] like a toolbox full of tools, and then you need to choose the right tool for every job and have the knowledge to know which tool to use in which situation. There's still a learning curve for a lot of developers within our operating group. Everybody is touching it and using it, but the depth will hopefully rise in the coming months.
Mark Leonard
Founder, President & Director
I enjoy how [ Hankan ] assesses the AI maturity of the business units, because I think that's an important way of distinguishing between those who use it casually and those who have really understood how to use the tool.
The last question, and I thought this was an interesting one because it relates to a lot of the underlying questions I got by e-mail, was: has this BU replaced any roles with AI tools? And when they say roles, I think they mean people. Roughly 3% of our BUs reported that they had replaced people with AI tools, which is lower than I would have thought, and actually, I think, probably a good thing, but I'm placing value assumptions on the question.
Why don't we start with Paul? Have you encountered any instances where business units have been able to replace people with AI tools?
Paul McFeeters
So no, at least not yet. Sorry, go ahead.
Unknown Executive
No, because, of course, we have roadmaps for our tooling, and all the freed-up capacity will be used to develop new value for our customers; there's no shortage of things to do. So no, we didn't replace software developers. We can give them other things to develop. And of course, we also see that young, inexperienced software developers have a much steeper learning curve with AI. So we want to grow, and we are growing. AI helps us make our software developers productive and develop functionality that delivers the value our customers will accept.
Mark Leonard
Founder, President & Director
And I think even in support, where we've seen 10% to 20% call diversion, and where, in our relatively small businesses, you could translate that into the removal of a person, in most instances we just keep the people, try to respond faster to the calls that aren't diverted, and try to do a better job with them. So once again, much like in R&D, you don't get to reduce the people cost; you redeploy it, hopefully, in the case of support, with Net Promoter Score improvements.
I'm going to stop the questions there; we've been on the line for over 1.5 hours. I'd like to thank you all for attending. I know we have several hundred participants still on the line, so there's obviously a real appetite for this kind of information.
Let me encourage you not to take what you hear and read without healthy skepticism. In the last few weeks alone, I've heard that a major soft drink company increased its sales by 7% to 8% because of AI; I had a look at its stock, and it went down. I've heard from the founder of a major software investor that AI just increases TAM, and that's wonderful, but you've got to consider the source.
He's not about to say that software is threatened by AI.
I've heard from a bank CEO that AI is revolutionizing their business and is going to lead them to a brave new world. It's really important to dig in, to be an anthropologist: to observe and test the claims that you hear and try to understand the current state of the art.
There are 2 ways to do it. One is obviously through sifting the claims that you hear; if you have trusted partners from whom you're getting evidence, that makes life a whole lot easier. The other thing you can do is be a scientist instead of an anthropologist: instead of observing, actually run experiments, try AI, ideally against the alternative, and see if you get significant improvements in whatever it is you're endeavoring to do.
So predicting the future is really, really hard, particularly at times like these, but monitoring what's happening in real time is a whole lot easier. You just have to approach it, as Chris said, with a healthy skepticism.
So thank you for joining the call; really appreciate it. It's important to us to share with you where we're at in the pursuit of AI, and I hope you learned something from today's session. Thank you, Dave, for teeing up the call. You can end the call now. Thank you.
Operator
The conference has now concluded. Thank you for attending today's presentation. You may now disconnect.