The private markets industry evaluates AI the way it evaluates software: features, integrations, seat licenses, annual contracts. That frame is wrong, and it's the reason most AI adoption in this space stays stuck at the pilot stage.

The firms that are actually gaining ground don't sell software; they deliver outcomes. The client gets the analysis, the memo, the risk assessment. They don't install anything, retrain anyone, or rebuild their data infrastructure. AI handles the production work inside a firm that was built around it, and the client receives the result. I've started calling this "service-as-a-software"* because neither "SaaS" nor "consulting" describes what it actually is.

a16z's research documented this shift across enterprise AI: when AI can handle the work, the natural pricing metric becomes the successful outcome, not the number of humans who access the tool. In private markets, the shift goes further. The outcome isn't a resolved support ticket or a generated email. It's a due diligence report that a deal team can stand behind at the investment committee meeting, a credit risk memo that a lender can use to make a deployment decision, an insurance risk assessment that connects financial, legal, and operational findings into a single picture.

Why buying software doesn't work here

I wrote about this in The 5% deployment problem: roughly 60% of firms in private markets are experimenting with AI, but only 5% have deployed it in production. The barriers are infrastructure, trust, and organizational design, not technology.

When a PE fund, credit shop, or M&A advisory evaluates an AI tool, they run into those barriers immediately. The data is scattered across deal rooms, CRMs, and email threads that don't talk to each other. The compliance team needs to approve data flows before anything moves. The deal partners need to trust that confidential information is handled properly. And even if all of that gets resolved, someone still needs to redesign the workflow to accommodate the tool. Most firms don't do that last part, and the pilot quietly dies.

The SaaS model assumes the buyer has the infrastructure to integrate the product and the organizational capacity to adopt it. In private markets, that assumption fails more often than not. HBR's 2026 analysis of the "last mile" problem identified seven frictions that prevent AI pilots from scaling, and concluded that "if AI pilots aren't turning into real business value, the problem likely isn't the tech itself; it's the operating model." The operating model is exactly what most firms cannot change quickly, because their processes, contracts, and incentive structures were designed around manual analytical work and haven't been rebuilt.

Service-as-a-software sidesteps this entirely: the client shares documents and receives analysis. The AI runs inside a firm that was designed around it from the start, not bolted onto an existing workflow that was built for something else.

What works instead

The most effective way to build credibility in private markets is to demonstrate capability on real work, on actual data that a deal professional can evaluate against their own judgment.

From what I've seen, the conversation changes the moment someone sees output that matches or exceeds what they produce internally. The question shifts from "does this technology work?" to "what else can you do?" and that shift happens through the work itself, not through marketing. A credit analyst receives a pre-built risk memo they didn't ask for and finds it covers the same ground their team would have covered in two weeks. An M&A advisor sees contradictions between the VDD narrative and the data room surfaced in hours rather than discovered during the committee discussion. An insurance underwriter gets a risk assessment connecting financial, legal, and operational findings in ways that their current sequential process structurally misses.

In each case, the work is the proof. Nobody needs to be convinced that AI works when they're holding the output and can compare it against what they would have produced themselves. The wow effect, if you want to call it that, is about how useful the result is, not how impressive the technology looks.

Bain's 2025 M&A report found that adoption of AI tools in M&A more than doubled that year, with 45% of executives now relying on the technology. The firms that moved fastest weren't the ones experimenting with general-purpose tools. They were the ones using AI for transaction execution: deal analysis, integration planning, and learning from prior transactions. In plain words: they used AI to do the work, not to explore whether AI could theoretically do the work.

What the client actually sees

From the client's side, the experience is closer to working with a consulting firm than buying software. You share deal documents and receive structured analysis with findings traced to their sources. The quality is consistent across engagements because AI standardizes the production layer, so the output doesn't vary based on which analyst happens to be available that week.

The difference from traditional consulting is speed and the ability to cross-reference at scale. A human team working across financial, commercial, and legal workstreams takes weeks to connect findings from different advisors and jurisdictions. An AI-native firm doing the same work can surface contradictions in hours because the system reads everything simultaneously. That matters in compressed deal timelines where the window between exclusivity and the committee meeting keeps getting shorter.

The difference from SaaS is that the client doesn't change anything about how they operate. No integration project. No internal training program. No data migration. They get the deliverable in the same way they've always received deliverables from external advisors, except the production behind it is different. In plain words: service-as-a-software means the client buys the outcome, not the tool. They don't need to understand how the production works any more than they need to understand how their legal advisor's practice management software works.

Why it takes time, and why that compounds

People evaluate new things using old mental models. The first question in almost every conversation is some version of "so you're a software company?" or "so you're a consulting firm?" Neither is right, and explaining why takes patience. The mental model of buying AI is either a SaaS subscription or a consulting engagement, and service-as-a-software doesn't fit neatly into either category. That's a category problem rather than a communication problem, and it resolves itself through experience: once a client has worked with an AI-native firm and received the deliverable, the model becomes obvious. But it's never obvious at the start.

This is familiar territory for anyone who has watched technology adoption in financial services over the past few decades. The firms that succeeded in bringing new technology into the industry didn't do it by selling the technology directly. They did it by delivering results using the technology and letting the results build trust over time. The technology was the production layer, not the product. The pattern repeated with every major shift: mainframes, terminals, structured data, and now AI. The firms that tried to sell the tool struggled, while the ones that sold what the tool could produce found their market.

The compounding works the same way PE professionals experience it in their own business. Each successful transaction builds relationship capital that makes the next one easier, each reference client shortens the next sales cycle, and each published piece of analysis adds to a body of evidence that a prospective client can evaluate before the first call. Trust accumulates rather than switching on, and once it reaches a threshold, the dynamic shifts from explaining to executing.

McKinsey's research on AI in services found that the real productivity unlock comes from "reimagining workflows so people, agents, and robots each do what they do best." That reimagining is what AI-native firms have already done internally. The question for everyone else is whether to build that capability in-house (where the 5% deployment rate suggests most will struggle) or to work with firms that were built around it from the start.

What this looks like for us

At Axion Lab, we run investment analysis, commercial due diligence, and sustainability assessments for European private markets. AI handles over half of the production work, and the client receives the deliverable: the DD report, the investment memo, the risk assessment. We built the firm around this model because we saw that selling tools to an industry with fragmented infrastructure and deep trust requirements was not going to work.

That doesn't mean the trust problem disappears. Every new client conversation still starts with the same question: can we trust you with our deal data? And it still takes time to answer, through real work, through transparency about what the system sees and doesn't see, through letting compliance teams audit the architecture. But each project that delivers good work makes the next conversation shorter, and each client who validates the model makes the next one easier to explain.

The transaction lifecycle is where this model applies most clearly. Deal screening, due diligence, credit analysis, risk assessment, post-deal monitoring: these are all document-heavy, judgment-intensive workflows where AI can handle the production and experienced professionals provide the interpretation and the relationship. That's not limited to PE buyouts. Private credit, M&A advisory, and insurance all touch the same transaction lifecycle from different angles, and the same service-as-a-software model applies to each.

Where this goes

The firms that adopt outcome-based AI services first will build an advantage that compounds. Not because of the technology, which will be available to everyone, but because of the trust, the references, and the proven track record of work that holds up under scrutiny. That kind of capital takes years to build and cannot be shortcut by a product launch.

For deal professionals evaluating how to bring AI into their work, the question is shifting. It's less about which tool to buy and more about which firms to work with. The ones that deliver consistently will earn the relationship capital that makes them the default choice, transaction after transaction. That's how technology has always entered financial services: not through purchases but through demonstrated results that compound into trust.


*I wrote this article before Foundation Capital highlighted this term. I'm happy that more and more VCs see this trend.

Sergei Maslennikov is co-founder of Axion Lab, an AI-native services firm for European private markets. Based in Luxembourg.