Every PE fund, private credit shop, and M&A advisory I've spoken with in the past year says the same thing: we're investing in AI. The language varies, from "building our data infrastructure" to "exploring generative AI use cases," but the direction is always forward.

The numbers tell a different story. McKinsey's 2025 private markets research found roughly 60% of firms experimenting with generative AI. About 5% have deployed it in production. Only 1% of executives describe their rollouts as mature. MIT's NANDA Initiative, studying 300 enterprise AI deployments, was more direct: 95% of generative AI pilots yield no measurable business return. BCG's finance survey found that only 45% of executives could even quantify their AI ROI, and among those who could, the median return was 10%.

These are not firms that lack ambition or budget. These are firms that tried, spent real money, ran pilots, and still ended up where they started. The question is why.

The data layer that doesn't exist

The first problem is the simplest to describe and the hardest to fix. Most private markets firms don't have data infrastructure. They have data, scattered across deal rooms, CRMs, email threads, Excel models, and SharePoint folders. But they don't have infrastructure: a unified layer where that data is accessible, structured, and ready for anything beyond manual retrieval.

From what I've seen across European mid-market transactions, a typical firm's information architecture looks roughly like this. Deal documents live in a virtual data room (Intralinks, Datasite) that exports PDFs but resists structured extraction. Relationship and pipeline data sits in a CRM (DealCloud, Salesforce) that was configured for contact management, not deal analysis. Financial models live on individual laptops. The actual reasoning behind investment decisions, the part that matters most, exists in email chains, meeting notes, and the heads of people who may or may not still work there.
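To make "unified layer" concrete, here is a minimal sketch of what one normalized record might look like once those scattered sources are pulled into a single queryable shape. Everything in it is hypothetical: the field names, the source labels, and the normalize function are assumptions for illustration, not a description of any real stack.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class DealRecord:
    """One record in a hypothetical unified deal-data layer."""
    deal_id: str
    source_system: str   # e.g. "vdr", "crm", "excel_model", "email"
    document_path: str   # original location, kept for the audit trail
    extracted_text: str  # structured extraction, not a raw PDF blob
    metadata: dict = field(default_factory=dict)
    ingested_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

def normalize(raw_exports: list[dict]) -> list[DealRecord]:
    """Map heterogeneous exports (VDR PDFs, CRM rows, model tabs) into one
    shape. The hard part (extraction quality, deduplication, lineage)
    lives behind this signature, and this sketch elides it."""
    return [
        DealRecord(
            deal_id=r["deal_id"],
            source_system=r["source"],
            document_path=r["path"],
            extracted_text=r.get("text", ""),
            metadata=r.get("meta", {}),
        )
        for r in raw_exports
    ]
```

The point of the sketch is how little of it is AI: it's plumbing, and the plumbing is what most firms are missing.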

KPMG's 2025 report on private credit technology documented this as a "fragmented technological landscape" where systems "operate in isolation, leading to integration challenges, data inconsistencies and operational inefficiencies." Holland Mountain found that legacy systems holding operations back make up 31% of the typical firm's technology stack. The IIF-EY annual survey reported that 79% of financial institutions cite data quality as the top barrier to AI deployment, and within that group, 96% pointed to noisy, untimely, and inadaptable data as the core issue.

In plain words: the data problem isn't something firms need to solve before adopting AI. It's something they need to solve before AI can even see what they're working with.

The trust gap that grows with scale

The second problem looks like regulation but isn't. GDPR, NDA clauses, data processing agreements, fiduciary obligations: these create real constraints on what data can flow to an AI system. But they're solvable. You can design around them with the right architecture, data residency, access controls, and processing agreements that satisfy compliance teams. The regulatory layer is an engineering problem, and engineering problems have engineering solutions.
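As an illustration of what "engineering around" the regulatory layer can mean in practice, here is a minimal policy-gate sketch. The specific fields and rules are assumptions I've made for the example; real constraints come out of a firm's actual DPAs, NDA terms, and compliance review, not a code snippet.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ProcessingPolicy:
    """Illustrative policy gate applied before a document reaches any model.
    The rules here are assumptions for the sketch, not a compliance recipe."""
    allowed_regions: frozenset   # data residency, e.g. {"eu-west-1"}
    allowed_roles: frozenset     # role-based access for deal-team members
    require_dpa: bool = True     # data processing agreement on file?

def may_process(doc_region: str, user_role: str,
                dpa_on_file: bool, policy: ProcessingPolicy) -> bool:
    """Return True only if residency, role, and contract checks all pass.
    Anything that fails stays out of the AI pipeline entirely."""
    return (doc_region in policy.allowed_regions
            and user_role in policy.allowed_roles
            and (dpa_on_file or not policy.require_dpa))

# Example: EU-resident document, authorized deal partner, signed DPA.
policy = ProcessingPolicy(frozenset({"eu-west-1"}), frozenset({"partner"}))
assert may_process("eu-west-1", "partner", dpa_on_file=True, policy=policy)
assert not may_process("us-east-1", "partner", dpa_on_file=True, policy=policy)
```

Checks like this can be designed, tested, and audited, which is exactly why the regulatory layer is tractable in a way the trust layer is not.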

The trust layer is different. A deal partner at a mid-market fund has spent years building relationships on the understanding that the information they receive is handled with discretion. Asking them to feed that information into an external AI system isn't a technical request. It's a question about whether they trust a new vendor with the material they've spent a career learning to protect. The answer, more often than not, is no. And it's not irrational. These are people whose professional reputations depend on confidentiality, being asked to share their most sensitive work product with a system they didn't build and don't fully understand.

Bain's 2025 executive survey surfaced something that most vendor pitches ignore: privacy and security concerns don't decrease as firms scale their AI use. They increase. Companies that moved from pilots to production reported rising concerns about data security, not falling ones. This makes perfect sense if you think of it as a risk problem that compounds with scale. A pilot processes ten documents from one deal. A production system processes thousands of documents from every deal the firm has done. The stakes go up, and so does the anxiety.

EY's 2025 governance survey found that AI adoption is outpacing governance frameworks across financial services, with risk awareness among C-suite still low. That combination explains why trust becomes harder rather than easier as firms move past the pilot stage. You can run a pilot without governance in place. You can't run production without it, and most firms haven't built it yet.

Why pilots die

Most AI initiatives in private markets follow the same trajectory. A deal team runs a pilot, the output is good, and somebody presents the results at an internal meeting. But the rest of the firm's processes, vendor contracts, compliance procedures, and reporting lines were built before anyone considered AI, and nobody rebuilds them to accommodate it. The pilot stays a pilot.

This pattern is not exclusive to private markets, but the enterprise data is instructive. RAND's 2024 study on AI project failure found that over 80% of AI projects fail, twice the rate of non-AI IT projects. The root causes weren't technical. They were misunderstanding the problem to be solved, lacking the right training data, and focusing on technology rather than process redesign. Gartner predicted that 30% of generative AI projects would be abandoned after proof of concept by end of 2025. By mid-2025, 42% of companies had abandoned most of their AI initiatives, up from 17% the year before.

The pattern is consistent across every case study I've read and every firm I've spoken with. A pilot succeeds because a small, motivated team adapts its workflow around the tool. Scaling fails because the rest of the organization adapts nothing: the processes, incentive structures, reporting lines, and vendor contracts stay exactly as they were. McKinsey's earlier assessment of generative AI in PE was direct about this: companies "historically failed with advanced analytics or AI transformations because it requires significant change management to win hearts and minds." They compared generative AI to "an intern who returns excited with work that looks good but could be totally wrong."

That's not a technology critique but an organizational one. The technology produces output, and the organization doesn't know what to do with it.

There's a structural reason this problem is especially persistent in private markets. The transaction chain from origination to close involves multiple intermediaries, each with their own systems, formats, and economic incentives. Upton Sinclair's observation applies with uncomfortable precision: it is difficult to get a man to understand something when his salary depends on his not understanding it. If your advisory fee structure assumes a certain volume of manual analytical work, you don't have a natural incentive to adopt tools that compress that work, even when the output quality improves. The process was built around labor, and the economics of the process still reflect it.

What we see from the inside

[Figure: Three layers of AI deployment in private markets]

Axion Lab is an AI-native services firm for European private markets. We run investment analysis, commercial due diligence, and sustainability assessments, with AI handling over half of the production work. I say this with full awareness of the perspective it gives us and the blind spots it creates. We're not selling a tool for firms to integrate into their processes. We're delivering the outcome directly: the DD report, the investment memo, the risk assessment. The client doesn't need to solve the data infrastructure problem internally because we built our firm around that problem from day one.

That changes the dynamic, but it doesn't eliminate the barriers this article describes. Every conversation with a potential client still starts with the same question: can we trust you with our deal data? That question doesn't resolve in a single meeting. It resolves over time, through processing real documents and through showing exactly what the system sees and what it doesn't, and that can take months. The trust problem is real regardless of whether you're adopting AI internally or working with a firm that runs on it.

The infrastructure problem also shows up differently but doesn't disappear. We receive data in the formats clients have it: PDFs, Excel files, email exports, data room downloads. We're not building intelligence on top of well-organized data. We're building intelligence on top of the same messy inputs that every deal team works with, and structuring them is part of the work. The difference is that we've built the processes and tooling to do that structuring at speed, because our firm was designed around it rather than adapting an existing workflow.
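A stripped-down sketch of that routing step, with hypothetical extractor names, looks something like this; the real pipelines behind each extractor (OCR, table detection, email threading) are where the actual work lives.

```python
from pathlib import Path

def extract_pdf(path: Path) -> str:
    return ""  # stand-in: the real version runs layout parsing and OCR

def extract_excel(path: Path) -> str:
    return ""  # stand-in: tab, table, and formula extraction

def extract_email(path: Path) -> str:
    return ""  # stand-in: threading and attachment handling

EXTRACTORS = {".pdf": extract_pdf, ".xlsx": extract_excel, ".eml": extract_email}

def ingest(paths: list[Path]) -> tuple[list[str], list[Path]]:
    """Route each client file to a format-specific extractor. Files in
    unrecognized formats go to a manual queue instead of being silently
    dropped; on real deals that long tail is never empty."""
    structured, manual_queue = [], []
    for p in paths:
        handler = EXTRACTORS.get(p.suffix.lower())
        if handler is None:
            manual_queue.append(p)
        else:
            structured.append(handler(p))
    return structured, manual_queue
```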

From what I've seen, this is the distinction that matters: not which AI model you use or which vendor you choose, but whether the organization doing the work was built for AI or is trying to bolt it on. The analytical capability itself is real: cross-referencing findings across financial, commercial, and legal workstreams, spotting contradictions between VDD narratives and data room evidence, running scenario analyses on compressed timelines. This produces output that would take a human team weeks to assemble. But the capability is only as good as the organizational layer underneath it.
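To show what one slice of that cross-referencing might look like in code, here is a deliberately simplified sketch of the contradiction-spotting step. The function name and tolerance are assumptions for illustration; the hard part it skips is matching entities, periods, and accounting definitions so that any two numbers are comparable at all.

```python
def flag_contradictions(vdd_claims: dict, data_room_figures: dict,
                        tolerance: float = 0.05) -> list:
    """Flag metrics where the VDD narrative and the data room evidence
    diverge by more than a relative tolerance. Illustrative only."""
    flags = []
    for metric, claimed in vdd_claims.items():
        observed = data_room_figures.get(metric)
        if observed is None:
            flags.append(f"{metric}: claimed {claimed}, no supporting figure found")
        elif abs(claimed - observed) / max(abs(observed), 1e-9) > tolerance:
            flags.append(f"{metric}: narrative says {claimed}, data room shows {observed}")
    return flags

# Example: growth claimed at 24% in the VDD, reconstructed at 18% from raw data.
print(flag_contradictions({"revenue_growth": 0.24}, {"revenue_growth": 0.18}))
# -> ['revenue_growth: narrative says 0.24, data room shows 0.18']
```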

Where the 5% goes from here

The firms that reached production scale with AI didn't get there by choosing the right tool. They got there by doing the organizational work that most firms skip: building data infrastructure before buying AI products, getting governance frameworks in place before scaling past the pilot, and rebuilding processes around what AI changes rather than bolting AI onto processes designed for manual work.

McKinsey's 2026 Global Private Markets Report found that PE firms have increased engagement with portfolio companies on technology and AI, that operating teams have more than doubled since 2021, and that the industry's focus is shifting toward "asset quality based on competitive moats, data advantages, embedded workflows, and execution capability." That reads like an organizational transformation agenda rather than a technology shopping list, and that's exactly the point.

The gap between the 60% experimenting and the 5% deploying will close. It won't close because the AI models get better. They're already good enough. It will close because the firms that are serious about this do the hard, slow work of rebuilding the layer underneath: the data, the governance, the trust, the processes. That work doesn't demo well at conferences and it doesn't make for exciting quarterly updates. But it's the work that separates a pilot that impresses the board from a system that changes how the firm actually operates.