The Incongruency Problem: Why AI Is Failing Enterprise

Michael Hofweller · Mar 17, 2026 · 10 min read
agent-native · enterprise-ai · organizational-design · research-facility

The world is experiencing one of the most rapid shifts in the history of labour and capital, swifter and more severe than any before it. Policy, people and organizations are struggling to catch up, to find their place in the world, to make sense of it all. And this revolution won't slow down. It will only compound as new advances in AI are released.

What does this mean for us today? Incongruency, I'd argue.

#The Round Peg Problem

We live in a world designed for humans. Not AI. This is my thesis as to why $30 to $40 billion in enterprise generative AI investment has produced almost nothing. MIT's NANDA initiative revealed some interesting data: only 5% of enterprise AI pilots achieve measurable financial returns [1]. The remaining 95% stall, delivering little to no impact on the P&L. A separate Forbes study found that fewer than 1% of executives report achieving significant ROI from their AI investments [2]. The Fujitsu Technology and Service Vision 2025 tells a similar story: while 98% of organizations are deploying generative AI, roughly 5% have achieved impact at a million-dollar scale [3].

This is not a technology problem. It's a design problem. Most of humanity is trying to fit a round peg into a square hole: holding on to notions of the past, to roles, rituals, infrastructure and processes, while attempting to right-size AI solutions to be backwards compatible with them.

It's analogous to bolting an engine onto a horse-drawn buggy. The buggy is still limited by everything it's built on, including the horse. You likely won't even see a meaningful gain in performance. It's simply the wrong rails to put the engine on.

#The Data Tells the Story

The MIT researchers identified this exact dynamic. They found that generic tools like ChatGPT boost individual productivity by over 10%, but those gains evaporate inside enterprise environments because AI is being bolted onto outdated, people-centric systems [1]. The technology works in demos but fails in daily operations [4]. Crucially, the primary barriers to value are not technical but organizational: a systemic learning gap that prevents businesses from integrating AI into their core workflows [1]. Enterprises became digitally enabled but never digitally native. They could digitize existing processes, but they could not reimagine how work itself was done [5].

There's a striking irony buried in the data. While only 40% of companies provide official AI subscriptions, workers from over 90% of surveyed organizations report regular personal AI tool usage for work tasks [1][6]. A thriving shadow AI economy exists where individuals have crossed the divide that their organizations cannot. The tools work. People know how to use them. The organizations are the bottleneck.

#Two Compounding Levels

We're facing this inefficiency on two compounding levels:

The culture. The processes, organizational structures, roles and rituals that make up daily life at work. These were designed around human-to-human handoffs, linear approval chains and manual coordination. They are structurally incompatible with a system where machines can reason, execute and iterate in parallel.

The technology. B2B tools were built for humans: visual, UI-first and inefficient for agents. An agent doesn't need a dashboard. It needs an API. This is why services like AgentMail are emerging, and why the industry is shifting toward API-first, headless infrastructure designed for machine-driven execution rather than human interaction. These are symptoms of the deeper incongruency.
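To make that contrast concrete, here is a minimal sketch (all names, endpoints and fields are hypothetical, not any real product's API) of the same capability exposed two ways: as a human-facing UI flow an agent would have to click through, versus a machine-readable tool spec an agent can parse, validate against, and call directly.

```python
import json

# Human-first: a dashboard flow. An agent would have to screen-scrape
# or simulate clicks to use this capability.
ui_flow = [
    "Log in to the billing dashboard",
    "Navigate to Invoices > New",
    "Fill in customer, line items, due date",
    "Click Save",
]

# Agent-first: the same capability as a machine-readable tool spec.
# An agent can load this, validate its arguments against the schema,
# and call the endpoint directly -- no dashboard in the loop.
create_invoice_tool = {
    "name": "create_invoice",
    "endpoint": "POST /v1/invoices",
    "input_schema": {
        "type": "object",
        "properties": {
            "customer_id": {"type": "string"},
            "line_items": {"type": "array"},
            "due_date": {"type": "string", "format": "date"},
        },
        "required": ["customer_id", "line_items"],
    },
}

def describe_for_agent(tool: dict) -> str:
    """Serialize a tool spec so an agent can load it into its context."""
    return json.dumps(tool, indent=2)

print(describe_for_agent(create_invoice_tool))
```

The point isn't the specific schema format; it's that the agent-first version is discoverable and executable by a machine, while the UI flow only makes sense to a pair of eyes.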

Combined, these produce a compound effect that drastically limits the impact AI can deliver.

#The Loosely Organized Research Facility

Closing that gap requires dedicated research into reimagining an agent-native world. This is the impetus for the Loosely Organized Research Facility.

Our mission is to explore the way the world could be, rather than the way it is, through applied product research for an agent-native world. LORF will not study agent-native organizations theoretically. It will be one. A living experiment, built from the ground up, without legacy human-centric processes to demonstrate what new operating models, and the underlying technology, could look like. The proof of concept is the organization itself.

Here are some of the questions we're exploring:

  • What does an agent-native organization look like?
  • How do agents from other organizations find each other?
  • How do agents cooperate across systems and at scale?
  • What does agent-native infrastructure and tooling look like?
  • How do agents operate autonomously to solve problems?

#Starting With Memory

To start, I've identified the first problem to solve: memory.

The MIT research specifically identified this gap: most enterprise AI systems lack memory, contextual adaptation and continuous improvement. The researchers called these the exact capabilities that separate transformative AI from expensive productivity theatre [1].

Tacit knowledge becomes critically important as an organization grows in complexity. Without it, everything else falls apart. Without persistent memory, external people interacting with my agents won't have the rich, context-aware conversations they would have with a human. Capturing and managing that context well will be crucial for my agents, and for me.
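As a starting point for the exploration, here is a minimal sketch of persistent, per-counterpart memory: the agent records facts from each conversation and recalls them the next time that person shows up, which is exactly what the stateless deployments described in the MIT research lack. This is a toy illustration, not LORF's actual design; a real system would need retrieval ranking, consolidation and much more.

```python
import sqlite3

class AgentMemory:
    """Minimal persistent memory: facts keyed by counterpart.

    Hypothetical sketch -- a production system would add embeddings,
    relevance-ranked retrieval, and memory consolidation.
    """

    def __init__(self, path: str = ":memory:"):
        self.db = sqlite3.connect(path)
        self.db.execute(
            "CREATE TABLE IF NOT EXISTS memories (counterpart TEXT, fact TEXT)"
        )

    def remember(self, counterpart: str, fact: str) -> None:
        """Store a fact learned during a conversation."""
        self.db.execute(
            "INSERT INTO memories VALUES (?, ?)", (counterpart, fact)
        )
        self.db.commit()

    def recall(self, counterpart: str) -> list[str]:
        """Retrieve everything previously learned about a counterpart."""
        rows = self.db.execute(
            "SELECT fact FROM memories WHERE counterpart = ?", (counterpart,)
        )
        return [fact for (fact,) in rows]

# First conversation: the agent learns something.
memory = AgentMemory()
memory.remember("alice@example.com", "prefers async updates over meetings")

# Next conversation: the agent starts with context instead of a blank slate.
print(memory.recall("alice@example.com"))
```

Swap `":memory:"` for a file path and the context survives across sessions, which is the property that separates an agent you can build a relationship with from one that forgets you every time.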

More on this exploration soon. Follow along.


#References

[1] MIT NANDA Initiative, "The GenAI Divide: State of AI in Business 2025." Based on 150 executive interviews, a survey of 350 employees, and analysis of 300 public AI deployments. mlq.ai

[2] Forbes Research, "Forbes AI Study 2025." Survey of enterprise executives on AI ROI measurement and investment outcomes. mavvrik.ai

[3] Fujitsu, "Technology and Service Vision 2025." As reported in CIO, "Breaking the 5% ROI ceiling: Why enterprise AI stalls at the pilot stage." cio.com

[4] ZoomInfo, "2025 Go-to-Market AI Survey." As reported in Brookings Register. brookingsregister.com

[5] Consulting Magazine, "Why Enterprise AI Stalled and What Is Finally Changing in 2026." consultingmag.com

[6] Fortune, "MIT report: 95% of generative AI pilots at companies are failing." fortune.com