Rethinking Delivery For AI (Part 2)
Services may be the winning model in the AI era: best suited to overcome adoption hurdles and, finally, more scalable
I previously wrote about why SaaS may be the wrong delivery model for GenAI (which I’ll refer to simply as AI).
Each major technology platform shift has transformed how software is built, delivered, and monetized, ushering in new dominant business models. Personal computing popularized licensed software, characterized by one-time purchases installed on local machines. The mobile era amplified the freemium model, with app stores enabling frictionless distribution and monetization through in-app purchases, ads, or subscriptions. The cloud facilitated SaaS, largely replacing upfront licenses with scalable subscriptions and continuous delivery.
These delivery methods became widely adopted because there was strong business model-technology fit: they best leveraged the unique strengths of their respective platforms.
What does that look like with AI?
Enablers for Successful AI Deployment
I focus on enterprises because their scale, data volume, and urgency to automate make AI deployment economically viable. They also have the budgets to fund the experimentation and infrastructure needed to operationalize AI at scale. However, deploying AI effectively requires overcoming several challenges.
Establish AI-human handoffs
Enterprise workflows demand reliability and consistency, with predictable outputs for given inputs. While LLMs have made strides, they are probabilistic and prone to errors or hallucinations. To improve performance, firms use prompt optimization, output evaluations, and model fine-tuning, all of which rely on human feedback. Designing AI-human handoffs, seamless interactions where humans review and refine AI outputs, is critical to embed this judgment into workflows.
Even with these improvements, fully autonomous AI carries risks, as errors in enterprise settings can be costly. Human oversight remains essential to supervise outputs and ensure accuracy. Enterprises also need guardrails to protect data privacy and intellectual property. While startups are building tools to automate these safeguards, humans remain the ultimate trust and safety layer. This need for human involvement and risk mitigation is underscored by McKinsey’s latest survey of enterprise AI adoption.
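One way to think about such a handoff is as a routing rule that sends low-confidence AI outputs to a human reviewer instead of shipping them automatically. The sketch below is purely illustrative: the confidence score, threshold, and function names are my own assumptions, not any particular vendor's API.

```python
from dataclasses import dataclass

# Hypothetical illustration of an AI-human handoff: outputs below a
# confidence threshold are routed to a human reviewer rather than
# auto-approved. The confidence score is assumed to come from evals
# or a separate scoring model.
@dataclass
class DraftOutput:
    text: str
    confidence: float

def route_output(draft: DraftOutput, threshold: float = 0.85) -> str:
    """Return the next workflow step for this draft."""
    if draft.confidence >= threshold:
        return "auto_approve"   # low-risk: ship directly
    return "human_review"       # flag for an analyst to review and refine

print(route_output(DraftOutput("Q3 revenue summary...", 0.92)))   # auto_approve
print(route_output(DraftOutput("Contract clause edit...", 0.60))) # human_review
```

In practice the threshold would vary by task risk: a billing email might auto-approve at 0.8, while a contract clause might always require review.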
Bridge data silos
AI’s effectiveness hinges on high-quality, accessible data. Yet, most enterprises have data scattered across CRMs, spreadsheets, ticketing platforms, and emails. To deploy AI, this data must be located, cleaned, and integrated.
Many enterprises have built data pipelines through a variety of integration approaches, but AI demands more. It requires scalable, real-time data flows tailored to its workloads. This often means redesigning pipelines, upgrading APIs, or adopting modern infrastructure like data fabrics (unified systems for managing scattered data sources). Without this foundation, AI outputs risk being unreliable or inaccurate.
Identify the right use cases
At its core, when you want a computer to do work for you, there are two main approaches: either you build an app by writing code yourself, or you use an app someone else has already built.
Before AI, writing code was complex and costly, so most industries defaulted to the second option. Developers would take a problem, design a solution, then package it as a one-size-fits-all abstraction for end-users.
AI promises to change this dynamic by acting as an assistant or autonomously completing tasks, tailored to individual needs. But the “last mile” of automation remains elusive because it’s inherently subjective. For example, two analysts looking at the same data might reach different conclusions, and even if they agree, the way they communicate those insights will vary depending on their style and audience. Similarly, two writers with the same viewpoint could write very different articles.
The future of AI-native products will likely involve domain experts, the end-users themselves, building and customizing parts of applications that touch this subjective last mile. I think this partly explains the rapid adoption of AI-powered coding environments like Cursor, Windsurf and prototyping tools like Lovable, Replit.
This introduces a unique challenge: identifying valuable use cases becomes a more nuanced, subjective process. The potential impact depends heavily on the expertise of the end-user and the specific details of an enterprise’s workflows, much more so than traditional SaaS products.
Unlike SaaS, there is thus unlikely to be a one-size-fits-all AI solution. AI products will likely resemble domain-specific platforms that empower users to create and edit their own agents. Success requires bottom-up discovery within each organization.
Implement training and process transformation
This brings me to my final point.
To unlock AI’s full potential, employees must be retooled. Unlike traditional SaaS, AI tools offer greater flexibility but require new skills to navigate. Staff will need training not only on how to use these tools effectively but also on how to identify where AI can add value within their own workflows. Additionally, they must learn to manage new modes of interaction with AI systems, such as writing prompts and reviewing outputs.
Business processes themselves must be redesigned to integrate AI seamlessly. This involves rethinking workflows, setting up feedback loops, and creating incentives that encourage consistent adoption and experimentation.
These challenges highlight why SaaS struggles with AI delivery. SaaS models prioritize standardization and scalability, but AI requires customization and human oversight, areas where professional services excel.
The Right Delivery
Professional services are uniquely positioned to address AI’s adoption challenges, leveraging their expertise in tailored solutions and complex change management.
Services mitigate AI adoption challenges
Hired for taste
Enterprises hire service firms for their trusted methodologies and consistent, high-quality outcomes. Large service firms like McKinsey or Deloitte set industry standards for delivery processes, methodology and output style.
While outcomes are tailored to specific clients, there is less variability in individual contributors’ output by design, with quality control built into processes. In fact, most service firms already maintain extensive reference libraries of past artifacts and templates. Standardization and repeatability make it easier to identify use cases and design effective AI-human handoffs at scale.
Scale to support required investments
Service firms specialize in specific workflows (e.g., accounting firms handling bookkeeping, law firms drafting contracts) across multiple clients. This focus and depth enable domain-specific investments to maximize AI performance: agent-building platforms for task automation, tooling for prompt optimization, and guardrails for output quality.
This extends to organizational design: for example, dedicated teams that evaluate outputs, refine prompts, and fine-tune models to ensure consistent performance.
Data intake is built-in
Service firms already have the manpower and project management capabilities to handle client data intake. They also have the senior management buy-in required to navigate enterprise complexity and extract information, including tacit knowledge not stored in systems.
They also have deep experience processing raw, unstructured data. Take accounting: clients hand over invoices, bank statements, and payroll records; the service provider then processes and reconciles them and delivers the outcome.
They may also have developed the connectors required to access client data programmatically across the systems commonly used in that domain.
AI makes the data sharing process more seamless by:
Reducing the cost of building lightweight system connectors to source data directly from client systems (e.g., bank APIs, POS systems), cutting manual handoffs
Making it easier to extract unstructured data from common sources (e.g., email, PDF documents)
Processing data faster (e.g., classification, transformation) and with less human oversight
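To make the second and third points concrete, here is a toy sketch of extracting structured fields from unstructured invoice text. A production system would use an LLM or a document-AI service on far messier inputs; the regexes and field names below are purely illustrative stand-ins.

```python
import re

# Toy stand-in for LLM-based extraction: pull structured fields out of
# unstructured invoice text. The patterns and field names are
# illustrative assumptions, not a real document schema.
def extract_invoice_fields(text: str) -> dict:
    patterns = {
        "invoice_number": r"Invoice\s*#?\s*([A-Z0-9-]+)",
        "total": r"Total[:\s]*\$?([\d,]+\.\d{2})",
        "due_date": r"Due\s*(?:Date)?[:\s]*(\d{4}-\d{2}-\d{2})",
    }
    return {
        field: (m.group(1) if (m := re.search(pat, text, re.I)) else None)
        for field, pat in patterns.items()
    }

sample = "Invoice #INV-2041\nDue Date: 2025-07-01\nTotal: $1,250.00"
print(extract_invoice_fields(sample))
# {'invoice_number': 'INV-2041', 'total': '1,250.00', 'due_date': '2025-07-01'}
```

The value of AI here is precisely that this brittle pattern-matching step can be replaced with a model that tolerates messy, varied source documents.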
Services become more scalable
The conventional wisdom is that professional services don’t scale. Every new client means more bodies, more hours, more overhead. Beyond a certain size, additional administrative layers are needed to maintain the quality of deliverables and coordinate resources.
AI changes the scalability equation.
Increase staff productivity
AI meaningfully improves the productivity of service teams by taking on repetitive, time-consuming tasks. Administrative work like data entry, reporting, or billing can now be partially or fully automated using LLMs, reducing operational overhead and freeing staff to focus on higher-value client interactions.
Beyond admin, AI can accelerate the creation of core client deliverables. Research tasks that might once have taken hours, such as reading industry reports, summarizing regulatory frameworks, and extracting insights from financials, can be compressed. Teams can point AI at internal or public data sources to quickly generate a research brief that kickstarts analysis, or early drafts of client outputs.
The result isn’t full automation, but delivery acceleration. Staff move faster, with less friction. And the firm increases its effective capacity without proportional increases in headcount.
More consistent, high-quality outcomes at scale
Traditionally, service quality depends heavily on the experience and judgment of individual team members. AI offers a path to scale that quality by turning institutional knowledge into reusable assets.
Every deliverable is stored in a structured repository and enriched with metadata like industry, project type, and client goals. When a new engagement starts, semantic search retrieves relevant past work. Teams begin work with a tailored foundation of prior thinking, adapted to the context at hand.
This institutional memory becomes part of the delivery process itself. LLMs are prompted using standardized templates, and past excerpts are embedded into their context windows using retrieval-augmented generation. The AI generates working drafts with historical precedent in mind, improving both quality and consistency. Over time, firms refine these workflows using feedback loops, human-in-the-loop reviews, and eval sets to continuously improve performance.
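The retrieval step described above can be sketched with toy bag-of-words vectors. Real deployments would use embedding models and a vector database; the repository contents and metadata fields here are invented examples, not any firm's actual data.

```python
import math
from collections import Counter

# Minimal sketch of retrieval-augmented generation's retrieval step:
# rank past deliverables by similarity to a new engagement's brief.
# Bag-of-words vectors stand in for real embeddings.
def vectorize(text: str) -> Counter:
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Repository of past deliverables, enriched with simple metadata.
repository = [
    {"industry": "retail", "summary": "pricing strategy for grocery chains"},
    {"industry": "banking", "summary": "risk model validation for retail banks"},
    {"industry": "retail", "summary": "loyalty program redesign for apparel"},
]

def retrieve(query: str, top_k: int = 2) -> list:
    qv = vectorize(query)
    ranked = sorted(repository,
                    key=lambda d: cosine(qv, vectorize(d["summary"])),
                    reverse=True)
    return ranked[:top_k]

# The top matches would then be embedded into the LLM's context window.
print(retrieve("pricing strategy for a retail grocery client"))
```

The retrieved excerpts become the "historical precedent" in the prompt, which is what lets drafts start from the firm's prior thinking rather than from scratch.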
High-quality work isn’t just about consistency; it’s also about relevance. Historically, that relied on someone on the team knowing the client well. But now, delivery workflows can draw directly from client-specific inputs: documents, CRM records, prior correspondence. This allows outputs to be automatically tailored, adjusting tone, assumptions, or formatting to reflect that client’s history and preferences.
Each project contributes back to the system, creating a compounding flywheel of expertise. The more you deliver, the better your delivery gets.
Expands market reach
As a firm codifies its expertise into structured knowledge systems, it unlocks new, scalable ways to serve the long tail of the market that has traditionally been uneconomical to serve.
AI allows firms to repackage their accumulated know-how into digital-first interfaces: a chat assistant trained on past deliverables, a guided decision tool built on proprietary frameworks, or even an avatar that helps users generate outputs.
In effect, firms can create productized layers atop their service stack. These AI-powered interfaces can address routine queries, provide templated analysis, or generate first drafts grounded in the firm’s methodologies. These lightweight tools can be delivered at significantly lower variable cost, enabling service firms to address smaller clients that cannot afford full-service fees.
This also becomes a powerful acquisition channel. Clients who begin with digital self-serve tools gain exposure to the firm’s capabilities and approach, which can translate into higher-value engagements once their needs grow more complex. The AI interface becomes the modern equivalent of a “free consult”, but one that scales infinitely and captures intent signals that feed into a more intelligent sales pipeline.
By codifying expertise and embedding it in scalable delivery channels, firms transform themselves from time-constrained consultancies into hybrid providers.
Putting it all together
My core contention is that the AI platform shift will usher in a new pre-eminent business model for delivering that technology, and that model is services.
Higher monetization potential. In services, customers are already accustomed to paying based on billable hours or cost-plus pricing. AI introduces the opportunity to decouple effort from output, moving pricing closer to outcomes delivered rather than hours logged. Since the baseline for service fees is already high, firms can price disruptively, offering better outcomes at the same or lower cost, and still retain margins. Higher fees relative to SaaS also mean inference and tooling costs are meaningfully lower as a percentage of revenue, giving service firms more headroom to absorb these costs.
Better gross margins over time. The classical argument is that services are structurally low margin. However, AI transforms the delivery model: automating repetitive work, augmenting human expertise, and embedding institutional knowledge into every engagement. As delivery becomes faster and more efficient, margins should expand over time.
More addressable market. AI unlocks scalability for services in a way that was previously impossible. By reducing the cost to serve, firms can bring high-quality service to clients who historically couldn't afford it. Alternatively, they can deliver superior outcomes at the same price point and win market share. As AI continues to improve quality and reduce cost, the economic case for clients to outsource more workflows becomes increasingly compelling.
Stronger lock-in. Once a client outsources a workflow, rebuilding that capability in-house becomes prohibitively expensive. If the service firm’s quality improves over time and adapts deeply to client-specific contexts, switching providers also becomes harder. The firm’s internal processes, proprietary tooling, and growing body of codified knowledge are hard to replicate, creating a defensible moat over time.
AI is democratizing intelligence and real-time reasoning. That shift makes it increasingly hard to execute SaaS businesses successfully, but it supercharges services. In services, AI has business model-technology fit.
I’d love to chat if you are exploring or building an AI-native services business.
I muse on how AI impacts problem areas that are personal to me or that interest me in a series of articles. I hope it sparks learning and conversations with like-minded folk.