Sequoia AI Ascent 2026: What AI Founders Should Change in Their Pitch Deck
A founder-focused 2026 follow-up to Sequoia AI Ascent: long-horizon agents, AI-native services, investor expectations, and how to update your AI startup pitch deck.
Updated April 30, 2026 — AI Ascent 2026 has now taken place
If you came here for an event recap, close the tab. The keynote is on YouTube, the takes are everywhere, and another retelling helps no one.
This article is for AI founders preparing to raise in 2026. Sequoia held its fourth AI Ascent in late April 2026, opening with Andrej Karpathy and a partners' keynote framed around long-horizon agents and "selling work." Around it sit four pieces of Sequoia writing that, taken together, form the most coherent investor argument in the market right now: AI Ascent 2025, AI in 2026: A Tale of Two AIs, 2026: This is AGI, and Services: The New Software. The point of this article is to translate that thinking into pitch deck decisions you can actually make this week.
By the end, you should know what to change in your AI startup narrative, which metrics matter more in 2026, how to talk about defensibility without hand-waving, and how to avoid pitching a product that investors quietly believe will become a feature in the next model release.
InnMind is not affiliated with Sequoia Capital. This is an independent founder playbook based on public Sequoia materials and current market data.
TL;DR: what AI founders should take from Sequoia's 2026 AI thinking
AI Ascent 2025 already moved the conversation from "AI is big" to "AI is moving from generic tools toward applications, vertical workflows, agents, infrastructure, and new business models."
In 2026, Sequoia's thesis hardens around three things: long-horizon agents, AI-native services that sell completed work, and a sober view that AGI and data-center timelines will slip while adoption keeps rising.
Stop pitching "AI-powered tools." Start pitching owned workflows, measurable outcomes, replaced budgets, and evidence of repeat usage.
A 2026-ready AI pitch deck answers reliability, agent evaluation, data advantage, workflow depth, distribution, customer ROI, and pricing power — not just model novelty.
The strongest narratives explain why model progress makes you stronger, not just survivable.
The funding market is huge but historically concentrated. Q1 2026 hit a record $285.5 billion globally, but a single round (OpenAI's $122B) drove 43% of that, and global deal count fell to its lowest since Q4'16. Capital is plentiful for the right narrative and brutal for everyone else (CB Insights).
What is Sequoia AI Ascent?
Direct answer: Sequoia AI Ascent is Sequoia Capital's annual AI event and content series, where founders, researchers, and investors set the agenda for the next 12 months of the AI market. For founders, the value isn't the event recap — it's reading how top-tier investors are actually thinking about applications, agents, infrastructure, and business models.
The third edition, AI Ascent 2025, took place on May 2, 2025, in San Francisco. Sequoia gathered more than 100 AI founders and researchers, with publicly listed speakers including Sam Altman, Jensen Huang, Jim Fan, Jeff Dean, Mike Krieger, and Bret Taylor. Themes: AI agents, vertical applications, data centers, open source, and new business models (Sequoia Capital).
The fourth edition, AI Ascent 2026, took place at the end of April 2026. The opening fireside was Andrej Karpathy with Konstantine Buhler, followed by a partner keynote from Pat Grady, Sonya Huang, and Buhler titled "This is AGI."
For a founder, the useful question isn't "what happened on stage." It's: what do these signals tell us about how the most influential AI investors now decide what's fundable?
In 2025, the answer was that AI had stopped being a model story and had become an application, workflow, infrastructure, and business-model story. In 2026, the answer is sharper: the next strong AI startup pitch must explain what work the company owns, why long-horizon agents make that work newly possible, and why the startup gets stronger as frontier models improve.
What changed between Sequoia AI Ascent 2025 and 2026?
Direct answer: The conversation moved from broad AI opportunity to a single hard question — which companies can turn model progress into reliable, paid, workflow-level outcomes? Sequoia's 2026 writing points toward long-horizon agents, AI-native services, infrastructure bottlenecks, enterprise AI fatigue, and founder narratives focused on completed work rather than productivity.
Here is the founder-relevant shift, in one table.
| 2025 AI Ascent theme | 2026 Sequoia update | Founder implication |
|---|---|---|
| AI opportunity is huge | Adoption keeps climbing, but data-center buildouts and AGI timelines are slipping | Be ambitious, but show timing discipline and proof |
| Applications matter | Long-horizon agents become the application frontier | Show workflow ownership, not chat UI |
| Agents are emerging | Agents are "doers," not "talkers" | Pitch work completed, not prompts answered |
| Infrastructure matters | Compute, data center, and supply-chain constraints shape AI economics | Explain cost curve, margins, scaling assumptions |
| Vertical apps can win | AI-native services may capture work budgets bigger than software budgets | Sell outcomes, not seats |
| Enterprise adoption is rising | Enterprises hit fatigue when trying to build AI in-house | Show why customers buy from you instead of building it themselves |
David Cahn's December 2025 essay AI in 2026: A Tale of Two AIs anchors this view. Cahn predicts a "year of delays" for data centers and AGI timelines while end-user adoption keeps accelerating. The same essay introduces the "$0 to $1B" club as the new ceiling for fast-scaling AI companies and notes that strong AI startups can already generate more than $1M in revenue per employee. It also flags enterprise fatigue with in-house AI implementation as one of the year's most exploitable openings for startups. (Sequoia Capital)
That combination matters when you raise. Investors can believe AI adoption is real and still kill your deck if your business depends on cheap compute, a vague timeline, or the assumption that enterprises will magically adopt your product the way they adopted Slack.
In 2026, "AI is growing fast" is not a fundraising argument. It is the backdrop. Your deck has to show where you fit in the new stack.
The investor question every AI founder must answer in 2026
Direct answer: What stops OpenAI, Anthropic, Google, another foundation-model lab, or a faster-moving startup from turning your product into a feature? A strong AI pitch explains why you benefit from model progress instead of being erased by it.
This is not theoretical. Sequoia partner Julien Bek has publicly pushed founders toward AI-enabled service models — autopilots that combine model capability with human judgment — and away from thin tooling that lives one model release away from displacement. Business Insider summarized the argument cleanly: the fragile AI tool is the one that gets eaten the moment a frontier model can do the same job inside its own product surface. (Business Insider)
A weak answer to the displacement question sounds like this:
"We use the latest LLMs to automate X."
A stronger answer sounds like this:
"We own a high-frequency workflow in a regulated or complex domain. The system improves through proprietary task data, user feedback, integrations, evaluation loops, and domain-specific guardrails. Better models improve our margins and output quality. The model is not the product."
For AI startup defensibility, investors are looking for evidence in seven areas:
Workflow depth. Are you embedded in a real business process, or just generating outputs?
Proprietary data. Does usage create data that improves the product?
Integrations. Are you connected to systems of record, permissions, and customer operations?
Reliability. Can you measure success, failure, quality, and recovery?
Distribution. Can you reach the buyer faster or cheaper than competitors?
Trust and compliance. Can customers actually use you in sensitive workflows?
Outcome ownership. Are you paid for seats, usage, tasks completed, savings, or revenue impact?
You don't need all seven on day one. You do need your deck to show which path you are building.
From AI tools to AI-native services
Direct answer: AI-native services are companies that use AI to deliver completed work or business outcomes — not software tools. In Sequoia's framing, a copilot sells a tool to a professional; an autopilot sells the work itself.
Julien Bek's Services: The New Software makes the distinction explicit. Sell tools and you compete with the model. Sell work and model improvement makes your service faster and cheaper. The post also reframes the addressable market: for every dollar spent on software, six are spent on services. (Sequoia Capital)
This is one of the most important fundraising angles for AI founders right now.
Your deck should clarify which business you are actually building.
| Business type | What customer buys | Common investor concern | Stronger 2026 version |
|---|---|---|---|
| AI tool | Productivity software | "Will this become a feature?" | Show deep workflow adoption and retention |
| AI copilot | Better professional output | "Is the human still doing all the work?" | Show measurable speed, quality, and revenue lift |
| AI agent | Delegated task completion | "Can it work reliably?" | Show evaluations, guardrails, and task success rates |
| AI-native service | Completed business outcome | "Is this software or services?" | Show gross margin path, repeatability, and workflow data |
The market is already producing the proof points.
Legora, the Stockholm legal-AI company, announced on April 2, 2026, that it had crossed $100M ARR, roughly 18 months after its general launch, serving more than 1,000 law firms and in-house legal teams across 50+ markets. By the end of April it had extended its Series D to $600M total at a $5.6B valuation, with Atlassian and NVIDIA's NVentures joining the cap table. The most interesting part isn't the headline number — it's the way Legora describes its product evolution: customers are moving from discrete tasks like research and review to multi-step, agentic workflows that handle real legal work end-to-end. (Legora, TechCrunch)
Auctor emerged from stealth on April 15, 2026 with $20M led by Sequoia, building what it calls an AI-native system of action for enterprise software implementation — a market where projects routinely miss deadlines and one in six exceeds budget by more than 200%. Bek's pitch on the round: "For every dollar spent on software, six are spent on services. Auctor is building the agentic operating system for software implementation to go after those six dollars." That's the services thesis, made operational. (GlobeNewswire)
Netomi raised a $110M Series C on April 30, 2026, led by Accenture Ventures — a global services firm putting its own balance sheet behind agentic customer experience for clients including United Airlines, Delta, Paramount, and DraftKings. The round size is the easy story. The harder story is that the world's largest services company is now buying equity in an AI-native services platform. (Reuters via U.S. News)
These three companies are not interchangeable. But they point in the same direction: investors are funding AI that attaches to expensive, repeated, measurable work — not AI that adds another panel to a dashboard.
For your deck, the implication is simple. Don't only describe the AI. Describe the work budget you are replacing or expanding.
Long-horizon agents: why "doers" matter for pitch decks
Direct answer: Long-horizon agents are AI systems that can plan, use tools, maintain context, retry, recover from mistakes, and complete multi-step work over time. For pitch decks, this changes the product story from "our AI answers questions" to "our system completes a valuable workflow reliably."
In 2026: This is AGI, Pat Grady and Sonya Huang argue for a functional definition: AGI is the ability to figure things out. They describe three ingredients — pre-training as baseline knowledge, inference-time compute as reasoning, and long-horizon agents as iteration. Coding agents are the first concrete example. The line that matters for founders: AI applications of 2023 and 2024 were "talkers"; the applications of 2026 and 2027 will be "doers." (Sequoia Capital)
Skip the philosophical AGI debate. The product implication is what counts.
The same essay asks founders five questions in plain language. What work can your agent accomplish? How will you productize it? Can you do it reliably? Are you obsessively improving your agent harness? How will you price and package outcomes?
Translate that into how the deck looks today versus how it should look in 2026:
| Deck slide | Old AI pitch | 2026 investor-ready version |
|---|---|---|
| Problem | "This workflow is inefficient" | "This work is expensive, repeated, measurable, and already budgeted" |
| Customer | "Teams in X industry" | "The budget owner is X, currently spending Y on people, vendors, or legacy tools" |
| Product | "AI assistant for X" | "Agent or autopilot that completes X with human oversight where needed" |
| Moat | "We use GPT/Claude" | "Workflow data, integrations, evaluation loop, distribution, domain trust" |
| Traction | "Users tried it" | "Paid usage, retention, task completion rate, ROI, expansion" |
| Market | "Huge TAM" | "Specific workflow budget today, expansion path tomorrow" |
| Team | "AI builders" | "Domain experts who understand the work and can sell to the buyer" |
| Metrics | "Number of generated outputs" | "Tasks completed, error rate, review rate, time saved, cost reduction, revenue impact" |
Agent interoperability is starting to matter, too. The Linux Foundation announced in April 2026 that the Agent-to-Agent (A2A) protocol had passed 150 supporting organizations, with deep integration across Google, Microsoft, and AWS, and production deployments in supply chain, financial services, insurance, and IT operations. (Linux Foundation)
For a seed-stage deck, you don't need to over-explain protocols. You do need to show that your agent can live inside a real enterprise environment: systems, permissions, data flows, audit trails, and handoffs.
How to update your AI startup pitch deck after Sequoia AI Ascent
Direct answer: In 2026, an AI pitch deck should move from model novelty to business evidence. The deck has to prove the company owns a specific workflow, can complete useful work reliably, has a path to defensibility, and knows which investors are relevant for its category.
A practical 10-slide checklist follows. None of this is hypothetical — you can stress-test your current deck against it tonight.
1. Problem: define the expensive workflow, not the generic pain
Bad: "Customer support is broken."
Better: "Mid-market travel companies spend millions a year on support tickets that require access to booking, refund, loyalty, and policy systems. The expensive part isn't answering questions; it's resolving cases correctly across four systems of record."
Investors have heard every version of "X is inefficient." Show the workflow, the cost, and the current workaround.
2. Customer: show who owns the budget
AI founders pitch users. They forget to pitch buyers.
Show:
- who feels the pain
- who owns the budget
- who approves the purchase
- what line item you replace
- whether you sell to a department, an SMB owner, a service provider, or an enterprise procurement function
If there is no budget owner, you may have engagement but not a business.
3. Current workaround: human service, outsourcing, spreadsheets, or legacy software
The best AI-native services start where customers already pay for the work. Sequoia's services thesis specifically points to outsourced labor as a strong wedge — the budget exists, the customer already buys an outcome, and the procurement habit is built. (Sequoia Capital)
Name the existing substitute. Do customers use consultants, agencies, junior analysts, offshore teams, spreadsheets, internal ops teams, or legacy SaaS plus manual labor? That is where pricing power comes from.
4. Product: what work does your AI complete?
Don't only show UI. Show the job.
Bad: "Our dashboard lets users generate reports."
Better: "Our agent ingests contracts, compares clauses against company policy, flags risk, drafts negotiation language, and routes exceptions to legal."
Investors need to see task completion, not content generation.
5. Agent reliability: how do you evaluate output quality?
This is the most commonly missing slide in AI decks.
Add: task success rate, human review rate, error taxonomy, escalation logic, evaluation dataset, customer feedback loop, before/after quality benchmark, and examples of failure handling.
If the agent can act, the investor will ask: what happens when it's wrong? Have an answer.
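The metrics above are easier to defend if you actually instrument them from day one. A minimal sketch in Python of computing task success rate, human review rate, and escalation rate from logged task outcomes; `TaskRun` and its field names are hypothetical, not a standard schema, and a real harness would also log error categories and recovery actions:

```python
from dataclasses import dataclass

# Hypothetical task log entry. Field names are illustrative only.
@dataclass
class TaskRun:
    completed: bool        # did the agent finish the task successfully?
    human_reviewed: bool   # did a human check the output before delivery?
    escalated: bool        # was the task handed off to a human to resolve?

def reliability_metrics(runs: list[TaskRun]) -> dict[str, float]:
    """Compute the headline reliability numbers investors ask for."""
    n = len(runs)
    return {
        "task_success_rate": sum(r.completed for r in runs) / n,
        "human_review_rate": sum(r.human_reviewed for r in runs) / n,
        "escalation_rate": sum(r.escalated for r in runs) / n,
    }

# Four illustrative runs: three completed, two reviewed, one escalated.
runs = [
    TaskRun(completed=True, human_reviewed=False, escalated=False),
    TaskRun(completed=True, human_reviewed=True, escalated=False),
    TaskRun(completed=False, human_reviewed=True, escalated=True),
    TaskRun(completed=True, human_reviewed=False, escalated=False),
]
print(reliability_metrics(runs))
# {'task_success_rate': 0.75, 'human_review_rate': 0.5, 'escalation_rate': 0.25}
```

The point isn't the code — it's that these numbers exist, move over time, and can be put on a slide.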
6. Data moat: what proprietary pattern improves the product?
Don't write "data moat" unless you can defend it.
Show: what data is captured through usage, why competitors cannot easily get it, how it improves quality or speed or personalization, whether it compounds across customers or only within each customer, how privacy and permissions are handled.
A data advantage should be operational, not decorative.
7. Distribution: why can you reach buyers efficiently?
Strong AI products still die from weak distribution.
Show: founder-led sales motion, partnerships, community, marketplace strategy, integrations, channel access, CAC assumptions, sales-cycle evidence, and why your team has unfair access to the buyer.
Investors fund route-to-market clarity, not just product insight.
8. Traction: paid pilots, retention, repeated use, task completion
Raw user count is rarely enough anymore. Usage quality is.
Better metrics: paid pilots, expansion from pilot to contract, weekly or daily active usage inside a workflow, completed tasks, retained teams, time saved, cost reduction, revenue generated, human review reduction, customer quotes with specific ROI.
The Q1 2026 funding data makes this more important, not less. Quarterly venture funding hit a record $285.5 billion globally — but a single deal (OpenAI's $122B) accounted for 43% of it. $100M+ mega-rounds were 86% of all funding. Deal count fell 15% quarter over quarter to roughly 7,000, the lowest since Q4 2016. The headline market is hot. The practical market is brutally selective. (CB Insights)
9. Business model: tool, copilot, autopilot, or services wedge
Be explicit about what you charge for. Seat-based SaaS? Usage? Per completed task? Percentage of savings? Success fee? Hybrid software-plus-service? Enterprise license?
If you're moving toward an AI-native service, show how margins improve over time. If humans are still in the loop, explain why that's necessary today and how the ratio shifts as the system matures.
10. Fundraising ask: what milestone does this round unlock?
Don't end with "we are raising $2M to grow."
Show what the capital actually unlocks: the next reliability threshold, the first 10 enterprise customers, regulatory approval, integration depth, a distribution channel, a domain dataset, a revenue milestone, a margin improvement, Series A readiness.
The investor must understand why this round changes the company's trajectory. If the ask doesn't read like a milestone plan, the round won't close.
What AI/Web3 founders should take from this
Direct answer: AI/Web3 founders should not pitch "AI + token" as a category. They should explain why crypto rails — identity, verification, settlement, coordination, or incentives — are necessary for a workflow that AI agents or AI-native services can improve.
The worst AI/Web3 pitch in 2026 is some version of:
"We combine AI agents with token incentives."
That sentence tells investors almost nothing.
A stronger AI/Web3 pitch answers:
- What work is being completed?
- Who pays for it?
- Why does the workflow need crypto rails specifically?
- What does the token actually do — coordination, access, verification, settlement, ownership, incentives?
- Is the token essential, or is it a fundraising wrapper?
- How does the system avoid adding complexity to an already difficult AI product?
AI/Web3 founders carry an extra burden. You have to educate investors twice — first on the AI workflow, then on why Web3 architecture improves it. That means the deck, tokenomics, financial model, data room, and investor shortlist all have to align before the first email goes out.
This is where it pays to use tools that compress your prep time. InnMind's pitch deck templates and AI fundraising templates cover the slide structure investors expect in 2026, and the AI angel investors database helps you build a focused shortlist before outreach instead of spraying. The goal isn't to make the deck prettier. It's to make the fundraising argument harder to dismiss.
If you're already getting silence from investors, PitchPop's why investors don't reply diagnostic can help you separate narrative problems from traction or targeting problems.
AI founder fundraising checklist for 2026
Direct answer: Don't open investor outreach until you can explain the workflow you own, the outcome you deliver, why model progress helps you, which metrics prove reliability, and which investors actually fund your category.
Stress-test the deck against this list before sending it.
Narrative
- We own a specific workflow, not a vague AI category.
- We can name the current workaround and the existing budget that funds it.
- We can say whether we are a tool, copilot, agent, autopilot, or AI-native service — without flinching.
- We can explain why model progress helps us instead of replacing us.
- We can describe our wedge and expansion path in one clear sentence.
Product and defensibility
- We have evidence that users repeat the workflow.
- We track task completion, quality, review rate, or ROI.
- We understand what can fail and how the system recovers.
- We have a real feedback loop.
- We can explain our data advantage without using empty phrases.
- We know which integrations or systems of record matter.
Traction and business model
- We have paid demand or strong usage evidence.
- We can show retention, expansion, or repeated task completion.
- We know who owns the budget.
- We can explain pricing logic.
- We can show how margins improve as AI does more of the work.
Fundraising readiness
- We have a clean AI startup pitch deck.
- We have a focused investor shortlist.
- We know which investors fund our category.
- We have a data room ready.
- We can explain why now is the right time.
- We can explain what this round unlocks.
CTA — stress-test your AI fundraising story. Use InnMind's AI fundraising templates, AI angel investors database, and pitch deck templates to build your deck, shortlist, and outreach plan before your next investor push. While you're at it, check the startup perks and software discounts — there's no reason to burn runway on the same SaaS stack as everyone else.
FAQ
What is Sequoia AI Ascent?
Sequoia AI Ascent is Sequoia Capital's annual AI event and content series for founders, researchers, and investors. The 2025 edition gathered 100+ AI leaders in San Francisco to discuss agents, vertical applications, data centers, open source, and new business models. The 2026 edition ran in late April. For founders, the value isn't the recap — it's reading how leading investors frame the next AI cycle. (Sequoia Capital)
Is there a Sequoia AI Ascent 2026 deck?
Sequoia AI Ascent 2026 took place at the end of April 2026 and is being released as videos and partner essays rather than a single downloadable deck. The closest written companion is 2026: This is AGI by Pat Grady and Sonya Huang. The opening fireside featured Andrej Karpathy with Konstantine Buhler. (Sequoia Capital)
What did Sequoia say about AI agents in 2026?
Sequoia's 2026 AGI essay argues that long-horizon agents are a major new capability layer — systems that can reason, iterate, and act over time. The line founders should remember: AI applications in 2023–2024 were "talkers"; in 2026–2027 they are "doers." For a deck, that means pitching completed work, not AI conversation. (Sequoia Capital)
What should AI founders learn from Sequoia AI Ascent?
The market is moving from generic AI excitement to specific execution questions. What workflow do you own? What work can your system complete? How reliable is it? How do you sell it? Why does your company become stronger as models improve? Your pitch deck has to answer all five directly.
How should an AI startup pitch deck change in 2026?
The deck should focus less on model novelty and more on workflow ownership, buyer budget, reliability, evaluation, proprietary data, integrations, distribution, and measurable customer ROI. Investors need to see why your startup is more than an interface on top of someone else's foundation model.
What do investors look for in AI startup pitch decks?
Evidence the startup can turn AI capability into a durable business. That usually means: a clear customer, a painful workflow, a budget owner, a strong product wedge, usage or revenue proof, credible defensibility, strong team-market fit, and a fundraising milestone that fits the round size.
How do AI startups prove defensibility?
Through workflow depth, proprietary data, integrations, evaluation loops, distribution advantages, compliance, domain trust, and outcome ownership. "We use AI" is not defensibility. A stronger argument shows how usage compounds into an advantage that competitors and model providers can't replicate overnight.
Should AI founders build tools or sell outcomes?
In 2026, every founder should explicitly choose: tool, copilot, agent, or AI-native service. Sequoia's services thesis argues that work budgets are roughly six times larger than software budgets, and that autopilots can capture value by selling completed work rather than productivity software alone. The choice changes your pricing model, your team, and your investor pool. (Sequoia Capital)
What are long-horizon agents?
AI systems designed to work across multiple steps over time. They plan, use tools, maintain context, retry after mistakes, and complete tasks that require persistence. In Sequoia's 2026 framing, long-horizon agents are the reason AI applications can finally shift from answering questions to completing work. (Sequoia Capital)
What are AI-native services or autopilot companies?
Companies that use AI to deliver the work itself, not software that helps a human do the work. Instead of selling a legal-drafting tool, an autopilot company sells completed contract workflows with AI plus human oversight where needed. The business model is closer to outcome delivery than to traditional SaaS. (Sequoia Capital)