In this Thanksgiving special of "20VC," Harry Stebbings orchestrates a virtual panel with luminaries in AI, including Emad Mostaque from Stability AI, Des Traynor from Intercom, Jeff Seibert from Digits, Yann LeCun from Meta, Richard Socher, Cris Valenzuela from Runway, Tomasz Tunguz from Redpoint, Miles Grimshaw from Benchmark, and Christian Lanng from Tradeshift. They delve into the future of AI, debating the commoditization of large language models (LLMs), the significance of model size, the open versus closed source AI paradigm, and the shifting business models from seat-based to consumption-based pricing. The panelists also discuss defensibility and value accrual in AI, whether at the infrastructure or the application layer, and the role of incumbents like Apple, Google, and Amazon in the evolving landscape. They conclude with reflections on AI's societal impact, emphasizing the need for political and social adaptation to ensure equitable benefits from technological advancements.
"Welcome to 20VC with me, Harry Stebbings. And for this Thanksgiving special, I wanted to bring some of the best minds in AI together for an amazing panel."
This quote introduces the special Thanksgiving episode where Harry Stebbings brings together leading AI experts for a panel discussion.
"I think that there's only going to be five or six foundation model companies in the world in three years." "It's basically quality of conversation. And does it fail any of our hallucination tests?" "I certainly think we will [see commoditization], and this may not be a popular position."
Emad suggests market consolidation in foundational AI models, Des emphasizes the importance of quality in LLMs, and Jeff Seibert predicts the commoditization of LLMs.
"The reality is no models that are out today will be used in a year." "First, you don't need those models to be very large to work really well." "It is super important. You just cannot train a single model for all of these different tasks with a small model." "Models are not a moat. Models eventually don't matter."
Emad speaks about the fleeting relevance of current AI models, Yann argues that models need not be very large to work well, Richard stresses that only large models can span many different tasks, and Cris argues that models themselves are not a durable moat.
"It's very simple. It's because no outfit as powerful as they may be, has a monopoly on good ideas." "OpenAI has this very deep understanding of how people want to use language models basically nobody else has."
Yann argues for the benefits of open AI systems in leveraging worldwide contributions, while Richard recognizes OpenAI's unique position in the market due to its expertise and scale.
"I mean, certainly you can't deny that OpenAI is ahead by a lot. I predicted that we'll have a GPT-4 equivalent model before the end of the year. That's open source."
This quote highlights the speaker's view on OpenAI's current lead in the AI industry and their prediction that an open-source model equivalent to GPT-4 will emerge soon.
"So my prediction was for the version we had like a few months ago. But I actually think that with models like Llama 2 from Facebook and everyone there, I do think open source will take over a lot of use cases."
The speaker clarifies that their prediction was based on the state of AI models a few months prior and expresses confidence that open-source models will become more prevalent in various applications.
"So in Web2, if you take the top three clouds and you look at their market cap, so AWS, GCP, and Azure, it's about a $2.1 trillion market cap just for the cloud businesses."
This quote presents an analysis comparing the market capitalization of the top three cloud service providers, indicating the significant value in the infrastructure layer.
"And then if you take the top 100 publicly traded cloud companies, on both the B2C and B2B sides, so Netflix and ServiceNow, they have an equivalent market cap, about $2.1 trillion for both."
The speaker compares the market cap of the top 100 cloud companies with that of the infrastructure providers, highlighting the value distribution across the industry.
"I think the way to encapsulate would be this idea of selling the work, not the software, and that we'll move from a paradigm where you might think of it as moving from what we see right now as a copilot, and moving to what I think about as a control center, where we'll sell an SLA on work, not an SLA on uptime."
This quote introduces the concept of a shift in AI business models, from selling software to selling the work performed by the software, with service level agreements (SLAs) based on work efficiency rather than system uptime.
"And so we'll move from a world where the users are doing all this work to a world where the application is doing a lot more of the work, right, where the AI, the notion of agents inside of it, et cetera, is doing it."
The speaker envisions a future where AI applications take on more of the tasks traditionally performed by human users, indicating a fundamental change in how software is used and valued.
Look at the demographic problem. Most of the people I talk to, even if they wanted to replace the workers they have in their service centers today, one to one, they can't, because that generation of young people doesn't want to go in and sit in front of a computer and type formulas into Workday every day.
This quote highlights the demographic challenge of replacing an aging workforce with younger workers who are disinclined to perform monotonous tasks, suggesting a need to innovate the nature of work.
So this may be just me, but I very much see AI as a tool, not a product. And so it's a technology. It's like your database. It's like Memcached back in the day.
Jeff Seibert expresses the view that AI is a technological tool similar to databases and should not fundamentally alter how companies structure their pricing models.
I think copilot is an incumbent's strategy. Incumbents own distribution, they own data, they own the UX, and they own a business model that all aligns to a copilot, copilot as in GitHub Copilot.
Miles Grimshaw argues that copilots align with the strategies of incumbent companies, which control the necessary resources and infrastructure to make copilots effective.
Let's imagine a future where everyone can talk to their intelligent assistant. That system will have pretty close to human level intelligence, probably more accumulated knowledge than most humans.
Yann envisions a future where intelligent assistants become the main gateway to digital information, surpassing the capabilities of current search engines and becoming integral to daily life.
You have to assume Apple's a really well-run company, and you have to assume that there's a head of AI in there, and you have to assume that they're training LLMs and they're looking for LLMs that can possibly run on the hardware natively and not even have to talk to the cloud.
The speaker expresses confidence in Apple's ability to innovate in AI, suggesting that the company is likely developing advanced language models that can function independently on devices.
I didn't believe that chat would replace search, but I think for many use cases it will. And I think Google had a rude awakening where, I don't know, for 20, 25 years, they were uncontested.
The speaker suggests that AI chat could replace traditional search functions for many use cases, presenting a significant challenge to Google's long-standing dominance in the search market.
"I feel like Bard, unfortunately, felt like we had to release this because ChatGPT was getting a lot of traction." "They need to have that sort of Jay-Z, like, allow me to reintroduce myself moment where they come back and they say, like, Google 2.0 is here."
These quotes reflect the speaker's view that Google's response to emerging AI technologies like ChatGPT may have been reactive rather than proactive, and that a significant transformation or "reintroduction" of Google's search capabilities is necessary for staying relevant.
"I'd be scrambling to find ways that companies can sponsor injections into the LLM." "I could totally imagine Google doing a thing where they give away effectively free Android phones powered by the fact that they now control the intent layer."
The speaker proposes a controversial strategy of incorporating sponsored content into Google's AI as a new revenue stream, and leveraging control of consumer intent through free Android phones to maintain market dominance.
"I think Google's by far the most vulnerable because, again, their business model is pretty binary." "It is way more effective to kill your own golden goose than let and watch someone else do it."
The speaker highlights the urgency for Google to innovate in AI, emphasizing the risk of not adapting quickly enough and the strategic advantage of self-disruption over allowing competitors to overtake their market position.
"We don't see Google change and become a chat-first search engine." "You don't just willy-nilly change most of that page and get rid of the five, six ads that are on top of that page."
The speaker points out the difficulty Google faces in making a drastic change to its main search experience, which is a major source of ad revenue, suggesting that financial considerations may hinder rapid innovation.
"Amazon have moved faster than I think they've moved before." "Jeff Bezos said for his first hundred billion in revenue, he envisioned half of it being proprietary and half of it being marketplace."
The speaker acknowledges Amazon's quick adaptation and innovation, drawing a parallel between its business strategy and its approach to new technology sectors.
"No economist believes we're going to run out of jobs, because no economist believes that we're going to run out of problems to solve." "AI is going to bring a new renaissance for humanity, a new form of enlightenment."
The speaker dismisses fears of AI leading to mass unemployment and instead emphasizes the potential for AI to enhance human capabilities and creativity, advocating for sensible regulation without stifling research and development.
"I would love to hear your thoughts. Did you enjoy the show?" "We have so many more of these that we can do, whether it's on price sensitivity for venture deals, reserve decision making, best and worst investments."
Harry Stebbings invites listeners to share their opinions on the show and suggests potential topics for future discussions, indicating a listener-centric approach to content creation.