I recently attended Entrepreneur First’s AI Startup School, a series of evening lectures with some of the most interesting figures in the (Parisian) AI/ML startup space. Speakers included builders (Eiso Kant from Poolside, Gabriel Hubert from Dust, Arthur Mensch from Mistral, Emad Mostaque from Stability, Karim Beguir from InstaDeep, …) and investors (Alexis Robert from Kima, Antoine Moyroud from LightSpeed, Matt Clifford from Entrepreneur First), which made for a mix of perspectives and many topics to cover. This blog post is my attempt to distill some of the main lessons I took from this great lecture series, whose structure and contents went far beyond what I write here. For context, it was aimed at recent technical graduates interested in entrepreneurship - some of the advice and analysis of the space was made with that audience in mind.

What to think about

As the name of the lecture series suggests, we often discussed what to think about when building an AI startup. To us, an “AI” startup is one that builds in one or more of the broad layers making up the AI stack: hardware, foundational models, infrastructure/tooling, and applications. Of these, the foundational model and application layers were by far the most represented during the lecture series, and will also be overrepresented in this post.

So, what are the main questions you should ask yourself if you’re trying to build one? At both the model and application layers, any AI startup has to take a stance on whether it wants to build vertically or horizontally. Then, regardless of that choice, all speakers agreed that focusing extensively on your data is probably a good idea. Finally, once you have a good model trained on great data, you need to think about how to monetize it.

Up or across?

If you’re building a foundational model, going by the definition of “foundational”, it might seem you’re building something horizontal in nature. While this is true, many of the speakers working on foundational models didn’t seem to agree that we are converging towards “one model to rule them all”. Instead of a single model that outperforms humans on all intelligence-related tasks, we will likely have many more specialized models that are vertically integrated, each beating more general models in its specific area of training. As a foundational model provider, you take a stance on whether you believe models should be vertically integrated or remain generalist.

For companies like Poolside, this translates to focusing heavily on code as an input data modality for their LLMs. This allows them to build specialized training paradigms such as “Reinforcement Learning via code execution feedback”, where they leverage unit tests, compiler checks, and other code-specific validity signals as inputs to a reward model. For Stability, this means focusing heavily on image, speech, and other non-text data modalities, and taking it a step further by fine-tuning models for specific use cases, such as through a partnership with Bollywood. To them, if humans spend years specializing and curating their input data, it seems reasonable to assume models will also function as specialists with narrow domain expertise. For Mistral, this means developing (mostly) open-source LLMs without imposing an editorial tone, allowing developers and customers to fine-tune models on their vertical-specific data. Of course, companies like Mistral and Stability are better described as building horizontally, since they focus on one of the core technologies of this AI wave, but I would argue they err on the side of specialization and vertical integration in that they sell modularity as a feature. OpenAI, on the other hand, seems to be pushing for a world where a single model will do all of these things, which strikes me as less likely.
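
Since Poolside’s exact setup isn’t public, here is a minimal sketch of what “execution feedback as a reward” could look like in practice: run the model’s candidate code against its unit tests and turn the result into a scalar reward. The function and names below are illustrative assumptions, not Poolside’s actual pipeline.

```python
import subprocess
import sys
import tempfile
import textwrap

def execution_reward(candidate_code: str, unit_tests: str, timeout: int = 10) -> float:
    """Toy reward: 1.0 if the generated code passes its unit tests, 0.0 otherwise.
    A real pipeline would also use compiler/linter signals and partial pass rates."""
    program = textwrap.dedent(candidate_code) + "\n\n" + textwrap.dedent(unit_tests)
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(program)
        path = f.name
    try:
        result = subprocess.run([sys.executable, path], capture_output=True, timeout=timeout)
        return 1.0 if result.returncode == 0 else 0.0
    except subprocess.TimeoutExpired:
        return 0.0

# Score a model completion against its tests, then feed the reward to an RL
# objective (e.g. PPO) or use it to filter samples for further fine-tuning.
candidate = "def add(a, b):\n    return a + b"
tests = "assert add(1, 2) == 3\nassert add(-1, 1) == 0"
print(execution_reward(candidate, tests))  # -> 1.0
```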

Whether you can build a successful AI startup focusing on the core technology itself versus the application layer is mostly a function of your credibility, capabilities, and capital - with the former requiring a higher dose of each. This means that, as a young entrepreneur, you might be better off building with a specific industry in mind. Though it may be criticized as building a “ChatGPT wrapper”, focusing deeply on a specific use case means you get to know your training data and, perhaps more importantly, the real problems your industry of choice faces - which is where you should start. In this vein, many traditional industries face challenges where AI can offer new insights, but incumbents don’t adopt new technologies quickly. If you know an industry’s problems deeply and can understand where AI offers value and why existing companies aren’t adopting it, you’re off to a decent start.

A great example during the lecture series was Spore.bio, which got started with the observation that microbial monitoring in food factories is often performed using old technologies (think of the Petri dish you used in high school). It turns out newer imaging techniques combined with computer vision algorithms can help you monitor microbe presence more efficiently. Although it’s still early days, this company, much to their credit, identified a traditional industry’s inefficiency, got to know the specific data deeply, and applied AI thoughtfully. A final interesting note is that verticalized solutions are often less prone to competition from big tech giants. Matt explained this with a simple thought experiment: “Say you’re Satya, Tim, Sundar, etc. You wake up in the morning and what do you think about? Mostly about opportunities that can bring $10B + incremental revenue per quarter. You’re not thinking about legal tech.” Given the number of failed launches and retracted products by Google, there must be some truth to this.

Data

To learn, any machine learning model needs access to good data. As we’ve mentioned, a focused AI application developer can reap the benefits of accessing and obsessing over data specific to their use case of interest. But even for companies like Mistral, data represents a significant part of their efforts and competitive advantage. So, a universal rule for any AI startup, from the model layer to the application layer, is: get good data, and lots of it. This can seem difficult at first, especially when you’re operating in an industry where data is siloed or held by incumbents. Paradoxically, as an early-stage startup, you can get useful signal by asking these companies for samples of their data along with a clear explanation of the problem you want to solve. If you’re trying to solve an important pain point of theirs, they will likely share their data with you. Even if they don’t, you can tell whether they declined because they’re not interested in what you’re doing or because they consider that data a core asset of theirs. Either way, you’ll know if they find your work interesting.

You can also try to generate some of your own data. For “verifiable” data modalities such as code, you can (more or less) check the output of your model, so you could argue that synthetic data becomes more viable - you can filter out poor generations. However, Arthur Mensch argues that, regardless of the filtering, training on generated outputs does not increase the complexity of the generative process from an information-theoretic perspective. This means you can’t hope to train on your own outputs and keep increasing model capabilities across the board. What was left unmentioned is that filtering outputs and using them as training data to increase performance on certain narrow tasks effectively tightens the output distribution of your model, which could be interesting for application developers. Although it’s an unsatisfying answer, whether synthetic data helps depends on how you generate and filter it and what the end goal is.
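
To make the filtering idea concrete, here is a small rejection-sampling sketch for a verifiable modality like code. `model_generate` and `verify` are hypothetical stand-ins for your model’s sampling call and an execution/unit-test check - not any particular library’s API.

```python
def filter_synthetic_samples(model_generate, prompts, verify, n_per_prompt=4):
    """Rejection-sampling sketch: keep only generations that pass a verifier.

    `model_generate` and `verify` are hypothetical callables standing in for
    your model's sampling API and a code-execution / unit-test check."""
    kept = []
    for prompt in prompts:
        for _ in range(n_per_prompt):
            sample = model_generate(prompt)
            if verify(prompt, sample):  # e.g. the code compiles and the tests pass
                kept.append({"prompt": prompt, "completion": sample})
    return kept  # use as fine-tuning data for the narrow task you care about
```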

Finally, even when you have access to lots of high-quality data, you can play with the data mix when training your models to optimize for specific model behavior. Upsampling code is a bet Poolside has taken, with 50% of their LLM training data (at the pretraining phase) being code. Most foundational model builders seem to upsample reputable data sources during pre-training or fine-tuning. This generalizes to all AI startups training models: you need to be thoughtful about the input data mix, which is often an experimental science.
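
In practice, a data mix often boils down to a set of sampling weights over data sources. The sketch below illustrates the idea; only the 50% code figure comes from the talk, the other weights are made up purely for illustration.

```python
import random

# Hypothetical pretraining data mix: the 50% code figure is Poolside's bet as
# mentioned in the talk; the remaining weights are invented for illustration.
DATA_MIX = {
    "code":     0.50,
    "web_text": 0.30,
    "curated":  0.20,  # e.g. upsampled reputable sources
}

def sample_source(mix=DATA_MIX):
    """Pick the data source for the next training example according to the mix weights."""
    sources, weights = zip(*mix.items())
    return random.choices(sources, weights=weights, k=1)[0]

# In a real pipeline each source maps to a shuffled shard iterator, and the mix
# itself is tuned experimentally, as noted above.
counts = {source: 0 for source in DATA_MIX}
for _ in range(10_000):
    counts[sample_source()] += 1
print(counts)  # roughly proportional to the weights above
```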

LLM-onomics

Now that you have a workable model, how do you make money? For foundational model providers, both API usage fees and subscription-based models can be viable. When it comes to the average end user, the subscription model OpenAI launched with GPT-4, accessible for $20/month, seems to have become the de facto industry standard (we could still see ad-based business models take off, but we’ll have to wait and see how that plays out).

On the developer side, Emad Mostaque didn’t seem to believe in a durable $/token API business model. First off, developers calling models locked behind APIs face close to no switching costs when swapping one model provider for another. This results in the strongest model to date being used at the prototyping phase and more lightweight, cost-efficient models being used in production. Given the absence of switching costs, competition between model providers to offer the lowest $/token drives margins down. On the other hand, a subscription model where you can download the latest models for a recurring fee solves some of these problems. Developers like having access to the weights so they can fine-tune and customize them, and being able to serve models locally is a must in heavily regulated industries where you can’t send private data to a closed API, no matter how clean the terms and conditions are. Integrating models into your code base and serving them on your own infrastructure, close to your data, also means you’re less likely to switch to tomorrow’s next state-of-the-art model. So, giving developers more control also leads to higher switching costs, which seems like a win-win for everyone involved.

More specialized AI application developers will likely default to tried and tested business models based on their end users. Dust offers a per-user monthly pricing plan like most SaaS companies do. My (uneducated) guess is other companies in the space are following suit - innovating on the product is hard enough.

Dealing with venture capitalists

If you’re building an AI startup, you may need money to get started. If you do, one possible way of getting it is to speak to venture capitalists. One of the chats I found most interesting during this series, in the sense that it covered the topic I knew the least about, was with VCs Antoine and Alexis from LightSpeed and Kima. Understanding VCs well should be a prerequisite to accepting any money from them, so let’s try to grasp their side of the equation.

VC, the asset class

At a high level, venture capital is an asset class. Taking Investopedia’s definition, “an asset class is a grouping of investments that exhibit similar characteristics and that may be subject to the same rules and regulations”. This essentially means VC is a specific type of investment. As an investment, venture capital is expected to make more money than what was initially invested. Its specificity, or grouping characteristic, is that it mainly targets high-growth projects such as startups. These are generally considered high-risk, high-reward investments, so any good VC firm is expected to return a multiple of what was initially put in.

But where does it get the money it invests? Usually, it takes money from larger investors, invests it in startups (while taking a certain % in management fees), and later gives the investors their money back plus profits. This characterization highlights the split in a venture capitalist’s job: they (i) find and secure money and (ii) find, research, and secure deals. Additionally, they make sure the deals they took part in work out, i.e. they try to help the founders they backed. What’s important to understand in all this is that a venture capitalist’s main job is to make more money than what they were given. This might all seem very obvious, but it’s crucial to understand the main incentives driving the person you’re talking to. When a VC is sourcing, researching, or securing deals (or helping after a deal), their main focus is making more money than what they invested. Of course, there are slight deviations from this job description in VC land. Kima Ventures is a good example, as they have only one investor: Xavier Niel. This means they focus heavily on (ii) since (i) is secured - but this doesn’t change their underlying mission.
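
To make the money flow concrete, here is a toy back-of-the-envelope calculation. The fee, carry, and return figures are common industry conventions used purely as illustrative assumptions - none of them were quoted during the talks.

```python
# Toy fund arithmetic; every figure here is an illustrative convention, not a number from the talk.
fund_size       = 100_000_000  # raised from larger investors (the limited partners)
management_fee  = 0.02         # ~2% of committed capital per year, a common convention
fund_life_years = 10
carry           = 0.20         # the VC firm's share of profits, another common convention

fees       = fund_size * management_fee * fund_life_years  # 20M kept by the firm over the fund's life
invested   = fund_size - fees                              # 80M actually deployed into startups
exit_value = 3 * invested                                  # assume the portfolio returns 3x what was deployed
profit     = exit_value - fund_size                        # 140M of gains over what the investors put in

lp_return = exit_value - carry * profit                    # investors get the exits minus the firm's carry
print(f"Investors receive {lp_return / fund_size:.2f}x their money")  # ~2.12x: the multiple the firm is judged on
```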

VC investment is a market

The second part of a VC’s job is what matters most to founders, namely investing in startups. Put simply, investing in a startup means buying shares of that startup in exchange for money. So, VCs are buyers and startups are sellers - and when there are buyers and sellers, we usually call it a market. Although startup shares are a very specific type of good, the usual laws of supply and demand apply. A good example is the so-called “ZIRP” (zero-interest-rate policy) era from 2020 to 2022, which led to plentiful demand (lots of buyers); this meant startups were able to command huge valuations (great prices) despite shaky business models and bad fundamentals. Market dynamics explain many of these phenomena: too many buyers means they’re more worried about losing deals than about the product they’re buying. Put more formally, the amount of due diligence depends on current market conditions.

As a seller (founder), your product (startup shares) will be judged against other similar products available in the market around the same period. Execution speed, initial business metrics, etc. will all be judged relative to other actors in the market, which means it’s your job to make sure you’re doing better than average, i.e. that you have a good product. But even with a good product, it’s also your job to sell it well - at the best possible price. As with traditional goods such as sneakers, you can create a sense of urgency and exclusivity around what you’re selling. And it turns out VCs respond very well to FOMO: use this and create that waiting line in front of the hypothetical startup share store…

Judging startups

Yes, you’ll be judged relatively, but what exactly makes a great product for a VC to buy in the first place? It seems Kima has a standard set of questions addressed to founders and designed to assess this. These boil down to: how did you meet? how & when did you have the idea? when did you start full-time? what is the current status? what are the next steps?

The goal behind these questions is to judge speed of execution and the plans/ambition for the startup. Speed of execution and learning are indeed core criteria on which you are judged as a founder. Execution speed is so important that Alexis further broke down how it can be measured. There is the speed of your pitch: your idea should be clear in 1-2 sentences. There is the growth of your north star metric - and note that growth matters much more than any absolute value at these early stages. And there is execution speed on operational metrics such as hiring. If you can show some of the dots going up and to the right, the extrapolation will be done for you.
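
As a toy illustration of why growth trumps absolute numbers at this stage, consider the compounding arithmetic below; the 10% weekly growth figure is an assumption for illustration, not something the speakers quoted.

```python
# Why growth beats absolute value early on: a toy compounding example.
# The 10%/week figure is an illustrative assumption, not a number from the talk.
users = 100            # a tiny absolute number today
weekly_growth = 0.10
for week in range(52):
    users *= 1 + weekly_growth
print(round(users))    # ~14,200 after a year of sustained 10% weekly growth
```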

Advice & conclusion

By now, we have a rough idea of some of the questions (and an even rougher understanding of possible answers) we should be asking ourselves if we’re building an AI startup and if we want to raise venture capital money. Of course, plenty has been left unasked and unanswered - it will have to be lived.

By way of a conclusion, I’ll leave here bits and pieces of advice that were offered to us over these few weeks. As Karim Beguir mentioned, great advice requires enough context; much of this advice is, however, worth engaging with:

  • If you want to build a product, read The Mom Test
  • To get better at hiring, read Who: The A Method for Hiring
  • Read as much as possible about the entrepreneurs you admire. Track what they said and what they achieved
  • Worth repeating: do something you’re passionate about, that you’re world-class at, and that solves a real problem
  • If you have the above, you don’t need to worry too much about funding, it will come
  • You need to be ready to “go through the motions”, and doing something you’re passionate about is a great way to not quit
  • Deep tech usually requires deep expertise or deep pockets
  • If you have a genuine urge to found a company, you might not need that PhD - the learning curve is steeper with the former
  • Everything is a power law in a startup: outcomes, hiring, etc.
  • You need to use whatever unfair advantages you have
  • Speak to your customer
  • Don’t be afraid to be ambitious, and don’t be afraid to communicate that ambition clearly and unapologetically

There you have it! Thanks EF for organizing this lecture series, it’s inspiring to see the Parisian startup ecosystem flourish - we all learned a lot.