OpenAI COO Brad Lightcap on the Future of AI | Ep. 46
We talked about the history of OpenAI, the shift in AI from chat to agents, where new startups can endure, Codex, FDEs, working with Sam, and more.
Brad Lightcap serves as OpenAI's COO, overseeing its business, operations, and strategic partnerships across Research, Applied AI, and go-to-market. He also manages the OpenAI Startup Fund. Previously, Brad was part of Y Combinator Continuity and led finance and operations initiatives at Dropbox.
We discussed the shift from chat-based AI to agents that can take action, and what that means for software and the broader economy. We also covered how these systems are being built and deployed, how tools like Codex are changing how work gets done, and what this next phase of AI unlocks for startups and incumbents alike.
Timestamps:
(0:00) Intro
(0:39) The early days of OpenAI
(3:47) A research-centric culture
(7:32) Post-ChatGPT chapters
(11:54) Sci-Fi future or good software
(15:26) AI’s impact on rural communities
(18:57) Codex and coding of the future
(24:04) Doing a lot of things at once
(27:55) What VCs should invest in
(35:43) The software sell off
(38:23) Using Codex over ChatGPT
(42:32) FDEs and Private Equity
(44:53) Working with Sam
Links:
https://x.com/bradlightcap
https://x.com/jaltma
Watch on YouTube; Listen on Apple Podcasts or Spotify
Transcript
Disclaimer: Transcript generated with AI assistance and lightly edited for clarity and accuracy.
Joining OpenAI
Jack Altman
Brad, thanks for doing this with us. I’m excited.
Brad Lightcap
Me too.
Jack Altman
Do you have enough drinks? Would you like one more?
Brad Lightcap
I’ll take whatever I can get.
Jack Altman
I really appreciate you making time for this. I’ve been really looking forward to it. Here’s what I wanted to start with actually. I was thinking about this last night. You joined OpenAI in 2018. It was a research lab. You guys are beating Dota and then four years in, ChatGPT launches. It’s this whirlwind that’s been, I guess three years, but I’m sure it feels like a lot more.
I was curious if you could share your narrative or recollection of what the journey’s been like. What are the chapters? What’s your experience been like as you look back on this so far?
Brad Lightcap
Chapters is the right word. The journey of OpenAI—which I think tracks the journey of AI as a field, as an industry—has been broken up into these weird periods. When I joined, no one had really heard of OpenAI. Our work was relegated mostly to small niches of San Francisco tech culture that followed such things as us beating the best Dota players in the world. Really, I didn’t have anyone to talk to about it. Everyone was like, “What are you doing there? What do you do there?”
Jack Altman
You were the CFO when you joined, right?
Brad Lightcap
I was our CFO.
Jack Altman
What were you thinking when you joined? What did you expect it was going to be?
Brad Lightcap
I didn’t know. I was 27, and maybe I should back up a minute. I was at Y Combinator prior, working with Sam, and starting to spend a lot more time with what I call our hard-tech portfolio in YC: all the companies building everything that wasn’t pure SaaS or consumer internet. So I was spending a lot of time with everything from nuclear fusion to satellites to biotech, anything that fit outside that mold. OpenAI was kind of in that camp. AI was one of those things promised as a future technology, but I wasn’t really sure who was actually building it.
OpenAI started as a YC research project and so it was in the family. Sam had called me and was like, “Hey, I need someone to help basically do everything that isn’t just the research at this company. Do you know anyone that would be good?” I tried to help them find someone, couldn’t find anyone, and so I was like, I’ll just help you myself on the side.
But I started spending a lot of time with Greg and Ilya and the team that was there at the time. I realized that they had found these crazy properties of AI, which we now understand to be basically the scaling laws. The field was starting to discover that when you make things bigger, the results just get predictably, consistently better.
At that point, this is just a compute problem: intelligence can basically be bootstrapped by scaling up very simple, general architectures. I was like, I don’t know if this is true, and I don’t know if it will hold. I’m certainly not qualified to judge that. But if it does, and these guys seem convinced that it’s true, it’s going to be the most important thing ever. At 27, that just seemed more interesting than investing in tech.
From Research Lab to ChatGPT
Jack Altman
So you started doing that and then what happened in those early years? People are building things that are working, beating the game and a lot of other projects. What were you seeing on the inside from 2018 to 2022?
Brad Lightcap
I would say it was much, much more of a research-centric culture. OpenAI is still highly research-centric. I feel like people think that post-ChatGPT it became much more of a product-centric culture, but research really drives everything. I think that’s because that period cemented research as the cultural foundation of the company.
I spent a lot of my time really just trying to figure out what researchers needed to be successful. That spanned from the capital we needed to invest in supercomputers, to working with partners on supercomputer design and build-out, to things as trivial and pedestrian as “our robots keep breaking” and “it takes too long to drop-ship parts from this one supplier in some small town in England; how do we tighten that loop and go faster?”
So it was this very diverse set of problems early on that were really just about pure research acceleration. Obviously now it’s both research and deployment and our business. But early on it gave me an appreciation. I just spent all my time with researchers and so it gave me a firsthand understanding of what was happening before I think anyone else really appreciated it.
Jack Altman
So then there was ChatGPT at the end of 2022. Did you guys on the inside feel like, “Oh, this is going to be something?” When you were playing with it before it got released, was the vibe inside that this is another cool thing, it’s a playground? Or were people like, this is something?
Brad Lightcap
There’s a word that sometimes people use in AI to describe when there’s an indication of something that’s happening, but it’s not quite happened yet, but you get these little sparks. That was how I would describe the pre-ChatGPT period: there were a lot of sparks. You could see that the models were now starting to get good enough that they could emulate humans in a conversational format. You could see that there was an interest that people had in directly prompting the model.
People forget that this was not the way we originally engaged with language models. We thought of language models as completions engines: you start a text string, and the model takes that as an input and continues the string. This more conversational, dialogue-based format was not the original conception of language models.
But what we were seeing was that we had a completions API, and an interface that basically let people put in text and see a preview of what the model would produce as an output. People were trying to use that interface in a more conversational, turn-based dialogue format. You could see it. If you paid attention, if you listened, you could see that people wanted to talk to the model. That was the natural, intuitive way people wanted to engage with it, but it wasn’t actually built that way.
The other thing that we saw ahead of time was that we had trained an early version of DALL-E. It was our first image model. It wasn’t very good, but it was really a breakthrough at the time. For the first time you could now generate images. We had seen some adoption of that model in a more consumer prompt-based format.
So we had guesses leading up to ChatGPT that it was going to be something important, but we didn’t appreciate the scale. My guess at the time—we all took guesses because we had to do the compute planning—was at peak there’d be a million concurrent users, and obviously we were very wrong.
Jack Altman
So what are the chapters since, if you look back the last three years? What are the phases? If you were describing to a friend, here’s the phases of my journey post-ChatGPT, how would you bucket it?
Brad Lightcap
There’s phases of the company’s life, and then there’s phases of the industry and the technology. On the technology side, I would say obviously there is this proto period of research just starting to work. I call that the scaling period, where we just realized that you actually could go from something that was unusable to something that was usable across most model formats. That was before mass consumer adoption. That was 2018 to 2022.
I think 2022 to 2024 was really the period of chatbots, where all of a sudden it was generative AI. It was people realizing that you actually could have something that was useful, but it was not totally clear exactly what it was useful for. It was new and novel. The utility was still not totally there. It was a slightly better version of search.
Then the next chapter, and I think the one that we’re in now, is this period of agents, which is AIs that actually can go do things for you. They run asynchronously. You can give them instructions and they can take an arbitrary amount of time and tokens to go off and think and figure it out. They can use tools. I think we’re in the middle of that period. I think that started for me in December of 2024 with the release of o1 and then through 2025 and into 2026.
Jack Altman
You think we’re in the middle of that now?
Brad Lightcap
Yeah, I think so. I think weirdly, in each of these things—because the utility quotient of the models goes up by some enormous factor—it actually takes almost more time in each era to explore the full potential of the model. I say to our customers and partners all the time, you could stop progress right now and I still think there’s a 10 or 20 year diffusion and innovation cycle that just comes—
Jack Altman
Just to get it into the economy.
Brad Lightcap
Just to get it into the economy and for people to realize what these things are capable of. With chatbots that maybe would have been five years. With agents it’s probably some multiple. Obviously the technology will progress much faster than that. So that dissonance of the diffusion period being much longer than the actual innovation cycle is going to be something interesting to watch.
The Age of Agents
Jack Altman
How far away are we from the completion of what agents can do? Is it the beginning of a thing that will never end? Are we halfway up an S-curve? What is the current sentiment for what the endpoint of agents’ capabilities will be?
Brad Lightcap
I personally feel totally unmoored here. I don’t know. The historian and technological economist in me wants to think that everything has to fit into these very nice S-curve-shaped paradigms and the innovation cycle will look exactly as it always has.
Jack Altman
Even if there isn’t a script, that we could be right here.
Brad Lightcap
Yeah. The Carlota Perez thing, like this will all be the way that it has been. But there’s a lot of meta levels to this. We don’t quite understand that when you’ve got systems that now have in some sense their own agency, there’s almost infinite levels of things that can happen. They can now start directing other agents.
Jack Altman
They can work together.
Brad Lightcap
You have the temporal aspect. They can think and work for longer, as long as they can keep the context coherent through that period, which is something I think will get solved. Even basic primitives like memory and other things that are core to very long-horizon work, and to work you would do over multiple sessions… All of those things haven’t been fully sorted out yet, but they’re starting to get figured out.
Jack Altman
I’ve always thought, in the last year, why are we not going to get to a place where you can just prompt, “Build me a business, make no mistakes.”
Brad Lightcap
Exactly. Yes.
Jack Altman
I don’t see why you couldn’t be like, “Hey, can you go make me a million dollars please?”
Brad Lightcap
You play it out in the limit and you’re like, I don’t know, maybe that’s possible. If you go back and say, even if you pause progress right now, maybe it’s longer. Maybe it’s 40 years or something, or 50 years of progress that will come from this, just on the basis of this step of the cycle.
The Sci-Fi Paradox
Jack Altman
One of the interesting things that I’ve experienced is right after ChatGPT, I think a lot of the conversation around AI was living in sci-fi land. Are we going to have the next species take over? Are there Dyson spheres? It was very big.
Then what I’ve experienced over the last few years is that it’s been extremely commercial in a good way, but in a very down-to-earth way. It’s in the economy, operated by humans, it doesn’t feel scary. It just feels like insanely sick software. But there’s still this lingering thing in the background that I think gets talked about a little bit less.
Is there sentience? Does it go to this other place? Is that still a conversation that matters? Is it something that’s still thought about, or is it just, “Hey, we actually feel now this is just really good software, there’s nothing to be worried about, it’s just an insane technical revolution”?
Brad Lightcap
This is a really interesting question. I think in some sense the better the technology gets and the more it pushes toward that sci-fi future, the more we actually end up having the conversation about it almost diminishing to it just being a tool. It’s a weird paradox.
I’ve noticed the same thing because I used to sit at the OpenAI that was very much having the conversation about Dyson spheres, because in 2018 that was all you could talk about. You basically had something that was barely working at the beginning, and then you could try and see—
Jack Altman
You think about the whole thing. Once you’re in the middle of it, you think about the steps right in front of you.
Brad Lightcap
Yeah. There’s a local linearity that starts to set in where you’re a little bit like, okay, I appreciate that this thing is a gazillion times better than what it was in 2018, and the capabilities are multitudes more than what they were even two years ago.
Jack Altman
As an example, you talked about DALL-E. When that came out, I was like, oh, that’s cute. But now, just a few years later, I can’t tell if a video’s fake or real half the time. That’s going to get all the way there where you’ll have no idea.
Brad Lightcap
And I think in some sense there will be these parallel conversations that happen. There will be the enterprise productivity conversation because that is something that people are actually thinking about, want to talk about. Everyone’s going to glom on to what the narrative is there. It’s just—
Jack Altman
Are we waking up a new God or are we helping lawyers be more productive?
Brad Lightcap
I think we’re doing both. I think the parallel track of this insane level of empowerment of an individual person to do things that would have been inconceivable even a couple years ago… You’re already seeing examples of it. That to me is the weird sci-fi future.
There was the story over the weekend of a guy in Australia who is curing his dog’s cancer, who has no background, as I understand it, in biology. He basically had GPT-5 try to come up with some sort of RNA-based vaccine that could treat his dog. Then he worked with a lab on the design of the treatment; they sent it back, and it seems to be working. It happened for $3,000 and in a matter of a few weeks. It’s a crazy thing. That to me would qualify as a spark of a sci-fi outcome.
Jack Altman
It’s just crazy how fast we adjust to anything. We could learn that there’s aliens tomorrow and next week we’d be like, “Yeah, of course.” One of my takeaways with this whole thing is people just adjust to any new surrounding. You just think it’s normal in no time.
Brad Lightcap
A hundred percent. That’s been my experience. Things are novel for about three seconds. The next day it’s, “Okay, what have you done for me lately?”
The Optimist’s Case for AI
Jack Altman
On this topic of “what is the thing”, I’m watching it all and I’m from St. Louis. Now I’m living in Silicon Valley. There’s a very different perception of AI in the St. Louises of the world versus Silicon Valley. I think here the general sentiment is, this is amazing, thank goodness this happened. Around the country, maybe the world, there’s real skepticism and anxiety and fear.
People here have that too, but it’s this interesting reckoning where you’re grappling with, simultaneously, “Oh my God, that’s amazing, that’s awesome,” versus “Oh my God, that’s amazing, that’s a threat.” How do you think about the right way to interpret this? What are the genuine concerns and fears that we’re going to need to work through, and what are the things that you think are misunderstandings that will actually just be really positive?
Brad Lightcap
No one knows the future exactly. So I think everything here is speculation on all sides. I come at this from more of an economics, history-of-markets background, which was more where I spent my time in college. I’m still trying to understand the world through that lens. First of all, I think it is really a bummer that the world’s view of AI is what it is. I blame no one other than the industry for that. I think we as an industry have done a horrible job of being able to paint for people a picture of a future that is way better than the world we live in today.
The crazy thing is I actually think that is the reality. Stories like the guy curing his dog’s cancer are going to become much more commonplace. I tend to find a lot of comfort in the idea of individual empowerment. Anyone anywhere on Earth can have an idea, and the path from conception to a thing that exists in the world starts to collapse to zero, both in time to value and in cost of creation. I think amazing things are going to happen when you reduce that friction and increase that access. People are incredibly innovative, they’re incredibly creative. Everyone is motivated by their own circumstances and the problems in front of them to want to improve the world they live in.
I think 99% of it is a tools problem. They historically had no means to be able to do that. When you give people something that now enables them to start a business, do research, create a new thing, build a new service, serve customers more efficiently or cheaply, only good things can happen in my mind. Now obviously there are things that come with that. We have to be thoughtful about what the technology presents in terms of the flip side, because it’s as capable of doing harm in some cases as it is of doing good.
But I tend to think that we will figure that out. We are resilient and equally creative as a species. Whenever we’ve been confronted with the opportunity to create something that has potential for greatness, we also have been really thoughtful about how we build institutions that protect against the downsides.
So I have a more optimistic view. I think the industry has more of a duty to help people appreciate and understand what’s happening, and to help people live the experience of it, to use these tools to do the types of things we’re talking about.
Coding and AI
Jack Altman
An interesting instance of this conundrum is in coding. This is something that’s easy for us to talk about because we’re very familiar with it, and it’s one of the best applications of AI so far. AI is really good at coding. So then you could bump that up into the real world and say, are we going to have more developers? Are there going to be more people doing more things? Is it going to replace people? The data I’ve seen so far actually shows there are more engineering jobs being posted every month than ever before.
But I’m curious how you think about this, with coding as an example of what’s going to happen when it bumps up into the real world of people doing stuff.
Brad Lightcap
This is where I try to come back as rationally as I can to this economics-based, market-space view of how things have worked in the past. You have distortions in supply, demand, and costs that create these weird inflection points in human productivity. If you reduce the cost of software engineering, for example, to virtually zero on the margin, the simple thing to think would be, okay, software engineers won’t exist anymore.
The thing we’re seeing in reality with tools like Codex and other things is, actually, when you reduce the cost of something to zero, the demand for it goes up significantly. The job of the people who were previously described as software engineers, who were hand-typing every character of code—
Jack Altman
They’re now guiding agents.
Brad Lightcap
They’re now just doing a slightly different version of the job.
Jack Altman
I think some of this is that the cost is lower, but it’s not zero. Which is a good thing, I think, because if two companies are competing for a new market—let’s say AI for construction—even if engineering got much cheaper, if one decides to spend ten times more than the other, presumably those people are going to improve the product. So I think we’re just going to get better software rather than fewer people working on it.
Brad Lightcap
Software is wildly underpenetrated in the world. I think if you actually zoomed out and said, of all the places where software—and good software, not just software—
Jack Altman
By the way, there’s still so much bad software everywhere. If you go to a hotel and you look behind their screen, you’re like, “What are you typing on?” There’s a lot of work to do.
The Codex Breakthrough
Brad Lightcap
It’s crazy. And that to me is also, by the way—you want to talk about risks—that’s actually where I think the risk surface exists. It’s the software systems that hospitals use, that power grids use, that store customer information at a hotel. These are all fairly archaic systems for institutions that collectively account for meaningful percentages of the world’s GDP.
I would look at this as, in some sense, almost the greatest thing to ever happen. You’ve now got systems that can help update all of that software, that can bring software into places where there’s zero percent penetration, that can help reinforce and harden systems that are exploitable or vulnerable. In terms of how much we actually needed software relative to how much we’d penetrated, I think if you could actually measure that, we’d be at 1% today.
So I have a maybe slightly different view of this, a personal view of course. If you have AI that can write really good and obviously safe software, I think that is going to be one of the greatest gifts to the world. Whether there will be software engineers in the future is the wrong question. There are going to have to be people who oversee the design, implementation, and maintenance of what could be 10,000x the amount of software and code that gets written in the world. That is going to create a unique demand cycle that may not look exactly like software engineering today, but it’s going to be important.
Jack Altman
Absolutely. What was the breakthrough that happened for you all recently with Codex? It seems like some step-function thing changed in the last few months in the industry, and for Codex in particular.
Brad Lightcap
It’s a few things. One is the focus of the team at OpenAI building Codex. I’ve been at OpenAI a while, as you said, and the work that team is doing to drive that product with the amount of focus and intensity that they’re doing it with is a singular and unique effort in the history of the company. They are obsessive about the quality of the product, obsessive about the quality of the model. Because of where we are in terms of how models are trained, the cycle time on how fast we can drive improvement is starting to collapse. That’s why you’re seeing these jumps from GPT-5.1 to GPT-5.2 to GPT-5.3 to GPT-5.4.
Now it’s not surprising that you get a model like GPT-5.4 that, as of today—here we are in mid-March—the model’s a few days old and is doing a billion-dollar run-rate revenue, doing 5 trillion tokens a day. It is now far and away our most dominant model of our set of API models, and is also driving Codex growth at the rate it’s going. I think that’s only going to increase this year. By the end of the year, I think we’ll look at the models that power Codex and our APIs today and think they’re pedestrian.
Jack Altman
Obviously OpenAI started in chat and then moved into all these different things. Over time I think it has become probably one of the most unique companies. Included in that uniqueness is that you guys have done a lot of things. How are you thinking about that now? The market is starting to somewhat mature. You’ve had new companies come out, spin out of OpenAI, and focus on areas that have turned out to be really productive. I’m sure that’s changing the way you guys are thinking.
So I’m just curious, the state of the union in early 2026. When you look at where you are, what’s around you, what matters now, what do you care about? What do you say got you here, and what’s going to get you there? What’s the focus?
Brad Lightcap
One of the cool things about OpenAI is it has a very wide aperture on how it looks at its ultimate mission. The lines that people drew in the world prior—you’re B2B or you’re B2C, you’re hard tech or you’re software—all of the things the VC ecosystem segments itself by—
Jack Altman
“Got to have a lane.”
Brad Lightcap
Yes. We don’t see those walls. We see AI as this enabling technology that is going to drive innovation cycles across all of the above. It could be in the enterprise, it could be in consumer, it could be in creativity, it could be in robotics, it could be in hardware.
What we want to understand is, what do each of those bets look like? OpenAI has an operating model that has been tried and true for us really since the company started. It’s able to be experimental, able to try and iterate, able to be very model-forward in how we think about a problem, and not really feeling like we have the incumbency of the last generation. And then we try to see if we can build the thing that we think is possible. If it works, you build an effort around it. If it doesn’t work, you shut it down and you recycle those people back into a new thing.
That was really the way OpenAI operated early on, and it still somewhat is: this expansion-contraction model in research where you’ve got maybe 20 projects all trying different things at the same time. Maybe two or three of them will really work. You scale those up, you consolidate people back into those projects, and then over time, as you shift into the next paradigm, you spread back out again and take more bets.
I think that’s going to be how this goes. Everything is, in my mind, downstream of research. If that’s the cycle of how research is working, the product and deployment cycle should look similarly.
Jack Altman
I also feel like I can tell from the way the product’s feeling, it’s a unified model. It’s going to all just be a unified thing at some point here soon. It’s already going that direction. That thing will just be used by people whether they’re at home or work. People use Google at home and at work, and it just becomes the tool.
Brad Lightcap
We need the models to start doing more work for users is what I would say. If there’s been one really big gap in the consumer experience in AI so far, it’s been that users have to do too much work. You’re promised this future of these really smart models that can solve all your problems very dynamically. Yet here we are with 18 things in a model picker. Do you want thinking fast mode, do you want pro thinking hard mode?
Jack Altman
It’s crazy. It’s time to move on.
Brad Lightcap
It’s time to move on. That to me feels like the direction you’re describing, this more consolidated experience. I just don’t want to think about it. I just want intelligence, and I’m going to let the model decide how to allocate that on a token level most efficiently.
Where Startups Should Build
Jack Altman
Okay. I want to move the conversation to a selfish place now. You’ve been an investor before. My question is, what should I invest in? Maybe to put a little framing around it, there’s a frequent worry among founders of OpenAI releasing something and getting their face blown off. What’s safe from AI? What will or won’t the models do? Where can a startup predictably add value?
Sam talked about how you should build your company such that you’re planning for the models to get smarter. If the models getting smarter is good for you, that’s a good thing. If the models getting smarter is bad for you, that’s going to be really tough. Can you unpack that a little more now that months and years have gone on: what are the safe places for a startup to try to do work that they can expect to still be available to them in three years? Or should they all just join OpenAI?
Brad Lightcap
I don’t think they should all join OpenAI. First of all, the level of energy in the ecosystem right now is nothing I’ve ever seen. The quality of founders and the—
Jack Altman
And the effort.
Brad Lightcap
The effort. There’s this intensity and this urgency.
Jack Altman
Do you remember the startup ecosystem right before ChatGPT? After ZIRP, we had come down from the SaaS glory moment. That was tough. I don’t know where we’d be right now without it. It would be not fun.
Brad Lightcap
I was at YC in 2016 to mid-2018.
Jack Altman
That was good.
Brad Lightcap
The front end of that was a fun time to invest in growth. We were fortunate enough to invest in the growth rounds of a lot of the companies that had been built in the five years prior. Then, weirdly, it just got less fun, around 2017, 2018. I don’t know what it was; it just felt like the ecosystem was tired. It didn’t feel like there were a lot of new ideas.
Jack Altman
I think a lot of the obvious stuff had happened at that point. I think without a new technology shift… There’s always more to do, but at some point the first 80 of the 80/20 gets done. Now you’re rooting around in the 20.
Brad Lightcap
I think that’s right. But it feels firmly now there’s this entirely new cycle, and the urgency and the excitement is very much there. Also just the ambition of the companies that we engage with, it’s stunning to me sometimes. I’m like, you’re going to do what?
Then you realize there’s an enablement factor. As soon as you get models, for example, that are good enough at software engineering that they can start to design and write in new programming languages, or that they can speed the time from being able to take old code bases, refactor them, and then rewrite them into new and modern frameworks that enable another company to exist and serve an area that was historically underserved… You realize that there’s an entire industry here that didn’t exist that’s about to get built. Then you’ve got a founder who sees that and they’re like, I’m going to go after that.
That’s partly the answer to the first question. If you think of model capability as dropping successively larger rocks in the pond, the ripples from those rocks reverberate wider and further and create more and more surface area around the circumference. The way I would look at it is, you don’t want to be right under where the rock drops. You’re going to drown. That’s a very hard place to be. But you want to be right out on that outer edge, on the surface of what is now enabled by this advancement in capability that wasn’t previously workable, in a very specific and opinionated area, on a very hard problem that has historically been underserved.
Jack Altman
I guess to stick with your metaphor, I feel like some of the fear is that the next rock you drop is going to be bigger than the circumference of the ripple of the last rock. So things that were at the edge before are now squarely in the center of the model.
Brad Lightcap
I think there’s no substitute though for being familiar with a user, a problem, how the existing industry serves that problem or doesn’t serve that problem, and just being very close. YC always had this thing, basically just talk to users. It’s simple advice. Sounds trivial, but not enough people do it. When you actually get into it you realize the world is gigantic. 99% of people get to use bad tools or don’t have any tools at all.
The quality of experience for the people who are their customers and users is not very good. Everyone’s lived that in some capacity. Everyone has lived the bad experience of going through modern life and dealing with the things we have to deal with. I just think if you’re sitting there lamenting that there are no more good ideas and no new ideas, that’s just lazy.
Jack Altman
I feel like there’s at least two other things that can give you comfort as a founder. One is that I don’t think any company, no matter how great it is, can do everything. There might be 10,000 people working at the labs, but there’s millions of people in other places and you just can’t do everything.
The other thing that has surprised me is that some of these markets are just so ridiculously big. There are eight things that are all doing well around, let’s say, code gen and website building and internal tool creation. You could probably do that straight out of Codex, but you can also use other products that are great, that are built on Codex and things like that. So I think some of it is just that it’s hard to appreciate how big these markets are.
Brad Lightcap
Again, there’s no substitute for being able to talk to users and being able to identify what people really want. At OpenAI, our focus is really on trying to improve the models and do the best research we can possibly do. But for someone in a very specific area of the world who has a very specific set of needs, who wants to do one thing and they want to do it really well, there’s probably some alpha there.
The New Way to Build a Company
Jack Altman
I do think it changes the way you need to build a company versus in the past though.
Brad Lightcap
I agree.
Jack Altman
What I’ve noticed is a lot of the great founders today seem very willing to rip out everything they’ve done up to this point and keep only their team, their knowledge, their customer relationships. The attitude is, if the product we’ve built so far is wrong, we’re going to trash it, in a way that I think people were much more precious about before.
Some of this goes to there’s a new ephemerality to a lot of these things. When software is super easy to build, I can make a UI that works for me today, but I’ll throw it away because I can just make a new one tomorrow. I think that’s an interesting trend too.
Brad Lightcap
I have seen, a handful of times now, companies that were built in that period between, call it, 2008 and 2016, the canonical darlings of software from the last decade or so, whose founders are still running the company and have basically decided, I’m effectively restarting the company.
They have taken it on themselves to fork off of the mainline effort to basically go figure out what the second chapter of this company looks like in a world where the primitives and the tools and the assumptions have changed.
Jack Altman
Which is a hard thing to do. There’s just so much sunk cost to it all.
Brad Lightcap
Yes.
Jack Altman
But I think the people who are able to adapt to that, it’s a huge advantage it seems like.
Brad Lightcap
Totally. You can iterate so fast now. You can explore the action space so quickly. And you have the benefit of legacy customer relationships. You’ve got the benefit of existing teams. So in some sense you’re starting with a head start; the way I see it is you can learn faster. Whereas if I were to start a new company tomorrow, I’m starting with no customers, no funding, no product, and no team.
Jack Altman
I guess related to this, how do you feel about the selloff in public markets? Obviously outside of the big companies which have done great, public software companies have taken a pretty bad beating. When you think about the work that you’ve been doing with them and what you’ve been seeing, are you watching that and you’re like, this makes sense, or are you like, actually this is a misunderstanding and you’re feeling bullish about those companies?
Brad Lightcap
Hard to comment specifically on the market. The market is a very frenetic thing. Here’s what I live day to day. We work with basically every company that sits in the Nasdaq that you could imagine. A, all of these companies are as motivated and moving as quickly as any startup. B, they’ve got amazing customer relationships. They’ve got amazing depth of understanding of the problems they’re trying to solve, the areas that they serve. Obviously they’ve got years and years of perspective that have been built. I think now in some sense they’re able to leverage and benefit from the same tools that anyone else is.
So the conversations we’re having with them are really about them starting to rethink, end to end, their entire customer experience, their product, starting to think about how they serve adjacent markets, starting to think about ways that they can pass capability through to their users. Creating entirely new experiences that weren’t possible before. So I think you could take the other side actually. I think you could take a very long view here.
Jack Altman
In some ways the software itself is the easiest thing at this point. Having all the relationships, the team, the trust with all the customers… That’s actually the hardest pole of the tent to have now.
Brad Lightcap
If that class, if that segment was asleep, I would say okay, maybe that concern is more warranted. But—
Jack Altman
But they’re not.
Brad Lightcap
No. And it’s happening at the CEO level and the founder level in some cases where everyone is as motivated to figure this out and figure out how to create value for their customers and their business as anyone else. So I think it’s the beginning of a new cycle, that’s my guess.
You’re always going to get new companies that form that are trying to take a fresh and new approach. Often the benefit that those new companies have is that the incumbents don’t realize what’s going on and are too slow to move. Here, you actually don’t have that dynamic. You’ve got everyone running, trying to run at the same speed. So I think that’s exciting. I would say if you’re long AI and long startups, then it might even make sense—maybe as a contrarian opinion—to be long legacy software too.
Codex as a Daily Driver
Jack Altman
I don’t know if you’re experiencing it one way or another. It doesn’t have to be founders, but even people joining OpenAI from some older company that had not been AI-native. How do you help people reset? What does it take for people who have lived in the pre-AI era to work the new way?
Brad Lightcap
I think you have to see it firsthand. If you’re not playing with Codex every day, I think it’s hard to intuitively grok just how disruptive and crazy it is. Codex for me has replaced ChatGPT on a daily-driver basis, and I’m not even technical. I don’t write software for a living, but it has a general capability. I’m specific enough about the set of things that I want, and I’ve developed enough familiarity with it.
Jack Altman
What are you doing with it? What’s a daily quick use case?
Brad Lightcap
My life is basically a daily struggle of things that I would like to see get done.
Jack Altman
I thought you were going to just end it with “my life is a daily struggle.”
Brad Lightcap
Well, that too. But things that I would like to see get done, and then how fast our team can mobilize and operationalize to get it done. At a busy, fast-growing company, sometimes those timelines drag. When those timelines drag, the thing I want to see us do starts to drag. Everything elongates: something that would take two days if everyone were a hundred percent focused on it now takes basically a month.
One of the things I’ve started using it for is supplementing that. It gives me a first version of everything. For example, we’re building a fairly substantial forward-deployed engineering org, which we can talk about, but recruiting for that has been challenging. Recruiting’s hard.
Jack Altman
You’re using it to recruit?
Brad Lightcap
I’m using it to basically go figure out lists of people that we’re thinking about recruiting, and how to navigate and stack-rank that list before you start getting into candidate engagement. It’s crazy because everyone today has this online presence, and a lot of people have blogs and X accounts and all that.
I just told Codex, “Take this list and basically go figure out what public presence any of these people have. Come back to me and effectively read their online content and score it against how you think about some of the technical elements of our work and the job descriptions of the things that we’re doing.” It works even for a non-technical task like that. It basically writes a program and will figure out how to efficiently look at each of these profiles and come back and give me scores on how good it thinks each of these candidates’ online writing has been.
It’s cool because it actually surfaced for me three or four candidates who I couldn’t have picked off the list staring at a list of 200 names, but where I was like, “Okay, let me go double-click on this.” Now it gives me an opportunity to really look into that candidate’s profile and their blog and whatever and start to just get to know them better. That process would’ve taken a busy recruiter probably a couple weeks. It’s a lot of names. Here it just collapses down to 20 minutes.
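The kind of one-off script Brad describes Codex writing for this might look something like the sketch below. This is purely illustrative, not anything OpenAI has published: the keyword-overlap scorer is a toy stand-in for a real model-based evaluation of each candidate’s writing, and the writing samples are passed in directly rather than fetched from the web.

```python
# Hypothetical sketch of a candidate-scoring script of the sort Codex
# might generate. A real version would fetch each candidate's public
# writing and have a model grade it against the job description; here a
# toy keyword-overlap heuristic stands in for that step.

def score_writing(sample: str, job_keywords: set[str]) -> float:
    """Score a writing sample by the fraction of job keywords it mentions."""
    words = {w.strip(".,!?").lower() for w in sample.split()}
    if not job_keywords:
        return 0.0
    return len(words & job_keywords) / len(job_keywords)


def rank_candidates(
    candidates: dict[str, str], job_keywords: set[str]
) -> list[tuple[str, float]]:
    """Return (name, score) pairs sorted best-first, like a stack rank."""
    scored = [
        (name, score_writing(text, job_keywords))
        for name, text in candidates.items()
    ]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)


if __name__ == "__main__":
    # Made-up keywords and candidates for illustration only.
    keywords = {"deployment", "customer", "llm", "integration"}
    candidates = {
        "Candidate A": "I write about LLM integration and customer deployment.",
        "Candidate B": "Mostly photos of my cat.",
    }
    for name, score in rank_candidates(candidates, keywords):
        print(f"{name}: {score:.2f}")
```

The interesting design point in the anecdote is less the scoring function than the workflow: the model writes and runs a disposable program over 200 profiles, surfacing a shortlist for a human to double-click on.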
Jack Altman
By the way, I bet a lot of this is what is going to be needed for people to just broadly be excited about AI, not frustrated about it. Using it and realizing that it’s super empowering. Versus thinking, “Oh, all these other people are using it to be empowered.” No, just start using it. I guess a lot of that is you getting the tools to a place where it can be adopted super easily by everybody.
Brad Lightcap
For sure. One of the things that I feel is the story that hasn’t really diffused into more mainstream conversation is just how general these tools are. You don’t have to be a software engineer to use Codex.
Jack Altman
It’s fascinating that you prefer Codex over chat for a lot of your work. It’s cool.
Brad Lightcap
The Codex app is amazing. If you haven’t used it, check it out. The terminal-based use is maybe a little more intimidating if you’re not technical, but in an app interface it just looks like chat, and I think it’s got much more general agent capabilities.
Forward-Deployed Engineering
Jack Altman
On the topic of the forward-deployed stuff and private equity, what’s the thinking there?
Brad Lightcap
The thinking is very much what I was talking about earlier. If you think about the way that software is going to get built in the future, in some sense now any specific problem within any company, in any part of their process, historically it would not have made sense economically to have spent a lot of time thinking about how to solve that one corner of a problem. It’s too expensive to hire a bunch of people, to build a bunch of software, and for that software to then have to be maintained. Obviously for the most important problems in most large enterprises, you could hire people to do that type of thing, and there’s entire industries that have gotten built around that.
But for 99% of problems, for 99% of businesses, that’s totally out of reach. You’d have to either decide that you wanted to hire a couple people to try and build something on their own that maybe didn’t work super well, or you look to see if the market offers a solution. But the problem is that the solution doesn’t necessarily fit exactly what the shape of your problem is.
So now you’ve got people contorting themselves trying to figure out how to adopt the thing off the shelf that wasn’t really built for their company. It was just built as a general-purpose tool. I think that entire era is over. Now you actually can reason about how almost every problem inside of a business can have solutions that are custom-built for it. It goes back to this weird paradox of what do you think is going to happen with jobs.
We wouldn’t be hiring FDEs as aggressively as we are if it felt like software engineering jobs were going away. The jobs of those FDEs are different. If you’d hired an FDE five years ago, they’d be doing something different than what they’re going to do in the future.
But the amount of demand and the amount of opportunity that we see, to be able to go address surgically every area in a business that could benefit from solution design—and not solution design that happens on the order of 18 months, as is the industry norm, but solution design that happens on the order of maybe 18 days, if not faster—that to me is an incredibly large opportunity that I think will be the story somewhat of how the next few years go. The FDEs we’re hiring are really to help address that.
Working with Sam
Jack Altman
Last question I have. Just your reflections working with Sam. It’s funny, I obviously know him as a brother. You know him as someone you’ve worked with for a long time now. I’m curious what the evolution you’ve seen has been like, now that he’s obviously gotten to a different place in the public sphere. There’s this whole public persona, and then you obviously work with him on a daily basis. What’s the whole experience like for you with him?
Brad Lightcap
We’ve worked together for 10 years. 10 years in January.
Jack Altman
And the first year or two was YC.
Brad Lightcap
First two and a half years was YC and then I got to OpenAI before he did. I would say I recruited him to OpenAI.
He’s a remarkable individual. I wish more people could spend more time with him off the record. I think he’s not innately someone that enjoys being a public face of things. I think it certainly feels like an unnatural thing for him. He’s someone who much prefers spending his time sitting in a huddle of five people talking about the future and having a deeply technical conversation about some niche topic. That’s who he is internally at OpenAI. It’s what I’ve always known him to be. I think if more people could spend more time with him, you’d realize he’s an infinite optimist.
Jack Altman
That’s crazy. Because the way I experience it, it’s almost like a sacrifice to have put himself out there so publicly. It was a requirement to make all of this happen, to show the world that accumulating talent, compute, and all these ideas in one place is what made this possible, so that everybody can see it. But that’s such an uncomfortable thing to have done.
Brad Lightcap
It’s interesting because he thinks on a timescale that’s more like a decade-plus, and I think the world struggles to think beyond a quarter forward. I’ve always felt there’s this kind of mismatch.
Jack Altman
There’s a total mismatch. He’ll say something and everybody’s like, “That’s crazy.” Three years later, it’s exactly where we are. Sometimes sooner than that. Then there’s no reconciliation backwards. It’s just like, now he’s said a new crazy thing, and people are like, “Oh, you’ve been crazy all along.” That’s a weird thing to watch and there’s no way to tie that together really.
Brad Lightcap
Everyone’s trying to figure out what’s happening right now, because I think in some sense the whiplash is so real. I have a lot of empathy for that. I spend a lot of time with our customers, with friends, family that are looking at me and calling me being like, “What is going on? What is happening? What is this Codex thing?” I think in Sam’s head we are already so far beyond that point, in terms of what’s coming, that it’s trying to bridge for people where we’re going relative to where we are. I think it’s disorienting.
Jack Altman
It’s really an insane thing that you all have done and continue to do to pull all these pieces together. I think this has got to be the most hard-mode company of all time. It’s very, very impressive. I’m sure you just get used to it all, but hopefully you appreciate what a ridiculous feat you guys are pulling off.
Brad Lightcap
I appreciate that. I very much feel like it is far from complete. It’s highly incomplete. It’s interesting. When we formed the company early on, the mission orientation of the company was very strong. But I always tell people, in a very literal sense, a lot of companies have these high-level, lofty missions that you can’t really actualize. No shade on anyone specifically, but it’s “don’t be evil.” Okay, that seems like a good thing. Or it’s “make the world more connected.” Seems good.
Jack Altman
It’s also like, so if the plan is “don’t be evil,” then what? It’s very debatable from there.
Brad Lightcap
How do you actualize that? What do you do? One of the interesting things about OpenAI is the mission from day one is this very actualizable mission. We try and run everything that we do through the lens of, “Okay, is this consistent with the outcome that we are trying to create?”
I always used to joke at OpenAI: there is a world where we do the thing we say we’re going to do, and then we go home and we’re done. That’s the end of the story and we all go back. In practice, is it going to work that way? I don’t know. I don’t think so. But maybe.
It is a company that has a very specific orientation toward a very specific goal. I think amid all the craziness of all the things that are happening, it’s very focusing to be like, “Okay guys, there’s still this one thing that we are really trying to deliver.” It’s very easy to come back to that mission and say, “Is this something that drives toward that outcome or not?” If it’s not, we’re just not going to do it.
Jack Altman
Love it. This was really fun, Brad. Thanks for taking the time to do it.
Brad Lightcap
Good to see you.