Today, I'm talking with Runway CEO and cofounder Cris Valenzuela. This one's special: Cris and I were live at an event in New York City last month hosted by Alix Partners, so you'll hear the audience in the background from time to time.
Runway is one of the leading AI video generation platforms. The basic concept is familiar by now: you start with a reference image (either something you've created using Runway's own model or something you upload), you type in a prompt, and Runway spits out a fully formed video sequence.
But what's most interesting to me about Runway is that while the AI hype is at a fever pitch right now, there's a little more depth to the company. Cris founded the company back in 2018, so he's been through some boom-and-bust periods in AI, and you'll hear that experience come through as we talk about the technology and what it can and can't do. When Cris began more seriously exploring AI video generation as a researcher at New York University, we still mostly referred to AI as "machine learning," and you'll hear him recount how primitive the technology was back then compared to now.

Listen to Decoder, a show hosted by The Verge's Nilay Patel about big ideas (and other problems). Subscribe here!
That said, the AI hype really is out of control, and Runway is on the same collision course with creators, artists, and copyright law as every other part of the AI industry. You'll hear Cris and me really get into all that here.
One theme you'll hear Cris come back to again and again in this conversation is that he does not see Runway as a disruptive outsider to filmmaking, but rather as an active participant in the art. He sees Runway as a tool that will bring filmmaking and other forms of artistic expression to many more people, and not as an apocalyptic force that's going to hit Hollywood like a wrecking ball.
You'll hear him say Runway is working with many of the biggest movie studios; publicly, it has already struck deals with Lionsgate and AMC Networks. In the AMC announcement, Cris said embracing AI video generation was a "make-or-break moment" for every entertainment company.
But cozying up to Hollywood doesn't mean Runway is off the hook in the AI vs. art debate. In fact, Runway itself is part of an ongoing class-action lawsuit over the use of artistic works in AI training data. Last year, it was revealed Runway had trained on huge swaths of copyrighted YouTube material, including The Verge's own YouTube channel.
So I asked Cris as plainly as I could whether Runway had in fact trained on YouTube, and how the industry might survive a world where all these companies are made to pay substantial amounts of money to creators if even one of these big AI copyright lawsuits doesn't break their way. I think you'll find our discussion on this to be pretty candid, and Cris articulated some of his own defenses for how the AI industry has approached this topic and what might happen next.
It's Decoder, so of course we also talked about Runway's structure. Cris has a lot to say about Runway functioning as a research lab, and the tension that exists between releasing and refining real products and then putting them into the hands of professionals, all while working on new models and tools that might make the current tech obsolete.
Okay: Runway CEO Cris Valenzuela. Here we go.
This interview has been lightly edited for length and clarity.
You started Runway before the big AI boom. We were joking earlier that the URL is Runway.ml because people were calling it machine learning back then. What's changed in that approach since the boom? Have you had to rethink, "Okay, everyone understands what training a model is now, and the market for GPUs is more expensive"? What are the changes?
A lot has changed. I think we started the company in 2018. Machine learning was the way we referenced the field of AI broadly. I think a few things have changed. First of all, models have become really good. I mean, it's obvious to everyone. I hope everyone here has used an AI model by now. I'm assuming that has happened. Seven years ago, no one had. I think consistency, quality, and overall output of models across the board have gotten really good, and that has just changed people's experiences with AI.
I think the second thing that is becoming more real is the value of these models and how useful they are. It's becoming more evident to many people. A couple of years ago, it was more theoretical how they could potentially be used. There are still many avenues where we don't entirely know how AI will change things; we just know it will. In some others, it has really changed many things.
In learning and education, it's pretty clear that pretty much every student out there, from now into the future, will start using AI models to learn. I think that has already happened. Then competition, of course. Now, everyone's paying attention to this. When we started, there was really no one trying to build this. If you had this same conversation eight years ago, and I told you we're going to have AI models that can render video in hyperrealistic ways, people would think we were crazy. Now, it's an obvious direction, and there are a lot of people also trying to solve the same problem.
Was your ability to actually do the work constrained by the amount of compute you had at the beginning? Is it just scaling laws that brought you to where you are today?
So, scale is one of the main things. I think we've realized, as an industry, that scale matters. I guess the lesson we've seen over time is that if you just scale computing, then models work really well. At the beginning, it wasn't that obvious; it became more obvious over the last couple of years. More compute definitely helps, but it's more compute and more data, and also better algorithms. So it's not just one single ingredient. It's not that if you get more compute, suddenly things get better. It's a combination of different things.
Just put this into practice for me. When you guys first started, how long would it take to render a frame of video versus how long now?
When we started, you couldn't. That's the thing. The first thing we ever did was a text-to-image model that produced 256-pixel-wide images. If you've ever seen a Mark Rothko painting, it was very abstract. That's the closest it could get. So if you wanted to render a face, a house, or whatever, the result was in the right range of colors, but it was very off. We went from that pixelated, very low-res image to 4K content that's 20 seconds long with very sophisticated movement and actions. I think it's the realization that at that time, video was not even in the scope of what we thought was possible.
Then, over time, it became really feasible. Now I think we joke that we're consistently moving the goalposts, where the feedback we get from Runway users is like, "Great, Cris. You can generate that bouncing ball on Mars, but in frame 27, the ball's direction is slightly off." I'm like, "Great, that's a great piece of feedback," because we will solve it. But also, you don't realize that a year ago, you just didn't think this was possible.
One of the reasons I see the big platform companies so invested in video generation, in particular, is that they're pointed at the advertising industry. You mentioned you have advertising clients. Mark Zuckerberg is not even subtle anymore. He's like, "I'm going to kill the advertising industry." He just says it out loud.
I think he also said something similar at Stripe Sessions a couple of weeks ago. His pitch was something like, "You don't even have to do anything. Just come to us and tell us how many customers you want, and maybe some ideas about what your product is. I'll generate video advertising, and I'll stick it in the feeds, and you just watch the money roll in." This is a very Mark Zuckerberg way of thinking, but that is the first big market where you see we're going to bring the cost of making the ads down, and that will result in some return. Is that where the demand is coming in for you as well?
I think that's a very appealing concept and world for many people who have never had the chance to make ads in the first place. There are many businesses out there that just can't afford to work with an agency or get a production team to shoot an AAA film or ad. Part of it is like, "Well, if you can actually help others do that, I think that's great." It definitely raises the bar for many because now anyone can do it. I think it's less about killing the ad agencies; that's an oversimplification. It's more about reducing the time it takes to make something.
The cost of making any piece of content will, hopefully, go down to the cost of inference. So if you're good at making things and conceptualizing ideas, you're going to have systems that can aid you in generating whatever you need, but you still need to have a good idea. So you will still have agencies, you'll still have talent and creatives, but perhaps the time it takes to make things is just going to be dramatically reduced. Hopefully, that opens the door for many other folks to do this work.
Yeah, I mean, I think Mark wants to kill the ad industry.
[Laughs] Yeah, we should ask him, I don't know.
He's a very aggressive human being. But the reason I ask that question is that I see so many of these products and so many of these capabilities, and they haven't yet connected to business results. There was a study from IBM last month stating that only 25% of the AI investments it had seen at companies had returned on that investment. It's a low number. Everyone's trying stuff and figuring it out. I get it in advertising. I understand that's just the cost of acquiring customers. Have you seen places in film studios and elsewhere where just bringing the cost down is worth the investment?
Yeah, absolutely. I was just on a call with a studio right before this, and we were going through a script that they wanted to test with Runway. I don't know if you guys have ever worked in film, but you develop the script, and the common next step is a storyboard. So, you basically take the script, and someone spends a week or two just drawing. That's for a scene or a couple of scenes, not for an entire film. It's really long, really expensive, and time-consuming. So, as they were reading me through the part of the script where they needed our help with Runway, I was generating the storyboards on the fly.
By the time they finished, the storyboard was done. So, I think the first thing was that they couldn't fully understand what was going on, because they had never worked at that velocity, that speed. For them, speed is also cost. If you compare the time it takes to make all of those storyboards by hand with having the screenwriters do it in real time, it shrinks the timeline, and the whole project gets developed and worked on faster. So, you have all these moments and gaps where AI can really just help you accelerate your own work, specifically in creative industries where things are still done very manually.
I actually want to ask you about that because I know you think a lot about the creative industries and the act of creativity. The counterargument to that is the gap between the screenwriter and the storyboard artist, and the time it takes to communicate and translate is where the magic happens. Having the AI collapse that into a mechanical process, as opposed to a creative process, actually reduces the quality of the creative. How do you feel about that?
Yeah, I don't think I fully agree with that. I think part of it is that we sometimes obsess about the process of how we make things. The goal of the screenwriter is to get the ideas in his mind or his world out there. The obvious way is to work with the set of technologies and tools around you, and if you're able to do it faster, I think that's great. You can iterate on concepts faster. You can understand your ideas faster. You can collaborate with more people, and you can make more. One of the bigger bottlenecks of media these days is that you have people working on one project for three or four years, and then the studio might actually try to kill it for many different reasons.
So, if you think about it, you spend four years of your life working on a thing that never saw the light of day because it happened to be killed for whatever reason. I think the idea is that you don't have to work on just one project. You can work on many more. So, there's also the quantity prospect of it that becomes a component we should consider. Because right now, we're bound by the way we're working. It's very slow, and it's very constrained by all these processes. If you can augment that, then people can start doing more and more and more. I think that's great.
Is that the model for you? Is it that quantity will drive the business?
I think quantity leads to quality. As an artist, the more you make, the better things you'll do. No artist has drawn once and thought, "Oh, suddenly, I'm a master." Picasso painted hundreds of thousands of paintings, and many of you have never seen all of them. You just see the 1%. The same goes for musicians. People are there playing every single day until they hit something that actually works. I think tools should be like that. They should be able to augment how you work so you can do more, and then you're the one choosing what you're doing.
But look, I started the company because I always wanted to make films. I grew up in Chile, and I never had the means to even buy a camera in the first place. I got my camera when I was 27 years old. It was pretty late, and part of it was that cameras were very expensive. I couldn't afford Adobe software because it was very expensive back then. I probably wouldn't have become a great filmmaker, but it would've been great if I had the chance to tell the stories that I had in my head. I think it was a technical barrier that prevented me from doing so. Now we have kids in every part of the world using Runway and making those ideas real, which I find just fascinating. It's great.
How does the pricing of Runway work? Where does your revenue come from? What's the model?
It's very simple. It's a subscription. You just pay for the product, and you get access to different parts of it. We have a free tier, so you can also just use it for free. Then we work with schools. There's a course at NYU, at the NYU Film School, that teaches students how to use Runway. So, instead of going to film school and getting a camera, they give you Runway. We're doing that with a few other schools as well. For all of those, we just give access for free.
The studios you partner with, do they pay a lot of money, or are they subsidizing it for users?
No. For businesses, we charge. I mean, students can pay, but also, they pay because it's useful. If it helps you do something, then sure, the value is worth it.
Are you profitable yet?
No, we're growing, and I think a part of what we're doing is just investing in research more than anything else.
What's your runway?
[Laughs] We've been obsessively working on this. I would say over the last 12 to 18 months, the models got to a place where you can actually do very good things with Runway. There's always an optimization function that companies have to run, which is, "Do you want to optimize for whatever is working now, or do you want to keep on growing?" For us, we really want to keep on growing. There's a lot of research we can invest in and a lot of areas of growth we can keep pursuing. So, the tension has always been, "Do we want to optimize for this, or for what's next?" I think we want to lean into what's next. There are a lot of things we haven't fully discovered yet that we could do and want to do.
One question I ask everybody on Decoder: How is Runway structured? How do you organize the company?
It's very lean. Someone thought the other day that we were 1,000 people, and I thought that was the best compliment you could give me. We're like 100 people or so. It's very flat, and very focused on autonomy more than anything else. What we do is less about objectives; we actually don't believe in objectives. We have a way of working where we just set boundaries, and we want people to do research or explore, because a lot of what we do has never been done before. So, if I tell you how to get there, I'm probably wrong, because we've never done it.
So, it's research. You have to experiment and fail. What we do is set the constraints and the boundaries on where we want you to experiment. The best outcomes of the research we've done have come from setting the right boundaries and then letting people go, letting people work on their own and figure out on their own how to do it.
So are you full holacracy, no org chart?
I mean, there's some org chart in some way, but people collaborate. We have an internal studio with creatives, producers, and filmmakers working alongside research. Those people are sitting at the same table, speaking the same language. They come from different backgrounds, but they manage to collaborate. So, yeah, that's what you want to promote.
One of the reasons I'm interested in asking that question, particularly of AI companies of your size, is that there is a deep connection between the capabilities of the model, the research that's being done, and the kinds of products you can build. I haven't seen a lot of great, focused AI products. Runway actually might be one of them. But in the broad case, there's ChatGPT, which is just an open-ended interface to a frontier model, and then we're going to see what happens. Do you think that as you get bigger, the products will get more focused, or do you think you still need the connection between the team building the model and the product teams themselves?
I think the connection between product and model helps the product team better understand what's coming. So, you need to understand that tech used to work in much slower cycles of R&D. Now, research tends to move in very fast cycles. So, the issue with product, and I think product is one of the hardest things to do right now… You scope the area of product to work on, design it, and start building it. By the time you build it, it's obsolete. You've basically lost six months of work, or however long it takes you. So, product needs to behave like a research organization.
The way we tell it to our team is: look, we have research scientists working on research, but everyone in the company is a scientist, because everyone is running experiments. So, before you spend too much time doing something, run an experiment, build a simple prototype, and understand if it's worth it. Then, check with research to see if they think the thing you're working on is going to become useful, or whether it will get submerged by the next generation of models. What happens a lot is that our customers come to us with specific questions like, "Hey, the model does this, but it doesn't do this. Can you build a specific product for that?"
We could build a product just for that, or we could wait for the next generation of models that would just do all of that on the fly. So, that's the tricky part, because you're always trying to play catch-up. I think companies that understand research are much better positioned than companies that are trying to catch up.
There's a comparison I keep making here that you're not going to like, but I'm going to make it anyway. I started covering tech a million years ago; now I have gray hair and a beard. When Bluetooth came out, everybody knew what the product was going to be, right? Everybody saw the headsets. Every real estate agent in America had a giant Motorola headset, and it's like, "Oh, you want AirPods? We want AirPods." But the standard was just not ready for another decade, and then Apple had to actually build a proprietary layer on top of the standard to make AirPods.
That took a full decade. It was just not ready. There was a real dance there between, "What do we want to build? What's the product? Can we build it, and does the technology support our goals?" You're describing the same dynamic. The thing that gets me about what you're describing is, well, the model's just going to eat the product over and over and over again. How do you even know what products to build?
Yeah, it's very hard.
Because everyone can see the AirPods, right? Everyone's like, "The computer is going to talk to me, it's going to be fine."
Yeah, but I think that's more than just "the computer will talk to me." There are parts of how it will talk to you, and when it uses emotion. There's a lot of product that goes back into research. I think no one really knows, to be honest, what the future product experience will look like, because a lot of the interactions we're having, we just never thought we could have. So, you're only going to realize by having people use it. That happens a lot in research, where researchers spend so much time training and doing all the work, then you put it out, and in two minutes, someone figures out how to use it in a completely different way.
Actually, I think that's great. It points to the fact that the previous generation of software was based on this idea of choosing a vertical and just going there. I think the next generation of software is based on choosing a principle of how you want to operate in the world, and you build models toward that. Our principle is that more of the pixels you watch will be generated or simulated. That's the surface we're operating on. Therefore, you can go into many different products based on that idea. So, it's the difference between choosing a vertical and choosing a principle on which you want to operate.
But right now, as you're deciding what products to build, you're getting market feedback from users. You have studios using the tool and agencies using the tool. You've got to make some decisions.
We do.
Where are we going to fill the gaps of the product, and where are we going to wait? How do you make those decisions?
We focus a lot on research and on understanding what's coming and what's worth building. There's always a trade-off, specifically with startups, where if you spend too much time working on the wrong thing, it might actually kill you. We listen to users, but sometimes users don't really know what they want. They know the problems really well, but they can't articulate the exact solution. So, you don't find yourself building exactly what they're describing, because they can't describe the thing that they don't know is coming.
So, I don't know. I think it's like art, I guess. You become really good at intuition and being like, "Okay, that thing, even if it could be a great deal now, we're not going to do it right now." I think companies overall build intuition, and that's just the experience of doing it enough times and then saying no. You have to say no a lot of times. Customers come with great ideas, but you just say no. Not because you don't think you can solve it for them, but again, because it will trap you into the wrong thing for the wrong reason.
This is the other question I ask everybody broadly. How do you make decisions? What's your framework?
How do I make decisions? What kind of decisions?
All of them.
I think there are different kinds of decisions. There are decisions that are much more long-term and irreversible, and decisions that are much more reversible. We're very much of the idea that, again, you run experiments and you're willing to understand if you are wrong in your assumptions. If you need to make a decision, do it because you're confident it will work.
If it doesn't, you can change your mind. Sometimes, product decisions come from that taste component. I think overall, taste has become a good way of directing the company, I would say, from how we operate in marketing to how we hire. I don't think there's one particular framework, but the overall idea of taste and intuition has become central to how we make decisions.
Do you think you're going to have to change that as you hit the next level of scale? At 100 people, you can be like, "Just listen to me." With 1,000 people, maybe not.
That's the thing we keep referring to: the idea of a "company company." We don't want to be a "company company." A "company company" is a company that behaves like a company because that's the way companies behave. You're like, "No, don't do that. Be a company that's focused on solving a problem, a research constraint, or a user need. Don't focus on the superficial things you're supposed to be doing just because you're a company."
Because the moment you lose that, you're dead. You're going to stop innovating. You're going to focus on the wrong things to optimize for. I think culture, maybe, reinforces this to the team. I still interview everyone in the company. I'm still pretty much involved in how we make decisions on product. Organizations tend to settle into slow velocity if they're not constantly pushing all the time.
Do you think there's going to come a point where the growth in the capabilities of the underlying model slows down, and you have to put more into product?
Maybe, but I don't think we're close to that. Even if we stopped research now, like we decided collectively to stop research, I think there are 10 to 20 years of innovations that are just there, latent, waiting for someone to discover them. I don't think we're at the point yet where you can say, "Hey, this is enough," because there's just too much space to grow and too much room for models to improve. We just released a model two weeks ago, and I'm not kidding: every day, I check what our users are posting on Twitter and Instagram, and there's a new use case. Just before coming here, someone was using it for clothes.
So, you can try on anything. You basically go to any shop online, like an e-commerce site, upload a photo of yourself, and see yourself wearing that item in a hyperrealistic way. I just never thought you could use it for that, and you can. So, yeah.
I was talking to Kevin Scott, the CTO of Microsoft, and he made the same point in a slightly different way. He said there are more capabilities in the models we have today than anyone knows what to do with.
I agree.
To me, it's like, "Well, then we should start building products that make sense." But then the tension is whether the next-generation models are just going to eat my product. When does that get stable enough so anybody can make products that are good?
So here's a great example. That's a great distinction between verticals and principles. If you think about a vertical, then you'll choose a solution and you'll build toward that. If you think about a principle, you should assume that many of the things we're trying to build into the product will eventually become features of new models. Therefore, your product should be many layers ahead if you want to spend time on it. So, a principle might be, for example, zero-shot image generation.
So, zero-shot learning (ZSL) means that if you want a model to do something, you don't have to train it for that task. You just show it examples or describe the task. You can widely expand the range of things models can do if you have the right examples. So, maybe a good idea is to find and collect examples of things you can teach models, and then that changes the way you can approach product. I think the distinction between principles and verticals is relevant for that.
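The idea Cris describes, steering a pretrained model with examples at inference time instead of retraining it, can be sketched in a few lines. This prompt-building helper is hypothetical (it is not Runway's API); it only illustrates how in-context examples and a new query get packed into a single prompt that a model can then complete zero- or few-shot:

```python
def build_prompt(task: str, examples: list[tuple[str, str]], query: str) -> str:
    """Assemble an in-context prompt: task description, worked examples, then the new input."""
    lines = [task, ""]
    for inp, out in examples:
        # Each example teaches the model the input/output pattern without any training.
        lines.append(f"Input: {inp}")
        lines.append(f"Output: {out}")
        lines.append("")
    # The model is left to complete the output for the new query.
    lines.append(f"Input: {query}")
    lines.append("Output:")
    return "\n".join(lines)

# Hypothetical usage: the task and examples are invented for illustration.
prompt = build_prompt(
    task="Describe the camera move for each shot request.",
    examples=[
        ("ball bouncing on Mars", "slow dolly-in, low angle"),
        ("city street at dawn", "static wide shot"),
    ],
    query="storyboard of a rocket launch",
)
print(prompt)
```

Swapping in different examples changes what the same frozen model will do, which is why collecting good examples can matter more than building a task-specific product.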
One of the big trends in the industry is that the cost of every new model is getting exponentially higher. Sam Altman is touring the capitals of the world, being like, "Can I have $1 trillion?" Maybe he'll get it. You never know. He might get it.
Yeah, maybe.
Are you on the same cost curve where every new model is that much more expensive?
Do you have $1 trillion?
Is the answer yes?
If you have one. So, I think AI tends to move in two ways: there's an expansion wave and an optimization wave. Expansion is like, well, we're discovering what we could do. If you think about the models from two or three years ago, yeah, they were expensive. Now, most of those models can be trained on your laptop, because models have gotten to a state where you can optimize them. One thing engineers love is optimizing things. So, if you tell them, here's the thing that works, optimize it, people will go very hard at it. For some models that are two or three years old, that's now the case.
They're very cheap to train from scratch. There are new models that are still in the expansion phase. We haven't figured out exactly how to optimize them, but we will. But the same thing happens: if you spend too much time optimizing, the trade-off is that you stop working on the new expansion. I think most companies these days are betting on expanding. They're betting on paying more for the sake of expanding and not falling behind, rather than trying to optimize and reduce the cost of the thing that works.
Where are you?
I think we're on the expansion side. Having the ability to expand, having the ability to innovate on that, is way harder. Having the ability to just catch up and play the optimization game is easier. I think our bet is, well, this is the vantage point from which you can keep moving things forward and pushing boundaries.
The big platform companies, Microsoft, Google, Amazon, and OpenAI (which has a deal with Microsoft), run their own hyperscalers. Is that a competitive threat to you? Is that an advantage to you?
Well, Google is an investor, so we work closely with them. Again, they're different functions of businesses. If you're a hyperscaler, you're probably in the business of optimizing things. You need to make things cheap and scalable for everyone. It's a different function from a research lab, which is building new things. So, again, it's probably good to pair the two. Because if you have a good research lab paired with optimization, there's a technology transfer you can make that allows companies to run on those things, sell them, and then get feedback, while the other part of the company is working on the next thing, which is where we are.
If Google's an investor, you're running on [Google Cloud Platform]?
Thatâs correct.
So do you just let them buy the Nvidia H100s? Do you worry about that at all?
Nvidia is also an investor.
The AI industry is full of this, by the way. It's very obvious.
Well, I think it's people who have seen this coming, and I think you want to provoke a bit with this. Many of the things we're discussing now weren't that obvious eight years ago, until many people started to make the right bets on it. Again, depending on where you are, it might be a good move to partner with people who get it and who want to work with you long-term. I think the people we work with can help us get to that point. Yeah.
I think Nvidia as an investor is one of those things about the AI industry that is very funny, right? They're investing in the applications that drive the usage of their chips in all these places. Maybe some of them will pay off, and maybe they won't. That's the nature of investing, but at some point, everything has to add up to actually deliver a return for Nvidia. Do you feel the pressure that Runway has to be a big enough business to justify all of the infrastructure expenses?
I think that justification comes from the value you see with customers and the adoption you see. That's how you see AI products go from zero to many millions in revenue in a couple of weeks or months, something that was unseen before. It's because it's such a different experience, such a different value, if you're ambitious about it. I think, yeah, it will definitely get there. We're already seeing this. Still, video, for example, is very early. Gen-4, our latest model, is literally a month and a half old. So, most of the world hasn't experienced it yet. It's also a distribution problem: how do you get to everyone out there who can use it?
Are you at millions in revenue?
Yeah, more than that.
Do you have a path to billions in revenue?
We hope, yeah, over the next couple of years.
I'm asking because all these companies have to generate billions in revenue for all these investments.
I think they will. Many will. I mean, again, think about different first principles. If you're in the business of ads or movie-making, you're spending hundreds of millions of dollars to make one movie. If I can take that process and help you do it for a couple of million, then I can literally charge for whatever delta I'm helping you improve. Hopefully, I can charge you way less, so you can actually do more. If you expand that, then you're not only helping them, you're expanding the window of who can do that thing in the first place.
Because if you think about professional filmmaking, it's a very niche, small industry, mostly because it's very expensive. Well, if I have something that makes it cheaper, then I can expand the definition of who can get into the industry in the first place. From a market perspective, that's great because you've got many more people who can do something that they never thought they could.
The film industry is really interesting. It's under a lot of pressure, so much pressure that HBO Max just keeps renaming itself every six months to get whatever attention it can. It's great.
It works, I guess.
But fundamentally, they're competing with TikTokers and YouTubers, right? Netflix knows this. Netflix knows that YouTube is its biggest competition. The cost to make a YouTube video is already only a fraction of the cost to make a Marvel movie, and that has basically put the movie industry under a ton of pressure. Do you think AI can actually shrink that gap and keep the quality high?
Yeah, so I think that's the point. I think the last frontier was low-quality content that anyone could make. I think that's TikTok and YouTube. There are billions of people out there making everything. The difference between that and a high-production studio is the quality of the content and the output: how good the pixels and the videos are. That, for me, is mostly a technical barrier. It's not a storytelling one. It's not an idea one. Making a high-end science fiction movie is really expensive because you have to hire so many people and work with software that is very expensive. So the last frontier, I would say, for us and for many media companies, is billions of people making high-end content.
That is the one idea that I think if you're in the traditional business of media and you haven't realized that yet, you're probably very scared, because then you'll compete with anyone in any part of the world who has a small budget, very good ideas, and can make amazing things. We're already seeing this. The Academy Award for animation this year, I don't know if you've seen it, went to a movie called Flow. Very small budget, I think less than $10 million. It was just a very good group of people working with great software, and they won the Academy Award against $100 or $200 million productions. It's just because you have very smart, talented people working with the right software tools.
So the flip side of this is those studios are also jealously protective of their IP. That's the thing that they monetize. They window it into different distribution channels and into different regions. They sue pirates who steal it on BitTorrent. You trained on a lot of this content. There's reporting that Runway trained on a bunch of YouTube channels, including The Verge's, by the way. There's your $1 trillion.
This is, in my mind, the single greatest threat to the already exorbitant cost structure of the AI industry. There are lawsuits everywhere that might say you have to pay all of those creators for their work. Have you thought about that risk?
I think it's part of how we analyze and how we work. We've worked with different studios and companies to understand how to train the models for the needs that they have and what they want to do. Still, it's crucial for me to help everyone understand what these models are actually doing. A lot of the assumptions that we get around AI video are that you type in a prompt and you get a movie. Now it happens less often, but I used to get a lot of scripts in my inbox where people would say, "Hey, I'm a producer or a writer. I've been working on this show. I have the whole script done. It's great. I heard you do AI videos. So here's the script, make my movie."
I've realized a lot of people thought that what AI video, AI pixel generation, or making videos with AI meant was that you type in a prompt and you get the entire movie that you thought you were going to get. No, it doesn't work like that. It will probably never work like that. You're still pretty much involved. You need to tell the model how to use it. You need to tell the model the directions and the inputs you want to use. I think part of it is that perhaps most people's experiences with AI over the last 12 months have been through chatbots. So the idea of AI has been condensed to this idea of chatbots.
If you have a chatbot, you have AI, and those things are summarizing a huge field into a very oversimplified concept. So when you think about copyright and you think about creating things, I think all the weight is still in what you are making. You're still in control, and these are not tools that will make things on their own. You are the one deciding how to make them in a way. So you have to be responsible in how you use them. That's basically the point.
But to train the model, you need to ingest a huge amount of data. The two things that make the models more effective as they scale are more compute and more data. Have you thought about whether you're going to have to pay for the data you ingested into the model?
So we've done partnerships to get data that we need in particular ways, but again, it's really important to understand that these models are not trying to replicate the data. I think the common misconception people make is that you can type in a scene of a movie and you get the scene of that movie in Runway. These are not databases. They're not storing the data. They're learning. They're students learning about data, getting patterns within that data, and they use that to create something net new. So the argument that I think is really important to consider is that these systems are creating net-new things, specifically for videos. They're creating net-new everything, including the pixels.
The way you use them should be in a responsible way, of course. The models are not trying to store anything. So that for me is the distinction because it changes the argument of how you think about training models in the first place. If you think about them as databases, you're going to have a set of different assumptions, use cases, and concerns than if you think about them as general-purpose tools like a camera. I always think of Runway as a camera. A camera allows you to do anything you want. It's up to you how you want to use it. You can get in trouble for using a camera, or you can make a great film by using a camera. So, you choose.
It's shockingly easy to get in trouble for using a camera.
[Laughs] Yeah, I know. I grew up in Chile. There are a lot of films I didn't manage to see [in theaters], and the way I saw them was that I bought them as bootlegs on street corners. I don't know if you've ever seen one of those where people stand in the theater and just record the thing. I mean that was a bad use of cameras, but I think the overall assumption as a society was like, "Let's not ban cameras. Let's actually have a norm in theaters where you can't do that. If you do, you're going to get in trouble." I think we all agree that that's a good thing to do.
That argument is weaving its way through the legal system right now. There are lots and lots of court cases. The last time we went through this, it was basically Google that won a bunch of court cases about building databases. But Google was a friendly young company that had slides in the office; people wore beanies when they went to work.
The inherent utility of Google's structure was very obvious to every judge. The inherent utility of YouTube, which got in a lot of trouble, was very obvious to every judge. They horsepowered their way through it. They had to pay some money to some people, and they had to win some cases. They had to invest a lot into litigation, and they won because they were cute and they were Google. It was a very different time. Tech companies are not broadly thought of as young and cute anymore. No one thinks of Meta, Amazon, and Google as adorable companies that should build the future the way that they were at the time.
Have you thought about the risk that they might lose these cases and what that would do to your business? Because this dynamic you're talking about, whether this is a non-infringing use, whether there's broad utility here, goes back to the Betamax case in the '80s. It's all there, but it doesn't have to go the way that it always did, right? Judges are just a bunch of people, as we've discovered here in America. They just make decisions. What if it doesn't go your way?
Yeah, again, it's hard for me to have an opinion on every single case out there. I think it's more complex than that. I think Google has had a great impact on the world at large. I think it's hard to disagree on that. I think the world has gotten way more expansive. Information has become more accessible to many. I think that's hard to disagree with, right? I think there are definitely new challenges with every new technology. I don't disagree with that. I mean, you are putting really powerful technology in the hands of everyone, which means everyone, right? So there are use cases around AI that you should be preventing, and you should try to make sure you have systems of regulation and safety on top. I think every company is different.
One thing I've really learned about tech, and I mentioned this as an artist… I went to art school, and I started working on tech mostly as a way to develop my vision of how art should work with tech. That was my idea. So I still consider myself an outsider to tech, and I think one thing I would consider is that not everyone operates in the same way. I think not all companies are the same. Companies tend to be different in how they operate, and I think there are different ways of managing through this change. It's hard for me to group everyone in the same group and say, "Yeah, all tech companies are basically doing the same thing."
Let me try this a different way. You trained on YouTube channels, right?
We train on a variety of different data sets, and so we have teams working on image, video, text, and audio. We don't disclose how we train our models because that's unique to, I guess, our research.
Did you train on YouTube?
Again, we have a variety of different data sets that we use to train our models, depending on the task. It's not about, "Do we train on this, on that?" We have agreements with different companies. We have partnerships with others. The way we train is very unique to us. It's very competitive out there, so we're probably never going to tell how we do it because it's very unique to how we train our models.
YouTubers own the copyrights to their videos. If it comes out that you trained on YouTube and hundreds of YouTubers come asking you for money at whatever rates, is the financial model of Runway still tenable?
I guess it goes back to what these models are doing, right?
Well, I'm saying that if OpenAI loses its case against the New York Times and training on the Times' content is found to be infringing, the floodgates will open. It is not clear if OpenAI will win or lose. If Meta loses its cases against the book publishers, and it hasn't been doing great in the past couple of weeks, the floodgates are open. If those floodgates open, is your business tenable?
I think, again, summarizing the entire AI industry as chatbots and what one company is doing is a mistake. I think, again, video and media work very differently, and there are a lot of other considerations. A lot of the assumptions I've seen about how AI video works are like opinions about cell phones in 1992. You're probably just very early in seeing the impact of how that technology will change the industry, and probably you've never experienced it before. So, I think part of what is going to happen over time is that a lot of these ideas around concern for copyright and other considerations will start to change as people understand how this actually works. I'll give you an example.
I was at a dinner with a producer of a major show, one you've all probably seen. He was like, "I'm very anti-AI." I said, "Okay, why are you anti-AI?" He's like, "Well, because it works like this and it does this." I was like, "No, it doesn't. Let me show you how it works." Then we showed him how it works, and he was like, "Yeah, now I'm on board." It took me like 25 minutes. I think he was very adamant about his position of being against AI because, I realized, he just had the wrong expectations about what it did. It was a moment of, okay, let me show you what it does, because you've never experienced this before.
We forget this, but we all had to go through training to send our first email. People were just telling you how to send an email, and you had to go through it. You don't just understand it instantly, so you start using it, you understand the limitations and the constraints of it, and then you keep using it. I think a lot of the hot takes on AI these days are based on just the wrong expectations and the wrong assumptions of what it actually does.
That gap between how artists feel about AI and how much they actually use it seems like it's getting bigger every day. It shows up on our site at The Verge. By the way, The Verge is built on the very foundation that I was right about my opinions about cell phones in 1992.
[Laughs] One of few.
But we see it, right? The people read the articles. I talk to product people at other companies. With Adobe, for example, the usage rate of generative AI in Adobe products is basically 100%. Generative fill is used as often as layers, which means everyone uses it every day, and then the audience is like, "I hate this. Make it go away." There's just this gap. It's a moral gap. It's a psychological gap, whatever it is. There's a gap between how people are using it, how they talk about it, and how they feel about it, particularly with creatives and artists. I know you spend a lot of time with creatives. How are you closing that gap? Is it possible to close that gap?
I don't see that gap that often. I think in film, there's the idea of below the line and above the line. If you speak with a VFX artist, someone who's actually moving the pixels on a screen, they don't have weekends. They've never had a weekend off because when you're on a project, it's a very tough timeline with very small budgets. The director comes with notes, and you have to take the notes. It's a Friday, and there goes your weekend. You're going to be working on pushing those edits every day, and you're doing it by hand. So, if you have a tool that allows you to do it faster, of course, you will use it. It's great.
It will get you where you need to go faster. I think the gap there is not as big as some people might think because the actual creative minds, the producers, the editors, and the VFX artists, are already embracing this. It is very valuable, and I guess I'm not surprised about your stats and numbers. Still, above the line, the people who have never had the experience of actually working with it and seeing it might have a different assumption of how it works. Again, I think part of it is just that we need to show you how it actually works. Something we do is… We have a film festival here in New York, by the way, if anyone here wants to go. We've done it for three years now. It's at Lincoln Center. It's a major event. It gathers filmmakers from all over the world.
We started the festival with 300 submissions. This year, we got 6,000 submissions. We work with the American Cinema Editors, which is one of the guilds of the editors, and we work with the Tribeca Film Festival, so the industry partners. It's a great way of understanding how it's actually being used in real production use cases and how valuable it is for not only the insiders but also the new voices. I think part of the gap is that you need to go to a film festival to experience it, and you'll probably get a sense of how useful it is.
The concern from that class of people that we hear all the time is, "This is great. It made everyone's life a little bit easier. It also puts half of us out of work." Do you see that as a real threat or as a real outcome?
I understand the concern, but I think the obsession should be on people more than jobs. We used to have people who pressed buttons in elevators. That was a job. I don't know if you guys remember this. That was a job. There was a job of people throwing stones to wake you up before alarm clocks were invented. I think no one is saying we should protect people who throw rocks because of their job. We should have alarm clocks, and the person who's throwing rocks to wake you up should be taught how to do something else. So, you focus on the people and how you upskill, upgrade, learn, and teach people to do new things rather than like, "Hey, let's keep this thing because we need people pressing buttons in elevators, and that's a job."
I think that has happened in Hollywood many times. In the beginning, Hollywood was silent. There were silent movies. Talkies came around. It was a major breakthrough where you could actually have sound in movies. The industry revolted. Charles Chaplin was one of the biggest advocates against films with sound because he said that sound would just kill the essence of filmmaking. An argument that they had was like, "Who's going to pay the orchestras that are playing in the theaters?"
Well, it's true. Yeah, we don't need orchestras in theaters anymore. But the technology also gave birth to an entirely new industry of artists. Think of Hans Zimmer: that was the beginning of an entirely new industry created by technology. I think this is, for me, very similar, where yes, we're going to lose some jobs. Our job should be to train those people to do new things with technology.
Last question. If you had to spin that all the way out: you're successful; the AI industry can pull this off. The models get the capabilities you want them to have. What does the film industry look like 10 years from now?
I think it looks very much like…
It's not just TikTok? Are we just going to do Quibi?
[Laughs] No, I mean, if someone likes making that, I don't think there's anything wrong with it. I think there are many independent voices out there who have never had the chance to tell their stories because they don't have the means to tell them. Our vision at Runway is that the best stories have yet to be told. We haven't heard from the greatest storyteller in the world because maybe they just weren't born in LA.
That probably is the case, and so I think we're going to see a much more democratized version of film. We're going to have a version of storytelling that's for everyone, and the bar for it will be the ideas. It won't be who you know in the industry or how much money you have. It'll be how good the thing you want to say is and how good you are at saying it.
Well, Cris, this has been amazing. You're going to have to come back on Decoder soon.
Of course. Thank you for having me.
Questions or comments about this episode? Hit us up at decoder@theverge.com. We really do read every email!