During an impromptu generative AI film jam at Sandbox HQ in Paris, my friends Eugene Chung, @Takyon236, and I created a Taylor Swift music video in just one day using GenAI tools. It was an intense experience, but we learned quite a bit by confronting our usual AI workflows with the constraints of production. To see more examples, check out our AI Projects page.
Creating a Generative AI Film in Just One Day
A film jam or hackathon is always an intense experience, but the AI twist made it even more unpredictable and fascinating. It was a great opportunity to explore the possibilities and limitations of generative AI film and to determine whether AI can work in a production environment with tight deadlines. My take is that, yes, generative AI can work – our little Taylor Swift music video is proof that it is possible, and who knows what we could have come up with in a full week?
However, the tools are still far from production-ready. AI is a chaotic environment due to the technical complexity of it all, the crashes, and the incessant stream of new tools that push creators to constantly revolutionize their workflow. It’s a promising technology, but it definitely requires more refinement before it can be used for mainstream production – not least the ability to adopt and understand a tool before it is replaced each week by a new, better, improved version that will take you two days to properly learn.
Is GenAI ready? Looking Beyond the Demos and FOMOs
GenAI for film and animation is a powerful technology, no doubt, but don’t be fooled: the demos that we see on social media are the result of days and days of tinkering around. They are the only survivors of a process involving hundreds if not thousands of aborted pieces. The gorgeous result is more often than not presented as the natural output of the model, but anyone who jumps in quickly realizes that the settings do not make sense, that the model does not work, that the prompts yield strange results, that the output is more limited than expected, and so on and so forth. You go into a notebook with hopes of resuscitating Van Gogh, only to realize that it is only decent with an anime style.
Not only is the incessant stream of new tools overwhelming – it also takes a lot of time to set up and configure the software and hardware to work together smoothly. Trial and error is a big part of working with generative AI: we are talking days, even weeks, of playing around with obscure settings to create a few seconds of footage. It’s exciting, but quite unreliable to say the least. At the moment, generative AI is an interesting tool in itself for creating AI art, but integrating it into a pipeline will take a fair amount of work.
The Rocky Road to Generative AI for Film
The field of AI-generated content is interesting in that it confronts the technology with the constraints of production. The ability to produce AI-generated content at scale, on time, and within predefined budgets is a critical step in its development. While the technology is undoubtedly powerful, it is not yet ready for production use, where deadlines and predefined hardware and software are a given. The erratic results will give nightmares to even the most competent producer, unable to know whether a shot will take an hour, a day, or a week to complete. Especially in the animation industry, where you are only as good as your track record and where cost per minute reigns supreme, integrating GenAI might not be easy. And let’s not even mention the anti-AI sentiment.
The most important next step in the development of AI-generated content – and one that is rarely talked about – is defining a pipeline. A pipeline is a sequence of processes or stages that take input and produce output; defining a pipeline for AI-generated content means breaking the creation process down into manageable stages, each with its own inputs and outputs. A well-defined pipeline streamlines the creation process, making it efficient and less time-consuming. Pipelines are what enable the creative industry to scale up production, whereas most of what we see today is solo creators tinkering around with whatever tools happen to be best suited at the moment.
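To make the idea concrete, here is a minimal sketch of what "stages, each with its own inputs and outputs" could look like in code. The stage names (script, stills, animation, edit) are hypothetical placeholders for illustration, not real tools or anything we actually built at the jam:

```python
from typing import Callable

# A stage is just a function: it takes one artifact and produces the next.
Stage = Callable[[str], str]

def run_pipeline(stages: list[tuple[str, Stage]], source: str) -> str:
    """Run each named stage in order, feeding each output into the next stage."""
    artifact = source
    for name, stage in stages:
        artifact = stage(artifact)
        print(f"{name}: produced {artifact!r}")
    return artifact

# Hypothetical stages for an AI film pipeline (placeholders only).
pipeline: list[tuple[str, Stage]] = [
    ("script",    lambda brief: f"script({brief})"),
    ("stills",    lambda script: f"stills({script})"),
    ("animation", lambda stills: f"clips({stills})"),
    ("edit",      lambda clips: f"film({clips})"),
]

final = run_pipeline(pipeline, "one-line pitch")
```

The point of the structure is that each stage has a defined contract, so one flaky tool can be swapped out without rebuilding the whole workflow – exactly the property today's ad-hoc solo setups lack.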
GenAI Films are Already Here
Despite these challenges, the potential benefits of generative AI in film and animation are too great to ignore. The technology can greatly reduce the time and effort required to create content, and it has the ability to generate new and creative ideas. This is particularly important in an industry where creativity is highly valued, and where deadlines and budgets can be tight. The speed at which the tech moves is overwhelming, and things that might seem impossible today might very well be fully automated and pretty much plug and play in three months.
In fact, some areas of film production and animation seem quite ready to accommodate generative AI. After using these tools extensively, I cannot imagine doing any pre-production without them, and I believe I will use AI in every future pitch I make. In just a few days, anyone is able to present a rough edit with still shots, and now with text-to-video even a glimpse of what the final result will look like. Previz will probably disappear in favour of “pitchviz”, entirely stylized with the art direction of the project. Provided, of course, that the mammoth in the room is addressed and copyright issues are somehow settled without giving the visual artists community a giant technological middle finger.
While generative AI is not yet production-ready, the potential benefits of the technology for film and animation are too great to ignore. The industry needs to continue to explore ways to integrate the technology into their workflows, while also addressing the challenges and limitations of the technology. Defining a pipeline that can integrate generative AI into the production workflow and investing in the necessary hardware and software will be key to unlocking the full potential of this technology. The journey may be challenging, but the rewards are worth the headache.