OpenAI’s Sora and other AI video makers look amazing in their demos – why won’t they let us try them?


I was intrigued and impressed when OpenAI first demonstrated Sora earlier this year. There seemed to be no limit to the films the AI video model could produce from a text prompt. Sora could effortlessly transform descriptions into immersive, realistic videos, and OpenAI coyly hinted at a general rollout in the near future. Months later, only professional filmmakers partnered with OpenAI have any real access (a recent, brief leak doesn’t count). The same goes for other much-hyped AI video generators, including Meta’s Movie Gen and Google’s Veo.
Many great AI video generators have emerged since Sora blew people away, but it’s hard not to feel like kids with our noses pressed up against the toy store glass, wondering why we can’t play with the toys even a little bit. Here’s why I think Sora and the other much-hyped AI video models are still locked away.
Movie trailers always lead to disappointment
Maybe I’m just a skeptic, but I find it odd how OpenAI, Meta, and Google all seemingly couldn’t wait to show off demos of their respective AI video generators without even a vague sense of a rollout date. It makes me think of movie teaser trailers that come out a year before a film and promise far more than the final cut can deliver. I wonder if Sora, Movie Gen, and Veo might have more than a little cooking left to do before we get our hands on them.
The meticulously curated demos might not only be the best examples of the AI models but also the only ones worth showing to the public. Sora’s standard output might be more fever dream than celestial vision. Perhaps asking for a “serene sunset over a lake” only occasionally nets a tranquil evening on the water. If nine out of ten Sora clips depict a lake melting into a neon green abyss under a sun flickering like a haunted strobe light, I wouldn’t blame OpenAI for holding Sora back for now.
Ethics (or legal exposure)
The companies behind AI tools for making images and videos usually make a point of highlighting their ethical training and output controls where they can. Sora is no exception, but the ethical lines get a lot blurrier for video than for still images, especially since a video is essentially an enormous number of images strung together.
Scraping data to make deepfakes of real people without their knowledge, or producing films with trademarked characters and logos without permission, opens a vast legal and ethical minefield. Working with professional filmmakers and commercial video directors sidesteps those problems because the tech company can closely watch the AI’s output and prevent casual infringement.
Where’s the profit?
As much as OpenAI, Adobe, Google, and Meta might like showing off their technology, the people controlling the purse strings want to know where the return on this investment comes from and when. The goal is a polished, marketable AI video generator, not a cool toy. A free-for-all AI video playground for experimenting and making mistakes is a step on the path, not the destination.
While we don’t know the exact cost, it’s possible these high-end AI video makers are at least as expensive to run as Runway or Dream Machine. The processing power required is certainly staggering compared to AI text generation, and scaling it up without restrictions might cause a server meltdown. Letting bored students make short clips of a dog playing the violin in a submarine may not seem worth the expense of running Sora around the clock for millions of users. Limiting access to approved professionals gives the companies more control.
OpenAI is almost certainly working on strategies for making money from hobbyists, smaller marketing firms, and film studios willing to pay for ongoing access to advanced AI video generators like Sora. But until they are as easy to subscribe to as premium versions of ChatGPT, Gemini, and other AI chatbots, only the deepest of filmmaking pockets will likely get access to Sora and its sister models. For now, the rest of us are just spectators.