OpenAI Just Made an App for Sharing Hyper-Realistic AI Slop

Last year, I wrote that we should all be scared of Sora, OpenAI’s AI video generator. Sora’s initial rollout promised hyper-realistic videos that, while exciting to some, terrified me. While AI fans see a future of AI-generated movies and shows, I see a future where no one can tell what’s real or fake. To me, the only destination for this technology is mass disinformation.

In the year and a half since, these AI-generated videos haven’t only become more realistic; they’ve also become more accessible, as companies like Google make their tools readily available to anyone willing to pay. That’s the situation we find ourselves in with OpenAI’s latest announcements: Sora 2, a new AI model for generating video with audio, as well as a new Sora app for creating and sharing your AI-generated creations.

Sora 2

OpenAI is marketing Sora 2 as a massive upgrade over Sora—comparing the two to GPT-3.5 and GPT-1, respectively. The company says the new model can generate complex videos that earlier models could not, including an Olympic gymnastics routine, a man performing a backflip on a paddleboard that “accurately” models water physics, and a skater performing a triple axel with a cat on their shoulder.

One common flaw with AI video models is their lack of understanding of real-world physics. The visuals might look realistic, but elements can morph into one another at random, or disappear and reappear without rhyme or reason. OpenAI says Sora 2 doesn’t make these mistakes as often: A basketball that misses the hoop won’t magically find its way in; it will, instead, bounce off the backboard as you’d expect it to. The company warns the model is still imperfect, just improved. Building on this, the model is better at continuity across different shots: Taking OpenAI at its word, your videos should stay consistent between takes, and you should be able to dictate different styles, including “realistic,” “cinematic,” and “anime.”

Perhaps the biggest leap with Sora 2 is the ability to insert real-world elements into the model, a feature OpenAI calls “Cameo.” You can upload a real person’s likeness to Sora 2 and ask the AI to place them in any video you want. OpenAI shows a number of examples of its staff adding themselves to various videos, and while the quality is inconsistent, it’s a gargantuan leap from the days of JibJab.

Like Google’s Veo 3 model, Sora 2 can generate video with realistic audio. The announcement video shows this off: An elephant roars; a skater swooshes on the ice; water splashes on the ground. But, more impressively (and concerningly), people speak. An AI-generated Sam Altman explains the new model and app in this video, and while it’s rather obvious to those of us in the know that this is AI, I can imagine many people would have no idea this isn’t the real Altman in the clip.

Sora app

OpenAI says the Sora app came about as a “natural evolution of communication.” The company sees this as a way for people to create and remix other users’ AI generations, especially with the ability to upload your own face and likeness to the model.

At the moment, the app is invite-only, though you can download it for free from the App Store today. You can get a sense for the experience, however, from both the demo video OpenAI dropped on Tuesday, as well as posts from the people who already have access.

The first example OpenAI demos is a dual Cameo of OpenAI research scientist Bill Peebles and Sam Altman. The video contains an establishing shot of the two men having a conversation, which cuts to a close-up of Peebles speaking rapidly about the app’s revenue, then to a close-up of Altman taking in the rant, before closing on the original establishing shot. On the surface, it’s the type of video you might expect to scroll past on a TikTok or Reels binge—but this video is entirely AI-generated.

The OpenAI staff show off a series of other pre-generated examples, including a Cameo that turns into a cartoon, another that switches the effect to anime, and another that generates a “news” report about one staff member’s addiction to ketchup. (That last one is quite gross, I might add.) They also demonstrate remixing videos you find in the feed, as you’re able to prompt Sora to adjust the video however you want. One video shows Peebles in an “ad” for a Sora 2 cologne, but others have remixed it to advertise toothpaste instead, or to play entirely in Korean.

These videos are quite realistic: In one, you think you’re simply watching a clip of a tennis match, but it turns out to be a Cameo featuring OpenAI’s Rohan Sahai. After “Sahai” wins the match, the video cuts to his “interview,” in which he thanks the haters. Others are more obviously AI—though, again, not so obviously that most people scrolling by would notice.

Safety and security, according to OpenAI

Cameos sound like a privacy and security nightmare, though OpenAI has some protections in place. You can’t simply use anyone’s face in any video; you’re only able to upload your own face to the platform. Setting up the Cameo feature on the app is straightforward, if extremely off-putting. The app scans your face, sort of like setting up Face ID on an iPhone, then sends the data to OpenAI’s “systems” for “tons of validation” to block impersonators, or users who might want to create Cameos of you without your consent. Once approved, you choose who can create Cameos of you: all users, friends, users you specifically approve, or just you.

As for videos themselves, the Sora app applies a visible watermark to any clip exported out of the app. If you’ve seen any of these videos on the internet already, you’ll notice a small “Sora” stamp on each, similar to the watermark you see on TikTok clips exported to other platforms. There are also reasoning models under the hood to block users from generating “harmful” content, especially with respect to Cameos.

If you’re a teen using the Sora app, you won’t be able to scroll forever: After a while, a cooldown period kicks in to keep you from spending hours on these AI videos. Adult accounts won’t have this restriction, but the app will still “nudge” you to take a break.

Who asked for this?

With all due respect to OpenAI and its safety team, this app sounds like it’s going to be a disaster, for so many reasons.

For one, OpenAI has made it as easy to generate hyper-realistic short-form videos as it is to ask Siri about the weather. I appreciate that these videos all come with watermarks, but it won’t take much skill to edit those out—at least in a way that most people won’t notice. As soon as this is widely available, all of our social media feeds will be plagued with this content. And since both the video and the audio are quite realistic, a lot of people are going to be fooled by a lot of content.

It’s bad enough when that involves silly videos, like bunnies jumping on a trampoline. But what happens when it’s “politicians” saying something egregious, or a “celebrity” stealing something from a store? One viral Sora video shows Sam Altman trying to run off with a GPU at Target, before being stopped by a security guard. How many more Sora videos will show Sam Altman, or anyone else who lets their Cameo be remixed, committing crimes or simply doing something embarrassing? Those with enough power or fame may be able to debunk the videos, but by then, it’ll be too late: Most people who saw them will take them as fact.

To that point, it’s great that there are security measures in place to stop people from remixing other users’ Cameos without permission, but the potential for abuse here is enormous: What happens if someone figures out how to “scan” another person’s face from a video, or crack the settings that keep others from using their original face scan? Anyone who bypasses OpenAI’s security measures can then remix that person’s face into any video the platform will approve. At that point, the cat’s out of the bag.

Look, I’m chronically online. I’m not going to pretend I don’t enjoy a good AI-generated meme when it comes across my feed. But I’m not about to spend my free time scrolling through nothing but AI-generated brain rot. I’m sure people will find creative ways to make funny videos with Sora, or have a good time making Cameos with their friends, but that’s just it: Beyond the sheer novelty of the tech, there’s nothing good to come from this.

It’s time to stop believing in anything you see online: Someone might’ve just cooked it up in an app.
