
OpenAI’s new social app is filled with terrifying Sam Altman deepfakes

October 1, 2025

OpenAI's Sora: A Deepfake Playground and Its Implications

A recent video showcasing OpenAI’s Sora, a novel TikTok-style social media application, depicts a seemingly endless farm populated by pink pigs. Each pig is provided with both a feeding trough and a smartphone displaying a continuous stream of short-form videos.

Remarkably, a strikingly realistic depiction of Sam Altman appears, directly addressing the viewer with the question, “Are my piggies enjoying their slop?” This encapsulates the experience of utilizing the Sora application during its limited, invite-only early access phase.

Altman's Presence and Copyright Concerns

Further exploration of Sora’s feed reveals repeated appearances of Altman. In one instance, he is shown standing amidst a field of Pokémon characters – Pikachu, Bulbasaur, and a somewhat incomplete Growlithe – playfully interacting within the grassy landscape.

Altman wryly comments to the camera, expressing a concern that Nintendo might initiate legal action. Subsequent videos present a series of fantastical, yet convincingly realistic scenarios, frequently featuring Altman himself in various roles.

He is depicted serving beverages to Pikachu and Eric Cartman at Starbucks, reprimanding a customer while working at a McDonald’s, and even attempting to steal Nvidia GPUs from a Target store, only to be apprehended by law enforcement.

Users within the Sora platform are particularly amused by OpenAI’s apparent disregard for copyright when generating videos featuring Altman. The platform intends to require copyright holders to actively opt out of having their content used, a reversal of the standard practice of obtaining explicit consent first, and the legal validity of that approach remains contested.

AI-Generated Responses and Safety Measures

An AI-generated Altman voice acknowledges potential violations of content guidelines concerning the likeness of third parties, mirroring the notifications that appear when attempting to generate videos of real celebrities or characters.

This acknowledgment is immediately followed by unrestrained laughter, as the application is filled with videos showcasing Pikachu performing ASMR, Naruto ordering Krabby Patties, and Mario engaging in cannabis use.

Sora’s capabilities are impressive, particularly in contrast to the less sophisticated Meta AI app and its social feed. OpenAI has refined its video generator to simulate the laws of physics more faithfully, producing markedly more realistic output.

However, this increased realism also facilitates the widespread dissemination of synthetically created content, potentially serving as a conduit for misinformation, harassment, and other malicious activities.

The "Cameo" Feature and User Control

Beyond its algorithmic feed and user profiles, Sora’s core functionality lies in its ability to generate deepfakes. Users can create a digital representation of themselves, termed a “cameo,” by submitting biometric data.

Upon joining the application, users are prompted to create an optional cameo through a simple process: reciting a sequence of numbers and turning their head from side to side.

Each user retains control over who can generate videos utilizing their cameo, with options ranging from “only me” to “everyone.” Altman has made his cameo publicly available, contributing to the proliferation of videos featuring characters like Pikachu and SpongeBob pleading with him to cease AI training on their likenesses.

This decision appears deliberate, potentially intended to demonstrate that Altman does not perceive his product as inherently dangerous. However, users are leveraging Altman’s cameo to raise ethical concerns regarding the application itself.

Personal Experimentation and Data Privacy

Driven by journalistic curiosity, I decided to test the cameo feature firsthand. Uploading biometric data to social applications is generally inadvisable, but I disregarded my better judgment.

My initial attempt to create a cameo was unsuccessful, flagged for violating app guidelines. After repeated attempts, I realized the issue: my attire, a tank top, had been deemed too revealing by the application’s standards. As guardrails against inappropriate content go, it is a reasonable one, even though I was fully clothed.

Switching to a T-shirt, I successfully created my cameo, despite my reservations.

Creating a Deepfake and AI's Predictive Capabilities

For my first deepfake, I requested a video depicting me expressing fervent affection for the New York Mets – a scenario entirely out of character.

The prompt was rejected, likely due to the specific franchise mentioned. I then requested a video of me discussing baseball in general.

The resulting deepfake showed me speaking in a voice dissimilar to my own, but within a bedroom remarkably similar to my own, stating, “I grew up in Philadelphia, so the Phillies are basically the soundtrack of my summers.”

I had not disclosed my affinity for the Phillies to Sora. However, the application evidently drew on my IP address or my ChatGPT history to place me in Philadelphia and infer my preferences accordingly.

Safety Concerns and the Inevitable Circumvention of Guardrails

One TikTok commenter, upon viewing the video and understanding how it was made, remarked, “Every day I wake up to new horrors beyond my comprehension.”

OpenAI is already grappling with safety issues, facing concerns about ChatGPT’s potential contribution to mental health crises and a lawsuit alleging that the chatbot gave a user instructions for self-harm before their death.

In its launch announcement for Sora, OpenAI emphasizes its commitment to safety, highlighting parental controls and user control over cameo usage, as if providing a readily accessible tool for creating highly realistic deepfakes were not inherently irresponsible.

The application periodically prompts users with the question, “How does using Sora impact your mood?” This, it seems, is OpenAI’s approach to ensuring safety.

Users are already finding ways to bypass the implemented guardrails, an inevitable outcome for any AI product. While the app prohibits generating videos of real individuals without their consent, it exhibits greater leniency when dealing with deceased historical figures.

The Looming Threat of Political Deepfakes

A video of Abraham Lincoln riding in a Waymo would be readily dismissed as implausible, since realizing it would require a time machine. A realistic depiction of John F. Kennedy stating, “Ask not what your country can do for you, but how much money your country owes you,” is far more unsettling.

While currently harmless, such clips are a precursor of what is to come.

Political deepfakes are not unprecedented, with even President Donald Trump sharing manipulated videos on his social media platforms. However, the widespread accessibility of tools like Sora will inevitably lead to disaster.
