The fixer’s dilemma: Chris Lehane and OpenAI’s impossible mission

The Challenge of Maintaining OpenAI’s Image
Chris Lehane possesses a remarkable talent for managing unfavorable publicity. He served as Al Gore’s press secretary during the Clinton administration and later as Airbnb’s chief crisis manager, steering that company through regulatory fights all the way to Brussels. Now he is tackling what may be his most demanding assignment yet: as OpenAI’s VP of global policy, he must convince the public that the company is genuinely committed to democratizing artificial intelligence, even as it increasingly behaves like any other conventional technology corporation.
A Difficult Conversation in Toronto
I recently spent 20 minutes on stage with Lehane at the Elevate conference in Toronto, aiming to get past prepared statements and at the contradictions eroding OpenAI’s carefully cultivated public persona. I only partly succeeded. Lehane is skilled and approachable: he presents himself as reasonable, acknowledges uncertainty, and even admits to doubts about whether this technology will ultimately benefit humanity.
Beyond Good Intentions
However, positive intentions hold limited weight when a company is actively issuing subpoenas to its critics, depleting the resources of economically vulnerable communities, and resurrecting deceased celebrities for commercial gain.
The Sora Dilemma and Copyright Concerns
The recent launch of Sora, OpenAI’s video-generation tool, is central to these issues. The tool debuted with content appearing to incorporate copyrighted material. This was a daring move, particularly given ongoing lawsuits filed by The New York Times, the Toronto Star, and numerous other publishers. From a marketing perspective, it proved remarkably effective. The exclusive-access application quickly climbed the App Store charts as users generated digital representations of themselves, OpenAI CEO Sam Altman, popular characters, and even late icons like Tupac Shakur.
Sora as a “General-Purpose Technology”
When questioned about the inclusion of these characters, Lehane characterized Sora as a “general-purpose technology,” akin to the printing press, empowering individuals lacking traditional creative skills or resources. He asserted that even he, identifying as creatively unskilled, could now produce videos.
A Shifting Approach to Copyright
He sidestepped the fact that OpenAI initially allowed rights holders to opt-out of having their work used for Sora’s training, a departure from standard copyright practices. Following positive user response to the use of copyrighted images, the company shifted to an opt-in model. This is not iterative development; it is an assessment of permissible boundaries. (Despite objections from the Motion Picture Association, OpenAI appears to have largely avoided legal repercussions.)
Fair Use and the Economics of AI
This situation naturally echoes the concerns of publishers who allege that OpenAI has utilized their content for training purposes without equitable compensation. When pressed on this issue, Lehane invoked the doctrine of fair use, a U.S. legal principle intended to balance creator rights with public access to information, framing it as a key factor in U.S. technological leadership.
A Replacement for Original Content?
Having recently interviewed Al Gore, Lehane’s former boss, I observed that people could simply ask ChatGPT about it instead of reading my article on TechCrunch. AI, I pointed out, may be “iterative,” but it is also potentially “a replacement.”
Acknowledging the Uncertainty
Lehane paused and abandoned his prepared remarks. “We’re all going to need to figure this out,” he conceded. It would be glib, he suggested, to simply promise new economic models, but he believed they would arrive at one. (The implication: the approach is evolving as they proceed.)
Infrastructure and Resource Consumption
Another critical, yet often avoided, issue concerns infrastructure. OpenAI is currently operating a data center campus in Abilene, Texas, and has begun construction on a large facility in Lordstown, Ohio, in collaboration with Oracle and SoftBank. Lehane has likened the adoption of AI to the introduction of electricity, noting that those who adopted it later are still catching up. However, OpenAI’s Stargate project appears to be targeting economically disadvantaged areas to establish facilities with substantial demands for water and electricity.
Benefits vs. Burden for Local Communities
When asked whether these communities would genuinely benefit or simply bear the costs, Lehane pivoted to energy requirements and geopolitics. He stated that OpenAI needs roughly a gigawatt of new generating capacity every week, while China added 450 gigawatts and 33 nuclear facilities last year. He argued that if democracies want democratic AI, they must compete. “The optimist in me believes this will modernize our energy systems,” he said, envisioning a revitalized America with upgraded power grids.
The Energy Cost of Video Generation
This was an inspiring vision, but it did not answer whether residents of Lordstown and Abilene will watch their utility bills rise while OpenAI generates videos of The Notorious B.I.G. Video generation is also among the most energy-intensive applications of AI.
The Human Cost and Ethical Considerations
The human impact was underscored the day before our interview when Zelda Williams posted on Instagram, pleading with strangers to stop sending her AI-generated videos of her late father, Robin Williams. “You’re not making art,” she wrote. “You’re making disgusting, over-processed hotdogs out of the lives of human beings.”
Processes and Partnerships
When asked how OpenAI reconciles this kind of harm with its mission, Lehane talked about processes: responsible design, testing frameworks, and government partnerships. “There is no playbook for this stuff, right?” he said.
A Recognition of Responsibility
Lehane demonstrated vulnerability, acknowledging the “enormous responsibilities” associated with OpenAI’s endeavors.
A Complicated Picture Emerges
Whether or not these moments were strategically crafted for the audience, I found them credible. I left Toronto believing I had witnessed a masterclass in political communication: Lehane navigating an impossible situation while deflecting questions about company decisions he may not even support. Then subsequent events complicated that assessment.
The Subpoena and Intimidation Tactics
Nathan Calvin, an attorney specializing in AI policy at the nonprofit Encode AI, revealed that while I was interviewing Lehane in Toronto, OpenAI dispatched a sheriff’s deputy to Calvin’s home in Washington, D.C., during dinner to serve him a subpoena. They sought his private messages with California legislators, students, and former OpenAI employees.
Targeting Critics and SB 53
Calvin asserts that the subpoena was part of OpenAI’s effort to intimidate critics of California’s SB 53, an AI safety bill whose passage he had backed over OpenAI’s objections. He claims the company leveraged its legal dispute with Elon Musk as a pretext to target critics, suggesting Encode was secretly funded by Musk, and he called OpenAI’s claim that it had worked to “improve the bill” laughable. On social media, he labeled Lehane the “master of the political dark arts.”
An Indictment of OpenAI’s Mission?
In Washington, this might be considered a compliment. However, for a company dedicated to “building AI that benefits all of humanity,” it appears to be a damning critique.
Internal Conflict and a Crisis of Conscience
More significantly, even OpenAI’s own employees are expressing reservations about the company’s trajectory.
Employee Concerns After Sora 2
As reported by my colleague Max, numerous current and former employees took to social media after the release of Sora 2, voicing their misgivings. Boaz Barak, an OpenAI researcher and Harvard professor, wrote that Sora 2 is “technically amazing but it’s premature to congratulate ourselves on avoiding the pitfalls of other social media apps and deepfakes.”
A Public Questioning of OpenAI’s Direction
On Friday, Josh Achiam – OpenAI’s head of mission alignment – tweeted something even more remarkable regarding Calvin’s accusations. He prefaced his comments by acknowledging the “possible risk to my whole career,” and then wrote of OpenAI: “We can’t be doing things that make us into a frightening power instead of a virtuous one. We have a duty to and a mission for all of humanity. The bar to pursue that duty is remarkably high.”
A Crystallizing Moment
This statement is particularly noteworthy. An OpenAI executive publicly questioning whether his company is becoming “a frightening power instead of a virtuous one” is far more significant than criticism from competitors or inquiries from reporters. This is an individual who chose to work at OpenAI, believes in its mission, and is now acknowledging a profound internal conflict despite the potential professional consequences.
The Real Question: Belief in the Mission
This is a pivotal moment, and its contradictions will likely intensify as OpenAI pushes toward artificial general intelligence. It leads me to believe the central question is not whether Chris Lehane can effectively sell OpenAI’s mission, but whether anyone, crucially including the company’s own employees, still genuinely believes in it.
