Inside OpenAI: A Former Engineer's Experience

Former OpenAI Engineer Details Internal Dynamics
About three weeks ago, Calvin French-Owen, an engineer who worked on one of OpenAI's key emerging products, resigned from the company.
He has now published a detailed blog post recounting his year there, including the intense sprint to build and ship Codex, OpenAI's coding agent and a competitor to tools such as Cursor and Anthropic's Claude Code.
French-Owen said his departure wasn't prompted by any internal conflict or "drama," but by a desire to return to being a startup founder. He previously co-founded the customer data company Segment, which Twilio acquired in 2020 for $3.2 billion.
Some of what he described about working at OpenAI will come as no surprise, while other observations push back on prevailing perceptions of the company. (He did not immediately respond to a request for further comment.)
Rapid Expansion and Internal Challenges
Significant Growth: During his year at the company, OpenAI grew from 1,000 to 3,000 employees, he reported.
There are reasons for such hypergrowth: ChatGPT is arguably the fastest-growing consumer product in history, and OpenAI's rivals are expanding at a breakneck pace too. In March, OpenAI announced that ChatGPT had surpassed 500 million weekly active users, a number that has continued to climb.
Operational Complexity: “Scaling at this rate inevitably leads to breakdowns in communication, reporting structures, product delivery, personnel management, and hiring procedures,” French-Owen explained.
Like an early-stage startup, employees still have considerable latitude to pursue their ideas with little red tape. But that also means duplicated work across teams. "I encountered at least six separate libraries developed for functions like queue management or agent loops," he noted as an illustration.
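For readers unfamiliar with the jargon, an "agent loop" is the cycle in which a model proposes an action, the system executes it, and the result is fed back to the model until the task is finished. The sketch below is a generic, minimal illustration of that pattern, not OpenAI's code; `call_model` and the `echo` tool are hypothetical stand-ins for a real LLM API and real tools.

```python
# Minimal, illustrative agent loop. The model either requests a tool call
# or returns a final answer; the runner executes tools and loops until done.

def call_model(messages):
    """Hypothetical stand-in for a real LLM API call."""
    # This stub answers immediately; a real model might instead return
    # {"tool": "echo", "args": "hello"} to request an action.
    return {"final": "done"}

TOOLS = {
    "echo": lambda args: args,  # trivial example tool
}

def agent_loop(task, max_steps=10):
    messages = [{"role": "user", "content": task}]
    for _ in range(max_steps):        # cap iterations so the loop can't run forever
        reply = call_model(messages)
        if "final" in reply:          # model is done; hand back its answer
            return reply["final"]
        result = TOOLS[reply["tool"]](reply["args"])  # execute the requested tool
        messages.append({"role": "tool", "content": str(result)})
    raise RuntimeError("agent exceeded its step budget")

print(agent_loop("say hi"))  # prints "done"
```

Utilities like this are small enough that, without coordination, several teams can plausibly end up writing their own, which is the duplication French-Owen describes.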
Varied Skillsets and Technical Infrastructure
Coding ability also varies widely, from seasoned Google engineers who can build systems serving billions of users to fresh PhD graduates. That range, combined with the flexibility of Python, leaves the central code repository, referred to internally as "the back-end monolith," in a state he characterized as "somewhat disorganized."
Things break frequently, and some processes are painfully slow. But senior engineering leadership is aware of the issues and actively working on improvements, he said.
A Culture of Speed and Transparency
Entrepreneurial Spirit: Despite its growth, OpenAI still operates with a startup mindset, running almost entirely on Slack. He likened the atmosphere to the early, move-fast days of Meta (formerly Facebook), and noted that a significant number of OpenAI hires come from Meta.
French-Owen described how his team, comprising roughly eight engineers, four researchers, two designers, two go-to-market specialists, and a product manager, built and launched Codex in just seven weeks on minimal sleep.
The launch itself was remarkably effective: OpenAI simply turned the product on, and users showed up immediately. "I have never witnessed a product achieve such rapid uptake simply by appearing in a sidebar, but that demonstrates the influence of ChatGPT," he observed.
Secrecy and External Awareness
Information Control: Given the intense scrutiny surrounding ChatGPT, the company maintains a culture of secrecy to prevent information leaks. Simultaneously, OpenAI actively monitors activity on platforms like X (formerly Twitter), responding to viral posts. “A colleague quipped that ‘this company operates on Twitter sentiment,’” he shared.
Addressing Misconceptions
Common Misunderstanding: French-Owen suggested that the biggest misconception about OpenAI is that it is less concerned with safety than it should be, a perception fueled by criticism of its safety protocols from AI safety advocates, including former OpenAI employees.
While concerns about existential risks to humanity are present, the primary internal focus is on practical safety issues such as “hate speech, abuse, manipulation of political biases, creation of bioweapons, self-harm, and prompt injection,” he explained.
OpenAI is not disregarding long-term implications either, he added. Dedicated researchers study those potential impacts, and the company is keenly aware that hundreds of millions of people are using its LLMs for everything from medical guidance to therapy-like support.
Government agencies are watching, competitors are watching, and OpenAI watches them back. "The stakes are exceptionally high," he wrote.