Tesla Dojo: The Story of Elon Musk's AI Supercomputer

The Demise of Tesla's Dojo Supercomputer
For years, Elon Musk talked up the potential of Dojo, the AI supercomputer envisioned as central to Tesla’s artificial intelligence strategy. He underscored its importance in July 2024, saying the AI team would intensify its work on Dojo in the run-up to the unveiling of Tesla’s robotaxi, which took place that October.
A Shift in Direction
After six years of considerable anticipation, Tesla decided last month, in August 2025, to discontinue the Dojo project and dissolve the team behind the supercomputer. The reversal came just weeks after projections suggested that Dojo 2, Tesla’s second supercluster built on the company’s internally developed D2 chips, would reach operational scale by 2026. Musk subsequently characterized the project as “an evolutionary dead end.”
Initially, this article aimed to detail Dojo’s functionality and its potential to facilitate Tesla’s advancements in full self-driving capabilities, autonomous humanoid robotics, semiconductor autonomy, and related areas. Now, it serves as a retrospective examination of a project that led numerous analysts and investors to perceive Tesla not merely as an automotive manufacturer, but as a significant AI enterprise.
What Was Dojo?
Dojo was Tesla’s purpose-built supercomputer, created to train the neural networks powering its “Full Self-Driving” system.
Enhancing Dojo was intrinsically linked to Tesla’s objectives of achieving full self-driving functionality and launching a robotaxi service. FSD (Supervised) is Tesla’s sophisticated driver assistance system, currently deployed in hundreds of thousands of Tesla vehicles, capable of performing certain automated driving tasks, though it still necessitates attentive human oversight. It also forms the technological foundation for the limited robotaxi service Tesla initiated in Austin this June, utilizing Model Y SUVs.
The Rise of Cortex
Even as Dojo’s intended purpose began to materialize, Tesla never credited the supercomputer for its self-driving achievements, however debated those may be. In fact, Musk and Tesla had largely stopped discussing Dojo over the past year. In August 2024, Tesla began promoting Cortex, described as the company’s “giant new AI training supercluster being built at Tesla HQ in Austin to solve real-world AI,” with Musk asserting it would provide “massive storage for video training of FSD and Optimus.”
Tesla’s Q4 2024 shareholder update included information regarding Cortex, but contained no mention of Dojo. The impact of Tesla’s Dojo shutdown on Cortex remains unclear.
Reactions to the Shutdown
The response to Dojo’s termination has been varied. Some view it as another instance of Musk overpromising and underdelivering, particularly amidst declining electric vehicle sales and a less-than-stellar robotaxi launch. Others suggest the shutdown wasn’t a failure, but a strategic shift away from a high-risk, independent hardware approach towards a more efficient path relying on external partnerships for chip development.
Lessons Learned
The story of Dojo illuminates the stakes involved, the areas where the project encountered difficulties, and the implications of its closure for Tesla’s future trajectory.
- The project highlighted Tesla’s ambitions beyond automotive manufacturing.
- Its cancellation signals a potential recalibration of Tesla’s AI strategy.
- The shift towards partnerships may prove crucial for future innovation.
The Discontinuation of Tesla’s Dojo Project
In mid-August 2025, Tesla decided to dissolve its Dojo team and wind down the project. The restructuring coincided with the departure of Peter Bannon, the project’s lead, and approximately 20 other employees.
These former Tesla workers have since established a new venture, DensityAI, focused on the development of AI chips and associated infrastructure.
Industry observers note that the loss of crucial personnel can significantly impede the progress of any project, particularly those involving highly specialized, internally developed technologies.
The project’s termination followed shortly after Tesla finalized a $16.5 billion agreement with Samsung to manufacture its next-generation AI6 chips.
Tesla is positioning the AI6 chip as a versatile solution, capable of supporting applications ranging from Full Self-Driving (FSD) capabilities and the Optimus humanoid robot to large-scale AI training within data centers.
Elon Musk’s Explanation
Elon Musk explained the decision via a post on X (formerly Twitter), the social media platform he owns. He stated that with the convergence of development towards the AI6 chip, continuing Dojo was no longer viable.
Musk indicated that Dojo 2 had become an obsolete path forward, and that the core concepts of Dojo 3 are now integrated into the design of the AI6 systems-on-a-chip.
He further clarified that the functionality previously envisioned for Dojo 3 is now embodied within a substantial number of AI6 chips integrated onto a single board.
- Key Takeaway: The shift to Samsung’s AI6 chip led to the closure of the Dojo project.
- Personnel Impact: The departure of key personnel, including Peter Bannon, contributed to the decision.
- Future Focus: Tesla will now concentrate on leveraging the capabilities of the AI6 chip across its various AI initiatives.
Tesla’s Dojo Origins
Elon Musk has consistently maintained that Tesla’s identity extends beyond that of a car manufacturer, or even a provider of solar and energy solutions. He positions Tesla as a fundamentally AI company, asserting it has successfully developed self-driving capabilities by replicating human perceptual processes.
The majority of companies engaged in the development of autonomous vehicle technology depend on a combination of sensors – including lidar, radar, and cameras – alongside detailed, high-definition maps for vehicle localization. Tesla, conversely, aims to achieve complete autonomy through the utilization of cameras alone.
This approach involves capturing visual data and employing sophisticated neural networks to analyze it, enabling rapid decision-making regarding vehicle operation. The anticipated outcome is the deployment of Dojo-trained AI software to Tesla owners through over-the-air updates.
The extensive scale of Tesla’s Full Self-Driving (FSD) program allows for the collection of substantial amounts of video data, which is then utilized to refine the FSD system. The core concept is that increased data acquisition will bring Tesla closer to realizing true full self-driving functionality.
Challenges to the Data-Driven Approach
However, certain industry analysts suggest that a purely data-intensive strategy may encounter limitations. Simply increasing the volume of data does not guarantee improved intelligence.
“There’s an inherent economic limitation, and the cost will eventually become prohibitive,” explained Anand Raghunathan, a professor at Purdue University, in an interview with TechCrunch. He further elaborated, “Some argue that we may exhaust the supply of truly informative data for model training.”
Raghunathan clarified that while more data doesn’t automatically equate to more useful information, the current trend towards larger datasets is likely to persist in the near future. Consequently, greater computational resources are required for both the storage and processing of this data, essential for training Tesla’s AI models.
This demand for increased processing power was the primary impetus behind the development of Dojo, Tesla’s dedicated supercomputer.
Understanding Supercomputers
Dojo represented Tesla’s dedicated supercomputer infrastructure, conceived as a development environment for artificial intelligence, with a primary focus on Full Self-Driving (FSD) capabilities. The system’s name comes from the Japanese term for a martial arts training hall.
The architecture of a supercomputer relies on a massive network of interconnected computers known as nodes. Each node is equipped with both a CPU and a GPU. The CPU manages the node’s general operations, while the GPU undertakes computationally intensive tasks, such as dividing problems into smaller components and processing them concurrently.
GPUs are fundamentally important for machine learning processes, including the simulations used to train FSD. Their capabilities also underpin large language models, contributing significantly to Nvidia’s current market dominance in the era of generative AI.
Notably, Tesla itself procures Nvidia GPUs for the purpose of AI training, a point that will be elaborated upon further.
Key Components of a Supercomputer
- Nodes: Individual computing units that form the supercomputer.
- CPU: Manages the overall operation of each node.
- GPU: Handles complex, parallel processing tasks.
The parallel processing power of GPUs is what allows for the rapid iteration and improvement of AI models. This is particularly crucial for tasks like simulating real-world driving scenarios for FSD development.
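To make the CPU/GPU division of labor concrete, here is a minimal sketch in Python using PyTorch. It is illustrative only, not Tesla’s software stack: the host (CPU) prepares a batch of matrices, and the same batched matrix multiplication, the operation that dominates neural-network training, is offloaded to a GPU when one is available.
```python
import time

import torch


def batched_matmul_demo(batch: int = 64, dim: int = 1024) -> None:
    """Run the same batched matrix multiply on the CPU and, if present, a GPU.

    Matrix multiplication is the core workload of neural-network training,
    and it parallelizes across the thousands of cores in a GPU.
    """
    a = torch.randn(batch, dim, dim)
    b = torch.randn(batch, dim, dim)

    # CPU: the host orchestrates the work and can compute it too, just slowly.
    start = time.perf_counter()
    torch.matmul(a, b)
    print(f"CPU time: {time.perf_counter() - start:.3f} s")

    # GPU: move the tensors to the accelerator and let it run the
    # multiplications in parallel.
    if torch.cuda.is_available():
        a_gpu, b_gpu = a.to("cuda"), b.to("cuda")
        torch.cuda.synchronize()  # exclude transfer/startup from the timing
        start = time.perf_counter()
        torch.matmul(a_gpu, b_gpu)
        torch.cuda.synchronize()  # wait for the GPU kernel to finish
        print(f"GPU time: {time.perf_counter() - start:.3f} s")
    else:
        print("No GPU detected; skipping the accelerated run.")


if __name__ == "__main__":
    batched_matmul_demo()
```
On typical data-center hardware the GPU run finishes one to two orders of magnitude faster; a training supercomputer compounds that advantage across thousands of nodes working in parallel.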
The Necessity of a Supercomputer for Tesla
The primary driver behind Tesla’s requirement for a supercomputer was its exclusive reliance on a vision-only system for autonomous driving. The neural networks powering the Full Self-Driving (FSD) capability necessitate extensive training using massive datasets of driving scenarios to accurately identify objects and make informed driving choices.
Essentially, Tesla aims to replicate the functionality of the human visual cortex and brain within a digital framework.
Achieving this goal requires the storage and processing of all video data gathered from its global fleet of vehicles. Furthermore, millions of simulations must be executed to effectively train the AI model using this data.
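Mechanically, that training follows the standard supervised-learning loop. Below is a deliberately toy sketch in PyTorch, with random tensors standing in for camera frames and driving labels and a tiny placeholder network that bears no resemblance to Tesla’s actual models; it simply shows what “training on fleet video” means in code.
```python
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

# Placeholder data: random "camera frames" (3x64x64) with made-up steering labels.
# In a real pipeline these would come from fleet video plus human or automatic labels.
frames = torch.randn(512, 3, 64, 64)
steering = torch.randn(512, 1)
loader = DataLoader(TensorDataset(frames, steering), batch_size=32, shuffle=True)

# A toy vision network standing in for a far larger perception-and-planning model.
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(16, 1),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Standard supervised loop: predict, measure the error, update the weights.
for epoch in range(3):
    for x, y in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        optimizer.step()
    print(f"epoch {epoch}: loss {loss.item():.4f}")
```
The reason this requires a supercomputer is scale: the same loop has to run over billions of real frames and models that are orders of magnitude larger, which is where tens of thousands of accelerators come in.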
Initially, Tesla depended heavily on Nvidia for the computational power of its Dojo training computer. However, the company sought to diversify its resources, particularly considering the high cost associated with Nvidia chips.
Tesla aspired to develop a superior system, one that would enhance bandwidth and minimize latency. Consequently, the automaker’s AI department initiated a custom hardware program designed to improve the efficiency of AI model training compared to conventional systems.
Central to this program were Tesla’s internally developed D1 chips, which the company asserts are specifically optimized for AI-related tasks.
D1 Chips and Their Role
D1 chips represent a significant step towards Tesla’s goal of self-sufficiency in AI hardware. These chips were engineered to overcome the limitations of existing solutions.
The development of D1 was motivated by the need for increased processing speed and reduced delays in AI computations. This allows for faster iteration and improvement of the FSD system.
Benefits of In-House Hardware
- Cost Reduction: Developing its own chips allows Tesla to potentially lower the overall cost of AI training.
- Performance Optimization: Custom hardware can be tailored specifically to the demands of Tesla’s AI models.
- Supply Chain Control: Reducing reliance on external suppliers mitigates risks associated with component availability.
Tesla’s investment in a supercomputer and custom hardware like the D1 chip underscores its commitment to advancing autonomous driving technology. The ability to efficiently process vast amounts of data is crucial for the continued development and refinement of its FSD system.
Exploring Tesla's Custom Chip Development
Similar to Apple’s integrated approach, Tesla believes optimal performance is achieved when hardware and software are engineered in unison. Consequently, Tesla initiated a shift away from conventional GPU hardware. The company’s goal was to create proprietary chips specifically for the Dojo project, allowing greater control and optimization of its AI capabilities.
Tesla first presented the D1 chip, a palm-sized silicon component, at its AI Day event in 2021. Production of the D1 chip commenced around July 2023.
The manufacturing of these chips was entrusted to the Taiwan Semiconductor Manufacturing Company (TSMC), utilizing a 7 nanometer process. According to Tesla, the D1 chip contains 50 billion transistors.
Its die measures 645 square millimeters, indicating a substantial and complex design. These specifications suggest the D1 was intended to deliver significant processing power and efficiency.
However, it's important to note that the D1 chip did not initially surpass the capabilities of Nvidia’s A100 chip in terms of raw performance.
Tesla was concurrently developing a subsequent generation chip, the D2, focused on resolving limitations in data transfer speeds. A key innovation of the D2 was its design.
Instead of interconnecting separate chips, the D2 aimed to integrate the entire Dojo tile onto a single silicon wafer. This approach promised to streamline data flow and enhance overall system performance.
The precise number of D1 chips ordered and received by Tesla remains undisclosed. Furthermore, the company did not publicly announce a definitive schedule for the deployment of Dojo supercomputers utilizing the D1 chips.
What did Dojo mean for Tesla?
Tesla initially envisioned Dojo as a means to gain control over its own AI chip development. This strategy aimed to enable the rapid and cost-effective expansion of compute power for AI training programs. A key benefit was reducing dependence on external suppliers.
The company sought to avoid future reliance on Nvidia chips, which were becoming increasingly costly and difficult to procure.
Currently, Tesla is instead prioritizing partnerships: Nvidia and AMD for training compute, and Samsung for the production of its next-generation AI6 chip.
During the Q2 2024 earnings call, Elon Musk highlighted the substantial demand for Nvidia hardware, expressing concern that securing GPUs consistently when needed was proving challenging. He emphasized the necessity of increased investment in Dojo to guarantee sufficient training capability.
The Dojo project was inherently a high-risk undertaking, and Musk acknowledged the possibility of its failure on multiple occasions. Even so, Tesla considered the potential for a novel business model centered around its AI division, and on that same Q2 2024 call Musk suggested a pathway to compete with Nvidia in the AI market through Dojo.
The initial D1 chip was specifically designed for Tesla’s computer vision labeling and training needs, which made it particularly useful for advancing projects like Full Self-Driving (FSD) and the Optimus robot. However, its utility beyond these specific applications was limited, and future iterations would have required adaptation for broader, general-purpose AI training, as Musk indicated.
A significant hurdle Tesla potentially faced was the prevalence of AI software optimized for GPUs; training general-purpose AI models on Dojo chips would have necessitated substantial software rewriting. Alternatively, Tesla could have offered its compute resources as a service, similar to cloud computing providers like AWS and Azure. That concept resonated with analysts, who saw significant potential for new revenue streams.
A Morgan Stanley report from September 2023 estimated that Dojo could add $500 billion to Tesla’s market capitalization, a valuation based on the prospect of revenue from robotaxis and software services. In essence, Dojo chips served as an insurance policy for the automaker, with the possibility of substantial financial returns.
The Status of Tesla's Dojo Project
Elon Musk frequently shared updates regarding Dojo’s development; however, a significant number of the initially stated objectives were ultimately not achieved. For example, in June 2023, Musk indicated that Dojo had been operational and executing useful tasks for several months. At the same time, Tesla projected that Dojo would rank among the top five most powerful supercomputers by February 2024.
The company initially planned to attain a total compute capacity of 100 exaflops by October 2024, a goal that would have necessitated approximately 276,000 D1s, equivalent to around 320,500 Nvidia A100 GPUs.
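The arithmetic behind those chip counts is straightforward if one assumes the commonly cited peak figures of roughly 362 BF16 teraflops per D1 and 312 BF16 teraflops per Nvidia A100 (vendor peak numbers, not sustained training throughput); the short calculation below reproduces the estimate.
```python
# Back-of-the-envelope math behind the 100-exaflop target (illustrative only).
# Assumed peak throughput: ~362 TFLOPS (BF16) per D1, ~312 TFLOPS (BF16) per A100.
TARGET_EXAFLOPS = 100
TARGET_TFLOPS = TARGET_EXAFLOPS * 1_000_000  # 1 exaflop = 1,000,000 teraflops

D1_TFLOPS = 362
A100_TFLOPS = 312

print(f"D1 chips needed:  {TARGET_TFLOPS / D1_TFLOPS:,.0f}")    # ~276,000
print(f"A100 GPUs needed: {TARGET_TFLOPS / A100_TFLOPS:,.0f}")  # ~320,500
```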
Tesla did not subsequently release any updates or data to suggest that these targets were ever met.
Numerous commitments were made by Tesla and Musk concerning Dojo, encompassing both technical and financial aspects. In January 2024, Tesla pledged a $500 million investment to construct a Dojo supercomputer at its Buffalo, New York gigafactory.
According to a 2024 report, $314 million of that allocated funding had already been spent.
Following Tesla’s second-quarter 2024 earnings call, Musk shared images of Dojo 1 on X, stating it would possess approximately 8,000 H100-equivalent units of training capacity by year-end.
He characterized this as “not massive, but not trivial either.”
Despite this level of activity – particularly Musk’s communications on X and during earnings calls – discussion surrounding Dojo ceased abruptly in August 2024, with the focus shifting to Cortex.
During the fourth-quarter 2024 earnings call, Tesla announced the completion of Cortex deployment, describing it as “a ~50k H100 training cluster at Gigafactory Texas.” The company stated that Cortex played a crucial role in enabling Version 13 of its supervised Full Self-Driving (FSD) capability.
In the second quarter of 2025, Tesla reported an expansion of its AI training compute with the addition of 16,000 H200 GPUs at Gigafactory Texas, increasing Cortex’s total capacity to 67,000 H100 equivalents.
During that same earnings call, Musk anticipated the operational status of a second Dojo cluster “at scale” by 2026. He also alluded to potential areas of overlap and streamlining.
“Thinking about Dojo 3 and the AI6 inference chip, it seems like intuitively, we want to try to find convergence there, where it’s basically the same chip,” Musk explained.
However, just weeks later, this direction was reversed, and the Dojo team was disbanded.
In late August 2025, TechCrunch confirmed that Tesla still intends to invest $500 million in a supercomputer in Buffalo, though it will no longer be named Dojo.
This article was originally published on August 3, 2024, and has been updated through September 2, 2025, to reflect Tesla’s decision to discontinue the Dojo project.