Nvidia's Rise to $4 Trillion: The Key Research Lab

Nvidia's Research Lab: From Humble Beginnings to AI Powerhouse
When Bill Dally joined Nvidia’s research division in 2009, the team numbered roughly a dozen people, focused mainly on ray tracing, a rendering technique used in computer graphics to simulate how light behaves in a scene.
That small group has since grown to more than 400 people. Its researchers have been instrumental in Nvidia’s evolution from a 1990s graphics processing unit (GPU) startup into a company now valued at $4 trillion and a key driver of the ongoing artificial intelligence boom.
Current Focus: Robotics and AI
Today, the lab is concentrating on the technologies needed to advance robotics and AI, and that work is already showing up in real products.
The company recently announced a new suite of AI models, libraries, and infrastructure specifically designed for robotics developers.
Bill Dally's Journey to Nvidia
Bill Dally, now Nvidia’s chief scientist, began consulting for the company in 2003 while still on the faculty at Stanford University.
As he approached the end of his tenure as chair of Stanford’s computer science department, Dally had planned to take a sabbatical. However, Nvidia presented an alternative proposition.
David Kirk, who led the research lab at that time, and Nvidia CEO Jensen Huang, advocated strongly for Dally to accept a permanent role within the research division. Dally shared with TechCrunch that they launched a concerted effort to persuade him.
“Ultimately, it proved to be an ideal alignment with my interests and capabilities,” Dally stated. “I believe everyone seeks a position where they can contribute most significantly to the world, and for me, that place is undoubtedly Nvidia.”
Expansion and Strategic Direction
Upon assuming leadership of the lab in 2009, Dally prioritized expansion. Researchers quickly broadened their scope beyond ray tracing, venturing into areas such as circuit design and VLSI (very large-scale integration), the process of integrating millions of transistors onto a single chip.
The lab’s growth has been continuous since then.
“Our aim is to identify areas where we can have the greatest positive impact on the company. We are constantly evaluating promising new fields, but determining potential for substantial success can be challenging,” Dally explained.
Early Investment in AI
A significant focus became the development of enhanced GPUs for artificial intelligence. Nvidia recognized the potential of AI early on, initiating experimentation with AI GPUs in 2010 – over a decade before the current surge in AI interest.
“We recognized the transformative potential of this technology and its capacity to reshape the world,” Dally said. “We committed to intensifying our efforts in this area, and Jensen supported this vision. We began specializing our GPUs and creating supporting software, while actively collaborating with researchers globally, even before its relevance was fully apparent.”
Nvidia's Expansion into Physical AI
With a dominant position secured in the AI GPU market, Nvidia is now actively exploring emerging areas of demand extending beyond traditional AI data centers. This strategic shift has directed the company's focus toward physical AI and the field of robotics.
Bill Dally said robots are poised to become a major force in the world, and Nvidia aims to be the primary provider of their intelligence; achieving that requires building the crucial underlying technologies first.
Sanja Fidler, Nvidia’s Vice President of AI research, is central to this initiative. Fidler initially joined Nvidia’s research division in 2018, bringing with her prior experience in developing robot simulation models alongside students at MIT.
During a researchers’ event, she presented her work to Jensen Huang, sparking his interest. “I could not resist joining,” Fidler told TechCrunch. “The alignment with Nvidia’s objectives and the company culture were exceptional. Jensen invited me to work with him, not for him or for the company.”
Following her arrival, Fidler spearheaded the creation of a research lab in Toronto whose work underpins Omniverse, Nvidia’s platform for building simulations for physical AI applications.
A primary hurdle in creating these realistic simulated environments was the acquisition of sufficient 3D data, according to Fidler. This involved sourcing a substantial quantity of images and developing the tools to convert them into 3D representations usable by the simulators.

“We invested heavily in a technology called differentiable rendering, which essentially adapts rendering processes for use with AI,” Fidler clarified. “Traditionally, rendering transforms 3D models into images or videos; we are focused on reversing that process.”
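To make the idea of “reversing” rendering concrete, here is a minimal, self-contained sketch of differentiable rendering: a toy renderer built entirely from differentiable operations, so an image-space loss can push gradients back into 3D scene parameters. The scene (a few 3D Gaussian blobs), the pinhole-style camera, and the function names (render, fit_scene) are illustrative assumptions for this sketch, not Nvidia’s actual pipeline.

```python
# Minimal sketch of differentiable rendering: optimize 3D parameters by
# comparing a differentiably rendered image against a target image.
# Toy scene and names are assumptions, not Nvidia's renderer.
import torch

H, W = 64, 64
# Pixel grid on the image plane, normalized to [-1, 1].
ys, xs = torch.meshgrid(
    torch.linspace(-1, 1, H), torch.linspace(-1, 1, W), indexing="ij"
)

def render(points, colors, sigma=0.08):
    """Differentiably splat 3D points onto the image plane.

    points: (N, 3) 3D positions; colors: (N,) per-point intensities.
    A simple perspective divide (x/z, y/z) followed by a soft Gaussian
    footprint keeps every step differentiable.
    """
    proj_x = points[:, 0] / points[:, 2]
    proj_y = points[:, 1] / points[:, 2]
    dx = xs[None] - proj_x[:, None, None]          # (N, H, W)
    dy = ys[None] - proj_y[:, None, None]
    footprint = torch.exp(-(dx**2 + dy**2) / (2 * sigma**2))
    return (colors[:, None, None] * footprint).sum(dim=0).clamp(0, 1)

def fit_scene(target, n_points=8, steps=300, lr=0.05):
    """Recover 3D parameters by inverting the renderer with gradient descent."""
    points = torch.randn(n_points, 3) * 0.3 + torch.tensor([0.0, 0.0, 2.0])
    points.requires_grad_(True)
    colors = torch.full((n_points,), 0.5, requires_grad=True)
    opt = torch.optim.Adam([points, colors], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = torch.nn.functional.mse_loss(render(points, colors), target)
        loss.backward()                             # gradients reach 3D params
        opt.step()
    return points.detach(), colors.detach()

if __name__ == "__main__":
    # Synthetic target rendered from a known scene, then treated as an "image".
    true_pts = torch.tensor([[-0.4, 0.0, 2.0], [0.5, 0.3, 2.5]])
    true_cols = torch.tensor([0.9, 0.7])
    target = render(true_pts, true_cols).detach()
    pts, cols = fit_scene(target)
    print("recovered depths:", pts[:, 2])
```

In a real system, the toy splatting step would be replaced by a differentiable rasterizer or neural renderer, but the basic loop is the same: render a guess, compare it with real images, and back-propagate the error into the scene.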
World Models
Omniverse launched its GANverse3D model, capable of transforming images into three-dimensional representations, in 2021. Subsequent efforts were directed toward achieving the same functionality with video content.
According to Fidler, the creation of these 3D models and simulations was facilitated by utilizing videos sourced from robots and autonomous vehicles. This process leverages their Neural Reconstruction Engine, initially unveiled in 2022.
These technologies form the foundational elements of the company’s Cosmos family of world AI models, which were formally introduced at CES in January.
Currently, the primary focus is on accelerating the processing speed of these models. Real-time responsiveness is crucial for applications like video games and simulations, while even faster reaction times are required for robotic systems, Fidler explained.
“A robot doesn’t require perceiving the world at the same temporal rate as the world itself operates,” Fidler stated. “It can process information at a rate 100 times faster.”
“Therefore, substantial improvements in model speed will unlock significant benefits for robotic and physical AI applications.”
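As a rough illustration of why that speed matters, consider the control loop sketched below: if a world model simulates much faster than real time, a robot can evaluate many candidate action sequences in its model before committing to one. This is a deliberately simplified, hypothetical example; the stand-in world model and the names (world_model_step, plan) are assumptions, not Nvidia’s Cosmos models.

```python
# Sketch: a faster-than-real-time world model lets a robot search over
# candidate plans between control ticks. Everything here is a stand-in.
import random

MODEL_SPEEDUP = 100  # the article's figure: ~100x faster than real time

def world_model_step(state, action):
    """Stand-in for a learned world model: predicts the next state."""
    return state + action + random.gauss(0.0, 0.01)

def rollout_cost(state, actions, goal=1.0):
    """Simulate a candidate action sequence in the model and score the outcome."""
    for a in actions:
        state = world_model_step(state, a)
    return abs(goal - state)

def plan(state, horizon=10):
    """Search over candidate plans within one real control tick.

    If one simulated step costs 1/MODEL_SPEEDUP of a real step, roughly
    MODEL_SPEEDUP simulated steps fit into a single real tick, which buys
    MODEL_SPEEDUP // horizon full rollouts before the robot must act.
    """
    n_candidates = max(1, MODEL_SPEEDUP // horizon)
    candidates = [
        [random.uniform(-0.2, 0.2) for _ in range(horizon)]
        for _ in range(n_candidates)
    ]
    return min(candidates, key=lambda acts: rollout_cost(state, acts))

if __name__ == "__main__":
    best_plan = plan(state=0.0)
    print("first action of the best candidate plan:", best_plan[0])
```

The budget in the sketch is simple arithmetic: at a 100x speedup, roughly 100 simulated steps fit into one real control step, so a 10-step planning horizon leaves room for about 10 full rollouts per tick.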
Nvidia recently announced a new suite of world AI models at the SIGGRAPH computer graphics conference. These models are specifically designed for generating synthetic data used in robot training.
Alongside these models, Nvidia also revealed new libraries and infrastructure software tailored for robotics developers.
Despite the advancements – and the current enthusiasm surrounding robotics, particularly humanoid robots – the Nvidia research team maintains a pragmatic outlook.
Both Dally and Fidler indicated that the widespread availability of humanoid robots in homes remains several years away. Fidler drew a parallel to the initial expectations and projected timelines for autonomous vehicles.
“Significant progress is being made, and AI has been a key enabler,” Dally commented. “Visual AI is enhancing robot perception, and generative AI is proving invaluable for task planning, motion control, and manipulation.”
“As we address individual challenges and expand the datasets used for training our networks, these robots will continue to evolve.”