Lightmatter Raises $80M to Advance Photonic AI

The Rise of Photonic Computing: Lightmatter's $80 Million Boost
Artificial intelligence is now a core component of numerous products and services. However, the demands placed on data and processing power by AI systems are continually increasing.
Lightmatter is developing a novel approach to overcome the limitations of traditional computing. Their goal is to surpass the constraints of Moore’s Law through the use of exceptionally fast photonic chips specifically designed for AI applications.
A recent $80 million funding round positions the company to bring its light-based computing technology to the commercial market.
Early Beginnings and Technological Development
Lightmatter initially gained attention in 2018. The company’s founders, recent graduates of MIT, had secured $11 million in funding to validate the potential of their photonic computing concept.
Over the subsequent three years, the team focused on building and improving the technology. This process involved overcoming the typical challenges faced by hardware startups and technically focused founders.
A detailed explanation of the company’s technology can be found in a previous feature article. The core principles of their approach remain unchanged.
How Lightmatter's Technology Works
Lightmatter’s chips perform, with remarkable speed, a specific set of complex calculations that is essential to machine learning.
Unlike conventional chips that rely on electrical charge, logic gates, and transistors, Lightmatter utilizes photonic circuits. These circuits manipulate the path of light to execute calculations.
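As a rough numerical illustration (not a description of Lightmatter’s actual hardware or programming model), photonic matrix processors of this general kind are commonly described as realizing a weight matrix with two interferometer meshes and a row of attenuators, which corresponds mathematically to a singular value decomposition. The NumPy sketch below shows only that mathematical equivalence; the matrix size and variable names are invented for the example.

```python
import numpy as np

# Toy weight matrix standing in for one layer of a neural network.
rng = np.random.default_rng(0)
W = rng.normal(size=(4, 4))
x = rng.normal(size=4)          # input vector, conceptually encoded in light amplitudes

# Factor W into U * diag(S) * Vh. In the usual description of photonic
# matrix processors, U and Vh map onto interferometer meshes and S onto
# per-channel attenuation; here it is just ordinary linear algebra.
U, S, Vh = np.linalg.svd(W)

y_photonic_style = U @ (S * (Vh @ x))   # "mesh -> attenuators -> mesh"
y_reference      = W @ x                # the electronic/GPU equivalent

assert np.allclose(y_photonic_style, y_reference)
print(y_reference)
```

In other words, the operation the light performs is the same matrix-vector product a GPU would compute electronically; the promise lies in how fast and how efficiently the optics can do it.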
While the concept of photonic computing has existed for some time, achieving scalability and practical application, particularly for high-value purposes, has proven difficult until recently.
The ability to efficiently manipulate light for computation represents a significant advancement in the field of AI hardware.
From Concept to Commercialization
In 2018, when Lightmatter was initially established, it remained uncertain whether their technology could effectively compete with and potentially replace conventional compute clusters. These clusters, consisting of thousands of custom units, are utilized by major companies such as Google and Amazon for AI training purposes.
CEO and co-founder Nick Harris explained to TechCrunch that while the theoretical potential of the technology was apparent, numerous complexities needed resolution. He stated, “We faced significant hurdles in theoretical computer science and chip design… and the COVID-19 pandemic presented substantial challenges.”
The pandemic caused disruptions in the supply chain and led to project delays and partnership postponements across the industry. This resulted in Lightmatter falling behind schedule by several months. However, Harris noted that the company emerged from this period with increased resilience.
Image Credits: Lightmatter
“What we are undertaking is genuinely ambitious,” Harris conceded. “We are constructing computers from the ground up, encompassing chip design, packaging, card development, system integration, and software creation. This necessitates a company possessing expertise across all these domains.”
The company has expanded from its original founding team to a workforce of over 70 employees, distributed between locations in Mountain View and Boston. Further growth is anticipated as their new product is launched.
Lightmatter’s initial concept has evolved into the Envise, now a tangible product described as a “general-purpose photonic AI accelerator.” This server unit is designed for standard data center racks and incorporates multiple photonic computing units. These units are capable of executing neural network inference at exceptional speeds. (Currently, its capabilities are focused on linear algebra, a crucial component of machine learning, rather than complex logical operations.)
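To make that parenthetical concrete: neural network inference is dominated by exactly this kind of linear algebra. The hypothetical sketch below separates the matrix multiplications, the heavy step an accelerator of this kind would target, from the element-wise nonlinearities that remain cheap to run conventionally; the layer sizes and names are arbitrary and are not taken from Envise.

```python
import numpy as np

rng = np.random.default_rng(1)

def linear(x, W, b):
    # The matrix-vector product is the expensive, accelerator-friendly step.
    return W @ x + b

def relu(x):
    # Nonlinearities are cheap, element-wise, and handled conventionally.
    return np.maximum(x, 0.0)

# A toy two-layer network with arbitrary sizes.
W1, b1 = rng.normal(size=(256, 128)), np.zeros(256)
W2, b2 = rng.normal(size=(10, 256)),  np.zeros(10)

x = rng.normal(size=128)
logits = linear(relu(linear(x, W1, b1)), W2, b2)
print(logits.shape)  # (10,)
```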
Harris was cautious about disclosing precise performance gains, primarily because these improvements are continually evolving. The company’s website indicates a performance advantage of 5x over an Nvidia A100 unit when processing large transformer models like BERT, while consuming approximately 15% less energy. This combination of enhanced performance and reduced energy consumption is particularly appealing to large AI companies like Google and Amazon, who have substantial demands for both computing power and energy efficiency.
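Taken at face value, those two figures compound, since energy per inference scales with power divided by throughput. The back-of-the-envelope sketch below uses only the numbers quoted above and is purely illustrative.

```python
# Illustrative arithmetic only, based on the figures quoted above.
speedup = 5.0        # claimed throughput vs. an Nvidia A100 on large transformer models
power_ratio = 0.85   # claimed ~15% lower power draw

# Energy per inference scales as power / throughput.
energy_per_inference_ratio = power_ratio / speedup
print(f"~{1 / energy_per_inference_ratio:.1f}x less energy per inference")  # ~5.9x
```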
Lightmatter intends to begin testing these units with key customers by the close of 2021, with the goal of refining the product and scaling up production for broader commercial availability. Harris emphasized that this initial release represents a foundational step in their innovative approach.
“Should our assumptions prove correct, we will have effectively created the next transistor,” Harris asserted. This claim holds considerable weight in the context of large-scale computing. While a personal photonic computer remains distant, data centers—predicted to consume up to 10% of global power by 2030—demonstrate “an insatiable demand.”
The Realm of Optical Computing: Utilizing Color and Interconnect
Image Credits: Lightmatter
Lightmatter is pursuing two primary strategies to enhance the functionality of its photonic computing systems. The most innovative of these involves leveraging different wavelengths of light for processing.
A quick look at how these computers operate clarifies the approach. Traditional transistors, foundational to computing for decades, use electricity to execute logical operations by opening and closing gates. Electrical signals can be manipulated as waveforms at larger scales, but not at the transistor level: the fundamental unit remains the electron, and a gate is either open or closed.
However, Lightmatter’s devices use light traveling through waveguides to perform calculations, a streamlined and much faster process. Light comes in a spectrum of wavelengths, and each can be used independently and concurrently on the same hardware.
The same optical principles enabling a signal from a blue laser to propagate at light speed apply equally to red or green lasers, requiring minimal hardware adjustments. Crucially, if these light waves do not interfere, they can traverse the same optical components simultaneously without losing coherence.
Image Credits: Lightmatter
This implies that a Lightmatter chip capable of executing a million calculations per second with a red laser can double its capacity to two million by adding another color, and further increase it to three million, with limited modifications. According to Harris, the primary challenge lies in sourcing lasers that meet these requirements. The ability to substantially increase performance – doubling, tripling, or even achieving a 20x improvement – with relatively unchanged hardware presents a compelling development pathway.
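One hedged way to model the wavelength trick: if the chip delivers a rate R at a single wavelength and the colors do not interfere, total throughput is simply R multiplied by the number of usable wavelengths. The numbers in the sketch below are placeholders, not Lightmatter specifications.

```python
# Placeholder numbers for illustration; not Lightmatter specifications.
ops_per_second_per_wavelength = 1e6   # the "million calculations per second" example above

for n_wavelengths in (1, 2, 3, 20):
    total = n_wavelengths * ops_per_second_per_wavelength
    print(f"{n_wavelengths:>2} wavelength(s): {total:.0e} ops/s on the same hardware")
```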
This leads to the second key challenge the company is addressing: interconnect. Any high-performance computing system comprises numerous individual computers, often numbering in the thousands, operating in synchronized coordination. Effective communication is essential for these cores to maintain awareness of each other’s activities and collectively tackle complex computational problems. (Intel’s discussion of this “concurrency” issue in exa-scale supercomputer development can be found here.)
“A key learning has been determining how to enable communication between these chips when their speed surpasses the coordination capabilities of traditional computing cores,” explained Harris. The rapid processing speed of Lightmatter chips necessitates a departure from conventional coordination methods.
The solution appears to be photonic in nature: a wafer-scale interconnect board utilizing waveguides instead of fiber optics for data transfer between cores. While fiber optic connections are fast, they aren’t limitless in speed, and their physical size can restrict the number of channels available between cores at chip scales.
“We’ve integrated the optics and waveguides directly into the chip itself; we can accommodate 40 waveguides within the space occupied by a single optical fiber,” Harris stated. “This translates to a significantly higher number of parallel lanes, resulting in exceptionally high interconnect speeds.” (Detailed specifications are available here for those interested in chip and server technology.)
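The interconnect advantage reduces to lane arithmetic: more parallel waveguides in the same footprint means proportionally more aggregate bandwidth at a given per-lane rate. In the sketch below, only the 40-waveguides-per-fiber figure comes from Harris’s comment above; the per-lane data rate is an assumed placeholder.

```python
# Only the 40x lane-density figure comes from the article; the per-lane
# rate is an assumed placeholder for illustration.
waveguides_per_fiber_footprint = 40
per_lane_gbps = 25  # assumed

fiber_bandwidth = 1 * per_lane_gbps
waveguide_bandwidth = waveguides_per_fiber_footprint * per_lane_gbps
print(f"fiber: {fiber_bandwidth} Gb/s, waveguides: {waveguide_bandwidth} Gb/s "
      f"({waveguide_bandwidth // fiber_bandwidth}x) in the same footprint")
```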
This optical interconnect board, named Passage, will be incorporated into future iterations of their Envise product line. However, like the color calculation enhancement, it remains a feature of a forthcoming generation. A performance boost of 5-10x, coupled with reduced power consumption, is expected to meet the needs of their customers in the immediate future.
Allocating the $80 Million Investment
The first recipients of these new chips will be customers currently managing “hyper-scale” data operations. These entities already possess extensive data centers and high-performance computing infrastructure operating at maximum capacity. Initial test chips are scheduled for delivery later this year. According to Harris, the primary focus of this Series B funding round is to support their early access program.
This funding will facilitate both the production of hardware for distribution – a costly endeavor prior to achieving economies of scale, compounded by current supply chain challenges – and the expansion of their go-to-market team. Significant investment is also being directed towards servicing, support, and the substantial software component integral to this technology, resulting in considerable recruitment efforts.
Viking Global Investors spearheaded the investment round, with contributions from HP Enterprise, Lockheed Martin, SIP Global Partners, and existing investors including GV, Matrix Partners, and Spark Capital. This brings the company’s total funding to approximately $113 million. This total comprises an initial $11 million Series A round, followed by a $22 million A-1 round led by GV, and now this $80 million Series B.
While several companies are exploring photonic computing and its potential, particularly within neural networks, Harris expressed confidence in Lightmatter’s leading position. He noted that few competitors are nearing product shipment, and the market is experiencing rapid growth. He referenced an OpenAI study demonstrating that the demand for AI computing is outpacing the capabilities of current technologies, necessitating larger and larger data centers.
Over the coming decade, increasing economic and political pressures will likely focus on reducing energy consumption, mirroring trends observed in the cryptocurrency sector. Lightmatter is strategically positioned to offer a more efficient and powerful alternative to traditional GPU-based systems.
Harris optimistically believes his company’s innovation has the potential to fundamentally reshape the industry. If he is right, there is no need to rush: the company will have secured an early position in what could become a significant technological advancement.