
Intel achieves integrated photonics milestone

27 June 2024


It is like replacing horses with cars

Intel has announced a significant breakthrough in integrated photonics technology, aimed at enhancing high-speed data transmission.

 At the Optical Fiber Communication Conference (OFC) 2024, Intel's Integrated Photonics Solutions (IPS) Group unveiled the industry's first fully integrated optical compute interconnect (OCI) chiplet. This chiplet, co-packaged with an Intel CPU, was demonstrated running live data, marking a pivotal advancement in high-bandwidth interconnect technology.

Intel's press release said the development, which replaces electrical I/O with optical I/O for moving data in and out of CPUs and GPUs, is like going from horse-drawn carriages, limited in capacity and range, to cars and trucks that can deliver far larger quantities of goods over much longer distances.

“Emerging optical I/O solutions like Intel’s OCI chiplet bring this level of improved performance and energy cost to AI scaling.”

The OCI chiplet represents a crucial development in co-packaged optical input/output (I/O) technology. It addresses the increasing demands of AI infrastructure in data centres and high-performance computing (HPC) applications.

As data movement between servers intensifies, existing data centre infrastructure is nearing the limits of electrical I/O performance.

Intel's technology helps integrate co-packaged silicon photonics interconnect solutions into next-generation computing systems. The integration boosts bandwidth, reduces power consumption, and extends reach, accelerating machine learning (ML) workloads in AI infrastructure.

Designed to support 64 channels of 32 gigabits per second (Gbps) data transmission in each direction over up to 100 metres of fibre optics, the OCI chiplet addresses the growing need for higher bandwidth, lower power consumption, and longer reach in AI infrastructure.

It enables future CPU/GPU cluster connectivity scalability and novel compute architectures, including coherent memory expansion and resource disaggregation.

The increasing deployment of AI-based applications globally, alongside advancements in large language models (LLMs) and generative AI, underscores the necessity for more efficient and larger ML models.

These models are essential for meeting the evolving requirements of AI acceleration workloads. The need to scale computing platforms for AI is driving exponential growth in I/O bandwidth and reach, supporting larger processing unit (CPU/GPU/IPU) clusters and architectures that utilise resources more efficiently, such as xPU disaggregation and memory pooling.

While electrical I/O (copper trace connectivity) offers high bandwidth density and low power for short reaches (about one metre or less), pluggable optical transceiver modules in data centres and early AI clusters provide increased reach but at unsustainable cost and power levels for scaling AI workloads.

A co-packaged xPU optical I/O solution addresses this by offering higher bandwidth with improved power efficiency, low latency, and extended reach, meeting the demands of AI/ML infrastructure scaling.

The OCI chiplet uses Intel's silicon photonics technology, integrating a silicon photonics integrated circuit (PIC), which includes on-chip lasers and optical amplifiers, with an electrical IC. Demonstrated at OFC co-packaged with an Intel CPU, the OCI chiplet can also be integrated with next-generation CPUs, GPUs, IPUs, and other system-on-chips (SoCs).

This first OCI implementation supports up to 4 terabits per second (Tbps) bidirectional data transfer, compatible with peripheral component interconnect express (PCIe) Gen5. The live demonstration featured a transmitter (Tx) and receiver (Rx) connection between two CPU platforms over a single-mode fibre (SMF) patch cord.
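As a back-of-the-envelope check on those headline numbers (an illustrative sketch using only the figures quoted in this article, not Intel code), 64 channels at 32 Gbps each come to roughly 2 Tbps per direction, or about 4 Tbps of bidirectional traffic, and the 32 Gbps per-channel rate lines up with PCIe Gen5's 32 GT/s per-lane signalling rate.

```python
# Illustrative arithmetic based on the figures quoted in the article;
# not Intel code, just a sanity check on the headline bandwidth numbers.

channels = 64            # channels per direction
rate_gbps = 32           # Gbps per channel (matches PCIe Gen5's 32 GT/s per lane)

per_direction_tbps = channels * rate_gbps / 1000          # 2.048 Tbps
bidirectional_tbps = 2 * per_direction_tbps               # ~4 Tbps, as quoted

print(f"Per direction:  {per_direction_tbps:.3f} Tbps")
print(f"Bidirectional:  {bidirectional_tbps:.3f} Tbps")
```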

CPUs generated and measured the optical Bit Error Rate (BER), showcasing a Tx optical spectrum with eight wavelengths at 200 gigahertz (GHz) spacing on a single fibre and a 32 Gbps Tx eye diagram illustrating strong signal quality.
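For readers unfamiliar with the metric, the bit error rate is simply errors divided by bits transferred, and eight wavelengths on a 200 GHz grid span 7 × 200 GHz = 1.4 THz of optical spectrum. The sketch below is purely illustrative; the error count and test duration are hypothetical example values, not figures from the demo.

```python
# Illustrative only: how a BER figure and the DWDM grid span are computed.
# The error count and test duration below are hypothetical example values,
# not measurements from Intel's OFC demo.

bits_per_second = 64 * 32e9      # aggregate Tx rate per direction (bit/s)
test_seconds = 60                # hypothetical measurement window
bit_errors = 3                   # hypothetical error count

ber = bit_errors / (bits_per_second * test_seconds)
print(f"BER = {ber:.2e}")        # errors per bit transferred

wavelengths = 8
grid_spacing_ghz = 200
span_thz = (wavelengths - 1) * grid_spacing_ghz / 1000
print(f"Optical spectrum span: {span_thz:.1f} THz across {wavelengths} wavelengths")
```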

The OCI chiplet supports 64 channels of 32 Gbps data in each direction over up to 100 metres (practical applications may be limited to tens of metres due to time-of-flight latency). The co-packaged solution uses eight fibre pairs, each carrying eight dense wavelength division multiplexing (DWDM) wavelengths.
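The channel count follows directly from the fibre layout: eight fibre pairs carrying eight DWDM wavelengths each gives the 64 channels per direction. The latency caveat is also easy to quantify; assuming a typical single-mode fibre group index of roughly 1.47 (an assumption, not a figure from the article), light covers 100 metres in about half a microsecond each way.

```python
# Illustrative sketch of the channel layout and fibre time of flight,
# using the figures quoted in the article plus an assumed fibre group
# index of ~1.47 (typical for standard single-mode fibre, not from Intel).

fibre_pairs = 8
wavelengths_per_fibre = 8        # DWDM wavelengths per fibre
channels = fibre_pairs * wavelengths_per_fibre
print(f"Channels per direction: {channels}")          # 64

C = 299_792_458                  # speed of light in vacuum, m/s
GROUP_INDEX = 1.47               # assumed for standard single-mode fibre
reach_m = 100

one_way_latency_ns = reach_m * GROUP_INDEX / C * 1e9
print(f"One-way time of flight over {reach_m} m: {one_way_latency_ns:.0f} ns")
```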

The co-packaged solution is energy efficient, consuming only 5 picojoules (pJ) per bit, compared with about 15 pJ/bit for pluggable optical transceiver modules. This efficiency is critical for data centres and HPC environments, addressing AI's unsustainable power requirements.
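Energy per bit translates directly into link power at a given data rate. Using only the article's figures, the rough comparison below puts a fully loaded 4 Tbps link at around 20 W for the co-packaged chiplet versus roughly 60 W for pluggables (a simplified estimate that ignores everything outside the quoted pJ/bit numbers).

```python
# Simplified estimate: power = energy per bit x bit rate.
# Uses only the pJ/bit and Tbps figures quoted in the article; real systems
# have additional overheads not captured here.

bit_rate = 4e12                  # 4 Tbps bidirectional, in bits per second

for label, pj_per_bit in [("Co-packaged OCI chiplet", 5), ("Pluggable transceiver", 15)]:
    watts = pj_per_bit * 1e-12 * bit_rate
    print(f"{label}: {watts:.0f} W at 4 Tbps")
```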

Intel is also advancing its silicon photonics fab process node, offering state-of-the-art device performance, higher density, better coupling, and improved economics. Enhancements to on-chip laser and semiconductor optical amplifier (SOA) performance, cost, and power are being realised, with more than a 40 per cent die area reduction and over a 15 per cent power reduction.

The current OCI chiplet is a prototype. Intel is collaborating with select customers to co-package OCI with their SoCs as an optical I/O solution. This advancement in high-speed data transmission positions Intel at the forefront of the evolving AI infrastructure landscape, driving innovation and shaping the future of connectivity.
