
Lightmatter Unveils Six‑Chip Photonic AI Processor with Incredible Performance/Watt

AleksandarK

News Editor
Staff member
Lightmatter has launched its latest photonic processor, representing a fundamental shift from traditional computing architectures. The new system integrates six chips into a single 3D packaged module, each containing photonic tensor cores and control dies that work in concert to accelerate AI workloads. Detailed in a recent Nature publication, the processor combines approximately 50 billion transistors with one million photonic components interconnected via high-speed optical links. The industry faces mounting computing challenges as conventional scaling approaches plateau: Moore's Law, Dennard scaling, and DRAM capacity growth are all hitting physical limits. Lightmatter's solution implements an adaptive block floating point (ABFP) format with analog gain control to overcome these barriers. During matrix operations, weights and activations are grouped into blocks that share a single exponent, determined by the most significant value in the block, minimizing quantization error.
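The shared-exponent idea behind ABFP can be sketched in a few lines of Python. This is a hypothetical illustration only: Lightmatter's actual block sizes, mantissa widths, and gain-control logic are not public, so the function names and the 8-bit mantissa below are assumptions.

```python
import numpy as np

def abfp_quantize(block, mantissa_bits=8):
    """Quantize a block of values to a shared-exponent format.

    Hypothetical sketch of the ABFP idea described in the article:
    every value in the block shares one exponent, taken from the
    largest-magnitude element, so the dominant values lose the
    least precision.
    """
    max_val = np.max(np.abs(block))
    if max_val == 0:
        return np.zeros(block.shape, dtype=np.int32), 0
    # Shared exponent determined by the most significant value
    shared_exp = int(np.floor(np.log2(max_val)))
    scale = 2.0 ** (shared_exp - (mantissa_bits - 1))
    lo, hi = -(2 ** (mantissa_bits - 1)), 2 ** (mantissa_bits - 1) - 1
    mantissas = np.clip(np.round(block / scale), lo, hi).astype(np.int32)
    return mantissas, shared_exp

def abfp_dequantize(mantissas, shared_exp, mantissa_bits=8):
    """Recover approximate floating-point values from the block."""
    scale = 2.0 ** (shared_exp - (mantissa_bits - 1))
    return mantissas.astype(np.float64) * scale

block = np.array([0.5, 1.0, -0.25, 0.7])
m, e = abfp_quantize(block)
print(abfp_dequantize(m, e))  # values close to the original block
```

Because the exponent is picked from the largest value, small outlier-free blocks quantize with very low error, which is one way a low-precision analog substrate can approach FP32 accuracy on unmodified models.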

The processor achieves 65.5 trillion 16-bit ABFP operations per second (roughly, 16-bit TOPS) while consuming just 78 W of electrical power and 1.6 W of optical power. What sets this processor apart is its ability to run unmodified AI models with near-FP32 accuracy. The system successfully executes full-scale models, including ResNet for image classification, BERT for natural language processing, and DeepMind's Atari reinforcement learning algorithms, without specialized retraining or quantization-aware techniques. This makes it the first commercially available photonic AI accelerator capable of running off-the-shelf models without fine-tuning. By computing with light, the architecture aims to sidestep the prohibitive cost and energy demands of next-generation GPUs. With native integration for popular AI frameworks like PyTorch and TensorFlow, Lightmatter hopes for immediate adoption in production environments.
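The headline efficiency figure follows directly from the numbers quoted above, counting both electrical and optical power in the denominator:

```python
# Figures quoted in the article
ops_per_second = 65.5e12   # 16-bit ABFP operations per second
electrical_w = 78.0        # electrical power draw
optical_w = 1.6            # optical power draw

total_w = electrical_w + optical_w
tops_per_watt = ops_per_second / 1e12 / total_w
print(f"{tops_per_watt:.2f} 16-bit TOPS/W")  # prints "0.82 16-bit TOPS/W"
```

Note these are 16-bit ABFP operations, not FP16 or INT8 TOPS, so the figure is not directly comparable to GPU datasheet numbers.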



View at TechPowerUp Main Site | Source
 
Sounds cool, but since I don't run (or give a rat's ass about, ATM) AI, LLMs, matrix ops etc., I have to ask the pertinent question here:

Whatchagonnado4me :)

Also, are these "photonics" related to the holographic beings like the EMH doctor from ST: Voyager, hehehe ?

j/k :D
 
So it’s a photomultiplier with some junctions thrown in and a gain knob to fiddle with to get the right answer, when you already know the answer, to “tune” the system.

It’s like using an old manual radio to tune into a known radio frequency, then hoping it puts the right answer out: 99% accuracy at best and an unknown at worst?

Also worth noting: this is just the “compute” part; there is no memory to load the data from or store it to. So while it’s very cool, all data still must be read from a cell, transformed into photons, then sensed and converted back into electron potential storage. It’s like a GPU without memory, and given the lack of any answer to “how do we feed ~66 trillion 16-bit words per second to the device,” it’s still pretty novel.
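The feeding problem raised above can be made concrete with back-of-envelope arithmetic. Assume the worst case, where every operation consumes two fresh 16-bit operands (one weight, one activation) with zero on-chip reuse; these per-op counts are my assumption, not a figure from the article:

```python
ops_per_second = 65.5e12   # peak 16-bit ABFP ops/s quoted in the article
operands_per_op = 2        # assumption: one weight + one activation per op
bytes_per_operand = 2      # 16-bit values

# Naive feed rate if every operand had to come from external memory
naive_bw_bytes = ops_per_second * operands_per_op * bytes_per_operand
print(f"{naive_bw_bytes / 1e12:.0f} TB/s")  # prints "262 TB/s"
```

No memory system delivers anything close to that, which is why operand reuse in matrix multiplies (each weight serves many activations) and on-chip buffering are what make the quoted throughput plausible, and why the unanswered data-feed question is a fair one.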
 