
Intel and Weizmann Institute Speed AI with Speculative Decoding Advance

btarunr

Editor & Senior Moderator
At the International Conference on Machine Learning (ICML) in Vancouver, Canada, researchers from Intel Labs and the Weizmann Institute of Science introduced a major advance in speculative decoding. The new technique enables any small "draft" model to accelerate any large language model (LLM), regardless of vocabulary differences. "We have solved a core inefficiency in generative AI. Our research shows how to turn speculative acceleration into a universal tool. This isn't just a theoretical improvement; these are practical tools that are already helping developers build faster and smarter applications today," said Oren Pereg, senior researcher, Natural Language Processing Group, Intel Labs.

Speculative decoding is an inference optimization technique designed to make LLMs faster and more efficient without compromising accuracy. It works by pairing a small, fast model with a larger, more accurate one, creating a "team effort" between models. Consider the prompt for an AI model: "What is the capital of France…" A traditional LLM generates each word step by step. It fully computes "Paris", then "a", then "famous", then "city" and so on, consuming significant resources at each step. With speculative decoding, the small assistant model quickly drafts the full phrase "Paris, a famous city…" The large model then verifies the sequence. This dramatically reduces the compute cycles per output token.
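
A rough Python sketch of that draft-then-verify loop may make the flow clearer. The callables below (draft_next, target_next_all) are hypothetical stand-ins for real model forward passes, and the greedy acceptance rule shown is a simplification of the sampling-based verification used in practice:

```python
# Minimal sketch of the draft-then-verify loop behind speculative decoding.
# Both models are passed in as plain callables (hypothetical stand-ins for
# real LLM forward passes); this greedy variant only illustrates the control
# flow, not the full rejection-sampling acceptance rule used in practice.

def speculative_step(draft_next, target_next_all, prompt_ids, k=4):
    """Draft k tokens with the small model, then verify them in one pass of the large one.

    draft_next(ids)            -> next token id chosen by the small model
    target_next_all(ids, prop) -> the large model's choice at each of the k + 1
                                  positions covering the proposed tokens
    """
    # 1) The small model proposes k tokens autoregressively (cheap).
    drafted = []
    for _ in range(k):
        drafted.append(draft_next(prompt_ids + drafted))

    # 2) The large model scores all proposed positions in a single forward pass.
    verified = target_next_all(prompt_ids, drafted)  # length k + 1

    # 3) Accept the longest prefix on which the two models agree; at the first
    #    disagreement, keep the large model's token instead, so the output is
    #    identical to running the large model alone (under greedy decoding).
    accepted = []
    for d, v in zip(drafted, verified):
        if d == v:
            accepted.append(d)
        else:
            accepted.append(v)
            break
    else:
        # Every draft token was accepted; the verifier's extra prediction is free.
        accepted.append(verified[k])

    return prompt_ids + accepted
```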



Why It Matters: This universal method by Intel and the Weizmann Institute removes the limitations of shared vocabularies or co-trained model families, making speculative decoding practical across heterogeneous models. It delivers up to 2.8x faster inference with no loss of output quality.¹ It also works across models from different developers and ecosystems, making it vendor-agnostic, and it is open-source ready through integration with the Hugging Face Transformers library.

In a fragmented AI landscape, this speculative decoding breakthrough promotes openness, interoperability and cost-effective deployment from cloud to edge. Developers, enterprises and researchers can now mix and match models to suit their performance needs and hardware constraints.

"This work removes a major technical barrier to making generative AI faster and cheaper," said Nadav Timor, Ph.D. student in the research group of Prof. David Harel at the Weizmann Institute. "Our algorithms unlock state-of-the-art speedups that were previously available only to organizations that train their own small draft models."

The research paper introduces three new algorithms that decouple speculative decoding from vocabulary alignment. This opens the door to flexible LLM deployment, letting developers pair any small draft model with any large model to optimize inference speed and cost across platforms.
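
The paper's three algorithms are not reproduced here, but a simplified illustration of the underlying idea (an assumption on our part: bridging mismatched vocabularies through the shared string space, with example tokenizer IDs and a hypothetical helper function) looks roughly like this:

```python
from transformers import AutoTokenizer

# Illustrative only: two example tokenizers with unrelated vocabularies.
# Plain text is the common ground, so a drafted token sequence can be turned
# into target-model tokens by a decode / re-encode round trip.
draft_tok = AutoTokenizer.from_pretrained("gpt2")               # example draft-model tokenizer
target_tok = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-0.5B")  # example target-model tokenizer

def draft_to_target_ids(draft_ids):
    """Map a drafted token-id sequence into the target model's vocabulary."""
    text = draft_tok.decode(draft_ids, skip_special_tokens=True)
    # Re-tokenize the same string with the target tokenizer; the target model
    # can verify these ids even though the two vocabularies never matched.
    return target_tok.encode(text, add_special_tokens=False)

# Example: tokens drafted under GPT-2's vocabulary, verified under Qwen's.
drafted = draft_tok.encode("Paris, a famous city", add_special_tokens=False)
print(draft_to_target_ids(drafted))
```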

The research isn't just theoretical. The algorithms are already integrated into the Hugging Face Transformers open source library used by millions of developers. With this integration, advanced LLM acceleration is available out of the box with no need for custom code.
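
As a rough usage sketch, cross-tokenizer assisted generation in Transformers can be invoked along the lines below. The model IDs are placeholders, and the exact keyword arguments for assistance across mismatched vocabularies may differ between Transformers versions, so consult the library's assisted-generation documentation if this signature does not match your install:

```python
# Placeholder model IDs; swap in your own draft/target pair.
from transformers import AutoModelForCausalLM, AutoTokenizer

target_id = "meta-llama/Llama-3.1-8B"   # placeholder large target model
draft_id = "Qwen/Qwen2.5-0.5B"          # placeholder small draft model

target_tok = AutoTokenizer.from_pretrained(target_id)
draft_tok = AutoTokenizer.from_pretrained(draft_id)
target = AutoModelForCausalLM.from_pretrained(target_id)
draft = AutoModelForCausalLM.from_pretrained(draft_id)

inputs = target_tok("What is the capital of France?", return_tensors="pt")

# Passing both tokenizers lets generate() bridge the mismatched vocabularies
# while the small model drafts tokens and the large model verifies them.
out = target.generate(
    **inputs,
    assistant_model=draft,
    tokenizer=target_tok,
    assistant_tokenizer=draft_tok,
    max_new_tokens=64,
)
print(target_tok.decode(out[0], skip_special_tokens=True))
```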

View at TechPowerUp Main Site
 
The nice part is that those techniques were already upstreamed into HF's Transformers and are used by lots of folks, so the paper is more of a formal explanation of what's already been done, backed by actual data from a bunch of different folks. Really cool.
 