Tuesday, April 23rd 2019

Tesla Dumps NVIDIA, Designs and Deploys its Own Self-driving AI Chip

Tesla Motors announced the development of its own self-driving car AI processor that runs the company's Autopilot feature across its product line. The company had been relying on NVIDIA's Drive platform hardware for Autopilot. Called the Tesla FSD Chip (full self-driving), the processor has been deployed on the latest batches of the Model S and Model X since March 2019, and the company looks to expand it to its popular Model 3. The Tesla FSD Chip is a custom ASIC with 250 million logic gates across 6 billion transistors, crammed into a 260 mm² die built on the 14 nm FinFET process at Samsung Electronics' fab in Austin, Texas. The chip packs 32 MB of SRAM cache and a 96x96 mul/add array, for a cumulative performance of 72 TOPS per die at its rated clock speed of 2.00 GHz.
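The quoted TOPS figure lines up with the disclosed array size and clock if one assumes two such NPU arrays per die, each performing a multiply and an add per cell per cycle. A back-of-the-envelope check (the two-NPU count is an assumption from Tesla's presentation, not stated above):

```python
# Back-of-the-envelope check of the ~72 TOPS per-die figure.
# Assumption: two NPUs per die, each a 96x96 MAC array doing
# 2 ops (one multiply + one add) per cell per cycle.
ARRAY_DIM = 96
OPS_PER_MAC = 2          # multiply and accumulate count as two operations
CLOCK_HZ = 2.00e9        # 2.00 GHz rated clock
NPUS_PER_DIE = 2         # assumed NPU count per FSD die

ops_per_cycle = ARRAY_DIM * ARRAY_DIM * OPS_PER_MAC       # 18,432 ops/cycle/NPU
tops_per_die = ops_per_cycle * CLOCK_HZ * NPUS_PER_DIE / 1e12
print(f"{tops_per_die:.1f} TOPS")  # ~73.7 TOPS, in line with the quoted 72
```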

A typical Autopilot logic board uses two of these chips. Tesla claims that the chip offers "21 times" the performance of the NVIDIA chip it's replacing. Elon Musk referred to the FSD Chip as "the best chip in the world," and not just on the basis of its huge performance uplift over the previous solution. "Any part of this could fail, and the car will keep driving. The probability of this computer failing is substantially lower than someone losing consciousness - at least an order of magnitude," he added.
Slides with on-die details follow.
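Musk's failure-tolerance claim describes the board's dual-redundant design: both chips evaluate the same inputs independently, and the outputs are cross-checked before anything is actuated. A minimal sketch of that pattern (the function names and fallback policy are illustrative assumptions, not Tesla's implementation):

```python
# Illustrative dual-redundancy pattern: two independent compute units
# evaluate the same frame; the plan is acted on only after cross-checking,
# and a single surviving unit can keep the car driving if the other fails.
def plan_a(frame):
    return ("steer", 0.1)   # stand-in for chip A's full inference pipeline

def plan_b(frame):
    return ("steer", 0.1)   # stand-in for chip B's independent pipeline

def arbitrate(frame):
    results = []
    for plan in (plan_a, plan_b):
        try:
            results.append(plan(frame))
        except Exception:
            pass                      # a failed unit is simply dropped
    if not results:
        raise RuntimeError("both units failed")
    if len(results) == 2 and results[0] != results[1]:
        # disagreement: fall back to the milder action (policy is an assumption)
        return min(results, key=lambda r: abs(r[1]))
    return results[0]

print(arbitrate(frame=None))  # ('steer', 0.1)
```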


39 Comments on Tesla Dumps NVIDIA, Designs and Deploys its Own Self-driving AI Chip

#2
FreedomEclipse
~Technological Technocrat~
Ferrum Master, post: 4035168, member: 90058"
This is Jim Keller's work again.
But he's a microprocessor engineer?
Posted on Reply
#3
TheLostSwede
Cortex-A72 seems a bit antiquated by now, but I guess that's the downside of designing your own hardware, it takes time, so you might end up with something old by the time it's ready...
Posted on Reply
#4
Xzibit
@btarunr

It's being manufactured at Samsung's Austin, Texas fab, not in Korea. It was one of the Q&A answers during their presentation.

TheLostSwede, post: 4035172, member: 3382"
Cortex-A72 seems a bit antiquated by now, but I guess that's the downside of designing your own hardware, it takes time, so you might end up with something old by the time it's ready...
The way they described it during their presentation and the Q&A is that it was purpose-built for cars, rather than being NVIDIA's general-purpose kit. They also said it would save them 20% per unit/car.
Posted on Reply
#5
Wavetrex
Watched the entire thing live.

Level 5 self driving ( aka, full autonomy ) by next year... would be a miracle.
This falls into "I want to believe" UFO-fanatic territory...

But who knows, maybe they know something nobody else does.
If it DOES happen and regulators get such cars into law, I personally couldn't be happier, because I'm not allowed to drive due to some medical issues. But I would love to buy a car that drives me... especially one that doesn't cost me anything for "fuel"!
Posted on Reply
#6
kastriot
Lol, I hope they don't regret it.
Posted on Reply
#7
Bytales
Double the die size to 500 square millimeters, go 7 nm, and put 10 to 20 chips on a board: you'll have Level 6 autonomy. Level 6 will make you breakfast and finish your work from your workplace while it drives you there.
Posted on Reply
#8
silentbogo
Tesla Dumps NVIDIA
That's been known for over a year.

TheLostSwede, post: 4035172, member: 3382"
Cortex-A72 seems a bit antiquated by now, but I guess that's the downside of designing your own hardware
Not really. Licensing the A72 is probably cheaper than the current A75/A76, it gets the job done (especially with so many cores), and it probably has a lot to do with compatibility with the previous software stack (e.g. a drop-in replacement for the PX/CX, which ran on the A57).
Posted on Reply
#9
Patriot
They probably are still relying on DGX, as those are servers (not in the car) and are still needed for training...
They just replaced the inferencing in the cars - the Drive AGX platform.

2 Variants: Xavier & Pegasus

Xavier delivers 30 TOPS of performance while consuming only 30 watts of power.
Pegasus™ delivers 320 TOPS with an architecture built on two NVIDIA® Xavier™ processors and two next-generation Tensor Core GPUs (TU104).

Took a bit of digging, as they list twin TU104 GPUs, but it's actually a heavily cut-down variant, the T4. It only uses 70 W each for 130 TOPS/card. Also, they don't sell the T4 in the SXM2 format outside the Drive platform... That puts platform power at ~200 W.
(The original announcement said 500 W TDP, but that was before the T4, whose inferencing specs this matches; a full Quadro 5000 would be ~265 W each, landing over that 500 W TDP and under the inferencing spec.)

This replacement is a 144 TOPS solution, which is likely nearly as efficient as the AGX Xavier, but I'm not finding the power usage anywhere.
So the improvement is not 21x; they quoted NVIDIA's solution at 20 TOPS, not 30, though that might have been the gen-1 unit they were replacing.
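Taking the figures in this post at face value, the efficiency and ratio arithmetic checks out quickly (all numbers are vendor/marketing TOPS as quoted above, not measurements):

```python
# Quick check of the efficiency/ratio claims using the figures quoted above.
xavier_tops, xavier_watts = 30, 30      # NVIDIA AGX Xavier (vendor figure)
fsd_board_tops = 144                    # two Tesla FSD chips at 72 TOPS each
quoted_nvidia_tops = 20                 # what Tesla quoted for the old solution

xavier_tops_per_watt = xavier_tops / xavier_watts       # 1.0 TOPS/W
ratio_vs_quoted = fsd_board_tops / quoted_nvidia_tops   # 7.2x
print(xavier_tops_per_watt, ratio_vs_quoted)  # raw TOPS ratio is ~7x, not 21x
```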
Posted on Reply
#10
Ferrum Master
FreedomEclipse, post: 4035170, member: 38411"
But he's a microprocessor engineer?
Exactly? He got a paid vacation while working there?
Posted on Reply
#11
BorgOvermind
Bytales, post: 4035177, member: 89173"
Double the die size to 500 square millimeters, go 7 nm, and put 10 to 20 chips on a board: you'll have Level 6 autonomy. Level 6 will make you breakfast and finish your work from your workplace while it drives you there.
...and you are the battery for it. :|
Posted on Reply
#12
notb
As usual, news well outside TPU's comfort zone... and it shows.
Tesla announced that they'd end the partnership with NVIDIA in August 2018, but the first official confirmation that they were building an AI chip came in Q4 2017. A long time ago.

And of course they're "dumping" the inference chip - the one put inside a car.
Whatever they use for learning most likely stays the same. It's very unlikely Tesla could develop a functioning GPU, let alone a competitor to DGX-1.

Of course they're looking for a way to save money, since Tesla is once again burning cash.
Wavetrex, post: 4035174, member: 182738"
Level 5 self driving ( aka, full autonomy ) by next year... would be a miracle.
Not happening.
But who knows, maybe they know something nobody else does.
No.
If it DOES happen and regulators get such cars into law, I personally couldn't be happier, because I'm not allowed to drive due to some medical issues. But I would love to buy a car that drives me... especially one that doesn't cost me anything for "fuel"!
It will take decades before we reach Level 5, i.e. cars that can drive themselves in all conditions. Don't hold your breath.

Technologically we're around level 3, but it still needs a lot of testing and, obviously, legislation. That's another 5 years, maybe more.
Posted on Reply
#13
Xzibit
Patriot, post: 4035183, member: 77367"
They probably are still relying on DGX, as those are servers (not in the car) and are still needed for training...
They just replaced the inferencing in the cars - the Drive AGX platform.

2 Variants: Xavier & Pegasus

Xavier delivers 30 TOPS of performance while consuming only 30 watts of power.
Pegasus™ delivers 320 TOPS with an architecture built on two NVIDIA® Xavier™ processors and two next-generation Tensor Core GPUs (TU104).

Took a bit of digging, as they list twin TU104 GPUs, but it's actually a heavily cut-down variant, the T4. It only uses 70 W each for 130 TOPS/card. Also, they don't sell the T4 in the SXM2 format outside the Drive platform... That puts platform power at ~200 W.
(The original announcement said 500 W TDP, but that was before the T4, whose inferencing specs this matches; a full Quadro 5000 would be ~265 W each, landing over that 500 W TDP and under the inferencing spec.)

This replacement is a 144 TOPS solution, which is likely nearly as efficient as the AGX Xavier, but I'm not finding the power usage anywhere.
So the improvement is not 21x; they quoted NVIDIA's solution at 20 TOPS, not 30, though that might have been the gen-1 unit they were replacing.
It's a 72 W solution - it was in the presentation. It's replacing a modified NVIDIA Drive PX 2 system (57 W), and is saving them 20% in cost while delivering 1.25x.

The 21x is in reference to the frames per second it can process in comparison.
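The two comparisons can be reconciled numerically: the power ratio matches the "1.25x" figure, and 21x only works as a frames-per-second ratio. The fps numbers below are the ones given at Tesla's Autonomy Day presentation, quoted from memory, so treat them as approximate:

```python
# Reconciling the "1.25x" and "21x" figures quoted in this exchange.
fsd_watts, px2_watts = 72, 57       # board power figures from the presentation
power_ratio = fsd_watts / px2_watts            # ~1.26x - the "1.25x" claim
hw25_fps, hw3_fps = 110, 2300       # frames/s processed (approximate, from memory)
fps_ratio = hw3_fps / hw25_fps                 # ~20.9x - the "21x" claim
print(round(power_ratio, 2), round(fps_ratio, 1))
```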
Posted on Reply
#14
efikkan
It's only a matter of time before GPGPUs and CPUs are outclassed by FPGAs and ASICs in the specialized deep-learning market, as it grows large enough to justify the development. I hope NVIDIA, Intel, and AMD don't waste too much trying to make their products compete in this "niche" market; we need CPUs and GPUs to focus on what they are supposed to be good at.
Posted on Reply
#16
notb
efikkan, post: 4035206, member: 150226"
It's only a matter of time before GPGPUs and CPUs are outclassed by FPGAs and ASICs in the specialized deep-learning market, as it grows large enough to justify the development. I hope NVIDIA, Intel, and AMD don't waste too much trying to make their products compete in this "niche" market; we need CPUs and GPUs to focus on what they are supposed to be good at.
The article is about inference chips, not learning (which happens on CPU/GPU).
Moreover, what are CPUs and GPUs supposed to be good at, if not at computing?

And I wouldn't really call cars a "niche market"... ;-)
Posted on Reply
#17
silapakorn
And to think Nvidia named its GPU after them. :laugh:
Posted on Reply
#18
john_
Anyone beating NVIDIA at this game, the AI game, will be bad for NVIDIA but good for gamers, because it will force NVIDIA to come back to the PC market and probably start setting prices that make sense, so they can keep their market share and protect their income. Because if they lose gamers, they are toast.
Posted on Reply
#19
Frick
Fishfaced Nincompoop
notb, post: 4035193, member: 165619"
As usual, news well outside TPU's comfort zone... and it shows.
Tesla announced that they'd end the partnership with NVIDIA in August 2018, but the first official confirmation that they were building an AI chip came in Q4 2017. A long time ago.

And of course they're "dumping" the inference chip - the one put inside a car.
Whatever they use for learning most likely stays the same. It's very unlikely Tesla could develop a functioning GPU, let alone a competitor to DGX-1.

Of course they're looking for a way to save money, since Tesla is once again burning cash.

Not happening.

No.

It will take decades before we reach Level 5, i.e. cars that can drive themselves in all conditions. Don't hold your breath.

Technologically we're around level 3, but it still needs a lot of testing and, obviously, legislation. That's another 5 years, maybe more.
But but but Musk said on twitter they'd do it! That's as good as money in the bank!
Posted on Reply
#20
notb
Frick, post: 4035225, member: 23907"
But but but Musk said on twitter they'd do it! That's as good as money in the bank!
Yeah, mid-2018: Tesla's AI chips will be 10x faster than anything available from the competition. :-)
john_, post: 4035219, member: 137560"
Anyone beating NVIDIA at this game, the AI game,
No "beating" is happening at the moment. Tesla decided to cut costs by designing their own chips. That's all.

The really interesting part is that the chip Tesla made is rather weak for full autonomous driving. Nvidia's solution is more than twice as powerful.
will be bad for NVIDIA but good for gamers, because it will force NVIDIA to come back to the PC market and probably start setting prices that make sense, so they can keep their market share and protect their income. Because if they lose gamers, they are toast.
No offense, but you can't be serious saying things like that.
You'd be happy if a leading AI chip company left this business and focused on gaming?
That is just dumb.
Posted on Reply
#21
Ferrum Master
notb, post: 4035236, member: 165619"
The really interesting part is that the chip Tesla made is rather weak for full autonomous driving. Nvidia's solution is more than twice as powerful.
It's not only about that; judging from the SRAM size... it's all about latency and how quickly the decisions are made.
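The point about SRAM can be made concrete: a 96x96 MAC array at 2 GHz cannot be fed from DRAM alone, which is why weights and activations live in the 32 MB of on-die SRAM. A rough feed-rate estimate, assuming one byte enters per row and per column edge each cycle (a simplification of real systolic-array dataflow):

```python
# Rough operand feed rate a 96x96 MAC array needs at 2 GHz, assuming
# one byte per row edge (activations) and per column edge (weights)
# each cycle - a simplified model, not Tesla's disclosed dataflow.
ARRAY_DIM = 96
CLOCK_HZ = 2.0e9
bytes_per_cycle = 2 * ARRAY_DIM              # 96 activations + 96 weights
feed_rate_gb_s = bytes_per_cycle * CLOCK_HZ / 1e9
print(feed_rate_gb_s)  # 384 GB/s - far beyond LPDDR, hence big on-die SRAM
```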
Posted on Reply
#22
FreedomEclipse
~Technological Technocrat~
Ferrum Master, post: 4035187, member: 90058"
Exactly? He got a paid vacation while working there?
Ohhhhh, my mistake - I thought they dropped the AI part of it, not the CPU.
Posted on Reply
#23
Gasaraki
Wavetrex, post: 4035174, member: 182738"
Watched the entire thing live.

Level 5 self driving ( aka, full autonomy ) by next year... would be a miracle.
This falls into "I want to believe" UFO-fanatic territory...

But who knows, maybe they know something nobody else does.
If it DOES happen and regulators get such cars into law, I personally couldn't be happier, because I'm not allowed to drive due to some medical issues. But I would love to buy a car that drives me... especially one that doesn't cost me anything for "fuel"!
Yeah that's fantasy. They are not even level 3 yet.

john_, post: 4035219, member: 137560"
Anyone beating NVIDIA at this game, the AI game, will be bad for NVIDIA but good for gamers, because it will force NVIDIA to come back to the PC market and probably start setting prices that make sense, so they can keep their market share and protect their income. Because if they lose gamers, they are toast.
What? NVIDIA has ~80% market share in gaming graphics cards. They never left the PC market.
Posted on Reply
#24
SoNic67
Wavetrex, post: 4035174, member: 182738"
I personally couldn't be happier, because I'm not allowed to drive due to some medical issues. But I would love to buy a car that drives me... especially one that doesn't cost me anything for "fuel"!
I have two questions about that:
1. If your car crashes into another car, a bicyclist, or a pedestrian, who will pay for the damages, and who will go to jail for negligence? You? Will you even carry insurance? The automaker?
2. If you "fuel" from the residential electrical grid, how will you pay for the infrastructure (roads/streets) that your car is using? Right now those are paid for by the gas tax. Will you be OK with having a GPS in the car and being taxed per mile driven?
Posted on Reply
#25
yakk
Yeah, it was announced while Jim Keller still worked at Tesla that they were dumping NVIDIA. Not many companies partner with NVIDIA for very long...
Posted on Reply