
Intel 10th Gen Core X "Cascade Lake" HEDT Processors Launch on October 7

Interesting, I thought the "Cascade Lake" CPUs were also ring bus based? I will look it up, but what are the main architectural differences between the two?

I am no CPU engineer. From what I've heard, the ring bus is good for low latency; the mesh is better for scaling up core counts.
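A quick toy model (my own sketch, not from the thread) shows the scaling side of that claim: on a bidirectional ring, the average hop count between cores grows linearly with core count (roughly N/4), while on a 2D mesh it grows only with the square root of the core count. The ring also has a simpler, lower-latency stop at each hop, which is part of why it wins at small core counts.

```python
# Toy model: average hop count between cores on a bidirectional ring
# vs. a 2D mesh, brute-forced over all core pairs. Illustrative only --
# real interconnects also differ in per-hop latency, routing, and
# traffic effects, which this ignores.

def ring_avg_hops(n):
    """Average shortest-path hops on a bidirectional ring of n nodes."""
    total = sum(min(abs(a - b), n - abs(a - b))
                for a in range(n) for b in range(n) if a != b)
    return total / (n * (n - 1))

def mesh_avg_hops(rows, cols):
    """Average Manhattan distance on a rows x cols 2D mesh."""
    nodes = [(r, c) for r in range(rows) for c in range(cols)]
    total = sum(abs(r1 - r2) + abs(c1 - c2)
                for (r1, c1) in nodes for (r2, c2) in nodes
                if (r1, c1) != (r2, c2))
    n = len(nodes)
    return total / (n * (n - 1))

for cores, grid in [(8, (2, 4)), (18, (3, 6)), (28, (4, 7))]:
    print(f"{cores:2d} cores: ring {ring_avg_hops(cores):.2f} hops, "
          f"mesh {mesh_avg_hops(*grid):.2f} hops")
```

At 8 cores the two are close (about 2.3 vs. 2.0 average hops), but at 28 cores the ring averages about 7.3 hops versus about 3.7 for a 4x7 mesh, which is why the mesh takes over at high core counts.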
 

Got it. I was just looking them up online; I will have to do some deep reading when I get home today.
 
Why would anyone use it, though, when every single ML framework has a GPU-accelerated back-end? This is an advantage in a non-existent fight.

It helps when you're using Radeon Instinct instead of Tesla. NVIDIA has fixed-function matrix multipliers called tensor cores, which AMD GPUs lack. Maybe you work for a company that prefers AMD's open software stack to NVIDIA's. That's when DLBoost will help.
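For context on what DLBoost actually does: its core piece is the AVX-512 VNNI instruction VPDPBUSD, which fuses an int8 multiply-accumulate sequence (previously three instructions) into one. A scalar sketch of its per-lane semantics (my own illustration, not Intel's reference code):

```python
# Scalar sketch of what DLBoost's key instruction (AVX-512 VNNI's
# VPDPBUSD) does per 32-bit lane: multiply four unsigned 8-bit values
# by four signed 8-bit values and accumulate the products into a
# 32-bit integer. Reference semantics only, not a performance claim.

def vpdpbusd_lane(acc, u8x4, s8x4):
    """acc: int32 accumulator; u8x4: four uint8; s8x4: four int8."""
    for u, s in zip(u8x4, s8x4):
        assert 0 <= u <= 255 and -128 <= s <= 127
        acc += u * s
    # wrap to signed 32 bits, as the non-saturating form does
    acc &= 0xFFFFFFFF
    return acc - 0x100000000 if acc >= 0x80000000 else acc

def int8_dot(us, ss):
    """int8-quantized dot product built from the lane primitive."""
    acc = 0
    for i in range(0, len(us), 4):
        acc = vpdpbusd_lane(acc, us[i:i+4], ss[i:i+4])
    return acc

print(int8_dot([1, 2, 3, 4, 5, 6, 7, 8],
               [1, 1, 1, 1, -1, -1, -1, -1]))  # -> -16
```

A 512-bit register holds sixteen such lanes, i.e. 64 int8 multiply-accumulates per instruction, which is where the int8 inference speedup comes from.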
 
Please watch Epos Vox's review and you'll understand why :)
You mean the guy who can't cool his 7980XE properly and doesn't understand how to use it? Yeah, that's some REALLY smart source you've got there.
 


Radeon Instinct's TensorFlow support leans heavily on community effort and self-development. Researchers have to put in a ton of money and manpower to get it off the ground for any serious work.

A machine learning lab at my institution looked into this. They were trying to develop their own TensorFlow pipeline for analyzing tumor biopsies for signs of metastasis. They spent a good year on it going nowhere and ended up going back to the tried-and-true NVIDIA solution. At least they only bought one MI25, so not a whole lot of capital was lost.


Point is, NVIDIA has absolute dominance in the ML/DL/AI hardware market. They have a very mature software ecosystem as well, and a very good customer service team dedicated to researchers across the globe. It will take some serious effort from Intel to bite off a piece of this ever-growing pie.
 
Considering drop-in AVX accelerators are already an option, the need for on-die AVX isn't really there. Or they are full of it with Phi and the other FPGAs they have built.

Also, the pricing really lets you know how much they were making with their near-monopoly built on sketchy, security-riddled products.
 

Even without tensor cores, most AMD GPUs will vastly outperform any CPU with AVX-512. I struggle to see how a company would justify spending thousands of dollars on multiple CPU nodes to get the same throughput that one or two GPUs (AMD or NVIDIA) could deliver at a fraction of the cost.
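A back-of-envelope estimate makes that gap concrete. All numbers below are assumptions chosen for the sketch, not vendor specs:

```python
# Rough peak int8 throughput estimate: units x clock x MACs/cycle,
# times 2 ops per multiply-accumulate. Every figure here is an
# assumption for illustration, not a measured or quoted spec.

def peak_int8_tops(units, freq_ghz, macs_per_cycle_per_unit):
    """Peak throughput in tera-ops/second."""
    return units * freq_ghz * macs_per_cycle_per_unit * 2 / 1000

# Hypothetical 18-core AVX-512 VNNI CPU: two 512-bit FMA ports per
# core, each doing 64 int8 MACs per cycle -> 128 MACs/cycle/core.
cpu = peak_int8_tops(units=18, freq_ghz=3.0, macs_per_cycle_per_unit=128)

# Hypothetical GPU: 80 SM-like units, each doing 1024 int8 MACs/cycle.
gpu = peak_int8_tops(units=80, freq_ghz=1.5, macs_per_cycle_per_unit=1024)

print(f"CPU ~{cpu:.1f} INT8 TOPS, GPU ~{gpu:.1f} INT8 TOPS")
```

Even with generous CPU assumptions, the hypothetical GPU comes out at roughly 18x the peak int8 throughput of the CPU, before price per node even enters the picture.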

To be frank about the example you provided: if someone buys a whole bunch of those eye-wateringly expensive AMD Instinct cards but ends up using DLBoost on CPUs to accelerate their ML workloads, they are severely out of touch with whatever they were supposed to accomplish.

I can't find a single instance where these CPUs would make sense over any GPU solution as far as ML is concerned; there just isn't one. It's a feature stuck in no man's land. Intel has GPUs coming, so why they insist on these solutions that are clearly not up to the task is beyond me.
 
It's good to see the competition is finally working.

Any new motherboards for this release?
Yes, ASUS, MSI, Gigabyte etc. will launch new motherboards, but existing ones will be fully compatible with a BIOS update.

Perhaps you're mixing it with the upcoming Comet Lake-S?
Cascade Lake-SP/X have always had a mesh interconnect.
 

That, and they have free classes and samples on setting it up, and a community and tech support behind it.
 
Assuming the prices are right, AMD's going to have to up the ante.
 