Intel's decision not to fully support its own platform looks like a pure marketing tactic. My guess is that the aim is to push sales of Xeon processors, which also lets them attach a stiff price tag to the new desktop processors.
Currently, my budget caps out at $10,000. That sum is barely enough to buy one of those 4th-generation Scalable Xeon CPUs, which is the only part I'm really interested in. That's what I meant when I introduced my parts list: "A modest yet effective AI workstation." What you're saying about workstations is, of course, very true. I'm hoping to be in a position to build a system around a Xeon Platinum 8490H processor in a year or so. $17,000 for the CPU, plus another $10,342.00 for the Nvidia A100 card: now that would be a true workstation. And of course at least another 10 to 20 grand for storage and whatnot... If things go as planned. Fingers crossed...
For now, I will have to work with this humble budget of mine.
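For the curious, here is a rough tally of that aspirational build. The CPU and GPU figures are the ones quoted above; the storage line is just the midpoint of the 10-to-20 grand range mentioned, so treat it as a placeholder:

```python
# Rough cost tally for the aspirational workstation build.
# CPU and GPU prices are the figures quoted in the post; the
# storage/misc line is an assumed midpoint of the $10k-$20k range.
parts = {
    "Xeon Platinum 8490H CPU": 17_000.00,
    "NVIDIA A100 GPU": 10_342.00,
    "Storage & misc (assumed midpoint)": 15_000.00,
}

total = sum(parts.values())
for name, price in parts.items():
    print(f"{name:40s} ${price:>10,.2f}")
print(f"{'Total':40s} ${total:>10,.2f}")  # Total: $42,342.00
```

So even before a case, PSU, RAM, and cooling, the dream build sits north of $42k, which puts the $10,000 budget in perspective.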
This is a quote from PCGuide's "best CPUs for deep learning" article. First, after paying homage to where it belongs: "For demanding professional workloads, it is advisable to opt for AMD Threadripper or Intel Xeon W CPUs."
"The Intel Core i9-13900KS is widely regarded as one of the best CPUs for deep learning. Its processing power is so impressive that it can even rival AMD Threadripper CPUs, making it unnecessary to opt for one of those. One of the most significant advantages of the 13900KS is its 20 PCIe express lanes, which can increase even further with a Z690/Z790 motherboard. This is crucial since many deep learning tasks rely on the GPU, and the extra lanes provide more power for GPU acceleration. Its exceptional processing power, compatibility with deep learning libraries, and additional PCIe express lanes make it an ideal choice for deep learning tasks."
I am also considering getting a mediocre rig for the time being, working with cloud GPUs and CPUs, and keeping an eye out for a (4x48) CPU.