Wednesday, October 17th 2012

AMD FX "Vishera" Processors Launch Pricing Surprises

According to new reports, AMD's next-generation FX "Vishera" processors, built on its "Piledriver" architecture, could surprise buyers with their pricing, much as the A-Series "Trinity" did. AMD's FX "Vishera" socket AM3+ processors will not exceed US $200, according to the report. The launch pricing (along with specifications compiled from older reports) reportedly looks like this:
  • AMD FX-8350 - $199: 8 cores, 4.00~4.20 GHz with TurboCore, 16 MB total cache
  • AMD FX-8320 - $175: 8 cores, 3.50~4.00 GHz with TurboCore, 16 MB total cache
  • AMD FX-6300 - $135: 6 cores, 3.50~4.10 GHz with TurboCore, 14 MB total cache
  • AMD FX-4300 - $125: 4 cores, 3.80~4.00 GHz with TurboCore, 8 MB total cache
Source: VR-Zone

41 Comments on AMD FX "Vishera" Processors Launch Pricing Surprises

#26
EarthDog
geon: Guys, please stop it with SuperPi, as it has no practical use.

The most demanding work for my CPU is HandBrake and, of course, games, although most games hardly use more than two cores, so they are not that demanding on the CPU.

I think these CPUs will be pretty nice, if priced appropriately.
Isn't SuperPi CPU math intensive?

And no disrespect, but are you reading the thread? The pricing is listed.
#29
Dent1
seronx: If that was true the FX-8350 would be launching under $100.
$200 / 2 = $100
www.techpowerup.com/reviews/AMD/FX8150/images/superpi.gif
Lest we forget an industry standard benchmark. :roll:
(PhysX 2.x = x87 on CPUs)
Higher leakage means the nodes are inefficient, so more power is wasted and converted to heat. Higher voltages mean more heat and lower voltages mean less heat. With that in mind, having a high-leakage part run at a high voltage is counter-productive.
SuperPi is single-threaded. Post wPrime and let's see what happens :)
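
As a rough aside on the leakage point quoted above: the usual first-order CMOS power model captures why high leakage plus high voltage hurts twice. This is a generic textbook relation with purely illustrative numbers, not figures for any specific FX or Intel part:

P_{\text{total}} = \underbrace{\alpha\, C\, V^{2} f}_{\text{dynamic (switching)}} + \underbrace{V \cdot I_{\text{leak}}}_{\text{static (leakage)}}

Raising the core voltage from, say, 1.20 V to 1.35 V scales the dynamic term by (1.35/1.20)^2 \approx 1.27 before any frequency increase, while the leakage term grows at least linearly in V (faster in practice, since I_{\text{leak}} itself rises with voltage and temperature). A high-leakage part run at high voltage therefore loses on both terms.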
#30
Pandora's Box
Hmm, interesting. Can't wait to see performance numbers. Feeling that itch to upgrade my CPU...
#31
fullinfusion
Vanguard Beta Tester
Why so cheap is what I'm asking! Could it be another flop of a CPU, and AMD is just blowing these out the door before their rumoured closing? :rolleyes:
#32
Irony
Maybe they're hoping that if it's cheap enough they'll sell a few million and make some money.
#33
Konceptz
FX-8350 here I come....FX-6300 for my lil bro
#34
sergionography
Yeah, the price is about right. While the 8350 is most likely superior to the 3570K in multithreaded work, in single-thread it won't keep up. Trinity didn't have a problem against the i3s because they are clocked lower than the i5s/i7s, so Trinity could scale to compete, but Vishera has hit a dead end in terms of frequency.
But with that stock clock speed, multithreaded performance should be close to, if not better than, the 3770K.
#35
CAT-THE-FIFTH
EarthDog: Isn't SuperPi CPU math intensive?

And no disrespect, but are you reading the thread? The pricing is listed.
Bulldozer lacks hardware to decode x87 instructions unlike AMD's previous cores, so it is emulated in software. TBH, only SuperPi and PhysX use it. PhysX does not run well even on Intel CPUs when compared to running it on the graphics card, and it seems Nvidia sort of did it on purpose:

techreport.com/news/19216/physx-hobbled-on-the-cpu-by-x87-code

It is basically of no use to anyone.


Now, David Kanter at RealWorld Technologies has added a new twist to the story by analyzing the execution of several PhysX games using Intel's VTune profiling tool. Kanter discovered that when GPU acceleration is disabled and PhysX calculations are being handled by the CPU, the vast majority of the code being executed uses x87 floating-point math instructions rather than SSE. Here's Kanter's summation of the problem with that fact:

x87 has been deprecated for many years now, with Intel and AMD recommending the much faster SSE instructions for the last 5 years. On modern CPUs, code using SSE instructions can easily run 1.5-2X faster than similar code using x87. By using x87, PhysX diminishes the performance of CPUs, calling into question the real benefits of PhysX on a GPU.

Kanter notes that there's no technical reason not to use SSE on the PC—no need for additional mathematical precision, no justifiable requirement for x87 backward compatibility among remotely modern CPUs, no apparent technical barrier whatsoever. In fact, as he points out, Nvidia has PhysX layers that run on game consoles using the PowerPC's AltiVec instructions, which are very similar to SSE. Kanter even expects using SSE would ease development: "In the case of PhysX on the CPU, there are no significant extra costs (and frankly supporting SSE is easier than x87 anyway)."

So even single-threaded PhysX code could be roughly twice as fast as it is with very little extra effort.

Between the lack of multithreading and the predominance of x87 instructions, the PC version of Nvidia's PhysX middleware would seem to be, at best, extremely poorly optimized, and at worst, made slow through willful neglect. Nvidia, of course, is free to engage in such neglect, but there are consequences to be paid for doing so. Here's how Kanter sums it up:

The bottom line is that Nvidia is free to hobble PhysX on the CPU by using single threaded x87 code if they wish. That choice, however, does not benefit developers or consumers though, and casts substantial doubts on the purported performance advantages of running PhysX on a GPU, rather than a CPU.

Indeed. The PhysX logo is intended as a selling point for games taking full advantage of Nvidia hardware, but it now may take on a stronger meaning: intentionally slow on everything else.
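
To make the x87-versus-SSE gap the quoted article describes concrete, here is a minimal C sketch. It is not taken from the article or from the PhysX SDK; the file name and function names are hypothetical. It shows the same dot product written as plain scalar code, which an old 32-bit build without SSE flags lowers to x87 instructions, and as SSE intrinsics, which process four floats per instruction:

/* sse_vs_x87.c - illustrative sketch only, not PhysX code. */
#include <xmmintrin.h>  /* SSE intrinsics */
#include <stdio.h>

/* Scalar version: a 32-bit build without -mfpmath=sse lowers this to x87. */
float dot_scalar(const float *a, const float *b, int n)
{
    float sum = 0.0f;
    for (int i = 0; i < n; i++)
        sum += a[i] * b[i];
    return sum;
}

/* SSE version: four multiply-adds per iteration (assumes n is a multiple of 4). */
float dot_sse(const float *a, const float *b, int n)
{
    __m128 acc = _mm_setzero_ps();
    for (int i = 0; i < n; i += 4)
        acc = _mm_add_ps(acc, _mm_mul_ps(_mm_loadu_ps(a + i),
                                         _mm_loadu_ps(b + i)));
    float tmp[4];                 /* horizontal sum of the four partial results */
    _mm_storeu_ps(tmp, acc);
    return tmp[0] + tmp[1] + tmp[2] + tmp[3];
}

int main(void)
{
    float a[8] = {1, 2, 3, 4, 5, 6, 7, 8};
    float b[8] = {8, 7, 6, 5, 4, 3, 2, 1};
    printf("scalar: %f  sse: %f\n", dot_scalar(a, b, 8), dot_sse(a, b, 8));
    return 0;
}

With GCC, a 32-bit build keeps dot_scalar on x87 unless -mfpmath=sse is given; the intrinsics in dot_sse need -msse in any case, and a 64-bit build uses SSE for scalar floating point by default. Switching a code base over in that way is essentially the change later PhysX SDK builds made.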
#36
sergionography
CAT-THE-FIFTH: Bulldozer lacks hardware to decode x87 instructions unlike AMD's previous cores, so it is emulated in software. TBH, only SuperPi and PhysX use it. [...]
You know, I think at this point we should start ignoring those who bring up SuperPi, as it comes up in every other thread. Thanks for the whole insight and all, though.
#37
Super XP
No company should support Nvidia's PhysX. This would force them to work with other companies such as Intel and AMD to develop a standardized physics API, instead of this Nvidia PhysX nonsense. I will say this once again: NVIDIA's downfall will be brought about by their egotistical arrogance.
#38
mastrdrver
seronx: Higher leakage means the nodes are inefficient, so more power is wasted and converted to heat. Higher voltages mean more heat and lower voltages mean less heat. With that in mind, having a high-leakage part run at a high voltage is counter-productive.
Intel processors are made on a bulk process, while AMD's are made on SOI. This is why you get different voltages.
seronx: Sorry, it was me :roll:.

x87 is still being used today because software is a decade behind hardware.
Please list a program that uses x87 that is not SuperPi. I know PhysX is x87, but other than that...
#39
HumanSmoke
CAT-THE-FIFTH: Now, David Kanter at RealWorld Technologies has added a new twist to the story by analyzing the execution of several PhysX games using Intel's VTune profiling tool... TL;DC
"Now"? You get your tech updates via pony express?
DK's analysis in mid-2010 was based on SDK 2.8.3. PhysX has been compiled using SSE2 instructions since 2.8.4 (Oct 2010, I think, from memory). 3.0 introduced both multithreading and SSE as defaults.
You should be able to find enough documentation in the release notes, or game dev forums if you're against DL'ing PDFs.
3.2.1, I think, is the current build. Apart from that, it's down to the individual developer which features are used; if older builds/x87 are used, it is usually to retain compatibility with older games/engines or consoles.

Not quite sure how Bulldozer's lack of x87 ISA is overly relevant to an outdated PhysX SDK, but yeah, ok.
Super XP: NVIDIA's downfall will be brought about by their egotistical arrogance.
If you're getting buried, egotistical arrogance is as good a reason as any, and probably a lot better than fiscal naivety, lack of strategic foresight, and a lack of clear corporate goals. Might be better to be seen as arrogant than, say... incompetent.
#40
Hits9Nine
Sorry for the off-topic question, but is this site affiliated with overclockers.com?
#41
brandonwh64
Addicted to Bacon and StarCrunches!!!
Hits9Nine: Sorry for the off-topic question, but is this site affiliated with overclockers.com?
Nope