Discussion in 'News' started by btarunr, Oct 18, 2012.
I know, you could trickle it down to me!
Isn't Super Pi CPU math intensive?
And no disrespect, but are you reading the thread? The pricing is listed.
That's with 2 cores. Not bad at all, but high CPU volts.
SuperPi is single-threaded. Post wPrime and let's see what happens.
Hmm, interesting. Can't wait to see performance numbers. Feeling that itch to upgrade my CPU...
Why so cheap is what I'm asking! Could it be another flop of a CPU, with AMD just blowing these out the door before their rumoured closing?
Maybe they're hoping that if it's cheap enough they'll sell a few million and make some money.
FX-8350 here I come....FX-6300 for my lil bro
Yeah, the price is about right. While the 8350 is most likely superior to the 3570K in multithreaded work, in single-thread it won't keep up. Trinity didn't have a problem with the i3s because they're clocked lower than the i5s/i7s, so Trinity could scale up to compete, but with Vishera it's hit a dead end in terms of frequency.
But with that stock clock speed, multithreaded performance should be close to, if not better than, the 3770K's.
Bulldozer lacks the hardware to decode x87 instructions, unlike AMD's previous cores, so x87 is emulated in software. TBH, only SuperPi and PhysX use it. PhysX doesn't run well even on Intel CPUs compared to running it on the graphics card, and it seems Nvidia sort of did that on purpose:
It is basically of no use to anyone.
Now, David Kanter at RealWorld Technologies has added a new twist to the story by analyzing the execution of several PhysX games using Intel's VTune profiling tool. Kanter discovered that when GPU acceleration is disabled and PhysX calculations are being handled by the CPU, the vast majority of the code being executed uses x87 floating-point math instructions rather than SSE. Here's Kanter's summation of the problem with that fact:
x87 has been deprecated for many years now, with Intel and AMD recommending the much faster SSE instructions for the last 5 years. On modern CPUs, code using SSE instructions can easily run 1.5-2X faster than similar code using x87. By using x87, PhysX diminishes the performance of CPUs, calling into question the real benefits of PhysX on a GPU.
Kanter notes that there's no technical reason not to use SSE on the PC—no need for additional mathematical precision, no justifiable requirement for x87 backward compatibility among remotely modern CPUs, no apparent technical barrier whatsoever. In fact, as he points out, Nvidia has PhysX layers that run on game consoles using the PowerPC's AltiVec instructions, which are very similar to SSE. Kanter even expects using SSE would ease development: "In the case of PhysX on the CPU, there are no significant extra costs (and frankly supporting SSE is easier than x87 anyway)."
So even single-threaded PhysX code could be roughly twice as fast as it is with very little extra effort.
Between the lack of multithreading and the predominance of x87 instructions, the PC version of Nvidia's PhysX middleware would seem to be, at best, extremely poorly optimized, and at worst, made slow through willful neglect. Nvidia, of course, is free to engage in such neglect, but there are consequences to be paid for doing so. Here's how Kanter sums it up:
The bottom line is that Nvidia is free to hobble PhysX on the CPU by using single threaded x87 code if they wish. That choice, however, does not benefit developers or consumers though, and casts substantial doubts on the purported performance advantages of running PhysX on a GPU, rather than a CPU.
Indeed. The PhysX logo is intended as a selling point for games taking full advantage of Nvidia hardware, but it now may take on a stronger meaning: intentionally slow on everything else.
You know, I think at this point we should start ignoring those who bring up SuperPi, as it comes up in every other thread. Though thanks for the whole insight and all.
No company should support Nvidia's PhysX. That would force Nvidia to work with other companies such as Intel and AMD to develop a standardized physics API, instead of this proprietary PhysX nonsense. I will say this once again: Nvidia's downfall will be brought about by their egotistical arrogance.
Intel processors are made on a bulk process; AMD's are made with FD-SOI. This is why you get different voltages.
Please list a program that uses x87 that is not SuperPi. I know PhysX is x87, but other than that...
"Now" ? You get your tech updates via pony express ?
DK's analysis in mid-2010 was based on SDK 2.8.3. PhysX has been compiled with SSE2 instructions since 2.8.4 (October 2010, I think, from memory). 3.0 introduced both multithreading and SSE as defaults.
You should be able to find enough documentation in the release notes, or on game dev forums if you're against downloading PDFs.
3.2.1 is the current build, I think. Apart from that, it's down to the individual developer which features are used; if older builds/x87 are used, it's usually to retain compatibility with older games/engines or consoles.
Not quite sure how Bulldozer's lack of x87 hardware is overly relevant to an outdated PhysX SDK, but yeah, OK.
If you're getting buried, egotistical arrogance is as good a reason as any, and probably a lot better than fiscal naivety, lack of strategic foresight, or a lack of clear corporate goals. Might be better to be seen as arrogant than, say... incompetent.
Sorry for the off-topic question, but is this site affiliated with overclockers.com?