
AMD FX-8130P Processor Benchmarks Surface

I don't know why you are so hung up on the IMC not being dedicated to every single core. What do you say to this: if every core had its own IMC, then in a 4-core CPU each core would have just a 32-bit bus. With a shared IMC, when not all cores are active, one core can use the full 128-bit width instead of just 1/4 of it. And it's impossible to give every single core in a 4-core CPU its own 128-bit path; that would mean a 512-bit memory interface. Look at SB-E: it has just a 256-bit memory interface, and they had to place memory slots on both sides of the socket just so it wouldn't be too complicated or expensive to manufacture.
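The bus-width arithmetic above can be sanity-checked with a quick sketch (the 128-bit and 256-bit figures come from the post itself, not from a datasheet):

```python
# Toy arithmetic for the shared-IMC argument above.
# A shared 128-bit IMC lets one active core use the full width;
# dedicating an IMC per core would split that same width four ways,
# and giving every core its own 128-bit path would need 512 bits total.

CORES = 4
SHARED_WIDTH = 128  # bits, shared memory interface

per_core_if_split = SHARED_WIDTH // CORES   # 32 bits per core
total_if_dedicated = CORES * SHARED_WIDTH   # 512 bits of pins/traces

print(per_core_if_split)    # 32
print(total_if_dedicated)   # 512

# For comparison, the post notes SB-E tops out at a 256-bit interface.
assert total_if_dedicated > 256
```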

It's not necessarily that I'm "hung up" on the IMC, nor do I personally believe that every core should have its own IMC. I was simply using it as an example. I think everyone can agree that the Athlon 64 3200+ has a single "core", and that Deneb has four "cores". How many resources were provided per core on the Athlon, that are instead shared on Deneb? Sure, feel free to ask whether or not said resources are actually part of what a CPU really is. However, none of us will have a good answer.

This one is easy: the fetch and decode unit. That's the "thinking" part. I have 2 hands, 2 legs, 2 lungs, but only 1 head, and that makes me 1 person, no matter how many pairs of things I have.

By all means, you're welcome to feel that is a necessary component of a "core". AMD does not; who's right? Who knows...? However, your analogy is somewhat non-applicable, as a "human" is defined as having the form of a human, and human form is defined as consisting of a head, neck, torso, two arms and two legs. I'm not aware of any such listing of components for a CPU.
 
SSE is not magic, and it won't solve every performance issue a CPU has. What matters is how the floating-point execution units work and how many uOPs the decoder can throw at them. Those two together matter more than whether an instruction is of the x87 variety or SSE.

It is magic when I can go from recording and encoding at 25 fps (Phenom II X4) to 62+ fps (FX-8000 ES) just because of more ISA extensions.

If it so happened that AMD ditched x87 and made it slow in favor of SSE, then the SSE versions of SuperPi floating around the net should show a smaller AMD-vs-Intel difference. Is it any lower? Or is the SSE performance of Intel CPUs also "pretty high"? :laugh:

x87 is emulated, not ditched,

and wPrime 1024M tells the story.

Lacking such an obvious extension as SSSE3 in a 2011 AMD CPU is quite troubling, and Bulldozer will fix that, which is good for Intel CPUs too :p

What? Mainly it's good for AMD CPUs:

Magical increase from 50fps to 130fps in encoding performance
2.3x increase in FPU powuh

They are 24 "cores", each composed of a 16-SP-wide SIMD unit. Then each SP on each SIMD has 4 "ALUs".

24 x 16 = 384
384 x 4 = 1536
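Spelled out, the arithmetic above (assuming the 24-SIMD, 16-SP, 4-ALU layout described in the post):

```python
# Stream-processor count from the SIMD layout described above.
simd_units = 24    # "cores" (SIMD engines)
sps_per_simd = 16  # each SIMD is 16 SPs wide
alus_per_sp = 4    # 4 VLIW "ALUs" per SP

sps = simd_units * sps_per_simd  # 384 SPs
alus = sps * alus_per_sp         # 1536 ALUs

print(sps, alus)  # 384 1536
```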

Give me a picture

-------------------------------------------------
1 fetch -> 1 cycle = 4 fetches
2 decodes/stores -> 1 cycle = 2 decodes/stores (loads?) per core
 
By all means, you're welcome to feel that is a necessary component of a "core". AMD does not; who's right? Who knows...? However, your analogy is somewhat non-applicable, as a "human" is defined as having the form of a human, and human form is defined as consisting of a head, neck, torso, two arms and two legs. I'm not aware of any such listing of components for a CPU.

Along with many, many NGOs, the Paralympic Committee wants a word with you...

If you are not aware of any such "listing", maybe it's time to read up a little on computer architectures, don't ya think?

Give me a picture

[Image: arch.jpg]
 
Along with many, many NGOs, the Paralympic Committee wants a word with you...

If you are not aware of any such "listing", maybe it's time to read up a little on computer architectures, don't ya think?

I did read up, thanks. First rule of the internet:

1. Don't annoy someone who has more spare time than you do.

I was simply wondering, because with GCN AMD is going in the opposite direction rather than just calling out the ALUs on a GPU.
 
Along with many, many NGOs, the Paralympic Committee wants a word with you...

If you are not aware of any such "listing", maybe it's time to read up a little on computer architectures, don't ya think?

Haha, blame Wikipedia's definition. And I understand exactly how important a fetch and decode unit is to a processor (and I'm sure AMD does too). It's got to get an instruction from memory (or cache if available), and then decode it to understand how to execute the given instruction. I can also see how it could be shared with only one other "core" with minimal impact to performance, and I bet AMD has a far better picture than I do.

Also, most people expect a "core" to have an FPU and a branch predictor as well, but the 486 didn't have a branch predictor, and the 386 had neither. I don't consider those "zero-core" processors, do you?
 
Wrong,

The first rule of the internet is

1. Don't annoy someone who has more spare time than you do.

AMD hasn't inflated any numbers

They have only said CMT is a more efficient way of achieving what SMT tries to achieve.

SMT = more threads on the die without increasing the number of cores

CMT = more cores on the die, with a 50% die increase per module over a Phenom II core

4 x 150 = 600%
6 x 100 = 600%

So Bulldozer is about the same die size as Thuban while achieving relatively the same per-core performance and having more cores.
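As a toy calculation of the die-budget claim above (the 50%-per-module figure is the post's claim, not a measured number):

```python
# The die-budget comparison above, as a toy calculation.
# CMT: a 2-core Bulldozer module is claimed to cost ~50% more die
# area than a single Phenom II core, so 4 modules (8 cores) land at
# roughly the same area as 6 plain Thuban cores.

PHENOM_CORE_AREA = 100  # arbitrary units: one Phenom II core
MODULE_AREA = 150       # one 2-core module, +50% per the post's claim

modules = 4
thuban_cores = 6

bulldozer_die = modules * MODULE_AREA          # 600 units -> 8 cores
thuban_die = thuban_cores * PHENOM_CORE_AREA   # 600 units -> 6 cores

print(bulldozer_die, thuban_die)  # 600 600
```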

*clap* Way to ignore the question. You can't answer it or prove your statement; quit while you're ahead.
 
*goes away to get more popcorn*

Ooh, don't forget the Pepsi!

TPU should do an "IN YOUR FACE" page for everyone overdosing on fanboyism and making near Star Trek-level predictions of technological advancement. On launch day, your avatar and your quoted prediction on one side and reality on the other... would be a blast, and besides, we all need to learn to laugh at ourselves.
 
Why do you rely so heavily on game results/benchmarks to determine your chosen platform?

Because that's what I use my computers for?


:laugh:

There are games that favor AMD's architecture as well.


Name one. I have both platforms, and I'll test. Chances are I already own the game, and if I don't, I'll buy it.


Keep in mind, I'm an Eyefinity user, as I've stated before. This means I have specific performance requirements, and those requirements may not be the same as they are for others.

Also, I'm not focused on SuperPi numbers. It was merely a single example. That should have been obvious, as it's one app that is heavily memory-dependent.
 
Of course I did. Keep in mind, as a reviewer, I do not EVER post MAXIMUM FPS. I've been doing the benchmarking thing for nearly 10 years now, and I am not blinded by silly metrics that mean nothing.


In fact, you can find my posts on various websites about just that very subject alone.

I understand you may not have read those posts, so you may not understand my opinion fully, but it has been formed through literal DECADES as a PC gamer, and that aspect of my life has gone so far that I'm now doing hardware reviews, not for cash, but because my gaming needs are still not met by the hardware on the market.

When 30-inch monitors came out, I bought one. I struggled for many years to assemble a system that could actually play games at its native resolution. That's still a problem for some games, and now I'm running triple monitors...

This journey is literally what had me making these performance comparisons... and I found through the years that many things didn't make sense in reviews. Sometimes that's because I didn't understand something, or the reviewer was wrong, but today I'm the reviewer, and because of those past experiences, you can rest assured that any performance comparisons I make are not only apt but very much fair.

Heck, I have one of nearly every CPU sitting here on my desk, and boards to match. If you check the "Easy Rhino Minecraft Server" thread, you'll see my last 775 hardware. I know where the problems are, because I have all the hardware here to play with.

Anyway, keep in mind that the demands I have of manufacturers are, of course, not going to be the same as others', but for me, memory performance (every aspect of it, from caches to system memory) is very important.

Heck, I'm the one excited by IOMMUs.
 
*clap* Way to ignore the question. You can't answer it or prove your statement; quit while you're ahead.

I did answer the question

10% to 30% increase in performance

3.2GHz @ 185W TDP to 3.8GHz @ 125W TDP

Time will tell if I am wrong, but I have a substantial amount of private information (you can google most of it) that tells me I am right.

:pimp:

Heck, I'm the one excited by IOMMUs.

Especially since that IOMMU tech will help fuse Fermi/Kepler/GCN together with the Zambezi processor.

[Image: IOMMU.jpg]


Anandtech said:
Now what’s interesting is that the unified address space that will be used is the x86-64 address space. All instructions sent to a GCN GPU will be relative to the x86-64 address space, at which point the GPU will be responsible for doing address translation to local memory addresses. In fact GCN will even be incorporating an I/O Memory Mapping Unit (IOMMU) to provide this functionality; previously we’ve only seen IOMMUs used for sharing peripherals in a virtual machine environment. GCN will even be able to page fault half-way gracefully by properly stalling until the memory fetch completes. How this will work with the OS remains to be seen though, as the OS needs to be able to address the IOMMU. GCN may not be fully exploitable under Windows 7.

So far only GCN will use this, but I am pretty sure there will be tweaks that will allow Kepler to use it (or Fermi++, if there are going to be rebrands).

I don't understand exactly what it does, but I know it will help GPU/CPU communication, especially in workloads like what id Tech's megatextures do.
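As a rough mental model of what the Anandtech quote describes (the GPU issuing x86-64 virtual addresses and an IOMMU translating them to local memory, stalling on a page fault until the mapping exists), here is a toy sketch; all class and method names are made up for illustration:

```python
# Toy model of the behaviour described in the Anandtech quote:
# the GPU works in the x86-64 virtual address space, and an IOMMU
# translates those addresses to local memory, handling a miss
# ("page fault") by mapping the page on demand before answering.

PAGE = 4096  # page size in bytes

class ToyIOMMU:
    def __init__(self):
        self.page_table = {}  # virtual page number -> local page number
        self.next_local = 0   # next free local page

    def translate(self, vaddr):
        vpn = vaddr // PAGE
        if vpn not in self.page_table:
            # "page fault": stall, map the page, then continue
            self.page_table[vpn] = self.next_local
            self.next_local += 1
        return self.page_table[vpn] * PAGE + vaddr % PAGE

iommu = ToyIOMMU()
a = iommu.translate(0x7F0000001234)  # faults, maps, then translates
b = iommu.translate(0x7F0000001238)  # same page: no fault, offset +4
print(b - a)  # 4
```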
 
Of course, I said that this was the case for IOMMU long before AMD even really talked about it ;) My crystal ball is quite clear on that one.
 
I did answer the question

10% to 30% increase in performance

3.2GHz @ 185W TDP to 3.8GHz @ 125W TDP

Time will tell if I am wrong, but I have a substantial amount of private information (you can google most of it) that tells me I am right.

:pimp:

Clearly you did not. You answered your own question, perhaps, but not mine. I asked for sources, and you said "google" and "private information", neither of which is an answer. However, you seem to understand my point nonetheless, which is "time will tell", rather than stating opinion and hearsay as fact.
 
Clearly you did not. You answered your own question, perhaps, but not mine. I asked for sources, and you said "google" and "private information", neither of which is an answer. However, you seem to understand my point nonetheless, which is "time will tell", rather than stating opinion and hearsay as fact.

Get your own sources; my sources are mine.

http://www.cpu-world.com/CPUs/Bulldozer/AMD-FX-Series FX-8130P.html
http://www.cpu-world.com/CPUs/Bulldozer/AMD-FX-Series FX-8110.html

CPU World uses the same sources I do

[Image: 01290849.jpg]


But the source had leeway with information that isn't in that image, and he only disclosed the FX-8110/8130P clocks.

http://news.softpedia.com/news/AMD-Bulldozer-CPUs-Clock-Speeds-Leaked-201753.shtml

http://diybbs.zol.com.cn/10/11_99101.html

There is another source, but the main idea is that Zambezi is a high-clock CPU with enormous overclocking headroom, due to its components being divided more evenly (heat dissipates faster because of that).
 
You guys are still going? I've gone home from work, ate dinner, played with my daughter, put her to bed, done some reading with my wife, and played some Starcraft II. And you haven't made any progress. :laugh:
 
You guys are still going? I've gone home from work, ate dinner, played with my daughter, put her to bed, done some reading with my wife, and played some Starcraft II. And you haven't made any progress. :laugh:

We made 9 pages of progress

:respect: Gloating progress
 
No, since I left work 5 and a half hours ago, you've made a page and a half of... well, you're still saying 10-30% and avoiding other questions at all costs. You made a page and a half of "my head hurts." ;)
 
I got my hair cut, bleached, looked at some fine ass girls, then the wife got the kids and now I am going to drink some scotch and watch some TV before we get down to fucking.
 
No, since I left work 5 and a half hours ago, you've made a page and a half of... well, you're still saying 10-30% and avoiding other questions at all costs. You made a page and a half of "my head hurts." ;)

[Image: tumblr_ldgxfv3Y3L1qd2nif.bmp]


I am always within 10% to 30% going from Engineering Sample to Retail Sample.

[Image: 43h.jpg]


The memory subsystem is usable but still has some serious flaws; once those flaws are fixed is where the 10% to 30% comes from.

L1 Read is the only number that is correct in this.
 
Can someone please explain to me: back in the day of the old Athlon vs. P4 (Socket 939 etc.), even though the Athlon decimated Intel's P4, why did Intel still win the SuperPi score? Because as far as I know, SuperPi has ALWAYS favored Intel's CPUs, regardless of AMD's performance over Intel back then.
 
Can someone please explain to me: back in the day of the old Athlon vs. P4 (Socket 939 etc.), even though the Athlon decimated Intel's P4, why did Intel still win the SuperPi score? Because as far as I know, SuperPi has ALWAYS favored Intel's CPUs, regardless of AMD's performance over Intel back then.

I think I know the answer

SuperPi is x87, right?

AMD Zambezi:

1 x SSE2 (FMAC emulated) -> 1 x x87 -> 1 x x87 80-bit

AMD Phenom II does it this way:

1 x x87 -> 1 x x87 80-bit

While

Intel, since Conroe, bypassed the conversion, and since then it has only gotten faster architecturally (clock speed and Hyper-Threading support):

1 x x87 80-bit up front

Intel is still supporting x87 simply for the benchmarks.

SuperPi performance =/= system performance.

Why AMD does it that way might be because it is no longer useful to support the x87 platform; too many resources to spend on a dead architecture.
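One side of the x87-vs-SSE2 distinction above is precision: x87 registers carry a 64-bit mantissa, versus the 53 bits of an SSE2 double. A minimal Python illustration (Python floats are 64-bit doubles, so the wider format is emulated here with exact fractions):

```python
# Why the x87 (80-bit, 64-bit mantissa) vs SSE2 (64-bit double,
# 53-bit mantissa) distinction matters: extended precision keeps
# bits that a plain double rounds away.

from fractions import Fraction

x = 1.0
tiny = 2.0 ** -60  # well below double precision's ulp of 2^-52 at 1.0

as_double = x + tiny  # rounded to a 53-bit mantissa: exactly 1.0 again
as_exact = Fraction(1) + Fraction(2) ** -60  # what 80-bit x87 could keep

print(as_double == 1.0)  # True: the tiny term is lost in a double
print(as_exact > 1)      # True: extended precision retains it
```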
 
Probably reading the BIOS time, which is clearly wrong, as 3DMark11 did not exist then.
 