Discussion in 'News' started by btarunr, Jan 20, 2012.
That's what they were derived from, of course.
The architecture was built with multi-threading in mind, yeah, but Interlagos is actually two 4-module BD dies on the same package. It's basically 2x 8150 clocked lower with a lower turbo, but think of the server power of just two of these 16-core chips: 32 cores for the price of 16 Xeon threads. I have a feeling that Intel's cheapest 8-core offering won't keep up with two decent Interlagos chips for server or heavily multithreaded workloads. At least AMD is doing something right.
It just didn't translate to the desktop market. Piledriver should fix some things, and the one after Piledriver should be what Bulldozer should have truly been.
I am very pleased with the FX-6100, speaking as an avid overclocker. I purchased the chip that would give me the most performance potential for video editing/converting and gaming on air, along with longevity on the platform, i.e., keep the same mobo and upgrade RAM/HDD/GPU/CPU/PSU individually as tech improves. I really considered the 81xx, yet I believe the 6100 overclocks higher, making the per-thread performance better and the overall performance difference marginal. Plus the price was right. Water-cooling gains didn't impress me over a good air heatsink, considering the cost and durability. I do plan to upgrade the 6100 when AMD puts out a CPU with 2x the power for $200ish. I couldn't rationalize spending about $1500, when this rig was $850, to go Intel and get the i7-3820, which to me was the next reasonable performance improvement along with an upgradable platform. If there was a good chance 1155 was going to see twice the performance of the 2500K (for $200ish) and was, say, $50 less, I would have gone with Intel. This all sounds really good in my head, comments?
1) By the time AMD has a CPU core design that doubles Bulldozer/Phenom II performance, your motherboard will be obsolete and useless.
2) The i7-3820 is socket 2011, not 1155; it is comparable to the i7-2600K/2700K (LGA 1155) in performance. If you wanted to wait for much better performance in an upgrade, you were better off spending some more money on an LGA 2011 board and an i7-3820, then waiting for Ivy Bridge Extreme chips to be released 10-11 months from now.
3) As of now, Piledriver is the best you can hope for on AM3+, and it will not be a massive performance improvement, maybe +25% at equivalent power consumption (at most). There is no guarantee that Steamroller will be released as a discrete non-IGP CPU for AM3+; AMD has not released its roadmap beyond Piledriver on AM3+. Piledriver may be the last discrete consumer CPU manufactured by AMD. After that it might only be APUs, and I think it'll be a while before the process node is small enough to fit 3 CPU modules onto an APU, so it'll be quad-core, unless the Fusion thing gives a huge overall performance boost.
Need to see the specs on your head
Seriously... you bought what makes you happy. That's all that matters.
AMD's future chips have real potential to double my FX-6100 on the AM3+ platform. I know AMD cancelled the 10-core Piledriver for release this year, but AM3+ is PD's socket. Say PD's IPC goes up 20% (without reducing overclock speed), RCM gives 15% more overclock headroom, and I buy an eight-core rather than a six. With the added heat and watts of two more cores I lose 10% overclock speed; I'm still looking at a ~50% increase available in a few months. Now give them a couple years. It would be a bummer if AMD changed how they advance sockets. I doubt they designed PD for AM3+, instead of only FM2, without a plan for AM4. Putting an AM4 CPU in my rig someday is partly why I stayed with AMD. FMx for on-die graphics and AMx for no graphics? Send me an FX-8370 or an FX-10370 and we'll know for sure!
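Multiplying out those speculative factors gives a rough upper bound. A hedged sketch; every number here is a guess from the post, not a measurement:

```python
# Compound-speedup sketch -- all factors are speculative estimates
# from the post above, not measured data.
ipc_gain = 1.20         # assumed Piledriver IPC uplift
clock_gain = 1.15       # assumed extra overclock headroom from RCM
thermal_penalty = 0.90  # assumed clock loss from two extra cores
core_scaling = 8 / 6    # six-core -> eight-core, perfect scaling assumed

per_core = ipc_gain * clock_gain * thermal_penalty
threaded = per_core * core_scaling
print(f"per-core: {per_core:.2f}x, fully threaded: {threaded:.2f}x")
```

With perfect 8/6 core scaling the product comes out near +66%; a ~50% overall figure corresponds to assuming multithreaded scaling well short of perfect.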
Also, for converting my 1080 60i video and Blu-ray authoring, my rig is great. I put the source file on an HDD and the output file on an SSD, and CPU load runs 60-90%. A minute per minute of video is the slowest conversion I've seen, and 10-to-1 for HD to SD. I just had a baby girl, so lots of videos! She wakes up a lot at night, so gaming keeps me awake for the first shift of feedings.
I am well aware of the Intel sockets... Think outside the Intel box, man. 1155 has little chance to double the 2500K's performance (for a $200ish CPU), but 2011 currently does and will hopefully go far beyond that. I want to be able to double my performance with a drop-in CPU upgrade someday for $200. Basically, 2011 is too expensive and 1155 may be maxed out with Ivy Bridge, so I didn't go with Intel. In 2-3 years, I'm hoping, AMD has a $200ish CPU that will double the FX-6100. RCM, more cores (I still think a 10-core AM4 is the direction) and, yes, I surrender, higher IPC set the stage.
As far as heat goes on my FX-6100, Prime95 for 3 minutes is the only thing, thus far, that puts it above 60°C at 4.7 GHz. It runs warm at high clock speeds on air, but not super hot. I've had it to 74°C before crashing with Prime95 @ 5.2 GHz. I can get errors above 4.7 GHz in Prime95, but no overheating until 5.2 GHz or heavy voltage.
my head specs are as follows:
4 lobe graymatter (low IPC)
hyperthreading left/right brain
35 solar passes of memory
turbo caffeine 2x (i like espresso)
opsys: UN of WIS
Don't lie! Everybody knows that you're really a robot.
Beep, Beep, buzz, ERRR, 00000011, 11000001 BSOD?!?!?!
I don't think it's about speed anymore; it's all about efficiency and power. Today's Bulldozer has power but slightly lacks in overall efficiency. Tomorrow's Piledriver, hopefully, will come with both.
For me price/performance matters, which is why I have an FX-8120 OC'ed to 4.40 GHz with 8 cores. Looking at the numbers, I would call that an FX-8190 or something.
Don't get me wrong, Bulldozer is a complex piece of work, something AMD's past CEO dreamed up one night after a few beers.
Good on AMD, because now they've somewhat developed a modular based design that "WILL" only get better and better with time.
Right on, Super XP. Under full load FX is power hungry: up to 175 W more when overclocked to 4.6 GHz vs. a 2600K @ 4.6 GHz, full-system draw.
My settings, with C1E and core parking on, clock me down from 4.7 to 1.5 GHz and park 5 of 6 cores when they're not being used. Still, it's obvious that any CPU usage takes more watts on FX than on Sandy Bridge. For me it's not a huge concern; I maybe use an extra 10-20 cents a week (1-2 kWh).
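That pennies-per-week figure checks out on the back of an envelope. The extra draw, hours at load, and electricity rate below are all assumptions, not measurements:

```python
# Rough weekly cost of extra power draw -- every input is an assumption.
extra_watts = 100      # assumed average extra draw vs. a Sandy Bridge rig
hours_per_week = 15    # assumed weekly hours at load
usd_per_kwh = 0.10     # assumed electricity rate

kwh = extra_watts * hours_per_week / 1000  # watt-hours -> kWh
cost = kwh * usd_per_kwh
print(f"{kwh:.1f} kWh/week -> ${cost:.2f}/week")
```

With those inputs it lands at 1.5 kWh and 15 cents a week, right in the 10-20 cent range from the post.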
On a global scale it counts. Well, unless you consider all the wasted Intel mobos sitting around because sockets change so much; they take power and resources to make too. So an Intel rig may save $10-$30 in power depending on how much it's used, but people who have to upgrade their motherboard along with the CPU waste as well. Can't really say what's better for Mother Earth.
On servers this is a big deal. The power it takes to maintain server CPU loading is as big a concern to the IT industry as gasoline prices are to you and me. I'm not all that up on 32nm server CPU power consumption. Anybody?
I see the Intel X5690 is rated at 139 W while the AMD Opteron 6274 is rated at 115 W, and they have similar PassMark scores. A real-life test would be best for judging which architecture is more power efficient on servers.
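If the PassMark scores really are similar, the efficiency gap reduces to the inverse ratio of the rated TDPs. A quick sketch; the score below is a placeholder, only the TDPs come from the post:

```python
# Perf-per-watt comparison at (assumed) equal benchmark score.
score = 10_000                    # hypothetical equal PassMark score
xeon_tdp, opteron_tdp = 139, 115  # rated watts: X5690 vs. Opteron 6274

xeon_ppw = score / xeon_tdp
opteron_ppw = score / opteron_tdp
advantage = opteron_ppw / xeon_ppw - 1  # equals 139/115 - 1
print(f"Opteron perf/W advantage: {advantage:.0%}")
```

At equal score the Opteron comes out roughly 21% ahead on perf/watt, though as the post says, real-world wall-socket measurements would be more telling than rated TDPs.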
Super XP, read this page on RCM for Piledriver and tell me what you think.
Sorry to interpose, but as interesting as the resonant clock mesh is, I think it's likely not going to translate into a huge performance increase; maybe 5-10% more performance at similar power draw.
Don't get me wrong, I hope Piledriver is good, it would save me money on future upgrades if all I had to do was plop a PD into my board.
It's difficult to say for sure, as there are lots of estimates in the RCM literature. Stock clock speeds are being advertised at 4+ GHz for Piledriver, about 10%+ more than the current 8150. The interesting test result of the Piledriver-based A10-5800K was 30% higher clock speed with 15% less wattage and a better-performing GPU. RCM seems to have made a huge difference in the watts-per-clock ratio, but the comparison was not equal, so it's still really unknown. If Vishera's specs follow the A10-5800K test chip, Piledriver will be clocked higher and have a lower TDP, which is good news for overclockers like myself. I'll wager a guess that the GPU in Trinity uses LESS power than Llano's, so the watts-per-clock improvement won't be as high a ratio for the FX series.
If FX does see 30% higher clocks with 15% less power like the A10-5800K, it would be HUGE for AMD. The FX-8350's performance would surpass the i7-3770 by 5% or so on PassMark but use 25 more watts. Overclocking would be around 5.3 GHz on air and 5.8 GHz on water.
Apparently the Piledriver Trinity chip with RCM is a milestone in a "tock" release. WOW. I just wish the OS and memory specs were the same and the testing was more extensive.
Well, that's nice, and I'd like to see it, but +5% over the 3770 on PassMark is only OK. PassMark does an adequate job of giving you an idea of how a CPU performs maxed out and multithreaded, but that's not helpful except in some specialized applications. An overclocked Phenom II X6 scores fairly high on PassMark and compares favourably to many Intel CPUs, even though it's not actually comparable in applications that utilize one or two cores. PassMark is skewed.
That won't be happening anytime soon, at least not in single thread, unless AMD can release Piledriver at 5 GHz+; then maybe.
I don't know where you guys are getting that Phenom II has 20% lower IPC than SB.
As far as I know, Phenom II has 60-70% the IPC of SB (30-40% slower),
while BD has 90% the IPC of Phenom II/Stars.
This is why in some cases SB delivers 160% the performance of Bulldozer when running at around the same clock speed.
Now, if Piledriver truly is 20% faster than Stars clock-for-clock, it should sit at around 80% the IPC of SB, which would mean SB performs 15-25% faster.
But AMD promised 29% better x86 performance than Llano in general, not clock-for-clock.
And that's usually a best-case scenario if you know AMD marketing. Since they relied on clock speed, it's hard to compare different SKUs and efficiency, because clock speed and efficiency don't scale linearly, meaning PD would be much more efficient at lower TDPs than at the high end.
However, if AMD was comparing the fastest Trinity with the fastest Llano, it makes a lot of sense, and it's safe to assume that Llano and PD have about the same IPC: the A8-3870K runs at 3.0 GHz, and the A10-5800K has a 3.8 GHz base clock (4.2 GHz turbo), almost exactly 29% faster than 3.0 GHz.
Instructions per CYCLE is too generalized, in my opinion, as it doesn't tell you real-world performance. Take Bulldozer: on paper it should do 4 instructions per cycle vs 3 in Phenom, since each module has more hardware (e.g., 4 decoders vs 3 in Stars). However, its pipeline latency is higher than that of Stars or SB.
It's designed that way so the shared resources have enough time to feed data to 2 cores: while one integer core is crunching data, the other is being fed from the shared resources.
The latency turned out higher than expected, though. AMD seems to have worked around that (or so it appears) by either shortening the pipeline or allowing more entries (which are increased according to a chart I've seen; L1 data went from 32 to 64).
I think a lot of people are misunderstanding RCM. RCM doesn't just tack on performance the way Level 3 cache would; the performance is gained by achieving higher clock speeds. All it does is distribute power more evenly across the CPU, which in turn spreads out heat that would otherwise be concentrated in one area. It's basically streamlining the way power flows through the chip, which has the added benefit of lower temperatures at higher clock rates, with better performance/watt.
This also means that just because BD-based CPUs could go from ~3 GHz to 5 GHz doesn't mean the PD-based CPUs coming out at 4 GHz are going to hit 6 GHz. The fact that PD-based CPUs are launching at 4 GHz is itself the gain from RCM. In all likelihood PD-based CPUs will clock just as high as BD-based CPUs, maybe a tiny bit higher, while using less power. So it's definitely a win-win, but it's not some magical solution that adds 5/10/15/20% real-world performance; it just allows higher clock speeds, which generate the additional performance.
Well, the 8150 can match or beat the 2600K in very few benchmarks, so just increasing the IPC to the old Phenom II level or above would, I think, be enough to match a 2600K in most benchmarks. I hope.
Very few benchmarks, and those with newer instruction-set support; even then it barely beats the 2600K.
Look at the chart: Ivy Bridge pretty much keeps beating the 8150, at 105%-180% of its performance (5%-80% faster).
Not to mention many of those benchmarks are multithreaded, and that's where the gap is smaller.
But in lightly threaded apps that can only use ~3 cores, Bulldozer can't even turbo properly; I'm assuming that's where Ivy Bridge sees a good 80% lead over Bulldozer.
So it's safe to say a Bulldozer core has about 60% the performance of an SB/IB core in general (excluding the situations where Bulldozer excels, like new instruction sets), but since it has more cores, it ends up close in multithreaded workloads.
Now here's where I even confused myself: Bulldozer having 60% the performance of SB does NOT mean SB is 40% faster; it's actually more. I got confused when I first looked at the graph, but it makes sense now.
Because if 60% (BD) → 100%, then 100% (SB) → X.
Cross-multiply and SB has 166.6% the performance of BD in single thread.
So even if Piledriver is 30% faster than Bulldozer in single thread, that only brings it to ~78% of SB/IB; SB would still be roughly 28% faster.
However, it will definitely have an edge in multithreading against the i7s if that's the case.
So it's going to be way more competitive than an 8150, that's for sure.
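The ratio arithmetic in the posts above can be made explicit. The 60% baseline and the +30% uplift are the poster's estimates, not measured numbers:

```python
# Relative single-thread performance, normalizing Sandy Bridge to 1.00.
bd = 0.60                 # BD single-thread performance (poster's estimate)
sb_vs_bd = 1.00 / bd - 1  # how much faster SB is than BD
pd = bd * 1.30            # hypothetical Piledriver, +30% over BD
sb_vs_pd = 1.00 / pd - 1  # remaining gap to SB

print(f"SB is {sb_vs_bd:.0%} faster than BD")
print(f"PD would reach {pd:.0%} of SB; SB still {sb_vs_pd:.0%} faster")
```

Which matches the cross-multiplication: BD at 60% of SB means SB at 166.6% of BD, i.e., about 67% faster, and a +30% Piledriver would leave SB roughly 28% ahead.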
There are issues with this review. The memory setup is not clear: is the 8150 set up with 4 DIMMs? If so, it runs them at 1600 MHz, which hurts performance. The test would have been fair with 16 GB of RAM as 2 DIMMs on FX and the 3770K, and 4 DIMMs on LGA 2011. Next, using an Nvidia card on FX is going to favor Intel; as far as I know AMD cards work just as well on Intel as on AMD, so why not use an AMD card like the 7970 or 6990? The iTunes benchmark has little real-world PC application unless you have a recording studio, and then I would buy a Mac anyway for software functionality. Also, the graph line doesn't match up with the tests, so did the graphing software just make the highest peaks on its own, or is it a screwed-up image? I wouldn't spend my money based on this review.
You fail to realize 1600 MHz is the sweet spot for any machine nowadays. However, Llano's GPU performance increases with 1866 MHz RAM.
In most situations the difference between DDR3-1600 and DDR3-2133 is within the standard deviation. For that matter, you can see that unlike Llano, going from average memory to higher-end memory doesn't yield sizable performance gains. If you check the linked review in the article, it actually shows exactly what speed RAM they used, with timings: http://www.xbitlabs.com/articles/cpu/display/core-i7-3770k-i5-3570k_4.html#sect0
2. Why on Earth would it matter what GPU they are using if the CPU is what they're testing? If you look at the gaming tests, they did all largely CPU-bound games. The best takeaway is Metro 2033, which is incredibly well threaded, and you can see that the low IPC of BD causes it to drop behind Intel despite having access to more threads at a time. I've seen a lot of recent results indicating that, unless overclocked, FX processors are already starting to bottleneck even single-GPU solutions...
I game on Intel at home, but when spending the least and getting the most comes into play, bring on AMD. My main desktop at work is a Phenom II X4 955, because it's insanely cheap at Microcenter. And my ESXi box is running a Bulldozer 8120. Mad thread count = many VMs with plenty of resources.
But like I said, at home, email@example.com till long after 4th-gen Ivy Bridge. No real reason to upgrade the system; the GPU maybe, but not the proc.
I explained that part in my previous post: they set Bulldozer at 100%, so Ivy Bridge's performance is shown relative to Bulldozer at 100%. Fair or not, faster RAM and a different graphics card won't yield more than 10% more performance, so even taking that into account, Ivy Bridge is still 10-70% faster.
Even AMD doesn't claim that Piledriver will compete with Intel. Look at the advertising they're hyping: it's all about graphics, with little to no mention of the CPU, because even if they made PD 40% faster than Bulldozer, it would still be only on par with or slower than Intel's i7.
I read the test bed configuration; re-read my post, it's obvious. Bulldozer should be benchmarked on 2 DIMMs, and the test bed lists RAM as 2x4 GB and 4x4 GB without stating which is used on which platform. Memory speed changes benchmark results: since Bulldozer clocks 2 DIMMs at 1866 and 4 at 1600, it matters. Along with my other valid issues. The 3770K obviously outperforms Bulldozer across different apps, but this review, to me, doesn't give a precise comparison.
Also, 2 similar GPUs, one AMD and one Nvidia (preferably 4), should be cross-referenced on each platform for gaming benchmarks to get a decent CPU performance comparison.
Back to the subject of Piledriver: with the new instruction sets, HT assist, a 10% FPU queue-load increase, lower latency on certain instructions, and several other improvements, Piledriver won't just bring higher clock speeds and lower TDP; several apps will see a much bigger improvement than the clock and IPC enhancements alone would suggest. This article from AMD has all the programming improvements for Piledriver, and it's huge for a "tock" release. 361 pages huge.
Bulldozer is Family 15h...
Some more fun reading