Discussion in 'News' started by btarunr, Mar 22, 2012.
This is wrong. The video memory is typically user selectable. My A75M-UD2H was set to 1 GB.
"Much, much slower"? The IGP in the Llano A8 is about half the speed of the 5750 without OC. The Trinity A10 will be about 75% as fast without OC, just below the speed of a 6670 if you are running DDR3-2133. (This is a rough estimate.)
Big words... for such a small reply.
I'm partially correct. It all depends on the mainboard and BIOS. The ASRock A75 Extreme6 doesn't allow for more than 512 MB, and this limit has been in place for a long, long time on a lot of other integrated-graphics mainboards. I suppose a manufacturer can go out of spec and increase that value if it codes its own BIOS; no fault if it does so. But for the system in question, both configurations are missing 512 MB from the total RAM, rounded to a typical number.
I didn't know the TPU forum had such sensitive... guys, if I can call you that.
You might want to try replying with some maturity if you want a decent reply.
It was a mistake on my part... I was thinking of IPS and wrote IPC, even if it wasn't too far off in meaning.
Each product is placed on the level at which it is marketed based on TDP. From there, placement is refined based on slight clock differences. What I'm saying is that at the same TDP or lower, a Trinity-based APU can clock higher than a Llano APU. In the binning process, the Trinity APU meets the desired parameters for its product placement at a higher clock, where in the same process the Llano only met them at a lower clock. That's what I meant by "normal" IPS: relative performance to its place among the other products after binning. And this is not "speculation" or "me being someone"; it's how chips are placed. Those Trinity chips which won't support the higher clocks will get lower clocks, IPS drops, and they will be placed in a lower category of products.
My point: Complaining about the A10 being higher clocked than the A8 has no real meaning... other than complaining.
@Dent1, the evidence is in BD's benchmarks, which PD is based on, unless they redid a 5-year-old uarch overnight /s. Lower the clocks and see it barely catching up to its older brother. And try to use your common sense more, instead of Google.
There are a few here that are very sensitive; you really have to watch it here.
Maturity? Not going to happen. Take what you can get and run.
Try not to mention you know what and this will not be an issue.
Case in point: never use the "B" word http://www.techpowerup.com/forums/showthread.php?t=162689&page=2
IPS? Isn't that a display panel tech?
I assume he did actually mean IPC, but didn't really know what it meant or why it was important. I assume he meant Trinity needed IPC on par with Intel's offerings, which I don't necessarily agree with.
Exactly, it all comes down to performance per watt here. A 50% faster GPU and a 20% faster CPU while using 15% less power, on test silicon which isn't even as good as production silicon, and using a 32-bit OS instead of a 64-bit OS, which reduces how much RAM can be used on top of that.
That, in my books, is a definite improvement overall. It's achieving higher CPU performance while using less power than Llano. Period. End of line. IPC is no good if your clocks are 10 MHz, and clocks aren't any good if your IPC sucks. It comes down to the best IPC plus clocks you can get per watt, and Trinity is apparently a decent improvement in those books over the Llano STARS core. Good to see the modules are actually doing what they were meant to do in the first place: be more power efficient.
I'm also wondering how the 65 W A10 will do XD.
I/s = IPC × clock
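That identity is the whole clocks-vs-IPC argument in one line. A toy sketch (the IPC and clock figures below are purely illustrative, not measured Llano or Trinity values):

```python
# Rough throughput model: instructions per second = IPC x clock (Hz).
def instructions_per_second(ipc, clock_hz):
    return ipc * clock_hz

# Purely illustrative figures, not real chip measurements:
chip_a = instructions_per_second(1.0, 2.9e9)  # higher IPC, lower clock
chip_b = instructions_per_second(0.9, 3.8e9)  # lower IPC, higher clock
print(chip_b > chip_a)  # True: a clock bump can outweigh an IPC deficit
```

Which is why comparing chips by IPC alone, or by clocks alone, tells you nothing; only the product matters.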
I retract the "use common sense" suggestion... stick to Google, works better for ya!
trickson, ok... thing is I can't take anything out of this, it's all worthless. So I prefer to walk away.
People need to grow a thick skin, screw this sensitivity crap.
The problem is this will never happen. As time goes on we have a tendency to coddle people more and more. Just look at how things were 15-20, even 30 years ago. Now look: every kid gets an award for everything! No matter how badly they suck, they get praised and told how great they are. There is no real ownership of any behavior. There was a time when you were told to stand up to the bully in school and punch him in the face! Now you are told to run away and tell an adult; we called them tattletales! Wow, just how far we have gone over the edge!
You went from heavy Intel fanboy to heavier sarcastic AMD fanboy. I apologize for the vulgarity and personal attacks in my post towards you in that other thread; however, your blatant fanboyism/partiality was a bit more out of line than the other guys'. IMO, your first comments there were far from mature, and you should be the last person to talk about maturity... Let's be real, speak the whole truth; don't speak 20% of what you want to speak and skew it to make it seem the truth. I personally am disappointed in the CPU performance of this APU. However, it is totally absent of L3 cache, so maybe Piledriver will do a bit better in CPU-only form. Still, I doubt AMD will improve much; 5-10% performance per clock over Bulldozer coupled with a little OC headroom in the same power envelope would surprise me. Either way, they are way behind Intel's offerings in performance per watt.
As far as kids and bullies in elementary, middle, and high school: it is still that way. I am a junior in high school, and I definitely grew up in the "stand up for yourself despite it getting you suspended" environment. However, most of the kids grow up by the time they hit high school. In fact, law enforcement gets involved, because schools are monitored with cameras and any fights or backlash occur out in the open. Kids are threatened with expulsion even if they talk about doing such things.

Also, as far as behavior outside that, it isn't just the coddling; kids now have instant access to an infinite amount of free information. Unfortunately, some children don't learn to do things themselves (what you would call a spoiled child, who is all about "me me me" and doesn't even know how to clean up after himself or meet a deadline), while other children take advantage of the resources today's age has given them and succeed beyond any generation before them. By the way, it is the adults that grew up 15-20, even 30 years ago that are influencing and teaching the children of today.

It's not a personal dig this time, just my opinion on the topic. I will be 17 soon, if you'd like to age discriminate.
Oh, the youth of today! If only they really had respect for their elders. I have kids that are older than most of you, and grandkids now. What has this world come to?
AMD is doing well in both their CPU and APU lines. They are not that bad and can keep up really well. My honest opinion is AMD is going to continue to provide us all with CPUs, APUs, and GPUs we will all buy.
Well, let me point out that we have a chip just 400 MHz slower than this one running at 65 W, versus an i3 with a smaller IGP. AMD is catching up, but they are a generation behind. However, unlike said i3, you have a decent IGP, lol.
I mean, if that is accurate, this actually may be closer to a 95/100 W part than the old Llano: 15% higher power efficiency while having a 20% faster CPU and a 50% faster GPU.
That's a pretty big leap for one generation. Compare an IB i3 to an SB i3: the Ivy Bridge processor is somewhere around 20% more power efficient, with a 10% faster CPU and a somewhat faster GPU which still can't beat an AMD A4's integrated graphics.
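To see what those claimed figures work out to in perf/watt terms, here's a back-of-envelope sketch. All inputs are the thread's rumored numbers (+20% CPU, +50% GPU, -15% power), not measurements, normalized to Llano = 1.0:

```python
# Combining the thread's claimed gains into perf/watt, Llano = 1.0 baseline.
# These are the rumored figures from the post above, not measured data.
llano_power, trinity_power = 1.00, 0.85   # "15% less power"
cpu_gain, gpu_gain = 1.20, 1.50           # "+20% CPU, +50% GPU"

cpu_perf_per_watt = cpu_gain / trinity_power   # ~1.41x Llano
gpu_perf_per_watt = gpu_gain / trinity_power   # ~1.76x Llano
print(round(cpu_perf_per_watt, 2), round(gpu_perf_per_watt, 2))
```

So if the rumored numbers hold, CPU perf/watt improves by roughly 40% and GPU perf/watt by roughly 75% over Llano, which is indeed a big generational step.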
Honestly, this is an open forum...and you are not my father, you are simply another person living on this earth. If you take a look at some of your own posts recently, they've been pretty immature, and way before I said anything to you. By the tone of your comments, you are the kind of person that likes to kick others while they are down. You also blame the children when the adults created this environment. Maybe humanity should just stop reproducing, so their new generations of worsening children will never exist. Am I wrong? I am open minded, and respect those people who deserve respect. I'm sure you know how to be a gentleman, and I try my best to be too.
In my opinion, which I am allowed to voice just as much as you, AMD is doing great with their APU line. However, Bulldozer has sucked. Their APU line is great for low power applications and entry/mainstream, but they are simply unable to compete in upper midrange and high end. Kepler poses some problems for GCN architecture too.
Yeah you win!
Honestly, calling someone immature, and really the whole first part of your post, just proves my point.
Well, Bulldozer sucks mainly due to its power inefficiency. If it weren't for that, it would be a success, as it was meant to hold the line on IPC while increasing power efficiency and thread count. Quite obviously the first iteration failed at two out of the three goals. Piledriver is looking to be what Bulldozer was meant to be, while increasing clocks as well, given there's an A10 2m/4c/4t 3.6 GHz 65 W part with integrated graphics in the lineup.
So the idea behind Bulldozer was good... its first implementation wasn't. If this is real, then IPC still doesn't seem to quite match Llano, but should be about on par with Deneb.
On GCN's end, there is still speculation that the 79xx series is under-performing, based on how the 78xx series performs. And GCN is great at GPGPU, which is what AMD needs for its future heterogeneous computing goals, where they can dump floating-point operations onto the GPU part of the APU, as well as for heavily multithreaded environments where one can run parallel work on the graphics part too. Not to mention GCN being able to natively run C++; as both the next PlayStation and the Durango (aka Xbox 720) may very well have GCN graphics, we may see future games taking advantage of GPU compute.

While it doesn't amount to much in the short term except for Bitcoin mining, web browsing, media players, and a faster UI in the OS, GCN is also still better at tessellation than past AMD graphics cards, and more energy efficient than anything but Kepler. And it isn't really that much less power efficient than Kepler: 5% less efficient than the top-end part and 5-15% slower depending on the game, although the 7970 is nearly twice as fast in GPGPU. So GCN is a better all-round architecture, whereas Kepler is optimized for gaming.

And AMD has also managed to remain more ethical in their practices than their competition, which is a win in my books.
Or they're just better at keeping things under cover. Ever thought of that?
After all, it took a long time for Apple's misery to come out. Given enough time, I am almost positive that light will be shed on some skeletons soon. Everyone has them.
Okay, you've said "Yeah you win!" enough times.
This has absolutely nothing to do with me winning on a personal level, or trying to be better than everyone on earth like a lot of kids do when they hit puberty; this is about a desire for people to at the very least be level-headed. If you look back at what you said in the thread I called you out on, there is nothing out of line with my assumption about what kind of person you are. I don't know you in person, though.
Please, read the PM I sent you. I'm not trying to be rude or anything, but when adults knowingly speak to children like that, or when children enter a discussion to see an adult like you acting and/or speaking the way you did (a first impression says a lot!), there is no question why the children act the way they do.
Can't wait to see how the resonant clock mesh and core improvements help Trinity versus current desktop Bulldozer parts.
Yeah. I wonder how it will OC with that resonant mesh.
I mean, it's 15% more power efficient than Llano too. And with that resonant mesh turning ~10% of the heat you would otherwise be putting out back into clocks, wouldn't more voltage increase your OCability along with the extra heat you're putting out?
And the Llano STARS core is more power efficient than Bulldozer...
So it will be interesting to see how Piledriver ends up. That is, of course, if this has any credibility to it.
Nobody knows how long AMD has been working on Piledriver. For all we know, they could have been working on Piledriver's refinements for years. We as consumers only know what AMD chooses to tell us. So your point is void.
Until Piledriver comes out. Stop guessing.
AMD stated they expect a 10-15% performance gain with Piledriver. I just don't see that happening. He also wasn't saying it took them 5 years of work to get PD out; if that were the case, they would have just released PD in the first place. He meant BD. BD was originally announced back in, I think, 2007, and only appeared last fall. It took them several years of work to get BD out, so I doubt that within a year they could release PD (just a revision of BD) that would show massive gains. I'd say they have probably been working on PD for 2 years at this point, since they had the final design for BD ready at about that time (and were just working on bugs after that point).
"You just don't see that happening?"
So... you didn't see Athlon 64 happening... or Phenom II, or Core 2, or Sandy Bridge...
20% gains do happen from one year to the next. Not as often as smaller gains; however, Bulldozer was held back by a large number of small things rather than one huge thing.
1. Modules were meant to increase performance per watt. Clearly that didn't happen with BD; however, there is no reason they can't pull it off.
2. Low-speed cache. One of the main complaints about BD, and one that I feel could easily have been fixed in PD.
3. Hand-made architecture tweaks to improve efficiency/performance further. Probably happens with these architecture changes.
4. Maybe they increased the front end's size so it can do better than Brazos / core?...
5. Resonant clock mesh. Converts ~10% of the energy that would be wasted as heat into clock speed.
Quite likely PD is not just a bugfix/revision, based on what I've read.
And AMD has stated they're shooting for a 15%+ increase each year from now to 2015 in architecture performance, which isn't a lot when you factor in that Moore's law says it should move faster than that. (Obviously it doesn't, but eh, it's hardly improbable for PD to do quite a bit better than BD.)
And wasn't Bulldozer's architecture designed to provide more performance in the long run, instead of sticking with a single architecture, making small tweaks here and there, and relying on die shrinks, hoping it turns out better in the end?
For starters, nobody saw Athlon 64 coming. Everyone figured it would be an improvement, but nothing like what they delivered. Similarly, Core 2 was a huge advancement that few people saw coming. I wouldn't put Phenom II up there, since Phenom I was a flop and PII pretty much just corrected it and implemented it properly. As for SB, I was pretty certain it would be impressive; the thing that makes SB so desirable is the price, though. Not many saw them pricing a CPU that was on par with their last-gen $1000 CPU at $200-300.
I'm sure AMD hopes for a 15% improvement, but they also said Bulldozer would be much better than Phenom II. For a lot of things it is better, but is it better than if they had reworked the IMC and shrunk down Phenom II's design? Phenom II vs. BD is a lot like P3 vs. P4.
The thing is, what AMD says they are trying to do and what they actually do are generally two very different things. I mean, BD was intended to use less power, but that clearly didn't happen. As for the resonant clock mesh, I see nowhere that it "converts" 10% of heat into clock speed. I've seen that it can reduce power consumption by up to 30% (according to Cyclos, the company that designed it; actual PD CPUs are said to have 24% lower consumption) and increase overall performance by 5-10% because you have access to higher clocks and more power. Something to consider is that the resonant clock mesh is also going to require space on die, because it adds capacitors and inductors to the CPU.
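Taking the quoted mesh figures at face value (24% lower consumption at the same clocks, or 5-10% higher clocks at the same power), here's a quick sketch of the two ways to spend that saving. The baseline wattage and clock below are illustrative placeholders, not real Piledriver specs:

```python
# Two ways to spend the resonant clock mesh saving, per the figures
# quoted above. Baseline numbers are illustrative, not real PD specs.
base_power_w, base_clock_ghz = 100.0, 3.8

same_clock_power = base_power_w * (1 - 0.24)   # hold clocks, cut power
same_power_clock = base_clock_ghz * 1.10       # hold power, raise clocks
print(same_clock_power, round(same_power_clock, 2))
```

Either outcome (76 W at the same clocks, or ~4.2 GHz at the same power) is a meaningful gain, which is why the mesh keeps coming up in this thread.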
Bulldozer was designed to be scalable, but the issue isn't scalability. They can keep adding modules, and anything that can use those will see a performance gain, but their performance per thread is the issue.
Simple thing is, we won't know till we see it.
On topic with this thread, pertaining to those very same cores: looking at the power consumption, we see 99 watts at its full clock speed of 3.8 GHz, with the IGP using 30-50ish of those watts.
So if this is real, then I believe AMD bumped up their game in overclockability. Not to mention the 65 W part at 3.6 GHz.
As far as why they didn't just shrink PII, I think this guy has it right:
"Bulldozer is performing badly mostly because of:
1) Combination of small L1 caches and slow L2 caches. This problem stays with Piledriver.
2) L1 instruction cache aliasing problems and write-through L1 caches causing excessive L2 traffic. This problem stays with Piledriver.
3) They made a couple of small mistakes somewhere, and it cannot reach the clock speeds it was supposed to reach / what speeds most of its pipeline would allow. Piledriver will fix this.
4) To get full floating-point performance, you have to use AMD's own FMA4 instructions. No legacy software uses those, and not all new software is going to use them, because Intel is not going to implement those same instructions. Piledriver is going to support Intel Haswell-compatible FMA3, so new code optimized for Intel will give full FPU performance on Piledriver, with no need for AMD-specific optimizations.
[Posted by: hkultala | Date: 02/22/12 09:30:26 PM]
K10 had reached its age. Nehalem already beat it badly, and there was no room for improvement in K10; there was too much legacy burden from K7, like the lack of memory disambiguation, too tightly coupled ALU and AGU units, Tomasulo-style OOE instead of PRF-based OOE, etc.
And you cannot change these things in an existing architecture; they had already changed everything that could be changed/improved between K7 and K10.
So quite a few years ago AMD knew it needed a totally new architecture after these K7 derivatives, and they developed Bulldozer. It ended up being worse than expected, but most of the problems are in the implementation, not deep in the architecture.
Now there is lots of room for improvement by fixing the things that turned out to be bottlenecks in the design.
[Posted by: hkultala | Date: 02/23/12 05:42:39 PM]"
Although I'm hoping AMD will have worked on the cache somewhat.
The simple thing is, we won't know till the end product is released, and we all hope, for the sake of prices, that AMD comes up with a good processor that can compete. And it's not even implausible for them to.
It really doesn't matter whether they've been working on Piledriver for 2 years or 5 years or 20 years. My point is that until Piledriver is released, we don't know how it'll perform. We can theorize based on Bulldozer's specification and AMD's history thus far, but it's still an educated guess. Nobody here, including Andy77, has any business saying that an unreleased processor will have X or Y performance with 100% certainty.