Monday, September 17th 2012

AMD "Oland" Radeon HD 8800 Series SKUs Unveiled

Apparently, the launch of AMD's Radeon HD 8800 series is close enough for some sources to come up with specifications. The HD 8800 series, according to one source, is based on a new silicon codenamed "Oland," which is built on the 28 nm process, packing 3.4 billion transistors in a die area of around 270 mm². According to the source, the two HD 8800 series models, led by the HD 8870 "Oland XT," will improve performance-per-Watt and cost-performance ratios over the current HD 7800 series, while maintaining the current process technology.

The Radeon HD 8870, according to numbers provided by the source, could offer performance comparable to today's high-end GPUs. The HD 8870 is clocked at 1050 MHz with an 1100 MHz PowerTune Boost frequency, while the HD 8850 is clocked at 925 MHz with a 975 MHz boost frequency. The memory of both SKUs is clocked at 6.00 GHz (effective), yielding 192 GB/s of memory bandwidth, which implies 256-bit wide memory interfaces.
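That bandwidth figure follows directly from the bus width and effective memory clock quoted above; a quick sketch of the arithmetic (both input figures are from the source's numbers):

```python
# Memory bandwidth = bus width (in bytes) x effective per-pin data rate
bus_width_bits = 256
effective_data_rate_gbps = 6.00  # GDDR5 effective clock, per the source

bandwidth_gb_s = (bus_width_bits / 8) * effective_data_rate_gbps
print(bandwidth_gb_s)  # -> 192.0, matching the quoted 192 GB/s
```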

Key details such as stream processor, TMU, and ROP counts are not disclosed, though the source mentions that the HD 8870 provides up to 75% higher single-precision and up to 60% higher double-precision floating-point performance over its predecessor, the HD 7870. The texture fill-rate is up by 65%. The Radeon HD 8850 offers similar increases over its predecessor, the HD 7850. Find them tabled above. Sources: Read2ch, VideoCardz
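The percentage claims can be turned into rough absolute numbers. A sketch, assuming the HD 7870's commonly cited reference specifications of 1280 stream processors at 1000 MHz (these baseline figures are not from the source):

```python
# Single-precision throughput: 2 FLOPs per ALU per clock (multiply-add)
hd7870_sps = 1280        # Pitcairn XT stream processors (assumed, not from the article)
hd7870_clock_ghz = 1.0   # reference clock (assumed, not from the article)

hd7870_tflops = 2 * hd7870_sps * hd7870_clock_ghz / 1000  # 2.56 TFLOPS
hd8870_tflops = hd7870_tflops * 1.75                      # +75% per the source
print(round(hd8870_tflops, 2))  # -> 4.48
```

If the baseline holds, the claimed 75% uplift would put the HD 8870 at roughly 4.5 TFLOPS single-precision.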
Add your own comment

93 Comments on AMD "Oland" Radeon HD 8800 Series SKUs Unveiled

#1
NeoXF
by: happita
Well, I can see myself benefiting from a card upgrade as I play BF3 and will be getting Metro: Last Light, which is going to bring every system to its knees. And also, it can't hurt, seeing as you have a lot more headroom to max out the AA/AF with supersampling, which is extremely taxing on just about ALL new games out today.
He's talking about games not really taking advantage of the great hardware both AMD and nVidia have to offer, and how developers are incredibly lazy, especially when it comes to porting. I don't know about other people, but sure, Metro "2" might look nice, but if it requires another 3 generations of GPUs to be played properly, then I'm seriously not gonna be impressed... or may I remind you that Metro 2033, originally launched during the 5870 era, still can't be played at over 40 fps on a single GPU at 1920x1080? The same can be said about The Witcher 2 and some other games. Also, consider that by the time hardware that will run these games properly comes out, most of the graphical interest will be gone, not to mention the story, the game itself.
Posted on Reply
#2
nt300
I think this may be how they want to push gamers away from focusing on how good the game looks, and have us see how fun it is to play instead.
Today's hardware is more than capable of playing games like Metro 2033, Crysis and The Witcher 2; the problem is bad coding IMO.
Posted on Reply
#3
erocker
by: nt300
the problem is bad coding IMO
Even when a GPU maker has its hands directly in the game's making? Bad coding is not of my opinion.
Posted on Reply
#4
NeoXF
Bad coding might be the wrong wording... less than optimal sounds about right.

And what do GPU makers have to do with the development of the game? They just pay developers to implement their SDKs (3D stereoscopy stuff, nVidia's PhysX, AMD's Global Illumination enhancements and so on...), or to render things in a certain way so as to take better advantage of their architecture. Last, but certainly not least (and this makes up the bulk of it, especially in older TWIMTBP games), they pay to have first-hand access to the game's internals, so as to optimize their drivers as much as possible for the incoming title. With the shovelware of console ports coming this way, PC has little say in improving how a game looks or runs, and GPU makers even less, as they're only one part of the PC as a gaming platform.
Posted on Reply
#5
nt300
by: erocker
Even when a GPU maker has its hands directly in the game's making? Bad coding is not of my opinion.
Yes wrong choice of words.
Posted on Reply
#6
jihadjoe
by: nt300
You missed his point, I think. The 7970 was the beginning of a new architecture that will be extended to the 8970, like a 2nd-generation HD 7970. The 1st-generation HD 7970 did not gain as much performance versus the 6970 because it was built new from the ground up. Now that AMD is in the 2nd-gen phase, I can easily see the HD 8970 topping the charts with about 70% more performance than the 7900s.
6970 was basically a 2nd generation of the 5870, but performance increase was nowhere near 70%. Unless AMD did something seriously wrong with the 7000 series (and I don't think they did), the 8000 series will be an incremental, rather than revolutionary upgrade.

OTOH, the 7000 series was a complete redesign compared to the 5000/6000 series, and yes there were HUGE gains, but because those gains happened on the compute side, most people don't notice it.
Posted on Reply
#7
NeoXF
by: jihadjoe
OTOH, the 7000 series was a complete redesign compared to the 5000/6000 series, and yes there were HUGE gains, but because those gains happened on the compute side, most people don't notice it.
The graphics drivers definitely weren't there at launch, Catalyst 12.7b/12.8WHQL fixed most of that, but still. Not sure how compute stacks up, then and now, tho.
Posted on Reply
#8
Widjaja
by: jigar2speed
I hope they don't drop the ball (support) on their HD 5*** series; my HD 5850 is still fast enough and serving in my secondary system.
The 5*** series is in the DX11 bracket, so it's highly unlikely AMD will drop support for the series and make it legacy.
Posted on Reply
#9
NeoXF
by: Widjaja
The 5*** series is in the DX11 bracket, so it's highly unlikely AMD will drop support for the series and make it legacy.
I don't think AMD have dropped support for older generations either anyway (DX10 part at least)... they just release actual updates for them on a much slower (but probably consistent) basis.
Posted on Reply
#10
revin
Not sure I understand why they went back to a smaller 256-bit memory bus?
Posted on Reply
#11
mediasorcerer
by: NC37
Likely because AMD had performance leads at times, well before nVidia got more of Kepler out. If you are a top dog you don't have much of a reason to lower prices. People will pay for it. If 8000 series is seeing a price drop that big then it makes me suspect performance won't beat nVidia in the end so AMD goes back to competing based on price. Which is good cause maybe it'll get nVidia to lower some as well.
I agree with that, AMD did a smart move getting the 7 series out some months back. When my card came out, I think it was around $500-$600 if I remember; 3 weeks ago I paid $320, 7-odd months later, nearly half price. I was thinking of nVidia for a change, but the equivalent card [660 Ti? 650 Ti?], because it's just released, was more pricey than the 7950.

Perhaps the memory bus doesn't make a significant enough difference in performance? I noticed in Max Payne 3, when I pushed the AF (or was it AA) up high, it used much more memory, but not the full 3 GB, maybe around half if I remember, yet the game still jagged out on that setting?
Posted on Reply
#12
Prima.Vera
by: NeoXF
I don't think AMD have dropped support for older generations either anyway (DX10 part at least)... they just release actual updates for them on a much slower (but probably consistent) basis.
Already the CrossfireX support for 5xxx is complete crap for newer games, and no signs that they will improve it soon...
Posted on Reply
#13
eidairaman1
by: Prima.Vera
Already the CrossfireX support for 5xxx is complete crap for newer games, and no signs that they will improve it soon...
Your comment is making me fall asleep. I'm going to bed, good night.
Posted on Reply
#14
Widjaja
We can only hope AMD's drivers are worked on more due to the extended time gap before each release, and not because they have secretly laid off driver developers.

There is always going to be someone worrying about performance loss on older-generation cards as AMD releases a new generation of GPUs.
Posted on Reply
#15
eidairaman1
by: Widjaja
We can only hope AMD's drivers are worked on more due to the extended time gap before each release, and not because they have secretly laid off driver developers.

There is always going to be someone worrying about performance loss on older-generation cards as AMD releases a new generation of GPUs.
Users of the 12.9 betas are not complaining about CrossFire issues at all, so...
Posted on Reply
#16
TRWOV
by: jihadjoe
6970 was basically a 2nd generation of the 5870, but performance increase was nowhere near 70%. Unless AMD did something seriously wrong with the 7000 series (and I don't think they did), the 8000 series will be an incremental, rather than revolutionary upgrade.

OTOH, the 7000 series was a complete redesign compared to the 5000/6000 series, and yes there were HUGE gains, but because those gains happened on the compute side, most people don't notice it.
The HD 6950/70 were a new architecture (VLIW4); it wasn't a refined VLIW5. All ATi cards from the 9800 to the HD 6800 were VLIW5.
Posted on Reply
#17
xorbe
by: TRWOV
The HD 6950/70 were a new architecture (VLIW4); it wasn't a refined VLIW5. All ATi cards from the 9800 to the HD 6800 were VLIW5.
I looked this up the other day. I was left with the impression that VLIW4 was basically VLIW5 with the 5th pipe cut out, because it was rarely used after they did some game profiling. Most peak usage was 3-4 pipes at a time. Cut 20% per unit, and stamp out 20% more units.
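A toy calculation of that area trade-off (the pipe counts are from the comment; note that cutting one of five pipes actually leaves room for about 25% more units at equal area, not 20%):

```python
# VLIW5 -> VLIW4: drop 1 of 5 ALU pipes per shader unit
vliw5_pipes = 5
vliw4_pipes = 4

area_saved_per_unit = 1 - vliw4_pipes / vliw5_pipes    # ~0.20 -> each unit ~20% smaller
extra_units_same_area = vliw5_pipes / vliw4_pipes - 1  # 0.25 -> ~25% more units fit
print(area_saved_per_unit, extra_units_same_area)
```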
Posted on Reply
#18
eidairaman1
by: xorbe
I looked this up the other day. I was left with the impression that VLIW4 was basically VLIW5 with the 5th pipe cut out, because it was rarely used after they did some game profiling. Most peak usage was 3-4 pipes at a time. Cut 20% per unit, and stamp out 20% more units.
Yup, that's exactly why they did it; it increases profits. When the 4th pipe is loaded 100% of the time, they will have a VLIW5/6 design.
Posted on Reply