
AMD Unveils World’s First DirectX 11 Compliant Mobile Graphics

So I went to check the website for the specs; here are the full specs and their respective desktop counterparts:

Mobility HD5870: Juniper XT (800sp, desktop HD57xx), GDDR5 @ 128bit, 50W
Mobility HD5850: Juniper XT, GDDR5/GDDR3/DDR3 @ 128bit, 30-39W
Mobility HD5830: Juniper XT, DDR3/GDDR3 @ 128bit, 25W

Mobility HD5770: Redwood (400sp, desktop HD56xx), GDDR5 @ 128bit, 30W
Mobility HD5750: Redwood, GDDR5 @ 128bit, 25W
Mobility HD5730: Redwood, GDDR3/DDR3 @ 128bit, 26W
Mobility HD5650: Redwood, GDDR3/DDR3 @ 128bit, 15-19W (lower clocked than HD5730)


Mobility HD5470: Cedar (80sp, desktop HD53xx), GDDR5/DDR3 @ 64bit, 13-15W
Mobility HD5450: Cedar, DDR3 @ 64bit, 11W
Mobility HD5430: Cedar, DDR3 @ 64bit, 7W (lower clocked)


Mobility HD5165: RV730 (320sp, desktop HD46xx), DDR3/GDDR3 @ 128bit
Mobility HD5145: RV710 (80sp, desktop HD43xx), DDR3/GDDR3 @ 64bit




- First off, the Mobility HD5850 is a total mess. It can come with either GDDR5 or DDR3, and its clocks range from 500 to 625MHz. This means it can be anywhere from exactly as fast as a Mobility HD5830 to just a tad slower than the HD5870. People will have to be very careful with these.


- The HD5165 and HD5145 are renames from the previous generation, nVidia style (with the big difference that they're not trying to fool customers by calling them HD5650 and HD5350, respectively).
The naming of the HD5165 is also a mess: this GPU will be a lot faster than the HD54xx parts, although its name suggests it's a lot slower.

- Including GDDR5 in mobile GPUs still brings a considerable bump in power consumption. Besides, the memory's power consumption depends a lot on the bus width (the wider the bus, the more memory chips are needed).
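As a rough illustration of the bus-width point (a minimal sketch; the 32-bit-per-chip interface is the usual GDDR3/GDDR5 figure, assumed here rather than taken from the spec list above):

# Minimum memory chip count implied by a bus width, assuming each
# GDDR3/GDDR5 chip exposes a 32-bit interface (assumption, not from the specs).
BITS_PER_CHIP = 32

def min_chip_count(bus_width_bits: int) -> int:
    """Smallest number of memory chips needed to populate the bus."""
    return bus_width_bits // BITS_PER_CHIP

for bus in (64, 128, 256):
    print(f"{bus}-bit bus -> at least {min_chip_count(bus)} chips")
# 64-bit -> 2, 128-bit -> 4, 256-bit -> 8 chips: a wider bus drags
# board power (and cost) up with the extra chips.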




IMO, ATI went the nVidia way (using different names for Mobility and desktop parts) because it had no choice.
nVidia keeps churning out different names for its mobile GPUs, while AMD kept the same, honest naming scheme. Many retailers took advantage of people's ignorance of this, so the renamed GPUs still got the design wins.

I don't like this either; I used to congratulate ATI for being consistent with their naming scheme, but I guess dropping it ended up being essential to grab more design wins.




As for not launching Cypress for laptops, I think it's the right way to go.
Let's face it, the RV770's adoption in laptops was horrible, and trying to sell Cypress with 8 chips of GDDR5 would go the same way.

The GPU consumed a bit more than the G92b, so it had to be downclocked to death, and 256-bit (8 chips) of GDDR5 was never used in any laptop at all (every Mobility HD4870 shipped with GDDR3).
For this reason, the once-again-renamed G92b, as the GTX 280M and 260M, got almost all the design wins for gaming laptops.

That Asus W90 was a total joke. They paired a slow CPU (2GHz) with two "HD4870s" with 512MB GDDR3. Why they decided to save on video memory in a top-end laptop is still a mystery to me, but of course the SLI'ed GTX 280Ms win in more recent tests, because those come with 1GB each.





So is the Mobility 5870 only slightly better than the 4870M? Because it's really a 5770, and the 5770 is about equal to the 4870 in performance.
There was never a Mobile HD4870 with GDDR5, so this Mobility HD5870 will be a lot faster (something like the difference between desktop HD4850 and HD5770).
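To put rough numbers on the GDDR5-vs-GDDR3 gap on the same 128-bit bus (a minimal sketch; the clocks below are hypothetical, chosen only to illustrate the formula, not the actual Mobility clocks):

# Peak memory bandwidth = (bus width in bytes) * memory clock * transfers per clock.
# GDDR5 moves 4 transfers per clock, GDDR3 moves 2; clock values are made up for illustration.
def bandwidth_gb_s(bus_width_bits: int, mem_clock_mhz: float, transfers_per_clock: int) -> float:
    return (bus_width_bits / 8) * mem_clock_mhz * transfers_per_clock / 1000

print(bandwidth_gb_s(128, 900, 2))   # GDDR3-style: ~28.8 GB/s
print(bandwidth_gb_s(128, 1000, 4))  # GDDR5-style: ~64.0 GB/s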




The Geforce 4 MX wasn't a Geforce 2 MX/GTS but rather a stripped-down Geforce 3. Hence the lack of SM2.0, if I'm correct.

The Geforce 4 MX had no programmable shaders, so it was technologically inferior to the GF3; it was more like a redesigned GF2. SM2.0 was a DX9 feature that only appeared in the Geforce (5) FX generation. The Geforce 3 was a DX8 GPU and the Geforce 4 MX was a DX7 GPU.
 
Wrong. Cypress has nearly the same TDP as RV770, and RV770s could work on MXMs. They also have nearly the same pad size, the same memory IO pins, same PCI-Express lanes, and so on. It's just that AMD isn't able to dole out good enough quantities of Cypress even today. When they get to do so, don't be surprised if they release a "Mobility HD 5970" which is a single AMD Cypress MXM board.

The TDP of the HD4870 is 150W, the TDP of the HD5870 is 188W . . . that's almost the same?
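Quick arithmetic on just those two quoted TDP figures:

# Relative gap between the quoted TDPs: 150 W (HD4870) vs 188 W (HD5870).
hd4870_tdp, hd5870_tdp = 150, 188
print(f"{(hd5870_tdp - hd4870_tdp) / hd4870_tdp:.0%} higher")  # ~25% higher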
 
The TDP of the HD4870 is 150W, the TDP of the HD5870 is 188W . . . that's almost the same?

Base it on this:

[chart: power_average.gif — average power consumption comparison]


Both reference boards, both with digital PWM circuitry.

More charts here.
 
1) Yields are not good enough for a mobility part (they can make more money selling them as desktop parts)

Exactly, that's the way I see it. Since AMD isn't able to maintain a decent yield of Cypress to sustain three desktop (HD 5850, HD 5870, HD 5970) and three mobile (M HD 5830, M HD 5850, M HD 5870) products, it fears losing "design wins". Unlike in the DIY-driven desktop market, for notebooks, once a manufacturer selects your GPU for an upcoming design (you score a design win), you have to assure them of your quantities, output, and backlog. Only then will the two of you ink a deal. Apparently Juniper is being made in good enough quantities that it already has countless reference and non-reference desktop board designs, while non-reference Cypress-based products are just trickling out, despite Cypress having a couple of months' head start.
 
GF 4 Ti had pixel shader 1.3
Geforce 3 had 1.1
GF 4 MX had none
Geforce 2 had none

Trust me, it was their first rebrand.

Please, get your information straight:

The later NV17 revision of the NV10 design, used for the GeForce 4 MX, was more efficient; although the GeForce 4 MX 460 was a 2x2 pipeline design, it could outperform the GeForce 2 Ultra.

source

and

If the capabilities of the GeForce4 generation are defined by the GeForce4 Ti, then the GeForce4 MX (NV17) is a GeForce4 in name only. Many criticized the GeForce MX name as a misleading marketing ploy since it was less advanced than the preceding GeForce 3. In the features comparison chart between the Ti and MX lines, it showed that the only "feature" that was missing on the MX was the nfiniteFX II engine—the DirectX 8 programmable vertex and pixel shaders.[12] In reality, however, the GeForce4 MX was not a GeForce4 Ti with the shader hardware removed, as the MX's performance in games that did not use shaders was considerably behind the GeForce 4 Ti and GeForce 3. Disappointed enthusiasts described the GeForce4 MX as "GeForce 2 on steroids".

Source

Basically it's just a stripped Geforce 3 with a misleading name, calling it Geforce 4; it's not a Geforce 2. Both were budget lines anyway, offering low/mid-end performance for gaming back in the day.
 
Your own quote says "the GeForce4 MX was not a GeForce4 Ti with the shader hardware removed... described the GeForce4 MX as 'GeForce 2 on steroids'".

It had nothing to do with the Geforce 3, and was based on the GF2 core.
 
ATI releases the Mobility 5xxx while nVidia still goes "we're gonna release GT300 soon".
 
So what if it's a 5770 equivalent? If you need any more than that for gaming, buy a damn desktop.
 
So what if it's a 5770 equivalent? If you need any more than that for gaming, buy a damn desktop.

Point is, a 5770 is sweet for a laptop; we're all for that. What we're not for is it being named something else to mislead people into thinking it's twice as powerful as it really is.
 