So I went to check the website for the specs. Here are the full specs and the respective desktop counterparts:

Mobility HD5870: Juniper XT (800sp, desktop HD57xx), GDDR5 @ 128bit, 50W
Mobility HD5850: Juniper XT, GDDR5/GDDR3/DDR3 @ 128bit, 30-39W
Mobility HD5830: Juniper XT, DDR3/GDDR3 @ 128bit, 25W
Mobility HD5770: Redwood (400sp, desktop HD56xx), GDDR5 @ 128bit, 30W
Mobility HD5750: Redwood, GDDR5 @ 128bit, 25W
Mobility HD5730: Redwood, GDDR3/DDR3 @ 128bit, 26W
Mobility HD5650: Redwood, GDDR3/DDR3 @ 128bit, 15-19W (lower clocked than the HD5730)
Mobility HD5470: Cedar (80sp, desktop HD53xx), GDDR5/DDR3 @ 64bit, 13-15W
Mobility HD5450: Cedar, DDR3 @ 64bit, 11W
Mobility HD5430: Cedar, DDR3 @ 64bit, 7W (lower clocked)
Mobility HD5165: RV730 (320sp, desktop HD46xx), DDR3/GDDR3 @ 128bit
Mobility HD5145: RV710 (80sp, desktop HD43xx), DDR3/GDDR3 @ 64bit

- First off, the Mobility HD5850 is a total mess. It can come with either GDDR5 or DDR3, and its clocks range from 500 to 625MHz. This means it can be exactly as fast as a Mobility HD5830 or just a tad slower than the HD5870. People will have to be very careful with these.

- The HD5165 and HD5145 are renames from the previous generation, nVidia style (with the big difference that they're not trying to fool customers by calling them HD5650 and HD5350, respectively). The naming of the HD5165 is also a mess: this GPU will be a lot faster than the HD54xx parts, even though its name suggests it's a lot slower.

- Inclusion of GDDR5 in mobile GPUs still shows a considerable bump in power consumption. Besides, the memory's power consumption depends a lot on the bus width (the wider the bus, the more memory chips are needed).

IMO, ATI went the nVidia way (using different names between Mobility and desktop parts) because it had no choice. nVidia keeps churning out different names for its mobile GPUs, while AMD kept the same, honest naming scheme.
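The bus-width point can be sketched with a bit of arithmetic. This is my own illustration, not from the specs above: it assumes the typical 32-bit per-chip interface of GDDR3/GDDR5 devices, so the chip count is simply bus width divided by 32.

```python
# Rough sketch: how bus width drives the number of memory chips a GPU needs.
# Assumes the common 32-bit per-chip interface of GDDR3/GDDR5 parts.
CHIP_IO_BITS = 32  # per-chip interface width (assumption, typical value)

def chips_needed(bus_width_bits: int) -> int:
    """Memory chips required to populate a bus of the given width."""
    return bus_width_bits // CHIP_IO_BITS

# 64-bit parts like the Mobility HD54xx need only 2 chips,
# 128-bit parts need 4, and a 256-bit part like desktop Cypress needs 8.
for bus in (64, 128, 256):
    print(f"{bus}-bit bus -> {chips_needed(bus)} chips")
```

More chips means more board area and more memory power draw, which is why a 256-bit GDDR5 configuration is such a hard sell in a laptop.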
Many retailers tried to take advantage of people's ignorance of this, so the renamed GPUs still got design wins. I don't like this either; I used to congratulate ATI for being consistent with their naming scheme, but I guess this ended up being essential to grab more design wins.

As for not launching Cypress for laptops, I think it's the right way to go. Let's face it, the RV770's adoption in laptops was horrible, and trying to sell Cypress with 8 chips of GDDR5 would have gone the same way. The GPU consumed a bit more than the G92b, so it had to be downclocked to death, and 256bit (8 chips) of GDDR5 was never used in any laptop at all (all the Mobility HD4870s came bundled with GDDR3). For this reason, the once-again-renamed-G92b GTX 280M and 260M got almost all the design wins for gaming laptops.

That Asus W90 was a total joke. They paired a slow CPU (2GHz) with two "HD4870"s with 512MB of GDDR3. Why they decided to save on video memory in a top-end laptop is still a mystery to me, but of course the SLI'ed GTX 280Ms win in more recent tests, because they all come with 1GB each. There was never a Mobility HD4870 with GDDR5, so this Mobility HD5870 will be a lot faster (something like the difference between the desktop HD4850 and HD5770).

The Geforce 4 MX had no programmable shaders, so it was technologically inferior to the GF3; it was more like a redesigned GF2. SM2.0 was a DX9 feature that only appeared in the Geforce (5) FX generation. The Geforce 3 was a DX8 GPU and the Geforce 4 MX was a DX7 GPU.