Monday, May 5th 2014
GeForce GTX 880 ES Intercepted En Route Testing Lab, Features 8 GB Memory?
An engineering sample (ES) of the GeForce GTX 880 was intercepted on its way from a factory in China to NVIDIA's development center in India, where it will probably undergo testing and further development. The shipping manifest of the courier ferrying NVIDIA's precious package was sniffed out by the Chinese press. NVIDIA was rather descriptive about the ES in its shipping declaration. Buzzwords include "GM204" and "8 GB GDDR5," hinting at what could be two of the most important items on its specs sheet. GM204 is a successor to GK104, and is rumored to feature 3,200 CUDA cores, among other things, including a 256-bit wide memory bus. If NVIDIA is cramming 8 GB onto the card, it must be using some very high-density memory chips. The manifest also declares the card's market value at around 47,000 Indian Rupees. That converts to about US $780, but after all taxes and local markups, 47,000 INR is usually where $500-ish graphics cards end up in the Indian market. The R9 290X, for example, is going for that much.
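For the curious, the 8 GB figure can be sanity-checked with some back-of-the-envelope arithmetic. The sketch below assumes the rumored (unconfirmed) 256-bit bus, and the standard 32-bit interface of each GDDR5 chip:

```python
# Back-of-the-envelope check on how 8 GB could sit on a 256-bit bus.
BUS_WIDTH_BITS = 256          # rumored width, not confirmed by NVIDIA
CHIP_INTERFACE_BITS = 32      # each GDDR5 chip exposes a 32-bit interface
TOTAL_CAPACITY_GB = 8

channels = BUS_WIDTH_BITS // CHIP_INTERFACE_BITS            # 8 chips side by side

# Normal layout: one chip per 32-bit channel.
per_chip_gb = TOTAL_CAPACITY_GB / channels                  # 1 GB = 8 Gb per chip

# Clamshell layout: two chips share each channel, halving the density needed.
per_chip_clamshell_gb = TOTAL_CAPACITY_GB / (channels * 2)  # 0.5 GB = 4 Gb per chip

print(channels)                # 8
print(per_chip_gb)             # 1.0
print(per_chip_clamshell_gb)   # 0.5
```

Even in a 16-chip clamshell configuration, 8 GB needs 4 Gb chips, which sit at the top end of GDDR5 densities shipping today — consistent with the "very high density" remark above.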
Sources:
ChipHell, VideoCardz
66 Comments on GeForce GTX 880 ES Intercepted En Route Testing Lab, Features 8 GB Memory?
It also helps to uncover hardware and software design flaws. Imagine you build a new GPU, then a year later decide you'll make a 6 GB Titan card, and oops, the memory controller in the GPU doesn't work with that much memory.
As always, people are free to vote with their wallets. NVIDIA has a stronger brand and people are clearly happy to pay; again, I'm sorry, but that is just how it is.
How nice.
A! Yes.... 8GB RAM.
for example
It wrecked prices. Fermi had the 104s in a beautiful pricing segment, and then Kepler saw them jump because suddenly 104s were used for all the high-end parts. NV then stuffed 106s into the mid-high bracket when 106s were more mid-low. So they literally found a way to make people pay more for weaker GPUs.
Then we will still have the same-but-refined architecture for the GTX 990, on 20 nm, it seems to me. NVIDIA and Intel (like Sandy Bridge to Ivy Bridge) are both going the same route to make more money and not give us the latest technology right away when it's available. Remember, we've seen this with the 8800 to 9800 (same but refined architecture), then 400 to 500, then 600 to 700, and now we will get the 800 series, and ONLY IN THE 900 series will we get the true full Maxwell chip. That's what NVIDIA used to stand for: "NVIDIA, the way it's meant to be played." But I guess those days are going away. Don't get me wrong, I still would only buy NVIDIA GPUs.
I think the timeline went something like this:
- TSMC's 28 nm process had problems, as usual.
- There weren't that many 7970s at first.
- The 680 came in a bit later than the 7970.
- GK104 had some supply issues as well.
- The 7970 GHz Ed. came out.
- Non-competitive high pricing from both AMD and Nvidia.
- The 780 came out some time later.
- The 290X came out a few months after that.
Frankly, I think the 680 was simply the best Nvidia could get out of TSMC at that time. It was a new architecture with a complicated design on a new manufacturing node. The last time they tried this, they came up with the FX 5000 series, which was a bit of a flop. So they made the calculated choice not to rush things like last time, and to work through the issues to deliver a more solid product.
Unsurprisingly, AMD was in a similar position. They were getting sub-par results: good performance, but in a higher power envelope. They chose not to optimize power consumption as much, and to rush the design so they could deliver before Nvidia and conquer the market in the meantime. AMD as a whole was in a financially fragile situation at the time and really needed a few months of good sales. Not long after the initial release, AMD tried to mitigate their unoptimized design with the refined 7970 GHz Edition, somewhat successfully.
Both products stayed expensive for a long time because it was difficult to manufacture them at sufficiently high rates to satisfy demand.
Once Nvidia sorted things out with the manufacturing process, they could finally bring out the Fermi successor they envisioned from the start and the 780 came out.
The 290X came out a few months later and managed to deliver comparable performance, but had the same high power usage issues as the 7970. My conclusion is that the 290X was built on the same silicon process as the 7970, and that the two were designed in tandem from the start. Most likely AMD tried to solve the high power consumption, failed, and decided not to delay the release any further to prevent further loss of sales to the 780.
The GTX 680 came out and did screw with prices, because it was so good. AMD had to drop the price of the HD 7970 to compete with Nvidia's smaller, cooler, more energy-efficient mid-range part, on which Nvidia was getting better profit margins. The people who control the upper bracket of performance set the price bar. AMD didn't hesitate to throw the HD 7970 out there at $550 when the HD 6970 had launched at under $400. Just like they didn't hesitate to throw the FX-series Socket 939 CPUs out there for $1,000+ when they were substantially better than Intel's offerings. It's capitalism. If they are offering a superior product, they are going to charge a premium. Corvettes are more expensive than Camaros; it doesn't mean Camaros are rip-offs or crappy products. Show me a single benchmark where a 4 GB variant of a normally 2 GB card significantly outperforms it at higher resolutions. I'll wait. I haven't found a single one.
I look at it very negatively in that Nvidia decided to screw consumers and milk them for another upgrade.
AMD has a record of playing its GPU cards close to its chest (something the CPU division could learn from), so I'll reserve judgement for the time being. I wouldn't be at all surprised to see a re-run of the previous few generations with the GM204 vs. Pirate Islands episode. When people are arguing about superiority while the difference is a few percentage points, and an optimized game bench or two can swing the result one way or the other, it seems that neither side will put daylight between their product and the competition.
www.techpowerup.com/reviews/NVIDIA/GeForce_GTX_680/32.html
When you compare the true Kepler flagship GTX 780Ti to the GTX 680 it's clear where the GTX 680 fits on the Kepler scale.
www.geforce.com/hardware/desktop-gpus/geforce-gtx-295/specifications
I don't see why everyone assumes it's a 256-bit memory bus, as that would be a bit of a downgrade performance-wise. 256-bit was nothing but a rumor, so it's best to leave it as one until proper specs are released.
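The downgrade worry is easy to quantify with the standard formula: peak bandwidth = (bus width / 8) × effective data rate. A minimal sketch, where the 7 Gbps GM204 figure is my own illustrative assumption (the 780's 384-bit / 6 Gbps numbers are its published specs):

```python
def gddr5_bandwidth_gbs(bus_width_bits: int, data_rate_gbps: float) -> float:
    """Theoretical peak memory bandwidth in GB/s."""
    return bus_width_bits / 8 * data_rate_gbps

# GTX 780: 384-bit bus at 6 Gbps effective.
print(gddr5_bandwidth_gbs(384, 6.0))  # 288.0 GB/s

# Rumored GM204: 256-bit bus; 7 Gbps is an assumed clock, not a leaked spec.
print(gddr5_bandwidth_gbs(256, 7.0))  # 224.0 GB/s
```

So even with faster memory chips, a 256-bit GM204 would trail the 780's raw bandwidth unless the architecture compensates elsewhere.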