Discussion in 'NVIDIA' started by qubit, Aug 30, 2012.
I think we'd have seen a higher-clocked GK104, nothing more, nothing less.
GK110 wasn't ready imo.
TPU wrong? Well Recus, you should leave the forums now, that's heresy.
And I do read many, many other review sites. I prefer Anand, Hexus & TPU as my staples, but the German site HT4U.net does a pretty comprehensive review as well - shame I don't speak German! I find a lot of other review sites to be poor, though; many fail to bench cards at high resolutions, preferring to stick to 1080p, which is console-tastically underwhelming. The irony is, people don't need 680s and 7970s at that resolution.
Quite so. But rumours suggested AMD weren't sure the initial bins would stand up to the core clock increases; with hindsight, they did clock very well indeed. Shame they priced so high initially (which also brought $$$ signs to Nvidia's eyes). Still, there's always Larrabee... no wait, Knights Corner, or is it Broadwell... Intel won't stay GPU-agnostic forever...
I'm not sure how much AMD knew about what Nvidia was up to. Kepler was a well-preserved secret until very close to launch, so they could have assumed any scenario: GK110 beating the crap out of Tahiti, GK110 proving hard to manufacture and not reaching the market until very late, or GK114 never touching Tahiti. Also, they were prepared to gamble since they launched first. I think the main reason was to keep power consumption reasonable and the reference cooling decent.
Intel has no intention to tackle the consumer discrete market which, I'm sad to say, is in fast decline.
Yeah, they're looking towards full SoC solutions. It's a pretty safe bet that in the next 5-10 years discrete GPUs will be gone, unless 4K resolution suddenly becomes the norm. But we have to face facts: the next console refresh will be the guiding light for GPU acceleration (in a bad way).
I suppose our next challenge (as system builders) will be: how small can I make my PC in 5 years' time, and how can I make an excuse to put a water loop in it?
This. It can be seen in what Intel is doing with discrete accelerators in the HPC market: they are focusing on accelerators that are more similar to CPUs than GPUs, and they are banking on ease of coding.
I've read some articles about the Xeon Phi coprocessor and how it is gaining territory with its relative ease of programming compared to GCN or Kepler (CUDA).
I've read the same articles, I think. Some of them imply Xeon Phi will decimate Nvidia's HPC market share.
It's not so simple. Nvidia's clients hooked on CUDA will not change overnight, but it is a serious threat. NV has a 45% operating margin on "professional solutions", as they call it.