Wednesday, February 10th 2016

EVGA GeForce GTX 980 Ti VR Edition Starts Selling

The EVGA GeForce GTX 980 Ti VR EDITION has arrived. Accelerated by the groundbreaking NVIDIA Maxwell architecture, the GTX 980 Ti delivers an unbeatable 4K and virtual reality experience. With 2,816 NVIDIA CUDA Cores and 6GB of GDDR5 memory, it has the horsepower to drive whatever comes next. And with the VR EDITION, you get an included 5.25" drive bay with front-panel HDMI 2.0 and USB 3.0 ports, giving you easy access to your VR headset's inputs. The graphics card also has an internal HDMI connector, meaning no external cables will be visible.

New and Key Features:
  • Built for VR: Included 5.25" Drive Bay with Front HDMI 2.0 and USB 3.0.
  • Internal HDMI: Connects to a 5.25" Drive Bay, no visible external wires!
  • ACX 2.0+ Cooling: EVGA's latest cooling solution offers improved cooling, reduced fan noise and better overclocking.
  • 6GB of GDDR5 Memory: Gives you access to higher texture qualities in games and improved 4K gaming performance, and is optimized for the next generation of gaming.


For more information, visit this page.

37 Comments on EVGA GeForce GTX 980 Ti VR Edition Starts Selling

#1
xfia
did they add ultra wide low latency vram to make it better at vr? o nope guess not :rolleyes:
#2
Toothless
I want this for a tiny little lan box where you can plug everything but the power cable in the front. Makes things easier.
#3
MxPhenom 216
Corsair Fanboy
xfia said:
did they add ultra wide low latency vram to make it better at vr? o nope guess not :rolleyes:
You have a real stick up your ass when it comes to nvidia products huh? Some sort of conspiracy theory like they are out to get you.
#4
[502]
I want that breakout box...
#5
bubbleawsome
That front output is actually a good idea. I like it.
#6
xfia
MxPhenom 216 said:
You have a real stick up your ass when it comes to nvidia products huh? Some sort of conspiracy theory like they are out to get you.
idk? are they the dumb shits that fuck up games all the time?
#7
the54thvoid
xfia said:
idk? are they the dumb shits that fuck up games all the time?
No. That would be lazy developers. GameWorks is what you're getting at, I assume, but it's not entirely useless.
However, on topic, this VR edition is a bit of a con. Normal card with drive bay for cables. Hmm....
#8
redundantslurs
Wouldn't a cheap $10 10ft hdmi cable be a better solution instead of the breakout box?
#9
R-T-B
redundantslurs said:
Wouldn't a cheap $10 10ft hdmi cable be a better solution instead of the breakout box?
A better solution? No. A far more economical one though, which is why this is probably useless. I'm sure they charge a premium for it.
#10
vega22
xfia said:
did they add ultra wide low latency vram to make it better at vr? o nope guess not :rolleyes:
still 4+2 gb too which really helps developers...
#11
xfia
vega22 said:
still 4+2 gb too which really helps developers...
do i really want to know? did they really?
#12
Prima.Vera
4K only if you use all details at low....
#13
GhostRyder
I think it's actually a kinda clever add-on. I mean, it is good for those wanting to do VR to have the input like that at the front of a case. While there are alternatives to this, as long as the premium isn't crazy I see no problem with it!
#14
Slizzo
vega22 said:
still 4+2 gb too which really helps developers...
I'm sorry, what? Are you getting at that the 980Ti is like the 970 in the split ram dept? If so you couldn't be further from the truth.
#15
MxPhenom 216
Corsair Fanboy
Slizzo said:
I'm sorry, what? Are you getting at that the 980Ti is like the 970 in the split ram dept? If so you couldn't be further from the truth.
He's just a troll. He doesn't know.

If it weren't for the clever split ram on the 970, that card would have only 2gb. At least according to an interview of an nvidia representative when the entire Internet broke because of the whole thing. And then the guy who made the benchmark that people were using to test said it was not a valid way to verify the issue, and has asked people to stop.
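For reference, the 970's split comes from its partially disabled memory crossbar: seven of the eight 32-bit controllers serve the fast 3.5GB segment, and one serves the slow 0.5GB segment. A rough back-of-envelope sketch of the resulting bandwidths, assuming the published 970 specs (256-bit bus, 7 Gbps effective GDDR5) and ignoring that the two segments can't be read in parallel at full rate:

```python
# Back-of-envelope bandwidth figures for the GTX 970's segmented memory.
# Assumes published specs: 256-bit bus, 7 Gbps effective GDDR5 data rate.
EFFECTIVE_GBPS = 7.0    # effective data rate per pin (Gbps)
BUS_WIDTH_BITS = 256    # total memory bus width across 8 x 32-bit controllers

total_bw = EFFECTIVE_GBPS * BUS_WIDTH_BITS / 8         # all 8 controllers (GB/s)
fast_bw  = EFFECTIVE_GBPS * (BUS_WIDTH_BITS - 32) / 8  # 3.5GB segment: 7 controllers
slow_bw  = EFFECTIVE_GBPS * 32 / 8                     # 0.5GB segment: 1 controller

print(f"total: {total_bw:.0f} GB/s, fast 3.5GB: {fast_bw:.0f} GB/s, "
      f"slow 0.5GB: {slow_bw:.0f} GB/s")
# total: 224 GB/s, fast 3.5GB: 196 GB/s, slow 0.5GB: 28 GB/s
```

Which is exactly why the benchmark drama started: the last 512MB runs at 28 GB/s, an eighth of the card's headline bandwidth.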
#16
xfia
MxPhenom 216 said:
He's just a troll. He doesn't know.

If it weren't for the clever split ram on the 970, that card would have only 2gb. At least according to an interview of an nvidia representative when the entire Internet broke because of the whole thing. And then the guy who made the benchmark that people were using to test said it was not a valid way to verify the issue, and has asked people to stop.
yes according to the people that want to save some skin on their ass lol

maxwell cores are the best of the 28nm process but everything else attached to that core... yup. hitting high frequency with less cores making things more efficient overall but starved already by vram bandwidth and relying on compression to keep relevance.

they talk up some garbage that now with maxwell we can cut down the arch like this... oops fail there not following the leader. the 970 has such high processing ability per core it would be silly to see one with 2gb or even 3gb especially with lower bandwidth and slimmer width if you go by amd standard for how a gpu should be balanced.

if amd made the 970...4gb
if amd made the 980...6gb
if amd made the 980ti..8gb
if amd made the titan..12gb

but they dont want messy situation with high end gpu's especially regarding fields they are among world leaders in. they tell no lie that 4gb of hbm is just as good as 8gb of gddr5 but if they cant get nv to figure it out then what the hell should they do. truth is its just a load of shit and nv knows what they are doing and they know most people have no damn idea.. even at the patent office.
#17
the54thvoid
xfia said:
yes according to the people that want to save some skin on their ass lol

maxwell cores are the best of the 28nm process but everything else attached to that core... yup. hitting high frequency with less cores making things more efficient overall but starved already by vram bandwidth and relying on compression to keep relevance.

they talk up some garbage that now with maxwell we can cut down the arch like this... oops fail there not following the leader. the 970 has such high processing ability per core it would be silly to see one with 2gb or even 3gb especially with lower bandwidth and slimmer width if you go by amd standard for how a gpu should be balanced.

if amd made the 970...4gb
if amd made the 980...6gb
if amd made the 980ti..8gb
if amd made the titan..12gb

but they dont want messy situation with high end gpu's especially regarding fields they are among world leaders in. they tell no lie that 4gb of hbm is just as good as 8gb of gddr5 but if they cant get nv to figure it out then what the hell should they do. truth is its just a load of shit and nv knows what they are doing and they know most people have no damn idea.. even at the patent office.
Fury X = 8.9 billion transistors
980ti = 8 billion transistors

Fury X = 4096 shaders
980ti = 2816 shaders/cuda cores

Fury X texture fillrate = 268.8 GT/s
980ti texture fillrate = 176 GT/s

Fury X ROPS = 64
980ti ROPS = 96

Fury X pixel fillrate = 67.2 GP/s
980ti pixel fillrate = 96 GP/s

Fury X memory bus = 4096 bit
980ti memory bus = 384 bit

HBM doesn't make AMD awesome, it helps gloss over the Fiji arch's weaknesses. When it comes to business, Nvidia knew what they were doing perfectly well. If you compare the hardware numbers, Fury X should wipe the floor with Maxwell but it doesn't. However they did it, they misfired. And as great as the HBM implementation is, it was an early adoption with little traction. Arguments suggest they had to go HBM otherwise their power draw would have been silly.

Don't get me wrong, I think Fury X is a great card but it should be better.
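The fillrate figures in that comparison are just unit counts multiplied by clock. A quick sanity-check sketch, assuming reference clocks (1050 MHz Fury X, 1000 MHz 980 Ti base) and the TMU counts (256 vs. 176), which aren't listed in the post above:

```python
# Theoretical fillrate = functional-unit count x clock.
# Assumed reference clocks: Fury X 1050 MHz, GTX 980 Ti 1000 MHz (base).
def fillrate(units, clock_mhz):
    """Giga-operations per second: units * clock in GHz."""
    return units * clock_mhz / 1000.0

cards = {
    "Fury X": {"tmus": 256, "rops": 64, "clock": 1050},
    "980 Ti": {"tmus": 176, "rops": 96, "clock": 1000},
}

for name, c in cards.items():
    tex = fillrate(c["tmus"], c["clock"])  # texture fillrate (GT/s)
    pix = fillrate(c["rops"], c["clock"])  # pixel fillrate (GP/s)
    print(f"{name}: texture {tex:.1f} GT/s, pixel {pix:.1f} GP/s")
# Fury X: texture 268.8 GT/s, pixel 67.2 GP/s
# 980 Ti: texture 176.0 GT/s, pixel 96.0 GP/s
```

The numbers reproduce the quoted specs, and they show the trade-off plainly: Fiji stacked texture/shader throughput while GM200 kept more ROPs, hence the 980 Ti's pixel-fillrate edge.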
#18
deemon
nvidia for VR ... :D
#19
xfia
deemon said:
nvidia for VR ... :D
with the prices how they are right now its nano crossfire if you have a rift receipt already
#20
Casecutter
the54thvoid said:
980ti = 2816 shaders/cuda cores
Not so much to argue, but didn't Nvidia revise what or how they call/count a CUDA core, and isn't that why that number doesn't jibe? I thought they went from fewer, higher-clocked units on Fermi to a larger number of lower-clocked units on Kepler, so it's hard to call one shader equal to another.

It's hard to say, because there's more to it than the "bits and pieces"; it's their sum together, and how well they interact. On that we can say Fiji doesn't appear to make the most of what's placed on it vs. the GM200, only because GTX 980 Ti cards in more cases provide a percentage increase in FPS… which is one metric. The only thing you can really say is Nvidia has a 601mm² part, while AMD's nearly competitive part is 596mm², not much else.
#21
Fluffmeister
I'm impressed (although I guess shouldn't be surprised) how Nvidia completely stole AMD's Fury X thunder. They sliced their GM200 as required, dictated the price and sat back and watched the fireworks (and tears).

Fact is Nvidia didn't need a fully fledged GM200 to compete with Fiji.
#22
MxPhenom 216
Corsair Fanboy
Casecutter said:
Not so much to argue, but didn't Nvidia revise what or how they call/count a CUDA core, and isn't that why that number doesn't jibe? I thought they went from fewer, higher-clocked units on Fermi to a larger number of lower-clocked units on Kepler, so it's hard to call one shader equal to another.

It's hard to say, because there's more to it than the "bits and pieces"; it's their sum together, and how well they interact. On that we can say Fiji doesn't appear to make the most of what's placed on it vs. the GM200, only because GTX 980 Ti cards in more cases provide a percentage increase in FPS… which is one metric. The only thing you can really say is Nvidia has a 601mm² part, while AMD's nearly competitive part is 596mm², not much else.
Well, when it comes to Nvidia and AMD architectures and core design, you can't compare them core-for-core.
#23
xfia
Fluffmeister said:
I'm impressed (although I guess shouldn't be surprised) how Nvidia completely stole AMD's Fury X thunder. They sliced their GM200 as required, dictated the price and sat back and watched the fireworks (and tears).

Fact is Nvidia didn't need a fully fledged GM200 to compete with Fiji.
lies... the titan doesnt really sell to people unless they plan to put a water block on it and oc it. having a lot to do with how much heat it produces already and needing the overclock to even be worth the money. pretty much all efficiency out the window at that point but at least you can overclock it when its water.. the furyx has a custom water cooler and overclocks for shit.
#24
xfia
MxPhenom 216 said:
Well, when it comes to Nvidia and AMD architectures and core design, you can't compare them core-for-core.
there is certainly a level of comparison to be had.
that would probably be a good thread.. no? pull up some white papers on compute units for both camps?
#25
Fluffmeister
xfia said:
lies... the titan doesnt really sell to people unless they plan to put a water block on it and oc it. having a lot to do with how much heat it produces already and needing the overclock to even be worth the money. pretty much all efficiency out the window at that point but at least you can overclock it when its water.. the furyx has a custom water cooler and overclocks for shit.
Titan X sells just fine, in fact it wouldn't surprise me if it outsold every Fiji based card with ease.

Fury X is water cooled because it is already clocked to the balls to compete; its lack of headroom is there for everyone to see... sorry. :P