Wednesday, September 2nd 2020

NVIDIA Announces GeForce Ampere RTX 3000 Series Graphics Cards: Over 10000 CUDA Cores

NVIDIA just announced its new generation GeForce "Ampere" graphics card series. The company is taking a top-to-bottom approach with this generation, much like "Turing," by launching its two top-end products first: the GeForce RTX 3090 24 GB and the GeForce RTX 3080 10 GB graphics cards. Both cards are based on the 8 nm "GA102" silicon. Join us as we live-blog the pre-recorded stream by NVIDIA, hosted by CEO Jen-Hsun Huang.

Update 16:04 UTC: Fortnite gets RTX support. NVIDIA demoed an upcoming update to Fortnite that adds DLSS 2.0, ambient occlusion, and ray-traced shadows and reflections. Coming soon.
Update 16:06 UTC: NVIDIA Reflex technology works to reduce e-sports game latency. Without elaborating, NVIDIA spoke of a feature that works to reduce input and display latencies "by up to 50%". The first supported games will be Valorant, Apex Legends, Call of Duty Warzone, Destiny 2 and Fortnite—in September.
Update 16:07 UTC: Announcing NVIDIA G-SYNC eSports Displays—a 360 Hz IPS dual-driver panel that launches through various monitor partners this fall. The display has a built-in NVIDIA Reflex precision latency analyzer.
Update 16:07 UTC: NVIDIA Broadcast is a brand-new app, available in September, that provides a turnkey solution for enhancing video and audio streams using the AI capabilities of GeForce RTX. It makes it easy to filter and improve your video, add AI-based backgrounds (static or animated), and it builds on RTX Voice to filter out background noise from audio.
Update 16:10 UTC: Ansel evolves into Omniverse Machinima, an asset exchange that helps independent content creators to use game assets to create movies. Think fan-fiction Star Trek episodes using Star Trek Online assets. Beta in October.
Update 16:15 UTC: Updates to the AI tensor cores and RT cores. In addition to higher RT and tensor core counts, the 2nd-generation RT cores and 3rd-generation tensor cores offer higher IPC. Minimizing the performance impact of ray tracing appears to be a key engineering goal with Ampere.
Update 16:18 UTC: Ampere 2nd Gen RTX technology. Traditional shader throughput is up 2.7x, ray-tracing units are 1.7x faster, and the tensor cores bring a 2.7x speedup.
Update 16:19 UTC: Here it is! Samsung 8 nm and Micron GDDR6X memory. The announcement of Samsung and 8 nm came out of nowhere, as we were widely expecting TSMC 7 nm. Apparently NVIDIA will use Samsung for its Ampere client-graphics silicon, and TSMC for the lower-volume A100 professional-level scalar processors.
Update 16:20 UTC: Ampere has almost twice the performance per Watt compared to Turing!
Update 16:21 UTC: Marbles 2nd Gen demo is jaw-dropping! NVIDIA demonstrated it at 1440p 30 Hz, or 4x the workload of first-gen Marbles (720p 30 Hz).
Update 16:23 UTC: Cyberpunk 2077 features prominently in the next generation. NVIDIA is banking extensively on the game to highlight the advantages of Ampere. The 200 GB game could absorb gamers for weeks or months on end.
Update 16:24 UTC: New RTX IO technology accelerates the storage sub-system for gaming. This works in tandem with the new Microsoft DirectStorage technology, the Windows API counterpart of the Xbox Velocity Architecture, which is able to pull resources from disk directly into the GPU. It requires game engines to support the technology. The tech promises a 100x throughput increase and significant reductions in CPU utilization. It's timely, as PCIe Gen 4 SSDs are on the anvil.

Update 16:26 UTC: Here it is, the GeForce RTX 3080, 10 GB GDDR6X, running at 19 Gbps, 238 tensor TFLOPs, 58 RT TFLOPs, 18 power phases.
Update 16:29 UTC: New airflow cooler design, offering 90 W more cooling performance than the Turing FE cooler.
Update 16:30 UTC: Performance leap: $700, 2x as fast as the RTX 2080, available September 17.
Update 17:05 UTC: GDDR6X was purpose-developed by NVIDIA and Micron Technology, which could be an exclusive vendor of these chips to NVIDIA. These chips use the new PAM4 encoding scheme to significantly increase data-rates over GDDR6. On the RTX 3090, the chips tick at 19.5 Gbps (data rates), with memory bandwidths approaching 940 GB/s.
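As a sanity check, that bandwidth figure follows directly from the per-pin data rate and the bus width; a quick back-of-the-envelope sketch (assuming the RTX 3090's known 384-bit memory bus):

```python
# Rough GDDR6X bandwidth check for the RTX 3090 (384-bit bus assumed).
data_rate_gbps = 19.5          # per-pin data rate, as stated above (Gbps)
bus_width_bits = 384           # RTX 3090 memory bus width

bandwidth_gb_per_s = data_rate_gbps * bus_width_bits / 8
print(f"{bandwidth_gb_per_s:.0f} GB/s")   # ~936 GB/s, i.e. "approaching 940 GB/s"
```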
Update 16:31 UTC: RTX 3070, $500, faster than RTX 2080 Ti, 60% faster than RTX 2070, available in October. 20 shader TFLOPs, 40 RT TFLOPs, 163 tensor TFLOPs, 8 GB GDDR6.
Update 16:33 UTC: Call of Duty: Black Ops Cold War is RTX-on.

Update 16:35 UTC: RTX 3090 is the new TITAN. The Giant Ampere. A BFGPU with 24 GB GDDR6X, $1,500, available from September 24. It is designed to power 60 FPS gaming at 8K resolution and is up to 50% faster than the Titan RTX.

Update 16:43 UTC: Wow, I want one. On paper, the RTX 3090 is the kind of card I want to upgrade my monitor for. Not sure if a GPU ever had that impact.
Update 16:59 UTC: Insane CUDA core counts, 2-3x increase generation-over-generation. You won't believe these.
Update 17:01 UTC: GeForce RTX 3090 in detail. Over ten thousand CUDA cores!
Update 17:02 UTC: GeForce RTX 3080 details. More insane specs.

Update 17:03 UTC: The GeForce RTX 3070 has more CUDA cores than a TITAN RTX. And it's $500. Really wish these cards came out in March. 2020 would've been a lot better.
Here's a list of the top 10 Ampere features.

Update 19:22 UTC: For a limited time, gamers who purchase a new GeForce RTX 30 Series GPU or system will receive a PC digital download of Watch Dogs: Legion and a one-year subscription to the NVIDIA GeForce NOW cloud gaming service.

Update 19:47 UTC: All Ampere cards support HDMI 2.1. The increased bandwidth provided by HDMI 2.1 allows, for the first time, a single cable connection to 8K HDR TVs for ultra-high-resolution gaming. Also supported is AV1 video decode.
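For context, a rough link-budget calculation (ignoring blanking intervals and assuming 10-bit RGB for HDR) shows why the jump from HDMI 2.0 to 2.1 is what makes a single-cable 8K60 HDR connection feasible, with Display Stream Compression covering the remaining gap:

```python
# Rough 8K60 HDR bandwidth estimate vs. HDMI link rates (blanking ignored).
width, height, refresh_hz = 7680, 4320, 60
bits_per_pixel = 30                      # 10-bit RGB for HDR

pixel_gbps = width * height * refresh_hz * bits_per_pixel / 1e9
hdmi20_effective = 18 * 8 / 10           # 18 Gbps TMDS, 8b/10b coding -> 14.4 Gbps
hdmi21_effective = 48 * 16 / 18          # 48 Gbps FRL, 16b/18b coding -> ~42.7 Gbps

print(f"8K60 HDR pixel data: {pixel_gbps:.1f} Gbps")        # ~59.7 Gbps
print(f"HDMI 2.0: {hdmi20_effective:.1f} Gbps, HDMI 2.1: {hdmi21_effective:.1f} Gbps")
# HDMI 2.0 falls far short; HDMI 2.1 gets close and relies on DSC for the rest.
```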

Update 20:06 UTC: Added the complete NVIDIA presentation slide deck at the end of this post.

Update Sep 2nd: We received the following info from NVIDIA regarding international pricing:
  • UK: RTX 3070: GBP 469, RTX 3080: GBP 649, RTX 3090: GBP 1399
  • Europe: RTX 3070: EUR 499, RTX 3080: EUR 699, RTX 3090: EUR 1499 (this might vary a bit depending on local VAT)
  • Australia: RTX 3070: AUD 809, RTX 3080: AUD 1139, RTX 3090: AUD 2429

502 Comments on NVIDIA Announces GeForce Ampere RTX 3000 Series Graphics Cards: Over 10000 CUDA Cores

#401
Manoa
LycanwolfenSingle card 3 slots ouch. I know you guys say video games today do not use SLI. Well I beg to differ. I went to 4k gaming couple years ago and I play a few games like FFXIV and doom and few others. I bought a single 1070ti and in FFXIV at 4k i pushed about 50 to 60 fps but in SLI I pushed over 120 FPS @ 4k so the game says it does not support it but it does. SLI is always on. I have met many people that bought SLI and did not know how to configure it so they never saw the benefit from it. Also in SLI the game ran smoother cleaner. Now maybe it cannot address all the memory but it still can use both GPU's which increase the smoothness.

When I ran 1080 P gaming I ran two 660ti's in SLI and everything was sweet. But 4k nope could not handle the load.
yes, you increased smoothness by increasing fps, but you also increase the latency of every one of those frames by a factor of 2 or even 3. Fermi was the last of them; since then it's all been dead :x
Posted on Reply
#402
etayorius
I paid $370 for my GTX 1070 in 2017. $500 for a 3070 does not seem very fair to me. Next gen, the GTX 4070 will be $600.
Posted on Reply
#403
Chrispy_
efikkanI think you are putting too much faith in game developers. Most of them just take an off-the-shelf game engine, load in some assets, do some scripting and call it a game. Most game studios don't do a single line of low-level engine code, and the extent of their "optimizations" are limited to adjusting assets to reach a desired frame rate.
I graduated alongside, lived with, and stay in touch with multiple game developers from Campos Santos (now Valve), Splash Damage, Jagex, Blizzard, King, Ubisoft, and by proxy EA, and Activision; I think they'd all be insulted by your statement. More importantly, even if there is a grain of truth to what you say, the "off-the-shelf engines" have been slowly but surely migrating to console-optimised engines over the last few years.
efikkanNot really. The difference between a "standard" 500 MB/s SSD and a 3 GB/s SSD will be loading times. For resource streaming, 500 MB/s is plenty.
Also, don't forget that these "cheap" NVMe QLC SSDs can't deliver 3 GB/s sustained, so if a game truly depended on this, you would need an SLC SSD or Optane.
You're overanalyzing this. I said 3GB/s simply because that's a commonly-accepted read speed of a typical NVMe drive. Also, even the worst PCIe 3.0 x4 drives read at about 3GB/s sustained, no matter whether they're QLC or MLC. The performance differences between QLC and MLC are only really apparent in sustained write speeds.
efikkanGames in general isn't particularly good at utilizing the hardware we have currently, and the trend in game development has clearly been less performance optimization, so what makes you think this will change all of a sudden?
My point was exactly that. Perhaps English isn't your first language but when I said "the last 25 years of PC gaming has proven that devs always cater to the lowest common denominator" - that was me saying that it ISN'T going to change suddenly, and it's been like this for 25 years without changing. That's exactly why games aren't particularly good at utilising the hardware we have currently, because the devs need to make sure it'll run on a dual-core with 4GB RAM and a 2GB graphics card from 9 years ago.
Posted on Reply
#404
medi01
efikkanThe only thing missing from the Steam hardware survey is people who buy graphics cards and don't game.
I used to buy graphics cards, game and not be on Steam.
Pretty sure most people who are into Blizzard games do not use steam.
That doesn't explain why WoW players would necessarily skip NV... and this is when AMD bothered to explain what is going on.
Their main argument was their absence from the internet cafe business, which was skewing the figures a lot (each user that logged in was counted separately).
Steam fixed it somewhat, but not entirely to AMD's liking (in AMD's words), with Valve brushing it off as not really caring about how representative that survey is. (yikes)

Mindfactory is a major PC parts online shop in Germany and it shows the buying habits of the respective DIY demographic in Germany. I don't see why that is not relevant.
ValantarAll that graph says is that "Ampere" (likely the GA104 die, unknown core count and memory configuration) at ~140W could match the 2080 Ti/TU102 at ~270W. Which might very well be true, but we'll never know outside of people undervolting and underclocking their GPUs, as Nvidia is never going to release a GPU based on this chip at that power level (unless they go entirely insane on mobile, I guess).
This makes the statement fairly useless, whereas AMD's perf/w claim (+50% in RDNA2) is reflecting practical reality at least in TPU reviews.
Posted on Reply
#405
Valantar
BoboOOZI'm not shifting perspectives, you're probably overanalysing my (maybe too short) messages.
Sorry, but no. You started out by arguing from the viewpoint of gamers needing more VRAM - i.e. basing your argument in customer needs. Regardless of your intentions, shifting the basis of the argument to the viewpoint of the company is a dramatic shift that introduces conflicting interests to your argumentation, which you need to address.
BoboOOZThe whole point should be considered only from the viewpoint of the company, in a more or less competitive market.
Again, I have to disagree. I don't give a rodent's behind about the viewpoint of Nvidia. They provide a service to me as a (potential) customer: providing compelling products. They, however, are in it for the profit, and often make choices in product segmentation, pricing, featuresets, etc. that are clearly aimed at increasing profits rather than providing benefits to the customer. There are of course relevant arguments to be presented in terms of whether what customers may need/want/wish for is feasible in various ways (technologically, economically, etc.), but that is as much of the viewpoint of the company as should be taken into account here. Adopting an Nvidia-internal perspective on this is meaningless for anyone who doesn't work for Nvidia, and IMO even meaningless for them unless that person is in a decision-making position when it comes to these questions.
BoboOOZI'm pretty certain that in a year or two there will be more games, requiring more than 10k of VRAM in certain situations, but I think if AMD comes out with a competitive option for the 2080 (with a more reasonable amount of memory), reviews will point out this problem, let's say, in the next 5 months. If this happens , Nvidia will have to react to remain competitive (they are very good at this).
There will definitely be games requiring more than 10k of VRAM ;) But 10GB? Again, I have my doubts. Sure, there will always be outliers, and there will always be games that take pride in being extremely graphically intensive. There will also always be settings one can enable that consume massive amounts of VRAM if desired, mostly with negligible if noticeable at all impacts on graphical quality. But beyond that, the introduction of DirectStorage for Windows and alongside that the very likely beginning of SSDs being a requirement for most major games in the future will directly serve to decrease VRAM needs. Sure, new things can be introduced to take up the space freed up by not prematurely streaming in assets that never get used, but the chance of those new features taking up all that was freed up plus a few GB more is very, very slim. Of course not every game will use DirectStorage, but every cross-platform title launching on the XSX will at least have it as an option - and removing it might necessitate rearchitecting the entire structure of the game (adding loading screens, corridors, etc.), so it's not something that can be removed easily.
BoboOOZNo SLI on the 3080? Anyways, I will repeat myself, Nvidia will do this only if they have to, and AMD beats the 3080.
SLI? That's a gaming feature. And you don't even need SLI for gaming with DX12 multi-adapter and the like. Compute workloads do not care one iota about SLI support. NVLink does have some utility if you're teaming up the GPU to work as one, but it's just as likely (for example in huge database workloads, which can consume massive amounts of memory) that each GPU can do the same task in parallel, working on different parts of the dataset, in which case PCIe handles all the communication needed. The same goes for things like rendering.
BoboOOZDo you mean to say that the increase from 8 to 10 GB is proportional with the compute and bandwith gap between the 2080 and the 3080? It's rather obvious that it's not. On the contrary, if you look at the proportions, the 3080 is the outlier of the lineup, it has 2x the memory bandwidth of the 3070 but only 25% more VRAM
...and? Increasing the amount of VRAM to 20GB won't change the bandwidth whatsoever, as the bus width is fixed. For that to change they would have to add memory channels, which we know there are two more of on the die, so that's possible, but then you're talking either 11/22GB or 12/24GB - the latter of which is where the 3090 lives. The other option is of course to use faster rated memory, but the chances of Nvidia introducing a new SKU with twice the memory and faster memory is essentially zero at least until this memory becomes dramatically cheaper and more widespread. As for the change in memory amount between the 2080 and the 3080, I think it's perfectly reasonable, both because the amount of memory isn't directly tied to feeding the GPU (it just needs to be enough; more than that is useless) but bandwidth is (which has seen a notable increase), and because - once again - 10GB is likely to be plenty for the vast majority of games for the foreseeable future.
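To put rough numbers on that (a sketch using the RTX 3080's public figures; exact values approximate): capacity is chip density times the number of 32-bit channels, while bandwidth is data rate times bus width, so doubling chip density changes capacity but not bandwidth.

```python
# RTX 3080: 320-bit bus = ten 32-bit channels, one GDDR6X chip per channel.
chips = 10                      # 320-bit bus / 32 bits per chip
data_rate_gbps = 19             # per-pin data rate on the RTX 3080

bandwidth_gb_per_s = data_rate_gbps * chips * 32 / 8        # 760 GB/s
for density_gbit in (8, 16):                                # 8 Gb vs 16 Gb chips
    capacity_gb = chips * density_gbit / 8                  # 10 GB vs 20 GB
    print(f"{capacity_gb:.0f} GB at {bandwidth_gb_per_s:.0f} GB/s")
# Both 10 GB and 20 GB land at the same 760 GB/s; only adding channels
# (the full GA102 has 384 bits, as on the 3090) moves bandwidth, which is
# why the realistic alternatives are 11/22 GB or 12/24 GB.
```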
BoboOOZOlder games have textures optimized for viewing at 10802p. This gen is about 4k gaming being really possible, so we'll see more detailed textures.
They will be leveraged on the consoles via streaming from the SSD, and on PCs via increasing RAM/VRAM usage.
The entire point of DirectStorage, which Nvidia made a massive point out of supporting with the 3000-series, is precisely to handle this in the same way as on consoles. So that statement is fundamentally false. If a game uses DirectStorage on the XSX, it will also do so on W10 as long as the system has the required components. Which any 3000-series-equipped system will have. Which will, once again, reduce VRAM usage.
medi01First, this makes the statement fairly useless, whereas AMD's perf/w claim (+50% in RDNA2) is reflecting practical reality at least in TPU reviews.
It absolutely makes the statement useless. That's how marketing works (at least in an extremely simplified and partially naive view): you pick the best aspects of your product and promote them. Analysis of said statements very often show them to then be meaningless when viewed in the most relevant context. That doesn't make the statement false - Nvidia could likely make an Ampere GPU delivering +90% perf/W over Turing, if they wanted to - but it makes it misleading given that it doesn't match the in-use reality of the products that are actually made. I also really don't see how the +50% perf/W for RDNA 2 claim can be reflected in any reviews yet, given that no reviews of any RDNA 2 product exist yet (which is natural, seeing how no RDNA 2 products exist either).
Posted on Reply
#406
medi01
ValantarI also really don't see how the +50% perf/W for RDNA 2
I actually meant RDNA1.
Posted on Reply
#407
R0H1T
Valantarand removing it might necessitate rearchitecting the entire structure of the game (adding loading screens, corridors, etc.), so it's not something that can be removed easily.
Not sure how accurate that is, there's rumors of a cheap Xbox following (accompanying?) the regular one's release & that one sure as hell isn't going to use just as fast an SSD
Posted on Reply
#408
mouacyk
To the people who don't like the high stock TDPs, you can thank overclockers who went to all ends to circumvent NVidia's TDP lockdown on Pascal and Turing. NVidia figured that if people would go to extreme lengths to shunt mod flagship GPUs to garner power in excess of 400W, why not push a measly 350W and look good in performance at the same time?
Posted on Reply
#409
FeelinFroggy
The pricing and CUDA core count are definitely a surprise. While competition from AMD's RDNA is a driver for the leap, I think that the next-gen console release is what is pushing this performance jump and price decrease. I think that Nvidia fears that PC gaming is getting too expensive and the next-gen consoles may take away some market share if prices can't be lowered.

Plus, it is apparent that Nvidia has been sandbagging since Pascal, as AMD just had nothing to compete with.
Posted on Reply
#410
Valantar
R0H1TNot sure how accurate that is, there's rumors of a cheap Xbox following (accompanying?) the regular one's release & that one sure as hell isn't going to use just as fast an SSD
Actually it is guaranteed to use that. The XSX uses a relatively cheap ~2.4GB/s SSD. The cheaper one might cut the capacity in half, but it won't move away from NVMe. The main savings will come from less RAM (lots of savings), a smaller SoC (lots of savings) and accompanying cuts in the PSU, VRM, cooling, likely lack of an optical drive, etc. (also lots of savings when combined). The NVMe storage is such a fundamental part of the way games are built for these consoles that you can't even run the games off slower external storage, so how would that work with a slower drive internally?
Posted on Reply
#411
Shatun_Bear
DuxCroAccording to RTX 2080 review on Guru3D, it achieves 45 fps average in Shadow of the Tomb Raider in 4K and same settings as Digital Foundry used in their video. But they achieve around 60fps. Which is only 33% more. But they claim avg fps is 80% higher. You can see fps counter in left top corner with Tomb Raider. Vsync was on in captured footage? If there was 80% increase in performance. Avg fps should be around 80. RTX 2080Ti fps on same cpu DF was using should be over 60 in Shadow of TR. So RTX 3080 is Just around 30% faster than 2080Ti. So the new TOP of the line Nvidia gaming GPU is just 30% faster than previous TOP of the line GPU. When you look at it like that, i really don't see any special jump in performance. RTX 3090 is for professionals and i don't even count it in at that price.
I can't wait to see the REAL performance increase by reputable sites (Digital Foundry is not reputable, this was a paid marketing deal for Nvidia) of a 3080 vs a 2080 or 2080 Ti. Without the cherry-picking, marketing fiddling of figures and nebulous tweaking of settings (RT, DLSS, vsync etc).

Before it's even been reliably benchmarked it's being proclaimed as the greatest thing ever. But I've been in this game long enough to know that the figures sans RT are not nearly as impressive as is being touted by Nvidia's world class marketing and underhanded settings fiddling.
Posted on Reply
#413
xorbe
I just can't imagine having a 350W card in my system. 250W is pretty warm. I felt that 180W blower was a sweet spot. All of these cards should have at least 16GB imho.
Posted on Reply
#414
Shatun_Bear
The Series X SSD is not exactly fast or advanced. The PS5's, ok, that is impressive.

So the budget Series S will surely have the same SSD as the X for reasons mentioned above. If it doesn't, MS have created even more problems for themselves with game development.
Posted on Reply
#415
BluesFanUK
Awaiting reviews before I take any real interest in this. Nvidia have a history of bullshitting.
Posted on Reply
#416
Makaveli
etayoriusI paid $370 for my GTX 1070 in 2017. $500 for a 3070 does not seem very fair to me. Next gen, the GTX 4070 will be $600.
Since when did NV care about what is fair? They will sell at what the market will bear.
Shatun_BearThe Series X SSD is not exactly fast or advanced. The PS5's, ok, that is impressive.

So the budget Series S will surely have the same SSD as the X for reasons mentioned above. If it doesn't, MS have created even more problems for themselves with game development.
There is a reason the Direct Storage API was created.
Posted on Reply
#417
BoboOOZ
ValantarSorry, but no. You started out by arguing from the viewpoint of gamers needing more VRAM - i.e. basing your argument in customer needs. Regardless of your intentions, shifting the basis of the argument to the viewpoint of the company is a dramatic shift that introduces conflicting interests to your argumentation, which you need to address.

Again, I have to disagree. I don't give a rodent's behind about the viewpoint of Nvidia. They provide a service to me as a (potential) customer: providing compelling products. They, however, are in it for the profit, and often make choices in product segmentation, pricing, featuresets, etc. that are clearly aimed at increasing profits rather than providing benefits to the customer. There are of course relevant arguments to be presented in terms of whether what customers may need/want/wish for is feasible in various ways (technologically, economically, etc.), but that is as much of the viewpoint of the company as should be taken into account here. Adopting an Nvidia-internal perspective on this is meaningless for anyone who doesn't work for Nvidia, and IMO even meaningless for them unless that person is in a decision-making position when it comes to these questions.

There will definitely be games requiring more than 10k of VRAM ;) But 10GB? Again, I have my doubts. Sure, there will always be outliers, and there will always be games that take pride in being extremely graphically intensive. There will also always be settings one can enable that consume massive amounts of VRAM if desired, mostly with negligible if noticeable at all impacts on graphical quality. But beyond that, the introduction of DirectStorage for Windows and alongside that the very likely beginning of SSDs being a requirement for most major games in the future will directly serve to decrease VRAM needs. Sure, new things can be introduced to take up the space freed up by not prematurely streaming in assets that never get used, but the chance of those new features taking up all that was freed up plus a few GB more is very, very slim. Of course not every game will use DirectStorage, but every cross-platform title launching on the XSX will at least have it as an option - and removing it might necessitate rearchitecting the entire structure of the game (adding loading screens, corridors, etc.), so it's not something that can be removed easily.

SLI? That's a gaming feature. And you don't even need SLI for gaming with DX12 multi-adapter and the like. Compute workloads do not care one iota about SLI support. NVLink does have some utility if you're teaming up the GPU to work as one, but it's just as likely (for example in huge database workloads, which can consume massive amounts of memory) that each GPU can do the same task in parallel, working on different parts of the dataset, in which case PCIe handles all the communication needed. The same goes for things like rendering.

...and? Increasing the amount of VRAM to 20GB won't change the bandwidth whatsoever, as the bus width is fixed. For that to change they would have to add memory channels, which we know there are two more of on the die, so that's possible, but then you're talking either 11/22GB or 12/24GB - the latter of which is where the 3090 lives. The other option is of course to use faster rated memory, but the chances of Nvidia introducing a new SKU with twice the memory and faster memory is essentially zero at least until this memory becomes dramatically cheaper and more widespread. As for the change in memory amount between the 2080 and the 3080, I think it's perfectly reasonable, both because the amount of memory isn't directly tied to feeding the GPU (it just needs to be enough; more than that is useless) but bandwidth is (which has seen a notable increase), and because - once again - 10GB is likely to be plenty for the vast majority of games for the foreseeable future.

The entire point of DirectStorage, which Nvidia made a massive point out of supporting with the 3000-series, is precisely to handle this in the same way as on consoles. So that statement is fundamentally false. If a game uses DirectStorage on the XSX, it will also do so on W10 as long as the system has the required components. Which any 3000-series-equipped system will have. Which will, once again, reduce VRAM usage.


It absolutely makes the statement useless. That's how marketing works (at least in an extremely simplified and partially naive view): you pick the best aspects of your product and promote them. Analysis of said statements very often show them to then be meaningless when viewed in the most relevant context. That doesn't make the statement false - Nvidia could likely make an Ampere GPU delivering +90% perf/W over Turing, if they wanted to - but it makes it misleading given that it doesn't match the in-use reality of the products that are actually made. I also really don't see how the +50% perf/W for RDNA 2 claim can be reflected in any reviews yet, given that no reviews of any RDNA 2 product exist yet (which is natural, seeing how no RDNA 2 products exist either).
My dude, you spend too much time arguing and too little trying to understand. I'm gonna cut this discussion a little short because I don't like discussions that don't go anywhere; no disrespect intended. The 3080 already has loads of bandwidth; all it's lacking is memory size.

If you don't believe me, plot an x-y graph with memory bandwidth x FP32 performance as x and memory size as y. Plot the 780, 980, 1080, 2080, 3080, 3070 and 3090 points on it and you'll see if there are any outliers ;) Or we'll just agree to disagree.
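For anyone curious, here is one way to put that plot together, reading "memory bandwidth x FP32 perf" as the product of the two; the spec values below are approximate launch figures from public spec sheets and should be treated as ballpark:

```python
# Scatter of (memory bandwidth x FP32 throughput) vs VRAM size,
# using approximate launch specs for the cards named above.
import matplotlib.pyplot as plt

cards = {
    # name:      (bandwidth GB/s, FP32 TFLOPS, VRAM GB) -- approximate
    "GTX 780":   (288,  4.0,  3),
    "GTX 980":   (224,  4.6,  4),
    "GTX 1080":  (320,  8.9,  8),
    "RTX 2080":  (448, 10.1,  8),
    "RTX 3070":  (448, 20.3,  8),
    "RTX 3080":  (760, 29.8, 10),
    "RTX 3090":  (936, 35.6, 24),
}

for name, (bw, tflops, vram) in cards.items():
    x = bw * tflops
    plt.scatter(x, vram)
    plt.annotate(name, (x, vram))

plt.xlabel("Memory bandwidth x FP32 throughput (GB/s x TFLOPS)")
plt.ylabel("VRAM (GB)")
plt.show()
```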
Posted on Reply
#418
tehehe
etayoriusI paid $370 for my GTX 1070 in 2017. $500 for a 3070 does not seem very fair to me. Next gen, the GTX 4070 will be $600.
I agree. People thinking these prices are low are bonkers. We don't have enough competition in GPU space. $500 for a 8GB card in 2020. Are they joking? It will be fast obsolete.
Posted on Reply
#419
ppn
teheheI agree. People thinking these prices are low are bonkers. We don't have enough competition in GPU space. $500 for a 8GB card in 2020. Are they joking? It will be fast obsolete.
Yeah, 1070 +35% gets you the 2070, and +45% on that gets you the 3070, so this thing should be at least 95% faster than the 1070 (see the quick check below), and the VRAM remains the same at 8 GB.

But this is the gimped chip, meant to protect 2080 Ti owners who got gutted by the 60% price cut, from $1,199 to $499. 11 GB is all they have left, and not for long. We should get the 6144-CUDA-core 16 GB version at some point, for only $599.

8 GB should be fine for low-detail e-sports for the next 4 years. I get unplayable frame rates below 8 GB, and even 45% more performance won't help with that.
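The compounding in that first line works out roughly like this (taking the quoted +35% and +45% generational gains at face value):

```python
# Compounding the quoted generational gains: 1070 -> 2070 -> 3070.
gain_1070_to_2070 = 1.35        # "+35%" as quoted above
gain_2070_to_3070 = 1.45        # "+45%" as quoted above

total = gain_1070_to_2070 * gain_2070_to_3070
print(f"3070 vs 1070: ~{(total - 1) * 100:.0f}% faster")   # ~96%
```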
Posted on Reply
#420
efikkan
Chrispy_I graduated alongside, lived with, and stay in touch with multiple game developers from Campos Santos (now Valve), Splash Damage, Jagex, Blizzard, King, Ubisoft, and by proxy EA, and Activision; I think they'd all be insulted by your statement. More importantly, even if there is a grain of truth to what you say, the "off-the-shelf engines" …
Most studios don't make their own game engine in-house anymore, unfortunately. That's not an insult, but a fact. There has been a clear trend of fewer studios making their own engines for years, and the lack of performance optimizations and buggy/broken games at launch are the results. There are some studios, like id Software, that still do quality work.

We are talking a lot about new hardware features and new APIs in this forum, yet the adoption of such features in games is very slow. Many have been wondering why we haven't seen the revolutionary performance gains we were promised with DirectX 12. Well, the reality is that for generic engines the low-level rendering code is hidden behind layers upon layers of abstractions, so those are not going to get the full potential.
Chrispy_… have been slowly but surely migrating to console-optimised engines over the last few years.
"Console optimization" is a myth.
In order to optimize code, low-level code must be written to target specific instructions, API features or performance characteristics.
When people are claiming games are "console optimized", they are usually referring to them not being scalable, so it's rather lack of optimization if anything.
Chrispy_My point was exactly that. Perhaps English isn't your first language but when I said "the last 25 years of PC gaming has proven that devs always cater to the lowest common denominator" - that was me saying that it ISN'T going to change suddenly, and it's been like this for 25 years without changing. That's exactly why games aren't particularly good at utilising the hardware we have currently, because the devs need to make sure it'll run on a dual-core with 4GB RAM and a 2GB graphics card from 9 years ago.
Games today are usually not intentionally catering to the lowest common denominator, but it's more a result of the engine they have chosen, especially if they don't make one in-house. If having support for 10 year old PCs were a priority, we would see more games with support for older Windows versions etc.
Posted on Reply
#421
Valantar
Shatun_BearI can't wait to see the REAL performance increase by reputable sites (Digital Foundry is not reputable, this was a paid marketing deal for Nvidia) of a 3080 vs a 2080 or 2080 Ti. Without the cherry-picking, marketing fiddling of figures and nebulous tweaking of settings (RT, DLSS, vsync etc).

Before it's even been reliably benchmarked it's being proclaimed as the greatest thing ever. But I've been in this game long enough to know that the figures sans RT are not nearly as impressive as is being touted by Nvidia's world class marketing and underhanded settings fiddling.
Just a technicality: there's a big difference between closely regulated exclusive access to hardware and paid marketing. Is it a marketing plot by Nvidia? Absolutely. Does it undermine DF's credibility whatsoever? No. Why? Because they are completely transparent about the process, the limitations involved, and how the data is presented. Their conclusion is also "we should all wait for reviews, but this looks very good for now":
It's early days with RTX 3080 testing. In terms of addressing the claims of the biggest generational leap Nvidia has ever delivered, I think the reviews process with the mass of data from multiple outlets testing a much wider range of titles is going to be the ultimate test for validating that claim. That said, some of the numbers I saw in my tests were quite extraordinary and on a more general level, the role of DLSS in accelerating RT titles can't be understated.
That there? That's nuance. (Something that is sorely lacking in your post.) They are making very, very clear that this is a preliminary hands-on, in no way an exhaustive review, and that there were massive limitations to which games they could test, how they could run the tests, which data they could present from these tests, and how they could be presented. There is also no disclosure of this being paid content, which they are required by law to provide if it is. So no, this is not a "paid marketing deal". It's an exclusive preview. Learn the difference.
Posted on Reply
#422
r9
I just hope this new gen brings the prices down on used cards, because the prices for used GPUs are nuts.
People are asking new-card money for their used crap.
Hopefully AMD has something competitive this time around and has the same effect on Nvidia as it had on Intel.
Because after many, many years, I see better value in the Intel i7-10700 than in any Ryzen.
Posted on Reply
#423
dir_d
Shatun_BearThe Series X SSD is not exactly fast or advanced. The PS5's, ok, that is impressive.

So the budget Series S will surely have the same SSD as the X for reasons mentioned above. If it doesn't, MS have created even more problems for themselves with game development.
MS didn't have to overtune the SSD because they created a whole new API, DirectStorage. I think the PS5 and the Xbox will have about the same effective speed.
Posted on Reply
#424
Shatun_Bear
dir_dMS didn't have to overtune the SSD because they created a whole new API, DirectStorage. I think the PS5 and the Xbox will have about the same effective speed.
No, the PS5 SSD is literally TWICE as fast, and its IO is apparently significantly more advanced; there's no chance they are similar in performance.
ValantarJust a technicality: there's a big difference between closely regulated exclusive access to hardware and paid marketing. Is it a marketing plot by Nvidia? Absolutely. Does it undermine DF's credibility whatsoever? No. Why? Because they are completely transparent about the process, the limitations involved, and how the data is presented. Their conclusion is also "we should all wait for reviews, but this looks very good for now":



That there? That's nuance. (Something that is sorely lacking in your post.) They are making very, very clear that this is a preliminary hands-on, in no way an exhaustive review, and that there were massive limitations to which games they could test, how they could run the tests, which data they could present from these tests, and how they could be presented. There is also no disclosure of this being paid content, which they are required by law to provide if it is. So no, this is not a "paid marketing deal". It's an exclusive preview. Learn the difference.
It's a paid marketing deal.
Posted on Reply
#425
SkynetAI
So I have a 750 W Bronze PSU. I know I need to upgrade, but to which wattage?
Posted on Reply