News Posts matching #Pascal


NVIDIA Releases GeForce Hotfix Driver Version 431.18

NVIDIA has released its latest driver package, bringing the version number up to 431.18. The new hotfix driver builds upon the previous 430.86 release, which had already been patched once by an earlier hotfix driver, version 430.97.

This new release fixes BSODs on hibernation wake-up for ASUS GL703GS and GL502VML notebooks; game crashes or TDRs in Shadow of the Tomb Raider when launching the game on Pascal GPUs; the Shadow of the Tomb Raider benchmark exiting abruptly when ray tracing is enabled; and flickering in Grand Theft Auto V when MSAA is enabled. Find the updated driver below.
DOWNLOAD: NVIDIA GeForce 431.18 Hotfix Driver

TechPowerUp Releases GPU-Z 2.20.0 with Major Fixes

TechPowerUp today released GPU-Z 2.20.0 as a very quick follow-up to version 2.19.0 from a few hours ago. We came across a few critical bugs in the older version that needed immediate fixing. To begin with, an issue where your overclock would get reset on NVIDIA graphics cards with Boost when using GPU-Z has been fixed. A crash noticed on machines with NVIDIA "Pascal" GPUs and no driver loaded has also been fixed. Crashes noticed on Apple machines (i.e., Windows running on Apple hardware) with the AMD Radeon Vega 12 GPU have been fixed. We also touched up the memory bus-width read-out to show "bit" instead of "Bit" while we were at it. Grab the download from the link below.

DOWNLOAD: TechPowerUp GPU-Z 2.20.0
The change-log follows.

GIGABYTE Unveils GeForce GTX 1650 Series Graphics Cards

GIGABYTE, the world's leading premium gaming hardware manufacturer, today announced its latest GeForce GTX 1650 graphics cards powered by the NVIDIA Turing architecture. GIGABYTE launched four graphics cards: the GeForce GTX 1650 GAMING OC 4G, GeForce GTX 1650 WINDFORCE OC 4G, GeForce GTX 1650 OC 4G, and GeForce GTX 1650 MINI ITX OC 4G. Turing-architecture graphics cards can execute integer and floating-point operations simultaneously, making them much faster than the previous Pascal architecture. These graphics cards all use overclocked GPUs certified by GIGABYTE, and with WINDFORCE cooling technology, players can enjoy a great experience across a variety of games.

The GeForce GTX 1650 GAMING OC 4G provides a WINDFORCE 2x 100 mm cooling solution for all key components of the graphics card. It takes care of not only the GPU but also the VRAM and MOSFETs, to ensure stable overclocked operation and a longer lifespan. GIGABYTE's patented "Alternate Spinning" and unique blade fan features are designed to increase airflow, and the addition of composite heat-pipes helps dissipate heat from the GPU quickly. The GAMING OC graphics card provides three HDMI and one DisplayPort output, which can support up to four displays at the same time. With RGB Fusion 2.0, Ultra-Durable top-grade materials, and a protective back-plate, the GAMING OC graphics card delivers the best quality for customers.

NVIDIA Also Releases Tech Demos for RTX: Star Wars, Atomic Heart, Justice Available for Download

We've seen NVIDIA's move to provide RTX effects on older, non-RT-capable hardware today be met with what the company was certainly expecting: a cry of dismay from users who now get to see exactly what their non-Turing NVIDIA hardware is capable of. The move from NVIDIA could be framed as a way to democratize access to RTX effects via Windows DXR, enabling users of its GTX 16-series and 10-series GPUs to take a look at the benefits of raytracing; but also as an upgrade incentive for those who now see how their performance is lacking without the new specialized Turing cores to handle the added burden.

Whatever your side of the fence on that issue, however, NVIDIA has provided users with one more raytraced joy today. Three of them, in fact, in the form of three previously shown tech demos. The Star Wars tech demo (download) is certainly the best known, with its studies of reflections on Captain Phasma's breastplate. Atomic Heart (download) is another that makes use of RTX for reflections and shadows, while Justice (download) adds caustics to that equation. If you have a Turing graphics card, you can test these demos in their full glory, with added DLSS for improved performance. If you're on Pascal, you won't have that performance-enhancing mode available, and will have to slog through software computations. Follow the embedded links for our direct downloads of these tech demos.

NVIDIA Extends DirectX Raytracing (DXR) Support to Many GeForce GTX GPUs

NVIDIA today announced that it is extending DXR (DirectX Raytracing) support to several GeForce GTX graphics models beyond its GeForce RTX series. These include the GTX 1660 Ti, GTX 1660, GTX 1080 Ti, GTX 1080, GTX 1070 Ti, GTX 1070, and GTX 1060 6 GB. The GTX 1060 3 GB and lower "Pascal" models don't support DXR, nor do older generations of NVIDIA GPUs. NVIDIA has implemented real-time raytracing on GPUs without specialized components such as RT cores or tensor cores by essentially running the rendering path on shaders, in this case CUDA cores. DXR support will be added through a new GeForce graphics driver later today.

The GPU's CUDA cores now have to calculate BVH traversal, intersection, reflection, and refraction. The GTX 16-series chips have an edge over "Pascal" despite lacking RT cores, as the "Turing" CUDA cores support concurrent INT and FP execution, allowing more work to be done per clock. NVIDIA, in a detailed presentation, listed the kinds of real-time ray-tracing effects available via the DXR API, namely reflections, shadows, advanced reflections and shadows, ambient occlusion, global illumination (unbaked), and combinations of these. The company put out detailed performance numbers for a selection of GTX 10-series and GTX 16-series GPUs, and compared them to RTX 20-series SKUs that have specialized hardware for DXR.
Update: Article updated with additional test data from NVIDIA.
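As a rough illustration of the kind of per-ray work that falls to the shader cores when there are no RT cores, here is a minimal sketch (ours, not NVIDIA's code) of the ray/AABB "slab test" that must be evaluated at every BVH node a ray visits; the specific numbers are made up for the example.

```python
# Illustrative sketch only: the ray/axis-aligned-bounding-box "slab test" used
# during BVH traversal. Without dedicated RT cores, this kind of arithmetic runs
# on ordinary shader (CUDA) cores for every node each ray visits.

def ray_intersects_aabb(origin, inv_dir, box_min, box_max):
    """Return True if a ray (origin, 1/direction per axis) hits the box."""
    t_near, t_far = float("-inf"), float("inf")
    for axis in range(3):
        t1 = (box_min[axis] - origin[axis]) * inv_dir[axis]
        t2 = (box_max[axis] - origin[axis]) * inv_dir[axis]
        t_near = max(t_near, min(t1, t2))   # latest entry across all slabs
        t_far = min(t_far, max(t1, t2))     # earliest exit across all slabs
    return t_far >= max(t_near, 0.0)

# Example: a ray from the origin along (1, 1, 1) against a box spanning 4..6 on each axis
print(ray_intersects_aabb((0.0, 0.0, 0.0), (1.0, 1.0, 1.0),
                          (4.0, 4.0, 4.0), (6.0, 6.0, 6.0)))  # True
```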

NVIDIA RTX Logic Increases TPC Area by 22% Compared to Non-RTX Turing

Public perception of NVIDIA's new RTX series of graphics cards was sometimes marred by an impression of misallocated resources on NVIDIA's part. The argument went that NVIDIA had greatly increased chip area by adding RTX functionality (in both its Tensor and RT cores) that could have been better used for increased performance in shader-based, non-raytracing workloads. While the merits of ray tracing as it stands (in terms of uptake from developers) are certainly worthy of discussion, it seems that NVIDIA didn't dedicate that much more die area to RTX functionality - at least not to the tune public perception would suggest.

Working from full, high-res images of NVIDIA's TU106 and TU116 chips, reddit user @Qesa analyzed the TPC structure of NVIDIA's Turing chips, and arrived at the conclusion that the difference between NVIDIA's RTX-capable TU106 and the RTX-stripped TU116 amounts to a mere 1.95 mm² of additional logic per TPC - a 22% area increase. Of this, 1.25 mm² is taken up by the Tensor logic (which accelerates both DLSS and de-noising in ray-traced workloads), while only 0.7 mm² is used for the RT cores.
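For a quick sanity check of those figures (the implied base TPC area is our own back-of-the-envelope derivation, not something stated in the analysis):

```python
# Quick arithmetic check of the reported per-TPC area figures (rounded values).
tensor_mm2 = 1.25   # extra area attributed to Tensor logic per TPC
rt_mm2     = 0.70   # extra area attributed to RT cores per TPC

extra = tensor_mm2 + rt_mm2      # 1.95 mm² of additional logic per TPC
base_tpc = extra / 0.22          # implied TU116 TPC area if 1.95 mm² is a 22% increase

print(f"extra logic per TPC: {extra:.2f} mm²")
print(f"implied base TPC area: {base_tpc:.1f} mm²")   # roughly 8.9 mm²
```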

NVIDIA: Turing Adoption Rate 45% Higher Than Pascal, 90% of Users Buying Upwards in Graphics Product Tiers

NVIDIA during its investor day revealed some interesting statistics on its Turing-based graphics cards. The company essentially announced that revenue from Turing graphics card sales increased 45% over that generated when NVIDIA introduced its Pascal architecture - which does make sense, when you consider how NVIDIA positioned the same processor tiers (**60, **70, **80) in higher pricing brackets than before. NVIDIA's own charts showcase this better than anyone else could, with a clear indication of higher pricing for the same graphics tier. According to the company, 90% of users are buying pricier graphics cards this generation than they did in the previous one - which makes sense, since a user buying a 1060 at launch only had to pay $249, while the new RTX 2060 goes for $349.

Another interesting tidbit from NVIDIA's presentation at its investor day is that Pascal accounts for around 50% of the installed NVIDIA graphics card base, while Turing, for now, accounts for only 2% of it. This means 48% of users sporting an NVIDIA graphics card are using Maxwell or earlier designs, which NVIDIA says presents an incredible opportunity for increased sales as these users make the jump to the new Turing offerings - and their extended RTX feature set. NVIDIA's stock valuation grew by 5.82% today, likely on the back of this info.

NVIDIA to Enable DXR Ray Tracing on GTX (10- and 16-series) GPUs in April Drivers Update

NVIDIA's customary GTC keynote ended mere minutes ago, and it was one of the longer keynotes, clocking in at nearly three hours. There were some fascinating demos and features shown off, especially in the realm of robotics and machine learning, as well as new hardware as it pertains to AI and cars with the all-new Jetson Nano. It would be fair to say, however, that the vast majority of the keynote targeted developers and researchers, as is usually the case at GTC. However, something came up in between that caught us by surprise, and is no doubt a pleasant update for most of us here on TechPowerUp.

Following AMD's claims about software-based real-time ray tracing in games, and Crytek's Neon Noir real-time ray tracing demo for both AMD and NVIDIA GPUs, it makes sense in hindsight that NVIDIA would bring rudimentary DXR ray tracing support to older hardware that lacks RT cores. In particular, an upcoming driver update next month will enable DXR support on 10-series Pascal-microarchitecture graphics cards (GTX 1060 6 GB and higher), as well as the newly announced GTX 16-series Turing-microarchitecture GPUs (GTX 1660, GTX 1660 Ti). The announcement comes with a caveat letting people know not to expect RTX-level support (think a lower number of rays traced, and possibly no secondary/tertiary effects), and this DXR mode will only be supported in the Unity and Unreal game engines for now. More to come, with details past the break.

NVIDIA Adds New Options to Its MX200 Mobile Graphics Solutions - MX250 and MX230

NVIDIA has added new SKUs to its low-power mobility graphics lineup. The MX230 and MX250 come in to replace the GeForce MX130 and MX150, but there's really not that much of a performance improvement to justify the bump in the series' tier. Both solutions are based on Pascal, so there are no Turing performance uplifts at the execution level.

NVIDIA hasn't disclosed CUDA core counts or other specifics for these chips; we only know that they are paired with GDDR5 memory and feature Boost functionality for increased performance in particular scenarios. The strange thing is that NVIDIA's own performance scores compare the MX130, MX150, and now MX230 and MX250 to Intel's UHD 620 iGPU... and while the old MX150 was reported by NVIDIA as offering up to a 4x performance uplift over that Intel part, the new MX250 now claims an improvement of 3.5x. Whether this is because of a new testing methodology, or some other reason, only NVIDIA knows.

Version 4.6.0 Beta 10 of MSI Afterburner Introduces OC Scanner for Pascal

One of the runaway features of NVIDIA's latest RTX 20-series graphics cards was the introduction of the OC Scanner feature - a program that automagically tests a range of frequencies on your NVIDIA graphics card and overclocks it to a deemed-"stable" sweet spot. This practically obviates the need for manual fine-tuning, though of course, the best results are still to be found down that road - provided there's enough tinkering.
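To give a feel for the idea, here is a conceptual sketch of that kind of automatic scan; this is not NVIDIA's or MSI's actual algorithm, and the two helper functions are simulated stand-ins for the vendor APIs a real scanner would call.

```python
# Conceptual sketch of an OC-scanner-style search, not the real implementation.
# apply_core_offset() and run_stability_test() are simulated stand-ins; a real
# scanner would set the offset through driver APIs and run an arithmetic workload
# that checks for calculation errors.

import random

def apply_core_offset(offset_mhz):
    print(f"applying +{offset_mhz} MHz core offset")   # stand-in for a driver call

def run_stability_test(offset_mhz):
    # Simulated behavior: pretend the card becomes unreliable past ~+120 MHz
    return offset_mhz <= 120 or random.random() < 0.3

def find_stable_offset(step_mhz=15, max_offset_mhz=240):
    best = 0
    for offset in range(step_mhz, max_offset_mhz + step_mhz, step_mhz):
        apply_core_offset(offset)
        if not run_stability_test(offset):
            break                      # first failure ends the scan
        best = offset                  # remember the last known-good offset
    apply_core_offset(best)            # settle on the highest stable offset found
    return best

print("stable offset:", find_stable_offset(), "MHz")
```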

The latest version of MSI's Afterburner (v4.6.0 beta 10, available in the source link) now brings this functionality to Pascal-based graphics cards (alongside some other features, such as voltage control for Turing; check the full release notes after the break). Have fun.

Intel Gen11 iGPU Roughly as Fast as Radeon Vega 8 (Ryzen 3 2200G)

Today, Intel is revealing major details about its upcoming CPU and graphics architectures to a select audience. A big scoop VideoCardz landed concerns the company's next-generation Gen11 integrated graphics core, the first major upgrade to the company's 4-year-old Gen9 architecture. According to them, a Gen11 graphics core (in the default GT2 trim, we assume) should offer a compute throughput of 1 TFLOP/s, which is in the league of the Radeon Vega 8, with its 1.12 TFLOP/s throughput. The Vega 8 is part of AMD's Ryzen 3 2200G processor.
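As a rough plausibility check of that 1 TFLOP/s figure, a back-of-the-envelope FP32 throughput calculation works out as below; note that the 64-EU GT2 configuration and the 1 GHz clock are our assumptions for illustration, not Intel-confirmed specifications.

```python
# Back-of-the-envelope check of the ~1 TFLOP/s claim.
# Assumptions (ours, not Intel's): 64 execution units in the GT2 trim, 1 GHz clock.
eus        = 64      # execution units (assumed)
fp32_lanes = 8       # two 4-wide SIMD FPUs per EU
flops_fma  = 2       # one fused multiply-add counts as two FLOPs
clock_ghz  = 1.0     # illustrative clock speed (assumed)

tflops = eus * fp32_lanes * flops_fma * clock_ghz / 1000
print(f"{tflops:.2f} TFLOP/s")   # ≈ 1.02 TFLOP/s, in line with the reported figure
```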

Raw compute power only paints half the picture; the iGPU reportedly also supports tile-based rendering. This is a highly publicized method of rendering that made its consumer debut with NVIDIA "Pascal." Also mentioned are redesigned FPU interfaces, support for half-precision FP16, 2x pixel/clock pipelines, display stream compression that lets it support 5K and 8K displays, and adaptive sync. Intel will debut its Gen11 iGPU with its upcoming Core "Ice Lake" processors, built on the company's 10 nm silicon fabrication process.

GeForce GTX 1080 Ti Supply is Reportedly Dwindling, Prices on the Rise

Multiple sources have confirmed to GamersNexus that the GTX 1080 Ti is starting to become really difficult to find. Supplies are decreasing and the reason seems clear: NVIDIA may have stopped production of these graphics cards. This has had an immediate effect on the cards' prices, which have increased everywhere in the world over the last few days. The performance differences versus the new GeForce RTX 2080 are not that important if you don't need the RT part of the equation (we could confirm this in our own review), but the prices of these new graphics cards have made the 1080 Ti a viable option for many users looking to upgrade their systems.

Prices for the RTX 2080 start at $769 at Newegg, for example, while the cheapest GTX 1080 Ti costs $850 there. The story is the same at Amazon, where we can find the cheapest RTX 2080 at $799.99 versus $878.12 for a used GTX 1080 Ti. The high-end model of the Pascal series competes directly with the RTX 2080 and was cheaper not long ago, but that's not the story now. With prices climbing, some are claiming the same will happen to the GTX 1080, GTX 1070, or GTX 1070 Ti in the next few weeks. Reports of RTX 2080 and RTX 2080 Ti cards inexplicably dying on users could also be fueling consumer fear, as well as a [temporary] erosion in the value proposition of the RTX 20-series itself, as Microsoft pulled the Windows 10 1809 update, leaving fewer people with DirectX Raytracing, the software foundation for RTX.

NVIDIA GeForce GTX and GeForce RTX to Coexist in Product-Stack Till Q1-2019

NVIDIA CFO Colette Kress, speaking on the company's latest post-results financial analyst call, confirmed that NVIDIA isn't retiring its GeForce GTX 10-series products anytime soon, and that the series could coexist with the latest GeForce RTX series leading up to Holiday 2018, which ends with the year. "We will be selling probably for the holiday season, both our Turing and our Pascal overall architecture," Kress stated. "We want to be successful for the holiday season, both our Turing and our Pascal overall architecture," she added. NVIDIA is expected to launch not just its RTX 2080 Ti and RTX 2080, but also its RTX 2070, towards the beginning of Q4-2018, and is likely to launch its "sweet-spot" segment RTX 2060 by the end of the year.

NVIDIA reportedly has mountains of unsold GeForce GTX 10-series inventory, in the wake of not just a transition to the new generation, but also a slump in GPU-accelerated crypto-currency mining. The company could fine-tune prices of its popular 10-series SKUs, such as the GTX 1080 Ti, GTX 1080, GTX 1070 Ti, and GTX 1060, to sell them at slimmer margins. To consumers this could mean a good opportunity to lap up 4K-capable gaming hardware; but for NVIDIA, it could mean that many fewer takers for its ambitious RTX technology in its formative year.

NVIDIA GPUs Can be Tricked to Support AMD FreeSync

Newer generations of NVIDIA GPUs such as "Pascal" and "Maxwell" meet or exceed the hardware requirements of AMD FreeSync, as they feature DisplayPort 1.4 connectors that include the features of DisplayPort 1.2a required for VESA adaptive sync. In a bid to promote its own G-SYNC technology, NVIDIA doesn't expose this capability to monitors or software that support FreeSync. Redditor "bryf50" may have found a way around this. The trick is deceptively simple; however, you'll need games that support on-the-fly switching of the rendering GPU, and an AMD Radeon graphics card at hand.

While poking around with system settings in "World of Warcraft: Battle for Azeroth," bryf50 discovered that you can switch the rendering GPU on the fly, without having to physically connect your display to the newly selected GPU. You can start the game with your display connected to VGA1 (an AMD Radeon GPU), and switch the renderer in-game to VGA2 (an NVIDIA GPU). FreeSync should continue to work, while you enjoy the performance of the NVIDIA GPU. In theory, this should allow you to pair your high-end GTX 1080 Ti with a $50 RX 550 that supports FreeSync, instead of paying the $200+ G-SYNC tax.

NVIDIA GeForce RTX 2000 Series Specifications Pieced Together

Later today (20th August), NVIDIA will formally unveil its GeForce RTX 2000 series consumer graphics cards. This marks a major change in the brand name, triggered by the introduction of the new RT cores, specialized components that accelerate real-time ray-tracing, a task too taxing for conventional CUDA cores. DNN acceleration, meanwhile, relies on the tensor cores, SIMD components that crunch 4x4 matrix multiply-accumulate operations. The chips still have CUDA cores for everything else. This generation also debuts the new GDDR6 memory standard, although unlike GeForce "Pascal," the new GeForce "Turing" won't see a doubling in memory sizes.
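For readers unfamiliar with the term, here is an illustrative sketch of the basic operation a tensor core accelerates: a small matrix fused multiply-add, D = A x B + C, on 4x4 tiles. In hardware this runs on FP16 inputs with FP16/FP32 accumulation; plain Python is used here only to show the shape of the computation.

```python
# Illustrative only: a 4x4 matrix fused multiply-add, the tile-sized operation
# tensor cores are built to accelerate (D = A x B + C).

def matrix_fma_4x4(A, B, C):
    D = [[0.0] * 4 for _ in range(4)]
    for i in range(4):
        for j in range(4):
            acc = C[i][j]                 # start from the accumulator tile
            for k in range(4):
                acc += A[i][k] * B[k][j]  # multiply-accumulate across the row/column
            D[i][j] = acc
    return D

# Example: identity x identity + identity = a matrix with 2.0 on the diagonal
I = [[1.0 if i == j else 0.0 for j in range(4)] for i in range(4)]
print(matrix_fma_4x4(I, I, I))
```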

NVIDIA is expected to debut the generation with the new GeForce RTX 2080 later today, with market availability by the end of the month. Going by older rumors, the company could launch the lower RTX 2070 and higher RTX 2080+ by late September, and the mid-range RTX 2060 series in October. Apparently the high-end RTX 2080 Ti could come out sooner than expected, given that VideoCardz already has some of its specifications in hand. Not a lot is known about how "Turing" compares with "Volta" in performance, but given that the TITAN V comes with tensor cores that can [in theory] be re-purposed as RT cores, it could continue on as NVIDIA's halo SKU for the client segment.

NVIDIA Announces Turing-based Quadro RTX 8000, Quadro RTX 6000 and Quadro RTX 5000

NVIDIA today reinvented computer graphics with the launch of the NVIDIA Turing GPU architecture. The greatest leap since the invention of the CUDA GPU in 2006, Turing features new RT Cores to accelerate ray tracing and new Tensor Cores for AI inferencing which, together for the first time, make real-time ray tracing possible.

These two engines - along with more powerful compute for simulation and enhanced rasterization - usher in a new generation of hybrid rendering to address the $250 billion visual effects industry. Hybrid rendering enables cinematic-quality interactive experiences, amazing new effects powered by neural networks and fluid interactivity on highly complex models.

NVIDIA's Next Gen GPU Launch Held Back to Drain Excess, Costly Built-up Inventory?

We've previously touched upon whether or not NVIDIA should launch its 1100 or 2000 series of graphics cards ahead of any new product from AMD. At the time, I wrote that I only saw benefits to that approach: earlier time to market -> satisfaction of upgrade itches and entrenchment as the only latest-gen manufacturer -> higher prices owing to lack of competition -> ability to respond by lowering prices after amassing a war-chest of profits. However, reports of a costly NVIDIA mistake in overestimating demand for its Pascal GPUs lend some other shades to the whole equation.

Write-offs on inventory are costly (just ask Microsoft), and NVIDIA apparently miscalculated, overestimating gamers' and miners' demand for its graphics cards. When it comes to gamers, NVIDIA's Pascal graphics cards have been available on the market for two years now - it's relatively safe to say that the majority of gamers who needed higher-performance graphics cards have already taken the plunge. As for miners, the cryptocurrency market contraction (and other factors) has led to a taper-off in graphics card demand for that particular workload. The result? NVIDIA's demand overestimation has led, according to Seeking Alpha, to a "top three" Taiwan OEM returning 300,000 GPUs to NVIDIA, and to "aggressively" increased GDDR5 buying orders from the company, suggesting an excess stock of GPUs that still need to be made into boards.

Akasa Unveils a Range of Fanless Cases for "Dawson Canyon" NUC Desktops

Akasa at Computex unveiled a wide range of fanless aluminium cases for 7th generation "Dawson Canyon" NUC boards. The company had already launched the Pascal MD in late 2017. Among the new cases are the Newton S7D, Newton D3, and Plato X7D. The Plato X7D is the largest of the three, and is characterized by a lattice of aluminium ridges that works like a heatsink for the SoC and chipset of the NUC, and diamond-cut edges along the front panel. Front-panel connectivity includes two each of USB 3.0 and USB 2.0 ports. Besides rear I/O holes for "Dawson Canyon" series NUC boards, the case offers a stub for an RS232 (COM) port at the back. All three cases feature VESA mounts, so you can strap the NUC behind your monitor and reduce clutter on your desk.

The Newton D3 is the most compact case of the three, and supports fewer NUC board models, namely the NUC7i3DNBE, NUC7i3DNKE, and NUC7i3DNHE (all of which have low-TDP SoCs and fewer connectors). You still get a 2.5-inch drive bay, mount holes for your WLAN card's antennae, two USB 3.0 front-panel connectors, an IR window, and an RS232 serial port provision at the back. The Newton S7D is its larger sibling, with more metal to the bone to cope with higher-TDP SoCs, and hence supports NUC boards based on Core i5 and Core i7 SoCs.

NVIDIA Has a DisplayPort Problem Which Only a BIOS Update Can Fix

NVIDIA "Maxwell" and "Pascal" graphics architectures introduced support for modern display connectivity to keep up with the breakneck pace at which display resolutions are scaling up. The two introduce support for DisplayPort 1.4 and 1.3, however the implementation is less than perfect. Some of the newer monitors that leverage DisplayPort 1.4 or 1.3 standards don't function as designed on "Maxwell" (GeForce GTX 900 series) and "Pascal" (GeForce 10-series) graphics cards, with users reporting a range of bugs from blank screens until the operating system loads, to frozen boot sequences.

Unfortunately, these issues cannot be fixed by driver updates, and require graphics card BIOS updates. Luckily, you won't be at the mercy of lethargic AIC partners looking to limit their warranty claims by going slow on BIOS updates, or of NVFlash rocket science. NVIDIA has released a tool that can detect whether your graphics card needs the update, and then updates the BIOS for you from within Windows. The app first unloads your driver, then flashes your graphics card's BIOS (a process which must not be interrupted, lest you end up with an expensive brick).

Update: We have confirmation that the tool is intended for both reference-design and custom-design graphics cards.
DOWNLOAD: NVIDIA Graphics Firmware Update Tool for DisplayPort 1.3 and 1.4 Displays

NVIDIA Releases GeForce 397.93 WHQL Drivers

NVIDIA today released GeForce 397.93 WHQL "Game Ready" drivers. The drivers come with optimization for "The Crew" closed beta and "State of Decay 2." SLI profiles are either added or updated for "DRG Initiative" and "Star Wars: Battlefront II." The drivers also introduce CUDA 9.2 support. In addition, they address a number of bugs.

You no longer need to close Steam to enable/disable SLI. A "Wolfenstein II: TNC" bug that causes the game to freeze in the Roosevelt area is fixed. A critical issue is fixed on machines with both "Pascal" and "Kepler" GPUs installed, in which the driver fails to load. Green flickering noticed in "Far Cry 5" when using HDR at a non-native screen resolution is fixed. Grab the drivers from the link below.
DOWNLOAD: NVIDIA GeForce 397.93 WHQL

The change-log follows.

Colorful Announces iGame SLI HB Bridge

Colorful Technology Company Limited, professional manufacturer of graphics cards, motherboards, and high-performance storage solutions, is proud to announce its iGame SLI HB Bridge, which will fit perfectly with the aesthetics of COLORFUL graphics cards supporting SLI. The new iGame SLI HB Bridge supports dual-link SLI mode, which improves performance for graphics cards that support NVIDIA SLI, including the latest NVIDIA 10-series graphics cards.

The new COLORFUL iGame SLI HB Bridge improves SLI performance when used with compatible NVIDIA 10-series Pascal GPUs that support High-Bandwidth SLI, by delivering double the bandwidth of traditional SLI bridges. The iGame SLI HB Bridge also supports traditional SLI-compatible graphics cards from NVIDIA. This increases overall performance from both GPUs and provides a smoother experience when playing demanding games that properly utilize NVIDIA SLI. Not only do you get improved overall performance at higher resolutions, you also get higher framerates, which complement today's high-performance monitors.

EK Releases RGB Water Block for GeForce Founders Edition Based Graphics Cards

EK, the Slovenia-based premium PC liquid cooling gear manufacturer, is expanding its RGB portfolio by releasing the EK-FC GeForce GTX FE RGB water block, which is compatible with multiple reference-design Founders Edition NVIDIA GeForce GTX 1060, 1070, 1080, 1080 Ti, Titan X Pascal, and Titan Xp graphics cards. As known from before, the FE-labeled GPU blocks come as a replacement for the old GeForce GTX 10×0 / TITAN X series of water blocks.

EK-FC GeForce GTX FE RGB
This water block directly cools the GPU, RAM, and VRM (voltage regulation module), as water flows directly over these critical areas, thus allowing the graphics card and its VRM to remain stable under high overclocks. The EK-FC GeForce GTX FE RGB water block features a central-inlet split-flow cooling engine design for the best possible cooling performance, which also works flawlessly with reversed water flow without adversely affecting cooling performance. Moreover, such a design offers great hydraulic performance, allowing this product to be used in liquid cooling systems with weaker water pumps.

CORSAIR Launches the CORSAIR ONE ELITE

CORSAIR, a world leader in PC gaming peripherals and enthusiast components, today announced a new version of its award-winning CORSAIR ONE small form factor gaming PC, the CORSAIR ONE ELITE. Now featuring the latest 8th Generation Intel Core i7-8700K six-core CPU, the CORSAIR ONE ELITE boasts six liquid-cooled CPU cores, running at up to 4.2GHz, providing even more processing power to drive the latest games, streaming tools and demanding content creation applications, all at once.

Accompanying its new state-of-the-art CPU, the CORSAIR ONE ELITE boasts hardware to rival the most powerful desktop PCs, all still contained in a chassis that's just 12 L in volume and which produces less than 20 dBA of noise at idle. 32 GB of CORSAIR VENGEANCE LPX DDR4 2,666 MHz memory offers plentiful capacity and performance for the most demanding games or content creation applications, while a liquid-cooled NVIDIA GeForce GTX 1080 Ti delivers leading 3D graphics performance to push frame rates to the limit, even with maximum detail settings and at 4K resolution.

Lesson from the Crypto/DRAM Plagues: Build Future-Proof

As someone who does not mine crypto-currency, but loves fast computers and gaming on them, I find the current crypto-currency mining craze using graphics cards nothing short of a plague. It's like war broke out, and your government took away all the things you love from the market. All difficult times teach valuable lessons, and in this case, it is "Save up and build future-proof."

When NVIDIA launched its "Pascal" GPU architecture way back in the summer of 2016, and AMD followed up, I, as a user of two GeForce GTX 970 cards in SLI, did not feel the need to upgrade anything, and planned to skip the Pascal/Polaris/Vega generation, upgrading only when "Volta" or "Navi" offered something interesting. My pair of GTX 970 cards is backed by a Core i7-4770K processor and 16 GB of dual-channel DDR3-1866 memory, both of which were considered high-end when I bought them, around 2014-15.

Throughout 2016, my GTX 970 pair ate AAA titles for breakfast. With NVIDIA investing in advancing SLI with the new SLI-HB, and DirectX 12 promising a mixed multi-GPU utopia, I had calculated a rather rosy future for my cards (at least to the point where NVIDIA would keep adding SLI profiles for newer games for my cards to chew through). What I didn't see coming was the inflection point between the decline of multi-GPU and the crypto-plague eating away at the availability of high-end graphics cards at sane prices. That is where we are today.

NVIDIA "Pascal" and AMD "Vega" Graphics Card Prices Sizzle Stateside

Over the 2018 International CES week, prices of performance-segment and high-end graphics cards have taken flight at US-based online retailers. Prices of the recently launched GeForce GTX 1070 Ti ($380-ish launch price) are touching $900; those of the GTX 1080 (non-Ti) are over the $1,000 mark, while the GTX 1080 Ti is out of stock in many places. Prices of the GTX 1060 series are still under the $300 mark, but are beginning to rise. AMD's Radeon RX Vega family is either out of stock or over the $1,000 mark. The crypto-currency mining craze, coupled with reports of graphics card prices rising throughout 2018, could be behind this rally.