News Posts matching #Pascal


Data is Beautiful: 10 Years of AMD and NVIDIA GPU Innovation Visualized

Using data from our graphics card database, which is managed by our very own T4CFantasy, reddit user u/Vito_ponfe_Andariel created a series of charts comparing technological innovation in AMD's and NVIDIA's GPUs over the last ten years. The charts plot the evolution of the "best available GPU" per year in terms of raw performance, performance per dollar (using the database's launch price metric), energy consumption, performance per transistor, and a number of other data correlations.

It's interesting to note how technological changes show up in these charts. For example, look at the performance per transistor graph: performance per transistor actually declined roughly 20% in the transition from NVIDIA's Pascal (GTX 1080 Ti) to the Turing (RTX 20-series) architecture. At the same time, AMD's performance per transistor jumped around 40% from the Vega 64 to the RX 5700 XT. This happens, in part, because NVIDIA's Turing introduced raytracing-specific hardware, which consumes transistor budget without aiding general shading performance, while AMD benefited both from the new RDNA architecture and from the process transition from 14 nm to 7 nm. We see the same declining behavior again with AMD's introduction of the RX 6800 XT, which loses some 40% on this performance per transistor metric - likely due to the introduction of Ray Accelerators and other architectural changes. There are of course other variables in the equation, but it is nonetheless interesting to note. Look after the break for the rest of the charts.
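The metric itself is simple division: a performance score divided by transistor count. As a rough sketch in Python - the relative-performance scores and transistor counts below are illustrative ballpark assumptions, not the chart's exact values:

```python
# Minimal sketch of the "performance per transistor" metric discussed above.
# Performance scores and transistor counts are illustrative approximations,
# not the exact values plotted in the charts.

def perf_per_transistor(relative_perf, transistors_bn):
    """Performance points per billion transistors."""
    return relative_perf / transistors_bn

# (relative performance score, transistors in billions) - assumed figures
gpus = {
    "GTX 1080 Ti (Pascal, GP102)": (100, 11.8),
    "RTX 2080 Ti (Turing, TU102)": (130, 18.6),
    "Vega 64 (GCN5, Vega 10)":     (80, 12.5),
    "RX 5700 XT (RDNA, Navi 10)":  (90, 10.3),
}

for name, (perf, xtors) in gpus.items():
    print(f"{name}: {perf_per_transistor(perf, xtors):.2f} perf/B-transistors")
```

With these assumed inputs, Turing scores lower per transistor than Pascal while Navi scores higher than Vega, reproducing the direction of both trends described above.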

AAEON Announces NVIDIA Jetson TX2 NX AI Edge Platform

AAEON, an industry leader in embedded AI edge computing systems, announces a lineup of platforms currently under development powered by the newly released NVIDIA Jetson TX2 NX system on module (SOM). This range of systems, named the BOXER-823x Series, will augment AAEON's lineup of diverse solutions powered by the NVIDIA Jetson edge AI platform. With these platforms planned for release in mid-2021, early adopters have a chance to demo the systems while they are in development.

The NVIDIA Jetson TX2 NX is the latest addition to NVIDIA's Jetson lineup, slotting in between the NVIDIA Jetson Nano and the NVIDIA Xavier NX. The Jetson TX2 NX pairs an NVIDIA Pascal GPU with 256 CUDA cores and a quad-core Arm Cortex-A57 MPCore CPU complex, along with 4 GB of onboard LPDDR4 memory offering 51.2 GB/s of bandwidth, and supports a variety of standard bus interfaces. In total, this delivers AI processing speeds of up to 1.33 TFLOPS.

NVIDIA Seemingly Begins Resupplying GeForce GTX 1050 Ti GPUs

In a move that speaks volumes about the current state of the semiconductor market, NVIDIA has apparently begun restocking retailers with 5-year-old Pascal-based GTX 1050 Ti graphics cards. At some retailers (namely, Newegg), the card can still be found at $499, a vestige of tight supply since its discontinuation and a result of the constrained GPU market. However, retailers that have received fresh supply of the 14 nm, 4 GB GDDR5-toting graphics card have it at $179 - still above the asking price set at its release five years ago, which was $140. The GTX 1050 Ti features a 128-bit memory bus and a whopping 768 shading units.

Resupplying this card means that customers looking at the lower end of the spectrum now have a feasible alternative to the non-existent supply of RTX 3000-series cards. Equivalent models in the RTX 2000-series are also hard to come by, and marred by much higher pricing. The choice of the GTX 1050 Ti with its 4 GB of GDDR5 isn't an innocent one; it actually skirts two problems with current-generation hardware. First, constraints in GDDR6 memory allocation, which is becoming a bottleneck for new graphics card manufacturing on account of the increasing number of chips employed in each individual card, as well as the memory's deployment in latest-gen consoles. Second, 4 GB of VRAM is no longer enough to fit the current Ethereum mining dataset fully into memory, which means these cards also skirt mining demand. It is, however, a sobering moment for the industry and for any enthusiast who wants to see the progress we have been so readily promised.
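The VRAM point can be made concrete with a little arithmetic. Per the Ethash specification, the mining dataset (DAG) starts at 1 GiB and grows by roughly 8 MiB every 30,000-block epoch, so it crossed the 4 GiB mark around epoch 384 in late 2020. A rough sketch, ignoring the spec's small prime-number size correction:

```python
# Why 4 GB cards dodge Ethereum mining demand: the Ethash DAG grows linearly
# with epoch. Constants below are from the Ethash spec; the small prime-number
# correction to the dataset size is ignored, so figures are approximate.

DATASET_BYTES_INIT = 2**30      # 1 GiB at epoch 0
DATASET_BYTES_GROWTH = 2**23    # ~8 MiB added per epoch
EPOCH_LENGTH = 30_000           # blocks per epoch

def approx_dag_gib(block_number):
    """Approximate DAG size in GiB at a given block number."""
    epoch = block_number // EPOCH_LENGTH
    return (DATASET_BYTES_INIT + DATASET_BYTES_GROWTH * epoch) / 2**30

# By block ~11.52 million (epoch 384, reached in late 2020),
# the DAG no longer fits in a 4 GiB frame buffer:
print(approx_dag_gib(11_520_000))  # -> 4.0
```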

Mysterious GeForce GT 1010 Rears its Head, Targeting OEMs

NVIDIA has quietly introduced a new entry-level desktop discrete GPU positioned a notch below even the GeForce GT 1030. The new GT 1010 is based on the "Pascal" graphics architecture circa 2016, and is cut further down from the 16 nm "GP108" silicon. The GT 1010 appears to be NVIDIA's move to replace the "Kepler"-based GT 710 at the bare entry level, while helping the company clear all remaining inventory of the "GP108" silicon out of the channel and into OEM hands. The GT 1010 likely features 256 CUDA cores, 16 TMUs, 16 ROPs, and 2 GB of GDDR5 memory across a 64-bit wide memory bus (40 GB/s bandwidth), with a maximum GPU Boost frequency of 1468 MHz. If the GT 1010 does make it to the retail channel, we expect a sub-$60 price. With these specs, the chip will be easily bested by the latest iGPUs from AMD and Intel.

NVIDIA Releases GeForce MX450 with PCI-Express 4.0 Interface

NVIDIA released a mysterious new mobile GPU that has us scratching our heads over the silicon that could be driving it. The new GeForce MX450 is an entry-mainstream mobile GPU that apparently ships with a PCI-Express gen 4.0 bus interface, something only NVIDIA's "Ampere" GPUs feature. The product page for the MX450 doesn't list any specs other than its memory type support, which includes the new GDDR6 memory (supported only on NVIDIA architectures "Turing" or later). Interestingly, it also lists GDDR5 as one of its memory options, while PCI-Express 4.0 is prominently listed among its specs.

Upon digging some more among device IDs, we've come across the ID of the GDDR5 variant, with the ASIC code "GP107-670-A1" - silicon based on the much older "Pascal" architecture, which lacks PCIe gen 4 support. The GDDR6 variant eludes us; this is the SKU that could be based on a newer architecture, given its support for GDDR6 and PCIe gen 4. NVIDIA's GeForce MX line of entry-mainstream mobile GPUs is built to performance/power targets, and varies wildly in its underlying tech. The line has historically been a means for NVIDIA to clear inventory of older-generation ASICs to notebook manufacturers, who get to put the NVIDIA logo on their products and advertise discrete graphics. Given this, the use of a newer (even unreleased) generation of GPUs comes as a surprise.

Video Memory Sizes Set to Swell as NVIDIA Readies 20GB and 24GB GeForce Amperes

NVIDIA's GeForce RTX 20-series "Turing" graphics cards did not increase video memory sizes in comparison to the GeForce GTX 10-series "Pascal," although the memory itself is faster on account of GDDR6. This could change with the GeForce RTX 30-series "Ampere," as the company looks to increase memory sizes across the board in a bid to shore up ray-tracing performance. WCCFTech has learned that in addition to a variety of unusual new memory bus widths, such as 320-bit, NVIDIA could introduce certain higher variants of its RTX 30-series cards with video memory sizes as high as 20 GB and 24 GB.

Memory sizes of 20 GB or 24 GB aren't new for NVIDIA's professional-segment Quadro products, but they're certainly new for GeForce, with only the company's TITAN-series products breaking the 20 GB mark, at prices due north of $2,000. Much of NVIDIA's high-end appears to rest on segmentation of the PG132 common board design coupled with the GA102 silicon, from which the company could carve out several SKUs spaced far apart in its product stack. NVIDIA's next-generation GeForce "Ampere" family is expected to debut in September 2020, with product launches in the higher end running through late Q3 and Q4 of 2020.

IBASE Announces NVIDIA Jetson TX2 AI Computing Platform for AIoT Applications

IBASE Technology Inc. (TPEx: 8050), a world-leading manufacturer of industrial motherboards and embedded computing solutions, has revealed its latest EC-3200 AI computing platform, based on the power-efficient, high-performance NVIDIA Jetson Tegra X2 (TX2) processor. The NVIDIA Jetson TX2 pairs a dual-core NVIDIA Denver 2 alongside a quad-core Arm Cortex-A57 processor, and provides 256 CUDA cores on NVIDIA's Pascal GPU architecture with up to 1.33 TFLOPS, delivering exceptional AI performance.

Designed to operate under an extended temperature range from -20°C to +60°C, the EC-3200 leverages the integrated GPU-computing power of the Jetson TX2 platform in building edge inference servers to analyze and better manage real time traffic flow in smart cities or to optimize the efficiency of operational processes in intelligent factories. "Current edge and cloud-based AI products require better computational and video analytics capability to perform demanding real-time data processing and overcome latency issues," said Wilson Lin, Director of IBASE Product Planning Department. "The EC-3200 is no doubt a durable solution, especially built with a fanless design for non-stop operation and takes advantage of the GPU-accelerated parallel processing of NVIDIA Jetson TX2 to handle data intensive and mission-critical workloads with power efficiency and unmatched reliability."

Microsoft DirectX 12 Ultimate: Why it Helps Gamers Pick Future Proof Graphics Cards

Microsoft on Thursday released the DirectX 12 Ultimate logo. This is not a new API with any new features, but rather a differentiator for graphics cards and game consoles that support four key modern features of DirectX 12. It helps consumers recognize the newer and upcoming GPUs, and tell them apart from older DirectX 12-capable GPUs released in the mid-2010s. For a GPU to be eligible for the DirectX 12 Ultimate logo, it must feature hardware-accelerated ray-tracing via the DXR API, and must support Mesh Shaders, Variable Rate Shading (VRS), and Sampler Feedback (all four of them). The upcoming Xbox Series X console features this logo by default. Microsoft made it absolutely clear that the DirectX 12 Ultimate logo isn't meant as a compatibility barrier, and that these games will work on older hardware, too.

As it stands, the "Navi"-based Radeon RX 5000 series is "obsolete" by this measure, just like some Turing cards from the GeForce GTX 16-series. At this time, the only shipping products that feature the logo are NVIDIA's GeForce RTX 20-series and the TITAN RTX, as they support all of the above features.

NVIDIA GeForce MX350 and MX330 Surface, Pascal Based Entry mGPUs

NVIDIA's GeForce MX-series mobile GPU line exists so notebook manufacturers can put the NVIDIA logo on their products and boast of gaming capabilities. The company is putting the finishing touches on its new GeForce MX330 and MX350 chips, based on the "Pascal" architecture. The MX330 is the company's second rebrand of the MX150, which is based on the 14 nm "GP108" silicon. It's equipped with 384 CUDA cores, and up to 2 GB of GDDR5 memory across a 64-bit wide memory interface. NVIDIA increased the clock speeds to 1531 MHz base and 1594 MHz GPU Boost (compared to 1227/1468 MHz on the MX150), while remaining within the 25 W TDP envelope.

The MX350, on the other hand, is based on the 14 nm "GP107" silicon. It is equipped with 640 CUDA cores, and 2 GB of GDDR5 memory across the same 64-bit bus width as the MX330, but has aggressive power-management that lends it a TDP of just 20 W, despite packing 66% more CUDA cores than the MX330. Both chips are easily capable of handling non-gaming tasks on typical 1080p / 1440p notebooks, but can game only at 720p through 1080p, with low-to-mid settings.

NVIDIA to Reuse Pascal for Mobility-geared MX300 Series

NVIDIA will apparently still be using Pascal when it launches its next generation of low-power discrete graphics solutions for mobile systems. The MX300 series will replace the current crop of MX200-series products (segregated into three SKUs in the form of the MX230, MX250 10 W, and MX250 25 W). The new MX300 series keeps the dual-tiered system, but ups the ante on the top-of-the-line MX350. Even though it's still Pascal on a 14 nm process, the MX350 should see an increase in CUDA cores to 640 (by using NVIDIA's Pascal GP107 chip) from the MX250's 384. Performance, then, should be comparable to the NVIDIA GTX 1050.

The MX330, on the other hand, will keep the specifications of the MX250, which signals a tier increase from the 256 execution units of the MX230 to 384. This should translate to appreciable performance increases for the new MX300 series, despite it staying on NVIDIA's Pascal architecture. The new lineup is expected to be announced in February.

NVIDIA Releases GeForce Hotfix Driver Version 431.18

NVIDIA released its latest driver package, bringing the version number up to 431.18. The new hotfix driver builds upon the previous 430.86 release, which had already been patched by another hotfix driver, version 430.97.

This new release fixes BSODs on hibernation wake-up for ASUS GL703GS and GL502VML notebooks; game crashes or TDRs in Shadow of the Tomb Raider when launching the game on Pascal GPUs; Shadow of the Tomb Raider's benchmark exiting abruptly should ray tracing be enabled; and flickering issues in Grand Theft Auto V when MSAA is enabled. Look below for the updated driver.
NVIDIA GeForce 431.18 Hotfix Driver

TechPowerUp Releases GPU-Z 2.20.0 with Major Fixes

TechPowerUp today released GPU-Z 2.20.0 as a very quick follow-up to version 2.19.0 from a few hours ago. We came across a few critical bugs in the older version that needed immediate fixing. To begin with, your overclock getting reset on NVIDIA graphics cards with Boost when using GPU-Z has been fixed. A crash noticed on machines with NVIDIA "Pascal" GPUs and no driver loaded has also been fixed, as have crashes noticed on Apple machines (i.e., Windows on Apple) with the AMD Radeon Vega 12 GPU. We touched up the memory bus-width read-out to show "bit" instead of "Bit" while we were at it. Grab the download from the link below.

DOWNLOAD: TechPowerUp GPU-Z 2.20.0
The change-log follows.

GIGABYTE Unveils GeForce GTX 1650 Series Graphics Cards

GIGABYTE, the world's leading premium gaming hardware manufacturer, today announced its latest GeForce GTX 1650 graphics cards powered by the NVIDIA Turing architecture. GIGABYTE launched four graphics cards - the GeForce GTX 1650 GAMING OC 4G, GeForce GTX 1650 WINDFORCE OC 4G, GeForce GTX 1650 OC 4G, and GeForce GTX 1650 MINI ITX OC 4G. Turing-architecture graphics cards have the ability to execute both integer and floating-point operations simultaneously, making them much faster than the previous Pascal architecture. These graphics cards all use overclocked GPUs certified by GIGABYTE, and with WINDFORCE cooling technology, all players can enjoy the perfect experience across a variety of games.

The GeForce GTX 1650 GAMING OC 4G provides the WINDFORCE 2x 100 mm cooling solution for all key components of the graphics card. It takes care of not only the GPU but also the VRAM and MOSFETs, to ensure stable overclocked operation and longer life. GIGABYTE's patented "Alternate Spinning" and unique blade fan features are designed to increase airflow, and the addition of composite heat-pipes helps dissipate heat from the GPU quickly. The GAMING OC graphics card provides three HDMI and one DisplayPort output, which can support up to four displays at the same time. With RGB Fusion 2.0, Ultra-Durable top-grade materials, and a protective back-plate, the GAMING OC graphics card delivers the best quality for customers.

NVIDIA Also Releases Tech Demos for RTX: Star Wars, Atomic Heart, Justice Available for Download

We've seen NVIDIA's move to provide RTX effects on older, non-RT-capable hardware today being met with what the company was certainly expecting: a cry of dismay from users who now get to see exactly what their non-Turing NVIDIA hardware is capable of. The move from NVIDIA could be framed as a way to democratize access to RTX effects via Windows DXR, enabling users of its GTX 16-series and 10-series GPUs to take a look at the benefits of raytracing; but also as an upgrade incentive for those who now see how their performance lags without the new specialized Turing cores to handle the added burden.

Whatever your side of the fence on that issue, however, NVIDIA has provided users with one more raytraced joy today. Three of them, in fact, in the form of three previously-shown tech demos. The Star Wars tech demo (download) is certainly the best known, with its study of reflections on Captain Phasma's breastplate. Atomic Heart (download) is another that makes use of RTX for reflections and shadows, while Justice (download) adds caustics to that equation. If you have a Turing graphics card, you can test these demos in their full glory, with added DLSS for improved performance. If you're on Pascal, you won't have that performance-enhancing mode available, and will have to slog through software computations. Follow the embedded links for our direct downloads of these tech demos.

NVIDIA Extends DirectX Raytracing (DXR) Support to Many GeForce GTX GPUs

NVIDIA today announced that it is extending DXR (DirectX Raytracing) support to several GeForce GTX graphics models beyond its GeForce RTX series. These include the GTX 1660 Ti, GTX 1660, GTX 1080 Ti, GTX 1080, GTX 1070 Ti, GTX 1070, and GTX 1060 6 GB. The GTX 1060 3 GB and lower "Pascal" models don't support DXR, nor do older generations of NVIDIA GPUs. NVIDIA has implemented real-time raytracing on GPUs without specialized components such as RT cores or tensor cores by essentially running the rendering path on shaders - in this case, CUDA cores. DXR support will be added through a new GeForce graphics driver later today.

The GPU's CUDA cores now have to handle BVH traversal, intersection, reflection, and refraction calculations. The GTX 16-series chips have an edge over "Pascal" despite lacking RT cores, as "Turing" CUDA cores support concurrent INT and FP execution, allowing more work to be done per clock. NVIDIA, in a detailed presentation, listed the kinds of real-time ray-tracing effects made available by the DXR API, namely reflections, shadows, advanced reflections and shadows, ambient occlusion, global illumination (unbaked), and combinations of these. The company put out detailed performance numbers for a selection of GTX 10-series and GTX 16-series GPUs, and compared them to RTX 20-series SKUs that have specialized hardware for DXR.
Update: Article updated with additional test data from NVIDIA.

NVIDIA RTX Logic Increases TPC Area by 22% Compared to Non-RTX Turing

Public perception of NVIDIA's new RTX series of graphics cards was sometimes marred by an impression of wrong resource allocation on NVIDIA's part. The argument went that NVIDIA had greatly increased chip area by adding RTX functionality (in both its Tensor and RT cores) that could have been better used for increased performance in shader-based, non-raytracing workloads. While the merits of ray tracing as it stands (in terms of uptake from developers) are certainly worthy of discussion, it seems that NVIDIA didn't dedicate that much die area to RTX functionality - at least not to the tune public perception suggests.

After analyzing full, high-res images of NVIDIA's TU106 and TU116 chips, reddit user u/Qesa did some analysis on the TPC structure of NVIDIA's Turing chips, and arrived at the conclusion that the difference between NVIDIA's RTX-capable TU106 and the RTX-stripped TU116 amounts to a mere 1.95 mm² of additional logic per TPC - a 22% area increase. Of this, 1.25 mm² is reserved for the Tensor logic (which accelerates both DLSS and de-noising in ray-traced workloads), while only 0.7 mm² goes to the RT cores.
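These figures are internally consistent, and a quick back-of-the-envelope run shows the base (non-RTX) TPC area they imply:

```python
# Back-of-the-envelope check of the per-TPC numbers quoted above: 1.95 mm²
# of extra logic, split between tensor and RT cores, against a 22% increase.

extra_tensor_mm2 = 1.25   # tensor-core logic per TPC
extra_rt_mm2 = 0.70       # RT-core logic per TPC
extra_total_mm2 = extra_tensor_mm2 + extra_rt_mm2   # = 1.95 mm²

area_increase = 0.22      # the quoted 22% per-TPC area increase
base_tpc_mm2 = extra_total_mm2 / area_increase      # implied TU116-style TPC

print(f"Extra logic per TPC: {extra_total_mm2:.2f} mm²")
print(f"Implied base TPC area: {base_tpc_mm2:.1f} mm²")  # roughly 8.9 mm²
```

In other words, the tensor and RT additions sit on top of a base TPC of roughly 8.9 mm², with the RT cores themselves accounting for only about 8% of the RTX-enabled TPC.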

NVIDIA: Turing Adoption Rate 45% Higher Than Pascal, 90% of Users Buying Upwards in Graphics Product Tiers

NVIDIA during its investor day revealed some interesting statistics on its Turing-based graphics cards. The company essentially announced that revenue from Turing graphics card sales increased 45% over that generated when NVIDIA introduced its Pascal architecture - which does make sense when you consider how NVIDIA positioned the same product tiers (**60, **70, **80) in higher pricing brackets than before. NVIDIA's own graphics showcase this better than anyone else could, with a clear indication of higher pricing for the same graphics tier. According to the company, 90% of users are buying pricier graphics cards this generation than they did in the previous one - which makes sense, since a user buying a GTX 1060 at launch would only have to pay $249, while the new RTX 2060 goes for $349.

Another interesting tidbit from NVIDIA's presentation at its investor day is that Pascal accounts for around 50% of the installed NVIDIA graphics card base, while Turing, for now, accounts for only 2%. This means 48% of users sporting an NVIDIA graphics card are using Maxwell or earlier designs, which NVIDIA says presents an incredible opportunity for increased sales as these users make the jump to the new Turing offerings - and their extended RTX feature set. NVIDIA's stock valuation grew by 5.82% today, likely on the back of this info.

NVIDIA to Enable DXR Ray Tracing on GTX (10- and 16-series) GPUs in April Drivers Update

NVIDIA's customary GTC keynote ended mere minutes ago, and it was one of the longer keynotes, clocking in at nearly three hours. There were some fascinating demos and features shown off, especially in the realm of robotics and machine learning, as well as new hardware pertaining to AI and cars with the all-new Jetson Nano. It would be fair to say, however, that the vast majority of the keynote targeted developers and researchers, as is usually the case at GTC. However, something came up in between that caught us by surprise, and is no doubt a pleasant update for most of us here on TechPowerUp.

Following AMD's claims of software-based real-time ray tracing in games, and Crytek's Neon Noir real-time ray tracing demo for both AMD and NVIDIA GPUs, it makes sense in hindsight that NVIDIA would bring rudimentary DXR ray tracing support to older hardware that lacks RT cores. In particular, a drivers update coming next month will enable DXR support on 10-series Pascal-microarchitecture graphics cards (GTX 1060 6 GB and higher), as well as the newly announced GTX 16-series Turing-microarchitecture GPUs (GTX 1660, GTX 1660 Ti). The announcement comes with a caveat letting people know not to expect RTX-level support (think a lower number of ray casts, and possibly no secondary/tertiary effects), and this DXR mode will only be supported in the Unity and Unreal game engines for now. More to come, with details past the break.

NVIDIA Adds New Options to Its MX200 Mobile Graphics Solutions - MX250 and MX230

NVIDIA has added new SKUs to its low-power mobility graphics lineup. The MX230 and MX250 come in to replace the GeForce MX130 and MX150, but... there's really not that much of a performance improvement to justify the increase in the series' tier. Both solutions are based on Pascal, so there are no Turing performance uplifts at the execution level.

NVIDIA hasn't disclosed CUDA core counts or other specifics for these chips; we only know that they are paired with GDDR5 memory and feature Boost functionality for increased performance in particular scenarios. The strange thing is that NVIDIA's own performance scores compare the MX130, MX150, and now the MX230 and MX250 against Intel's UHD 620 IGP... and while the old MX150 was reported by NVIDIA as offering up to a 4x performance uplift over that Intel part, the new MX250 now claims an improvement of 3.5x. Whether this is because of new testing methodology, or some other reason, only NVIDIA knows.

Version 4.6.0 Beta 10 of MSI Afterburner Introduces OC Scanner for Pascal

One of the standout features of NVIDIA's latest RTX 20-series graphics cards was the introduction of support for the OC Scanner feature - a program that automagically tests a range of frequencies on your NVIDIA graphics card and overclocks it to a deemed-"stable" sweet spot. This practically obviates the need for manual fine-tuning, though of course, the best results will always be found down that road - provided there's enough tinkering.

The latest version of MSI's Afterburner (v4.6.0 beta 10, available in the source link) now brings this functionality to Pascal-based graphics cards (besides some other features, such as voltage control for Turing; check the full release notes after the break). Have fun.

Intel Gen11 iGPU Roughly as Fast as Radeon Vega 8 (Ryzen 3 2200G)

Today, Intel is revealing major details about its upcoming CPU and graphics architectures to a select audience. A big scoop VideoCardz landed concerns the company's next-generation Gen11 integrated graphics core, the first major upgrade to the company's 4-year-old Gen9 architecture. According to them, a Gen11 graphics core (in the default GT2 trim, we assume) should offer a compute throughput of 1 TFLOP/s, which is in the league of the Radeon Vega 8, with its 1.12 TFLOP/s throughput. The Vega 8 is part of AMD's Ryzen 3 2200G processor.
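Those throughput figures line up with the usual estimate of shaders × 2 FLOPs (one fused multiply-add) × clock. A quick sketch - the shader counts and clocks below are assumed typical values, not official specs:

```python
# Single-precision compute throughput is commonly estimated as
# shaders x 2 ops per clock (FMA) x clock speed. Shader counts and clocks
# below are assumed typical values for illustration.

def tflops(shaders, clock_ghz, ops_per_clock=2):
    """Estimated single-precision throughput in TFLOP/s."""
    return shaders * ops_per_clock * clock_ghz / 1000

# Radeon Vega 8 (Ryzen 3 2200G): 512 stream processors at ~1.1 GHz
print(tflops(512, 1.1))   # ~1.13 TFLOP/s, close to the quoted 1.12

# A Gen11 GT2 part with 64 EUs x 8 ALUs = 512 "shaders" at ~1.0 GHz
print(tflops(512, 1.0))   # ~1.02 TFLOP/s, in line with the 1 TFLOP/s claim
```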

Raw compute power only paints half the picture; the iGPU reportedly also supports tile-based rendering, a highly publicized rendering method that made its consumer debut with NVIDIA "Pascal." Also mentioned are redesigned FPU interfaces, support for half-precision FP16, 2x pixel/clock pipelines, display stream compression that lets it support 5K and 8K displays, and adaptive sync. Intel will debut its Gen11 iGPU with its upcoming Core "Ice Lake" processors, which arrive on the company's 10 nm silicon fabrication process.

GeForce GTX 1080 Ti Supply is Reportedly Dwindling, Prices on the Rise

Multiple sources have confirmed to GamersNexus that the GTX 1080 Ti is becoming really difficult to find. Supplies are decreasing, and the reason seems clear: NVIDIA has likely stopped production of those graphics cards. This has had an immediate effect on the cards' prices, which in the last few days have increased everywhere in the world. The performance differences with the new GeForce RTX 2080 are not that important if you don't need the RT part of the equation - we could confirm this in our own review - but the prices of these new graphics cards have made the 1080 Ti a viable option for many users looking to upgrade their systems.

Prices for the RTX 2080 start at $769 at Newegg, for example, while the cheapest GTX 1080 Ti costs $850 there. The story is the same at Amazon, where we can find the cheapest RTX 2080 at $799.99, versus $878.12 for a used GTX 1080 Ti. The high-end model of the Pascal series competes directly with the RTX 2080 and was cheaper not long ago, but that's not the story now. With prices climbing, some are claiming the same will happen to the GTX 1080, GTX 1070, or GTX 1070 Ti in the next few weeks. Reports of RTX 2080 and RTX 2080 Ti cards inexplicably dying on users could also be fueling consumer fear, as well as a [temporary] erosion in the value proposition of the RTX 20-series itself, as Microsoft pulled the Windows 10 1809 Update, leaving fewer people with DirectX Raytracing, the software foundation for RTX.

NVIDIA GeForce GTX and GeForce RTX to Coexist in Product-Stack Till Q1-2019

NVIDIA CFO Colette Kress, speaking on the company's latest post-results financial analyst call, confirmed that NVIDIA isn't retiring its GeForce GTX 10-series products anytime soon, and that the series could coexist with the latest GeForce RTX series leading up to Holiday 2018, which ends with the year. "We will be selling probably for the holiday season, both our Turing and our Pascal overall architecture," Kress stated. NVIDIA is expected to launch not just its RTX 2080 Ti and RTX 2080, but also its RTX 2070 towards the beginning of Q4-2018, and is likely to launch its "sweetspot"-segment RTX 2060 by the end of the year.

NVIDIA reportedly has mountains of unsold GeForce GTX 10-series inventory, in the wake of not just the transition to a new generation, but also a slump in GPU-accelerated crypto-currency mining. The company could fine-tune prices of its popular 10-series SKUs, such as the GTX 1080 Ti, GTX 1080, GTX 1070 Ti, and GTX 1060, to sell them at slimmer margins. To consumers, this could mean a good opportunity to lap up 4K-capable gaming hardware; but for NVIDIA, it could mean that many fewer takers for its ambitious RTX technology in its formative year.

NVIDIA GPUs Can be Tricked to Support AMD FreeSync

Newer generations of NVIDIA GPUs such as "Pascal" and "Maxwell" meet or exceed the hardware requirements of AMD FreeSync, as they feature DisplayPort 1.4 connectors that include the features of DisplayPort 1.2a, which are required for VESA adaptive sync. In a bid to promote its own G-SYNC technology, NVIDIA doesn't expose this capability to monitors or software that support FreeSync. Redditor "bryf50" may have found a way around this. The trick is deceptively simple; however, you'll need games that support on-the-fly switching of rendering GPUs, and an AMD Radeon graphics card at hand.

While poking around with the settings in "World of Warcraft: Battle for Azeroth," bryf50 discovered that you can switch the "rendering GPU" on the fly, without having to physically connect your display to the newly selected GPU. You can start the game with your display connected to VGA1 (an AMD Radeon GPU), and switch the renderer in-game to VGA2 (an NVIDIA GPU). FreeSync should continue to work, while you enjoy the performance of the NVIDIA GPU. In theory, this should allow you to pair your high-end GTX 1080 Ti with a $50 RX 550 that supports FreeSync, instead of paying the $200+ G-SYNC tax.

NVIDIA GeForce RTX 2000 Series Specifications Pieced Together

Later today (20th August), NVIDIA will formally unveil its GeForce RTX 2000 series consumer graphics cards. This marks a major change in the brand name, triggered by the introduction of the new RT Cores, specialized components that accelerate real-time ray-tracing, a task too taxing for conventional CUDA cores. DNN acceleration, meanwhile, requires SIMD components that crunch 4x4x4 matrix multiplication, which is what tensor cores specialize in. The chips still have CUDA cores for everything else. This generation also debuts the new GDDR6 memory standard, although unlike GeForce "Pascal," the new GeForce "Turing" won't see a doubling in memory sizes.

NVIDIA is expected to debut the generation with the new GeForce RTX 2080 later today, with market availability by the end of the month. Going by older rumors, the company could launch the lower RTX 2070 and higher RTX 2080+ by late September, and the mid-range RTX 2060 series in October. Apparently, the high-end RTX 2080 Ti could come out sooner than expected, given that VideoCardz already has some of its specifications in hand. Not a lot is known about how "Turing" compares with "Volta" in performance, but given that the TITAN V comes with tensor cores that can [in theory] be re-purposed for ray-tracing workloads, it could continue on as NVIDIA's halo SKU for the client segment.
Copyright © 2004-2021 www.techpowerup.com. All rights reserved.
All trademarks used are properties of their respective owners.