News Posts matching #Pascal


NVIDIA RTX 20-series and GTX 16-series "Turing" GPUs Get Resizable BAR Support Through NVStrapsReBAR Mod

February saw community mods bring resizable BAR support to several older platforms, and now we come across a mod that brings it to some older GPUs. The NVStrapsReBAR mod by terminatorul, which is forked from the ReBarUEFI mod by xCurio, brings resizable BAR support to NVIDIA GeForce RTX 20-series and GTX 16-series GPUs based on the "Turing" graphics architecture. This mod is intended for power users, and can potentially brick your motherboard. NVIDIA officially implemented resizable BAR support starting with its RTX 30-series "Ampere" GPUs, in response to AMD's Radeon RX 6000 RDNA 2 GPUs implementing the tech under the marketing name Smart Access Memory. While AMD would go on to retroactively enable the tech for even the older RX 5000 series RDNA GPUs, NVIDIA didn't do so for "Turing."

NVStrapsReBAR is a motherboard UEFI firmware mod. It modifies the way your system firmware negotiates BAR size with the GPU on boot. There are only two ways to enable resizable BAR on an unsupported platform—by modding the motherboard firmware, or the video BIOS. Signature checks by security processors in NVIDIA GPUs make the video BIOS modding route impossible for most users; thankfully, motherboard firmware modding isn't as difficult. The author provides extensive documentation on how to use the mod. The author has tested the mod to work with "Turing" GPUs; however, it doesn't work with older NVIDIA GPUs, including "Pascal." Resizable BAR enables the CPU (software) to see video memory as a single contiguously addressable block, rather than through 256 MB apertures.
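For readers who want to verify whether a large BAR is actually exposed after flashing, the sketch below is one way to inspect PCI BAR sizes from Linux sysfs. It is a minimal illustration only; the sysfs paths are standard, but the NVIDIA vendor-ID filter (0x10de) and the assumption that the largest BAR corresponds to the VRAM aperture are ours, not the mod author's.

```python
# Minimal sketch: list BAR sizes of NVIDIA PCI devices from Linux sysfs.
# Assumes a Linux system; the 0x10de vendor filter and the idea that the
# largest BAR is the VRAM aperture are illustrative assumptions.
import glob

for dev in glob.glob("/sys/bus/pci/devices/*"):
    try:
        vendor = open(f"{dev}/vendor").read().strip()
    except OSError:
        continue
    if vendor != "0x10de":  # NVIDIA
        continue
    bars = []
    with open(f"{dev}/resource") as f:
        for line in f:
            start, end, flags = (int(x, 16) for x in line.split())
            if end > start:
                bars.append(end - start + 1)
    if bars:
        largest = max(bars)
        status = "resizable BAR likely active" if largest > 256 * 2**20 else "256 MiB aperture"
        print(f"{dev}: largest BAR = {largest / 2**20:.0f} MiB ({status})")
```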

NVIDIA Ada Lovelace Successor Set for 2025

According to the NVIDIA roadmap that was spotted in the recently published MLCommons training results, the Ada Lovelace successor is set to come in 2025. The roadmap also reveals the schedule for Hopper Next GPUs and Grace Next CPUs, as well as the BlueField-4 DPU.

While the roadmap does not provide a lot of details, it does give us a general idea of when to expect NVIDIA's next GeForce architecture. Since NVIDIA usually launches a new GeForce architecture every two years or so, the latest schedule implies a small delay, at least if NVIDIA plans to launch Ada Lovelace Next in early 2025 and not later. NVIDIA Pascal was launched in May 2016, Turing in September 2018, Ampere in May 2020, and Ada Lovelace in October 2022.

NVIDIA Ramps Up Battle Against Makers of Unlicensed GeForce Cards

NVIDIA is stepping up its fight against manufacturers of counterfeit graphics cards in China, according to an article published by MyDrivers - the hardware giant is partnering with a number of the nation's major e-commerce companies in order to eliminate inventories of bogus GPUs. It is claimed that these online retail platforms, including JD.com and Douyin, are partway into removing a swathe of dodgy stock from their listings. NVIDIA is seeking to disassociate itself from the pool of unlicensed hardware and the brands responsible for flooding the domestic and foreign markets with so-called fake graphics cards. The company is reportedly puzzled about the murky origins of this bootlegging of its patented designs.

The market became saturated with fake hardware during the Ethereum mining boom - little-known cottage companies such as 51RSIC, Corn, Bingying and JieShuoMllse were pushing rebadged cheap OEM cards to domestic e-tail sites. The knock-off GPUs also crept outside of that sector, and import listings started to appear on international platforms including eBay, AliExpress, Amazon and Newegg. NVIDIA is also fighting to stop the sale of refurbished cards - these are very likely to have been utilized in intensive cryptocurrency mining activities. A flood of these hit the market following an extreme downturn in crypto mining efforts, and many enthusiast communities have warned against acquiring pre-owned cards due to the high risk of component failure.

NVIDIA Enables More Encoding Streams on GeForce Consumer GPUs

NVIDIA has quietly removed some video encoding limitations on its consumer GeForce graphics processing units (GPUs), allowing encoding of up to five simultaneous streams. Previously, NVIDIA's consumer GeForce GPUs were limited to three simultaneous NVENC encodes. The same limitation did not apply to professional GPUs.

According to NVIDIA's own Video Encode and Decode GPU Support Matrix document, the number of concurrent NVENC encodes on consumer GPUs has been increased from three to five. This includes certain GeForce GPUs based on the Maxwell 2nd Gen, Pascal, Turing, Ampere, and Ada Lovelace GPU architectures. While the number of concurrent NVDEC decodes was never limited, there is a limit on how many streams a given GPU can encode, depending on the resolution of the stream and the codec.
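One rough way to observe the session limit in practice is to launch several simultaneous NVENC encodes and see when they start failing. The sketch below does that with ffmpeg's built-in test source; it assumes an ffmpeg build with h264_nvenc support, and the stream count, resolution, and duration are arbitrary illustrative choices.

```python
# Rough sketch: launch several simultaneous NVENC encodes with ffmpeg to see
# how many sessions a consumer GPU accepts. Requires an ffmpeg build with
# h264_nvenc; stream count, resolution, and duration are arbitrary choices.
import subprocess

STREAMS = 6  # one more than the new five-session limit, to observe the failure

procs = []
for i in range(STREAMS):
    procs.append(subprocess.Popen([
        "ffmpeg", "-y", "-loglevel", "error",
        "-f", "lavfi", "-i", "testsrc=size=1920x1080:rate=30",
        "-c:v", "h264_nvenc", "-t", "30", f"nvenc_test_{i}.mp4",
    ]))

for i, p in enumerate(procs):
    print(f"stream {i}: {'ok' if p.wait() == 0 else 'failed (session limit?)'}")
```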

Six-Year-Old GTX 1060 Beats Intel Arc A380, GeForce GTX 1630 and Radeon RX 6400, Wins TPU Popularity Contest

NVIDIA's GeForce GTX 1060 6 GB "Pascal" continues to be a popular choice among TechPowerUp readers as an entry-mainstream graphics card over rivals that are two generations ahead. The recent TechPowerUp Frontpage Poll asked our readers which graphics card they'd choose, assuming they're priced the same, with choices that included the GTX 1060 6 GB, GTX 1630 4 GB, GTX 1650 4 GB, RX 570 4 GB, RX 5500 XT 4 GB, RX 6400 4 GB, and the A380 6 GB. The poll received a great response, with over 18,200 votes cast between going live on June 30, 2022, and closing on August 16.

The GeForce GTX 1060 6 GB dominated the poll, and nearly scored a simple majority, with 49 percent of the respondents, or 8,920 people, saying they'd choose the card over the others. A distant second was the RX 5500 XT 4 GB, with 15.1 percent, or 2,749 votes. The GTX 1650 and Arc A380 are nearly on par, with 11.9 percent, or around 2,170 votes each. The remaining options, including the RX 6400, RX 570, and GTX 1630, are marginal, single-digit percentage choices.

Data is Beautiful: 10 Years of AMD and NVIDIA GPU Innovation Visualized

Using our graphics card database, which is managed by our very own T4CFantasy, reddit user u/Vito_ponfe_Andariel created some basic charts mapping out data points from its expansive, industry-leading dataset. In these charts, the user compares technological innovation for both AMD's and NVIDIA's GPUs over the last ten years, plotting out the performance evolution of the "best available GPU" per year in terms of performance, performance per dollar (using the database's launch price metric), energy consumption, performance per transistor, and a whole lot of other data correlations.

It's interesting to note the technological changes in these charts and how they relate to the overall values. For example, if you look at the performance per transistor graph, you'll notice that performance per transistor actually declined roughly 20% with the transition from NVIDIA's Pascal (GTX 1080 Ti) to the Turing (RTX 20-series) architecture. At the same time, AMD's performance per transistor exploded by around 40% from the Vega 64 to the RX 5700 XT graphics card. This happens, in part, due to the introduction of raytracing-specific hardware on NVIDIA's Turing, which takes up transistor budget without aiding general shading performance - while AMD benefited from a new architecture in RDNA as well as the process transition from 14 nm to 7 nm. We see this declining behavior again with AMD's introduction of the RX 6800 XT, which loses some 40% in this performance-per-transistor metric - likely due to the introduction of RT cores and other architectural changes. There are of course other variables to the equation, but it is nonetheless interesting to note. Look after the break for the rest of the charts.
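As a quick back-of-the-envelope check of the Pascal-to-Turing figure: the transistor counts below are the published figures for GP102 and TU102, but the roughly 1.3x relative performance of the RTX 2080 Ti over the GTX 1080 Ti is our own assumed round number, so the result only roughly mirrors the chart's ~20% decline.

```python
# Back-of-the-envelope perf-per-transistor comparison (illustrative numbers):
# transistor counts are the published GP102/TU102 figures; the ~1.3x relative
# performance of the RTX 2080 Ti over the GTX 1080 Ti is an assumption.
gtx_1080_ti = {"perf": 1.0, "transistors_b": 11.8}   # GP102
rtx_2080_ti = {"perf": 1.3, "transistors_b": 18.6}   # TU102

ppt_pascal = gtx_1080_ti["perf"] / gtx_1080_ti["transistors_b"]
ppt_turing = rtx_2080_ti["perf"] / rtx_2080_ti["transistors_b"]
print(f"Turing perf/transistor vs Pascal: {ppt_turing / ppt_pascal:.2f}x")
# -> roughly 0.82x, i.e. an ~18% decline
```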

AAEON Announces NVIDIA Jetson TX2 NX AI Edge Platform

AAEON, an industry leader in embedded AI edge computing systems, announces a lineup of platforms currently under development powered by the newly released NVIDIA Jetson TX2 NX system on module (SOM). This range of systems, named the BOXER-823x Series, will augment AAEON's lineup of diverse solutions powered by the NVIDIA Jetson edge AI platform. With these platforms planned for release in mid-2021, early adopters have a chance to demo the systems while in development.

The NVIDIA Jetson TX2 NX is the latest addition to NVIDIA's Jetson lineup, slotting in between the NVIDIA Jetson Nano and NVIDIA Xavier NX. The Jetson TX2 NX is powered by an NVIDIA Pascal GPU with 256 CUDA cores and a quad-core Arm Cortex-A57 MPCore processor complex, along with 4 GB of onboard LPDDR4 memory and 51.2 GB/s of memory bandwidth, and is compatible with various standard bus interfaces. In total, this offers AI processing speeds of up to 1.33 TFLOPS.
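The quoted 1.33 TFLOPS figure can be sanity-checked with simple arithmetic, under our own assumptions that the figure refers to FP16 throughput, that each CUDA core performs one fused multiply-add per clock, and that the GPU boosts to about 1.3 GHz.

```python
# Sanity check of the quoted 1.33 TFLOPS (assumptions: FP16 figure, one FMA
# per core per clock, packed FP16 at 2x the FP32 rate, ~1.3 GHz boost clock).
cuda_cores = 256
flops_per_fma = 2          # one fused multiply-add = 2 floating-point ops
fp16_rate = 2              # assumed packed-FP16 rate relative to FP32
boost_clock_ghz = 1.3      # assumed boost clock

tflops = cuda_cores * flops_per_fma * fp16_rate * boost_clock_ghz / 1000
print(f"~{tflops:.2f} TFLOPS")   # -> ~1.33 TFLOPS
```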

NVIDIA Seemingly Begins Resupplying GeForce GTX 1050 Ti GPUs

In a move that speaks volumes about the current state of the semiconductor market, NVIDIA has apparently begun resupplying retailers with 5-year-old Pascal-based GTX 1050 Ti graphics cards. At some retailers (namely, Newegg), the card can still be found at $499, a vestige of tight supply since its discontinuation, and a result of the constrained GPU market. However, retailers that have received fresh supply of the 14 nm, 4 GB GDDR5-toting graphics card have it at $179 - still above the 5-year-old asking price at release, which was set at $140. The GTX 1050 Ti features a 128-bit memory bus and a whopping 768 shading units.

Resupplying this card means that customers looking at the lower end of the spectrum now have a feasible alternative to non-existent solutions in the RTX 3000 series. Equivalent models in the 2000 series are also hard to come by, and marred by much higher pricing. The choice of the GTX 1050 Ti with its 4 GB of GDDR5 isn't an innocent one; it actually skirts two problems with current-generation hardware. First, constraints with GDDR6 memory allocation, which is becoming a bottleneck for new graphics card manufacturing on account of the increasing number of chips employed in each individual card, as well as its deployment in latest-gen consoles. And second, 4 GB of VRAM is no longer enough to fit the current Ethereum mining workload fully into memory, which means these cards also skirt mining demand. It is, however, a sobering moment for the industry and for any enthusiast who wants to see the progress we have been so readily promised.
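The 4 GB cut-off can be roughly sanity-checked: Ethash's DAG starts near 1 GiB and grows by about 8 MiB per 30,000-block epoch, so a simple extrapolation shows where 4 GB cards fall out of the mining pool. The constants below are simplified approximations, not the exact Ethash formula.

```python
# Rough approximation of the Ethereum epoch at which the DAG outgrows 4 GiB.
# Simplified model: ~1 GiB initial size plus ~8 MiB per epoch (the real Ethash
# formula also rounds the size down to a prime number of mix words).
GIB = 2**30
MIB = 2**20

initial_dag = 1 * GIB
growth_per_epoch = 8 * MIB
vram_limit = 4 * GIB

epoch = (vram_limit - initial_dag) // growth_per_epoch
print(f"DAG exceeds 4 GiB around epoch {epoch}")  # -> roughly epoch 384
```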

Mysterious GeForce GT 1010 Rears its Head, Targeting OEMs

NVIDIA has quietly introduced a new entry-level desktop discrete GPU positioned a notch below even the GeForce GT 1030. The new GT 1010 is based on the "Pascal" graphics architecture circa 2016, and is cut further down from the 14 nm "GP108" silicon. The GT 1010 appears to be NVIDIA's move to replace the "Kepler" based GT 710 at the bare entry level, and to help the company clear all remaining inventory of the "GP108" silicon from the channel, out to OEMs. The GT 1010 likely features 256 CUDA cores, 16 TMUs, 16 ROPs, and 2 GB of GDDR5 memory across a 64-bit wide memory bus (40 GB/s bandwidth), with a maximum GPU Boost frequency of 1468 MHz. If the GT 1010 does make it to the retail channel, we expect a sub-$60 price. With these specs, the chip will be easily bested by the latest iGPUs from AMD and Intel.
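The 40 GB/s figure lines up with the 64-bit bus if you assume GDDR5 running at an effective 5 Gbps per pin; the per-pin data rate is our assumption, though it is typical for this class of card.

```python
# Quick check of the quoted 40 GB/s: a 64-bit bus with GDDR5 at an effective
# 5 Gbps per pin (an assumed but typical speed for this class of card).
bus_width_bits = 64
effective_rate_gbps = 5      # assumed GDDR5 data rate per pin

bandwidth_gbs = bus_width_bits * effective_rate_gbps / 8
print(f"{bandwidth_gbs:.0f} GB/s")  # -> 40 GB/s
```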

NVIDIA Releases GeForce MX450 with PCI-Express 4.0 Interface

NVIDIA released a mysterious new mobile GPU that has us scratching our heads over the silicon that could be driving it. The new GeForce MX450 is an entry-mainstream mobile GPU that apparently ships with a PCI-Express gen 4.0 bus interface, something only NVIDIA's "Ampere" GPUs feature. The product page for the MX450 doesn't list any other specs than its supported memory types, which include new GDDR6 memory (supported only on NVIDIA architectures "Turing" or later). Interestingly, it also lists GDDR5 as one of its memory options. PCI-Express 4.0 is prominently listed as one of its specs.

Upon digging some more among device IDs, we've come across the ID of the GDDR5 variant, with the ASIC code "GP107-670-A1"; this silicon is based on the much older "Pascal" architecture, which lacks PCIe gen 4 support. The GDDR6 variant eludes us. This is the SKU that could be based on a newer architecture, given its support for GDDR6 and PCIe gen 4. NVIDIA's GeForce MX line of entry-mainstream mobile GPUs is built to performance/power targets, and varies wildly in its underlying tech. It has historically been a means for NVIDIA to clear inventory of older-generation ASICs to notebook manufacturers, who get to put the NVIDIA logo on their products and advertise discrete graphics. Given this, the use of a newer (even unreleased) generation of GPUs comes as a surprise.

Video Memory Sizes Set to Swell as NVIDIA Readies 20GB and 24GB GeForce Amperes

NVIDIA's GeForce RTX 20-series "Turing" graphics cards did not increase video memory sizes in comparison to the GeForce GTX 10-series "Pascal," although the memory itself is faster on account of GDDR6. This could change with the GeForce RTX 30-series "Ampere," as the company looks to increase memory sizes across the board in a bid to shore up ray-tracing performance. WCCFTech has learned that in addition to a variety of strange new memory bus widths, such as 320-bit, NVIDIA could introduce certain higher variants of its RTX 30-series cards with video memory sizes as high as 20 GB and 24 GB.
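The unusual capacities follow naturally from the bus widths if you assume one 16 Gb (2 GB) GDDR6 chip per 32-bit channel; the chip density is our assumption here, not something stated in the report.

```python
# How 320-bit and 384-bit buses map to 20 GB and 24 GB, assuming one 16 Gb
# (2 GB) GDDR6 chip per 32-bit channel (the chip density is an assumption).
chip_width_bits = 32
chip_capacity_gb = 2

for bus_width in (320, 384):
    chips = bus_width // chip_width_bits
    print(f"{bus_width}-bit bus -> {chips} chips -> {chips * chip_capacity_gb} GB")
# 320-bit -> 10 chips -> 20 GB; 384-bit -> 12 chips -> 24 GB
```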

Memory sizes of 20 GB or 24 GB aren't new for NVIDIA's professional-segment Quadro products, but they're certainly new for GeForce, with only the company's TITAN-series products breaking the 20 GB mark at prices due north of $2,000. Much of NVIDIA's high end appears to rest on segmentation of the PG132 common board design, coupled with the GA102 silicon, from which the company could carve out several SKUs spaced far apart in its product stack. NVIDIA's next-generation GeForce "Ampere" family is expected to debut in September 2020, with product launches in the higher end running through late Q3 and Q4 of 2020.

IBASE Announces NVIDIA Jetson TX2 AI Computing Platform for AIoT Applications

IBASE Technology Inc. (TPEx: 8050), a world-leading manufacturer of industrial motherboards and embedded computing solutions, has revealed its latest EC-3200 AI computing platform, which is based on the cutting-edge, power-efficient and high-performance NVIDIA Jetson Tegra X2 (TX2) processor. The NVIDIA Jetson TX2 pairs a dual-core Denver 2 with a quad-core Arm Cortex-A57 processor and provides 256 CUDA cores on NVIDIA's advanced Pascal GPU architecture with up to 1.33 TFLOPS, delivering exceptional AI performance.

Designed to operate under an extended temperature range from -20°C to +60°C, the EC-3200 leverages the integrated GPU-computing power of the Jetson TX2 platform in building edge inference servers to analyze and better manage real-time traffic flow in smart cities, or to optimize the efficiency of operational processes in intelligent factories. "Current edge and cloud-based AI products require better computational and video analytics capability to perform demanding real-time data processing and overcome latency issues," said Wilson Lin, Director of IBASE Product Planning Department. "The EC-3200 is no doubt a durable solution, especially built with a fanless design for non-stop operation, and takes advantage of the GPU-accelerated parallel processing of NVIDIA Jetson TX2 to handle data-intensive and mission-critical workloads with power efficiency and unmatched reliability."

Microsoft DirectX 12 Ultimate: Why it Helps Gamers Pick Future Proof Graphics Cards

Microsoft Thursday released the DirectX 12 Ultimate logo. This is not a new API with any new features, but rather a differentiator for graphics cards and game consoles that support four key modern features of DirectX 12. This helps consumers recognize the newer and upcoming GPUs, and tell them apart from some older DirectX 12 capable GPUs that were released in the mid-2010s. For a GPU to be eligible for the DirectX 12 Ultimate logo, it must feature hardware-accelerated ray tracing via the DXR API, and must support Mesh Shaders, Variable Rate Shading (VRS), and Sampler Feedback (all four). The upcoming Xbox Series X console features this logo by default. Microsoft made it absolutely clear that the DirectX 12 Ultimate logo isn't meant as a compatibility barrier, and that these games will work on older hardware, too.

As it stands, the "Navi"-based Radeon RX 5000 series is "obsolete," just like some Turing cards from the GeForce GTX 16-series. At this time, the only shipping products that feature the logo are NVIDIA's GeForce RTX 20-series and the TITAN RTX, as they support all of the above features.

NVIDIA GeForce MX350 and MX330 Surface, Pascal-Based Entry mGPUs

NVIDIA's GeForce MX-series mobile GPU line exists so notebook manufacturers can put the NVIDIA logo on their products and boast of gaming capabilities. The company is putting the finishing touches on its new GeForce MX330 and MX350 chips, based on the "Pascal" architecture. The MX330 is the company's second rebrand of the MX150, which is based on the 14 nm "GP108" silicon. It's equipped with 384 CUDA cores, and up to 2 GB of GDDR5 memory across a 64-bit wide memory interface. NVIDIA increased the clock speeds to 1531 MHz base and 1594 MHz GPU Boost (compared to 1227/1468 MHz on the MX150), while remaining in the 25 W TDP envelope.

The MX350, on the other hand, is based on the 14 nm "GP107" silicon, is equipped with 640 CUDA cores and 2 GB of GDDR5 memory across the same 64-bit bus width as the MX330, but has aggressive power management that lends it a TDP of just 20 W, despite having 66% more CUDA cores than the MX330 (see the quick check below). Both chips are easily capable of handling non-gaming tasks on typical 1080p / 1440p notebooks, but can game only at 720p through 1080p, with low-to-mid settings.
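The "66% more CUDA cores" claim is straightforward arithmetic on the two core counts quoted above.

```python
# Quick check of the "66% more CUDA cores" claim: 640 vs. 384.
mx330_cores, mx350_cores = 384, 640
print(f"+{(mx350_cores / mx330_cores - 1) * 100:.0f}%")  # -> +67%
```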

NVIDIA to Reuse Pascal for Mobility-geared MX300 Series

NVIDIA will apparently still be using Pascal when it launches its next generation of low-power discrete graphics solutions for mobile systems. The MX300 series will replace the current crop of MX200 series products (segmented into three SKUs: the MX230, the 10 W MX250, and the 25 W MX250). The new MX300 series keeps the dual-tiered system, but ups the ante on the top-of-the-line MX350. Even though it's still Pascal on a 14 nm process, the MX350 should see an increase in CUDA cores to 640 (by using NVIDIA's Pascal GP107 chip) from the MX250's 384. Performance, then, should be comparable to the NVIDIA GTX 1050.

The MX330, on the other hand, will keep the specifications of the MX250, which signals a tier increase from the 256 execution units of the MX230 to 384. This should translate to appreciable performance increases for the new MX300 series, despite staying on NVIDIA's Pascal architecture. The new lineup is expected to be announced in February.

NVIDIA Releases GeForce Hotfix Driver Version 431.18

NVIDIA released its latest driver package, bringing the version number up to 431.18. The new hotfix driver builds upon the previous 430.86 release, which had already been patched by the earlier 430.97 hotfix driver.

This new release fixes BSODs on hibernation wake-up for ASUS GL703GS/GL502VML notebooks; game crashes or TDRs in Shadow of the Tomb Raider when launching the game on Pascal GPUs; Shadow of the Tomb Raider's benchmark exiting abruptly when ray tracing is enabled; and flickering issues in Grand Theft Auto V when MSAA is enabled. Look below for the updated driver.
NVIDIA GeForce 431.18 Hotfix Driver

TechPowerUp Releases GPU-Z 2.20.0 with Major Fixes

TechPowerUp today released GPU-Z 2.20.0 as a very quick follow-up to version 2.19.0 from a few hours ago. We came across a few critical bugs in the older version that needed immediate fixing. To begin with, we fixed an issue where your overclock would get reset on NVIDIA graphics cards with Boost when using GPU-Z. A crash noticed on machines with NVIDIA "Pascal" GPUs and no driver loaded has also been fixed. Crashes noticed on Apple machines (i.e., Windows running on Apple hardware) with the AMD Radeon Vega 12 GPU have been fixed. We also touched up the memory bus-width read-out to show "bit" instead of "Bit" while we were at it. Grab the download from the link below.

DOWNLOAD: TechPowerUp GPU-Z 2.20.0
The change-log follows.

GIGABYTE Unveils GeForce GTX 1650 Series Graphics Cards

GIGABYTE, the world's leading premium gaming hardware manufacturer, today announced its latest GeForce GTX 1650 graphics cards powered by the NVIDIA Turing architecture. GIGABYTE launched four graphics cards: the GeForce GTX 1650 GAMING OC 4G, GeForce GTX 1650 WINDFORCE OC 4G, GeForce GTX 1650 OC 4G, and GeForce GTX 1650 MINI ITX OC 4G. Turing-architecture graphics cards have the ability to execute integer and floating-point operations simultaneously, making them much faster than the previous Pascal architecture. These graphics cards all use overclocked GPUs certified by GIGABYTE, and with WINDFORCE cooling technology, all players can enjoy the perfect experience from a variety of games.

The GeForce GTX 1650 GAMING OC 4G provides the WINDFORCE 2x 100 mm cooling solution for all key components of the graphics card. It takes care of not only the GPU but also the VRAM and MOSFETs, to ensure stable overclocked operation and a longer life. GIGABYTE's patented "Alternate Spinning" and unique blade fan features are designed to increase airflow, and the addition of composite heat-pipes helps dissipate heat from the GPU quickly. The GAMING OC graphics card provides three HDMI and one DisplayPort output, which can support up to four displays at the same time. With RGB Fusion 2.0, Ultra-Durable top-grade materials and a protective back-plate, the GAMING OC graphics card delivers the best quality for customers.

NVIDIA Also Releases Tech Demos for RTX: Star Wars, Atomic Heart, Justice Available for Download

We've seen NVIDIA's move to provide RTX effects on older, non-RT-capable hardware today being met with what the company was certainly expecting: a cry of dismay from users who now get to see exactly what their non-Turing NVIDIA hardware is capable of. The move from NVIDIA could be framed as a way to democratize access to RTX effects via Windows DXR, enabling users of its GTX 16-series and 10-series GPUs to take a look at the benefits of raytracing; but also as an upgrade incentive for those who now see how their performance is lacking without the new specialized Turing cores to handle the added burden.

Whatever your side of the fence on that issue, however, NVIDIA has provided users with one more raytraced joy today. Three of them, in fact, in the form of three previously-shown tech demos. The Star Wars tech demo (download) is certainly the most well known, with its studies of reflections on Captain Phasma's breastplate. Atomic Heart (download) is another one that makes use of RTX for reflections and shadows, while Justice (download) adds caustics to that equation. If you have a Turing graphics card, you can test these demos in their full glory, with added DLSS for improved performance. If you're on Pascal, you won't have that performance-enhancing mode available, and will have to slog through software computations. Follow the embedded links for our direct downloads of these tech demos.

NVIDIA Extends DirectX Raytracing (DXR) Support to Many GeForce GTX GPUs

NVIDIA today announced that it is extending DXR (DirectX Raytracing) support to several GeForce GTX graphics models beyond its GeForce RTX series. These include the GTX 1660 Ti, GTX 1660, GTX 1080 Ti, GTX 1080, GTX 1070 Ti, GTX 1070, and GTX 1060 6 GB. The GTX 1060 3 GB and lower "Pascal" models don't support DXR, nor do older generations of NVIDIA GPUs. NVIDIA has implemented real-time raytracing on GPUs without specialized components such as RT cores or tensor cores by essentially running the raytracing path through shaders, in this case, CUDA cores. DXR support will be added through a new GeForce graphics driver later today.

The GPU's CUDA cores now have to handle BVH traversal, intersection, reflection, and refraction calculations. The GTX 16-series chips have an edge over "Pascal" despite lacking RT cores, as the "Turing" CUDA cores support concurrent INT and FP execution, allowing more work to be done per clock. NVIDIA, in a detailed presentation, listed the kinds of real-time ray-tracing effects enabled by the DXR API, namely reflections, shadows, advanced reflections and shadows, ambient occlusion, global illumination (unbaked), and combinations of these. The company put out detailed performance numbers for a selection of GTX 10-series and GTX 16-series GPUs, and compared them to RTX 20-series SKUs that have specialized hardware for DXR.
Update: Article updated with additional test data from NVIDIA.
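To give a sense of the kind of per-ray work the shader cores are taking over, below is a minimal ray-triangle intersection test (the well-known Möller-Trumbore algorithm) written in plain Python as an illustration; real DXR fallback paths run highly optimized variants of this, plus BVH traversal, across millions of rays per frame on the GPU.

```python
# Minimal Möller-Trumbore ray/triangle intersection, the kind of per-ray work
# that shader (CUDA) cores must do when no RT cores are available. Pure-Python
# illustration only; real implementations are vectorized and run on the GPU.
def ray_triangle_intersect(orig, dirn, v0, v1, v2, eps=1e-8):
    """Return distance t along the ray, or None if there is no hit."""
    sub = lambda a, b: (a[0]-b[0], a[1]-b[1], a[2]-b[2])
    cross = lambda a, b: (a[1]*b[2]-a[2]*b[1], a[2]*b[0]-a[0]*b[2], a[0]*b[1]-a[1]*b[0])
    dot = lambda a, b: a[0]*b[0] + a[1]*b[1] + a[2]*b[2]

    e1, e2 = sub(v1, v0), sub(v2, v0)
    pvec = cross(dirn, e2)
    det = dot(e1, pvec)
    if abs(det) < eps:          # ray is parallel to the triangle plane
        return None
    inv_det = 1.0 / det
    tvec = sub(orig, v0)
    u = dot(tvec, pvec) * inv_det
    if u < 0.0 or u > 1.0:
        return None
    qvec = cross(tvec, e1)
    v = dot(dirn, qvec) * inv_det
    if v < 0.0 or u + v > 1.0:
        return None
    t = dot(e2, qvec) * inv_det
    return t if t > eps else None

# Example: a ray pointing down the -Z axis hitting a triangle in the z = 0 plane.
print(ray_triangle_intersect((0.25, 0.25, 1.0), (0, 0, -1),
                             (0, 0, 0), (1, 0, 0), (0, 1, 0)))  # -> 1.0
```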

NVIDIA RTX Logic Increases TPC Area by 22% Compared to Non-RTX Turing

Public perception of NVIDIA's new RTX series of graphics cards was sometimes marred by an impression of misallocated resources on NVIDIA's part. The argument went that NVIDIA had greatly increased chip area by adding RTX functionality (in both its Tensor and RT cores) that could have been better used for increased performance gains in shader-based, non-raytracing workloads. While the merits of ray tracing as it stands (in terms of uptake from developers) are certainly worthy of discussion, it seems that NVIDIA didn't dedicate that much more die area to the RTX functionality - at least not to the extent public perception suggests.

After analyzing full, high-res images of NVIDIA's TU106 and TU116 chips, reddit user @Qesa did some analysis of the TPC structure of NVIDIA's Turing chips, and arrived at the conclusion that the difference between NVIDIA's RTX-capable TU106 and the RTX-stripped TU116 amounts to a mere 1.95 mm² of additional logic per TPC - a 22% area increase. Of this, 1.25 mm² is reserved for the Tensor logic (which accelerates both DLSS and de-noising of ray-traced workloads), while only 0.7 mm² is used for the RT cores.
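The per-TPC numbers are internally consistent, and the 22% figure also lets us infer the approximate size of a non-RTX TPC; that baseline figure is our own back-calculation, not something stated in the analysis.

```python
# The per-TPC numbers add up: 1.25 mm^2 (Tensor) + 0.7 mm^2 (RT) = 1.95 mm^2,
# and a 22% increase implies a non-RTX TPC of roughly 1.95 / 0.22 = ~8.9 mm^2
# (the baseline figure is inferred here, not stated in the analysis).
tensor_mm2, rt_mm2 = 1.25, 0.7
extra = tensor_mm2 + rt_mm2
baseline = extra / 0.22
print(f"extra logic per TPC: {extra} mm^2, implied non-RTX TPC: ~{baseline:.1f} mm^2")
```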

NVIDIA: Turing Adoption Rate 45% Higher Than Pascal, 90% of Users Buying Upwards in Graphics Product Tiers

NVIDIA, during its investor day, revealed some interesting statistics on its Turing-based graphics cards. The company essentially announced that revenue from Turing graphics card sales increased 45% over that generated when NVIDIA introduced its Pascal architecture - which does make sense, when you consider how NVIDIA positioned the same product tiers (**60, **70, **80) in higher pricing brackets than before. NVIDIA's own graphs showcase this better than anyone else could, with a clear indication of higher pricing for the same graphics tier. According to the company, 90% of users are buying pricier graphics cards this generation than they did in the previous one - which makes sense, since a user buying a 1060 at launch would only have to pay $249, while the new RTX 2060 goes for $349.

Another interesting tidbit from NVIDIA's presentation at its investor day is that Pascal accounts for around 50% of the installed NVIDIA graphics card base, while Turing, for now, accounts for only 2%. This means 48% of users sporting an NVIDIA graphics card are using Maxwell or earlier designs, which NVIDIA says presents an incredible opportunity for increased sales as these users make the jump to the new Turing offerings - and their extended RTX feature set. NVIDIA's stock valuation grew by 5.82% today, likely on the back of this info.

NVIDIA to Enable DXR Ray Tracing on GTX (10- and 16-series) GPUs in April Drivers Update

NVIDIA held its customary GTC keynote, ending mere minutes ago, and it was one of the longer keynotes, clocking in at nearly three hours. There were some fascinating demos and features shown off, especially in the realm of robotics and machine learning, as well as new hardware as it pertains to AI and cars with the all-new Jetson Nano. It would be fair to say, however, that the vast majority of the keynote was targeting developers and researchers, as is usually the case at GTC. However, something came up in between which caught us by surprise, and is no doubt a pleasant update to most of us here on TechPowerUp.

Following AMD's claims of software-based real-time ray tracing in games, and Crytek's Neon Noir real-time ray tracing demo for both AMD and NVIDIA GPUs, it makes sense in hindsight that NVIDIA would bring rudimentary DXR ray tracing support to older hardware that lacks RT cores. In particular, an upcoming driver update next month will enable DXR support for 10-series Pascal-microarchitecture graphics cards (GTX 1060 6 GB and higher), as well as the newly announced GTX 16-series Turing-microarchitecture GPUs (GTX 1660, GTX 1660 Ti). The announcement comes with a caveat letting people know not to expect the full RTX experience (think a lower number of ray traces, and possibly no secondary/tertiary effects), and this DXR mode will only be supported in the Unity and Unreal game engines for now. More to come, with details past the break.

NVIDIA Adds New Options to Its MX200 Mobile Graphics Solutions - MX250 and MX230

NVIDIA has added new SKUs to its low-power mobility graphics lineup. The MX230 and MX250 come in to replace the GeForce MX130 and MX150, but... there's really not that much of a performance improvement to justify the increase in the series' tier. Both solutions are based on Pascal, so there are no Turing performance uplifts at the execution level.

NVIDIA hasn't disclosed any CUDA core counts or other specifics on these chips; we only know that they are paired with GDDR5 memory and feature Boost functionality for increased performance in particular scenarios. The strange thing is that NVIDIA's own performance scores compare the MX130, MX150, and now MX230 and MX250 against Intel's UHD 620 IGP... and while the old MX150 was reported by NVIDIA as offering up to a 4x performance uplift over that Intel part, the new MX250 now claims an improvement of 3.5x. Whether this is because of a new testing methodology, or some other reason, only NVIDIA knows.

Version 4.6.0 Beta 10 of MSI Afterburner Introduces OC Scanner for Pascal

One of the standout features of NVIDIA's latest RTX 20-series graphics cards was the introduction of support for the OC Scanner feature - a program that automagically tests a range of frequencies on your NVIDIA graphics card and overclocks it to a deemed-"stable" sweet spot. This practically obviates the need for manual fine-tuning, though of course, the best results should always be found down that road - provided there's enough tinkering.

The latest version of MSI's Afterburner (v4.6.0 beta 10, available at the source link) now brings this functionality to Pascal-based graphics cards (besides some other features, such as voltage control for Turing; check the entire release notes after the break). Have fun.