News Posts matching #Turing


NVIDIA's New 30HX & 40HX Crypto Mining Cards Are Based on Turing Architecture

We have recently discovered that NVIDIA's newly announced 30HX and 40HX Crypto Mining Processors are based on the last-generation Turing architecture. This news will come as a pleasant surprise to gamers, as the release shouldn't affect the availability of Ampere RTX 30 Series GPUs. The decision to stick with Turing for these new devices is reportedly due to the architecture's more favorable power management, which is vital for profitable cryptocurrency mining operations. The NVIDIA CMP 40HX will feature a custom TU106 processor, while the 30HX will include a custom TU116. This information was discovered in the latest GeForce 461.72 WHQL drivers, which added support for the two devices.

NVIDIA to Re-introduce GeForce RTX 2060 and RTX 2060 SUPER GPUs

We are just a few weeks away from the launch of NVIDIA's latest GeForce RTX 3060 graphics cards based on the new Ampere architecture, and there is already news that could muddle the lineup's positioning. According to multiple sources over at Overclocking.com, NVIDIA is set to re-introduce its previous-generation GeForce RTX 2060 and RTX 2060 SUPER graphics cards to the market once again. The source claims that NVIDIA is already pushing stock over to its board partners and system integrators to build products around the last-generation GPUs. So far, it is not clear why the company is doing this, and we can only speculate.

The source also claims that the pricing structure of the old cards will be 300 EUR for the RTX 2060 and 400 EUR for the RTX 2060 SUPER in Europe. The latter price point competes directly with the supposed 399 EUR price tag of the upcoming GeForce RTX 3060 Ti, which is based on the newer Ampere architecture instead of last-generation Turing. A possible reason for such a move is a scarcity of the GA106/GA104 silicon needed for the new cards, and the company could be aiming to satisfy the market with left-over stock of previous-generation cards.

Intel Launches Phantom Canyon NUCs: Tiger Lake and NVIDIA GPU Join Forces

Intel has today quietly launched its newest generation of Next Unit of Computing (NUC) devices with some nice upgrades over the prior generation. Codenamed "Phantom Canyon", the latest NUC generation brings a major improvement for the "enthusiast" crowd, aimed mostly at gamers who would like to use a small form-factor machine while still getting decent framerates. This is where the Enthusiast NUC 11 comes in. With its 28 Watt Intel Core i7-1165G7 Tiger Lake CPU, which features four cores and eight threads clocked at a maximum of 4.70 GHz, the Enthusiast NUC 11 mini-PC packs the latest technologies inside it.

To pair with the CPU, Intel has added a discrete GPU alongside the integrated Xe graphics to deliver the needed frames. The dGPU in question is NVIDIA's GeForce RTX 2060 with 6 GB of GDDR6 VRAM, based on the last-generation "Turing" architecture. For I/O, Intel has equipped these machines with quite a lot of ports. There is an Intel AX201 Wi-Fi 6 plus Bluetooth 5 module, and a quad-mic array with beam-forming, far-field capabilities, and support for Alexa. There is a 2.5 Gb Ethernet port, along with two Thunderbolt 4 ports for connectivity and other purposes (the TB ports support fast charging). When it comes to display output, the Enthusiast NUC 11 has an HDMI 2.0b and a mini DisplayPort 1.4 port, and you can run four monitors in total when using the Thunderbolt ports. On the front side, there is also an SD card reader, and the PC has six USB 3.1 Gen2 ports in total. You can find out more about the Enthusiast NUC 11 mini-PCs here.

NVIDIA Could Give a SUPER Overhaul to its GeForce RTX 3070 and RTX 3080 Graphics Cards

According to kopite7kimi, a famous leaker of NVIDIA graphics card information, we have some new details about NVIDIA's plans to bring back its SUPER series of graphics cards. SUPER graphics cards first appeared in the GeForce RTX 2000-series "Turing" generation with the GeForce RTX 2080 SUPER and RTX 2070 SUPER designs, followed by the RTX 2060 SUPER. Thanks to the source, we have information that NVIDIA plans to give its newest "Ampere" 3000 series of GeForce RTX GPUs a SUPER overhaul. Specifically, the company allegedly plans to introduce GeForce RTX 3070 SUPER and RTX 3080 SUPER SKUs to its offerings.

While there is no concrete information about the possible specifications of these cards, we can speculate that, just like with the previous SUPER refresh, the new cards would receive an increase in CUDA core count and possibly a memory improvement. Last time, NVIDIA simply added more cores to the GPU and overclocked the GDDR6 memory, thus increasing memory bandwidth. We have to wait and see how the company plans to position these alleged cards, and whether we get them at all, so take this information with a grain of salt.
NVIDIA GeForce RTX 3080 SUPER Mock-Up
This is only a mock-up image and does not represent a real product.

Akasa Rolls Out Turing QLX Fanless Case for Intel NUC 9 Pro

Akasa today rolled out the Turing QLX, a fanless case for the Intel NUC 9 Pro "Quartz Canyon" desktop platform, which consists of an Intel NUC 9 Pro Compute Element and a PCIe backplane. This form-factor is essentially a modern re-imagining of the SBC+backplane desktops from the i486 era. The Turing QLX case is made almost entirely of anodized aluminium, and its body doubles as a heatsink for the 9th Gen Core or Xeon SoC. You're supposed to replace the cooling assembly of your NUC 9 Pro Compute Element with the cold-plate + heat-pipe assembly of the case. NUC 9 Pro series SBCs compatible with the Turing QLX include the BXNUC9i9QNB, BXNUC9i7QNB, BXNUC9i5QNB, BKNUC9VXQNB, and the BKNUC9V7QNB. The case doesn't include a power supply; you're supposed to use a compatible power brick with the SBC+backplane combo. The Turing QLX measures 212 mm x 150 mm x 220 mm (DxWxH). The company didn't reveal pricing.

NVIDIA's Next-Gen Big GPU AD102 Features 18,432 Shaders

The rumor mill has begun grinding with details about NVIDIA's next-gen graphics processors based on the "Lovelace" architecture, with Kopite7kimi (a reliable source of NVIDIA leaks) predicting a 71% increase in shader units for the "AD102" GPU that succeeds the "GA102," with 12 GPCs each holding 6 TPCs (12 SMs). 3DCenter.org extrapolates on this to predict a CUDA core count of 18,432 spread across 144 streaming multiprocessors, which at a theoretical 1.80 GHz core clock could put out an FP32 compute throughput of around 66 TFLOP/s.
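For the curious, here is a quick back-of-the-envelope check of how these figures fit together. This is only a sketch: the 128 FP32 cores per SM is our assumption carried over from "Ampere" GA10x, and the 1.80 GHz clock is the theoretical figure from the extrapolation.

```cpp
// Sanity check of the rumored AD102 figures (a sketch, not confirmed specs).
#include <cstdio>

int main() {
    const int gpcs = 12;            // graphics processing clusters (rumored)
    const int tpcs_per_gpc = 6;     // texture processing clusters per GPC (rumored)
    const int sms_per_tpc = 2;      // streaming multiprocessors per TPC
    const int fp32_per_sm = 128;    // assumption: same per-SM layout as "Ampere" GA10x
    const double clock_ghz = 1.80;  // theoretical core clock from the extrapolation

    int sms = gpcs * tpcs_per_gpc * sms_per_tpc;           // 144 SMs
    int cuda_cores = sms * fp32_per_sm;                    // 18,432 cores
    double tflops = cuda_cores * 2.0 * clock_ghz / 1000.0; // 2 FLOPs per FMA per clock

    printf("%d SMs, %d CUDA cores, %.1f TFLOP/s FP32\n", sms, cuda_cores, tflops);
    return 0;
}
```

Running the numbers yields 144 SMs, 18,432 CUDA cores, and roughly 66.4 TFLOP/s, matching the extrapolation above.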

The timing of this leak is interesting, as it comes only 3 months into the market cycle of "Ampere." NVIDIA appears unsettled by AMD RDNA2 being competitive with "Ampere" in the enthusiast segment, and is probably bringing its successor, "Lovelace" (named after Ada Lovelace), out sooner than expected. Its previous-generation "Turing" architecture saw market presence for close to two years. "Lovelace" could leverage the 5 nm silicon fabrication process and its significantly higher transistor density to step up performance.

NVIDIA Updates Cyberpunk 2077, Minecraft RTX, and 4 More Games with DLSS

NVIDIA's Deep Learning Super Sampling (DLSS) technology uses advanced methods to offload upscaling work in games to the Tensor Cores, dedicated AI processors present on all GeForce RTX cards, including the prior Turing generation and now Ampere. NVIDIA promises that enabling DLSS delivers a performance boost of up to 40%, or even more. Today, the company has announced that DLSS is getting support in Cyberpunk 2077, Minecraft RTX, Mount & Blade II: Bannerlord, CRSED: F.O.A.D., Scavengers, and Moonlight Blade. The addition of these titles brings NVIDIA's DLSS technology to a total of 32 titles, which is no small feat for a new technology.
Below, you can see the company-provided charts about DLSS performance in the new titles, except for Cyberpunk 2077.
Update: The Cyberpunk 2077 performance numbers have since leaked (thanks to kayjay010101 on the TechPowerUp Forums), and you can check them out as well.

NVIDIA GeForce RTX 3060 Ti Confirmed, Beats RTX 2080 SUPER

It looks like NVIDIA will launch its 4th GeForce RTX 30-series product ahead of Holiday 2020, the GeForce RTX 3060 Ti, with VideoCardz unearthing a leaked NVIDIA performance guidance slide, and pictures of custom-design RTX 3060 Ti cards surfacing on social media. The RTX 3060 Ti is reportedly based on the same 8 nm "GA104" silicon as the RTX 3070, but cut down further. It features 38 of the 48 streaming multiprocessors physically present on the "GA104," amounting to 4,864 "Ampere" CUDA cores, 152 tensor cores, and 38 "Ampere" RT cores. The memory configuration is unchanged from the RTX 3070, which means you get 8 GB of 14 Gbps GDDR6 memory across a 256-bit wide memory interface, with 448 GB/s of memory bandwidth.
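As a quick sanity check, the core and bandwidth figures fall out of the leaked specs with standard arithmetic. A minimal sketch, assuming the known 128-FP32-cores-per-SM layout of "Ampere" GA10x:

```cpp
// Quick check of the leaked RTX 3060 Ti specs (a sketch; 128 cores per SM
// is the known "Ampere" GA10x layout, the rest comes from the leak).
#include <cstdio>

int main() {
    const int sms = 38;                 // enabled streaming multiprocessors
    const int cores_per_sm = 128;       // FP32 CUDA cores per "Ampere" SM
    const double mem_rate_gbps = 14.0;  // GDDR6 per-pin data rate
    const int bus_width_bits = 256;     // memory interface width

    printf("CUDA cores: %d\n", sms * cores_per_sm);                          // 4,864
    printf("Bandwidth: %.0f GB/s\n", mem_rate_gbps * bus_width_bits / 8.0);  // 448
    return 0;
}
```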

According to a leaked NVIDIA performance guidance slide for the RTX 3060 Ti, the company claims the card consistently beats the GeForce RTX 2080 SUPER, a $700 high-end SKU from the previous "Turing" generation. The same slide also shows a roughly 40% performance gain over the previous-generation RTX 2060 SUPER, which is probably the logical predecessor of this card. In related news, PC Master Race (OfficialPCMR) posted on its Facebook page pictures of the box of an ASUS TUF Gaming GeForce RTX 3060 Ti OC graphics card, confirming the existence of this SKU. The picture of the card on the box reveals a design similar to other TUF Gaming RTX 30-series cards launched by ASUS so far. As for price, VideoCardz predicts a $399 MSRP for the SKU, which would nearly double the card's price-performance over the RTX 2080 SUPER, going by NVIDIA's performance numbers.

NVIDIA RTX IO Detailed: GPU-assisted Storage Stack Here to Stay Until CPU Core-counts Rise

NVIDIA at its GeForce "Ampere" launch event announced the RTX IO technology. Storage is the weakest link in a modern computer from a performance standpoint, and SSDs have had a transformational impact. With modern SSDs leveraging PCIe, consumer storage speeds are now bound to grow with each new PCIe generation doubling per-lane IO bandwidth. PCI-Express Gen 4 enables 64 Gbps of bandwidth per direction on M.2 NVMe SSDs; AMD has already implemented it across its Ryzen desktop platform, and Intel has it on its latest mobile platforms and is expected to bring it to its desktop platform with "Rocket Lake." While more storage bandwidth is always welcome, the storage processing stack (the task of shuttling ones and zeroes to and from the physical layer) is still handled by the CPU. As storage bandwidth rises, the IO load on the CPU rises proportionally, to a point where it can begin to impact performance. Microsoft sought to address this emerging challenge with the DirectStorage API, but NVIDIA wants to build on this.
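If you're wondering where the 64 Gbps figure comes from, here is a minimal sketch of the math. The 128b/130b line encoding factor is a property of PCIe Gen 3 and later; everything else follows from the Gen 4 x4 link of an M.2 NVMe SSD:

```cpp
// Where the 64 Gbps per-direction figure for an M.2 PCIe Gen 4 SSD comes from.
#include <cstdio>

int main() {
    const double gt_per_lane = 16.0;  // PCIe Gen 4 transfer rate, GT/s per lane
    const int lanes = 4;              // M.2 NVMe link width

    double raw_gbps = gt_per_lane * lanes;                 // 64 Gbps raw, per direction
    double usable_gbs = raw_gbps * (128.0 / 130.0) / 8.0;  // ~7.9 GB/s after 128b/130b encoding
    printf("Raw: %.0f Gbps, usable: ~%.1f GB/s per direction\n", raw_gbps, usable_gbs);
    return 0;
}
```

The usable ~7.9 GB/s per direction lines up with the ~7 GB/s sequential reads of the fastest client Gen 4 drives mentioned below.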

According to tests by NVIDIA, reading uncompressed data from an SSD at 7 GB/s (the typical maximum sequential read speed of client-segment PCIe Gen 4 M.2 NVMe SSDs) requires the full utilization of two CPU cores. The OS typically spreads this workload across all available CPU cores/threads on a modern multi-core CPU. Things change dramatically in gaming scenarios, where compressed data (such as game resources) is read with a high number of IO requests. Modern AAA games have hundreds of thousands of individual resources crammed into compressed resource-pack files.

Microsoft Rolls Out DirectX 12 Feature-level 12_2: Turing and RDNA2 Support it

Microsoft on Thursday rolled out the DirectX 12 feature-level 12_2 specification, which adds a set of new API-level features to DirectX 12 feature-level 12_1. It's important to understand that 12_2 is not DirectX 12 Ultimate, even though Microsoft explains in its developer blog that the four key features that make up the DirectX 12 Ultimate logo requirements were important enough to be bundled into a new feature-level. At the same time, Ultimate isn't feature-level 12_1, either. The DirectX 12 Ultimate logo requirement consists of DirectX Raytracing, Mesh Shaders, Sampler Feedback, and Variable Rate Shading. These four, combined with an assortment of new features, make up feature-level 12_2.

The updates introduced with feature-level 12_2 include DXR 1.1, Shader Model 6.5, Variable Rate Shading tier-2, Resource Binding tier-3, Tiled Resources tier-3, Conservative Rasterization tier-3, Root Signature tier-1.1, WriteBufferImmediateSupportFlags, and GPU Virtual Address Bits resource expansion, among several other Direct3D raster-rendering features. Feature-level 12_2 requires a WDDM 2.0 driver and a compatible GPU. Currently, NVIDIA's "Turing"-based GeForce RTX 20-series are the only GPUs capable of feature-level 12_2. Microsoft announced that AMD's upcoming RDNA2 architecture supports 12_2, too. NVIDIA's upcoming "Ampere" (the RTX 20-series successor) may support it, too.
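For developers wondering how to detect the new feature-level, here is a minimal sketch using the documented D3D12 capability query. It assumes a Windows SDK recent enough to define D3D_FEATURE_LEVEL_12_2 and an already-created device; error handling is trimmed:

```cpp
// Probe for DirectX 12 feature-level 12_2 support on a device.
#include <d3d12.h>

bool SupportsFeatureLevel12_2(ID3D12Device* device) {
    D3D_FEATURE_LEVEL requested[] = { D3D_FEATURE_LEVEL_12_2 };
    D3D12_FEATURE_DATA_FEATURE_LEVELS levels = {};
    levels.NumFeatureLevels = 1;
    levels.pFeatureLevelsRequested = requested;
    // The call fails (or reports a lower level) on GPUs or drivers
    // that don't support feature-level 12_2.
    HRESULT hr = device->CheckFeatureSupport(
        D3D12_FEATURE_FEATURE_LEVELS, &levels, sizeof(levels));
    return SUCCEEDED(hr) &&
           levels.MaxSupportedFeatureLevel >= D3D_FEATURE_LEVEL_12_2;
}
```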

KFA2 Intros GeForce GTX 1650 GDDR6 EX PLUS Graphics Card

GALAX's European brand KFA2 launched the GeForce GTX 1650 GDDR6 EX PLUS graphics card. The card looks identical to the one pictured below, but with the 6-pin PCIe power input removed, relying entirely on the PCIe slot for power. Based on the 12 nm "TU116" silicon, the GPU features 896 "Turing" CUDA cores, and talks to 4 GB of GDDR6 memory across a 128-bit wide memory interface. With a memory data rate of 12 Gbps, the chip has 192 GB/s of memory bandwidth on tap. The GPU max boost frequency is set at 1605 MHz, with a software-based 1635 MHz "one click OC" mode. The cooling solution consists of an aluminium mono-block heatsink that's ventilated by a pair of 80 mm fans. Display outputs include one each of DisplayPort 1.4, HDMI 2.0b, and dual-link DVI-D. Available now in the EU, the KFA2 GeForce GTX 1650 GDDR6 EX PLUS is priced at 129€ (including taxes).

Video Memory Sizes Set to Swell as NVIDIA Readies 20GB and 24GB GeForce Amperes

NVIDIA's GeForce RTX 20-series "Turing" graphics cards did not increase video memory sizes in comparison to the GeForce GTX 10-series "Pascal," although the memory itself is faster on account of GDDR6. This could change with the GeForce RTX 30-series "Ampere," as the company looks to increase memory sizes across the board in a bid to shore up ray-tracing performance. WCCFTech has learned that in addition to a variety of unusual new memory bus widths, such as 320-bit, NVIDIA could introduce certain higher variants of its RTX 30-series cards with video memory sizes as high as 20 GB and 24 GB.

Memory sizes of 20 GB or 24 GB aren't new for NVIDIA's professional-segment Quadro products, but they're certainly new for GeForce, with only the company's TITAN-series products breaking the 20 GB mark, at prices due north of $2,000. Much of NVIDIA's high-end appears to rest on segmentation of the PG132 common board design, coupled with the GA102 silicon, from which the company could carve out several SKUs spaced far apart in its product stack. NVIDIA's next-generation GeForce "Ampere" family is expected to debut in September 2020, with product launches in the higher-end running through late Q3 and Q4 of 2020.

EVGA Introduces GeForce GTX 1650 KO with GDDR6

Introducing the EVGA GeForce GTX 1650 KO with GDDR6. The EVGA GeForce GTX 1650 KO gives you the best gaming performance at a value you cannot resist. Now it's updated with GDDR6 memory, giving you that extra edge to up your game to the next level.

Featuring concurrent execution of floating-point and integer operations, adaptive shading technology, and a new unified memory architecture with twice the cache of its predecessor, Turing shaders enable awesome performance increases on today's games. Get 1.4X power efficiency over the previous generation for a faster, cooler, and quieter gaming experience that takes advantage of Turing's advanced graphics features.

NVIDIA "Ampere" Designed for both HPC and GeForce/Quadro

NVIDIA CEO Jensen Huang, in a pre-GTC press briefing, stressed that the upcoming "Ampere" graphics architecture will span both the company's compute-accelerator and commercial graphics product lines. The architecture makes its debut later today with the Tesla A100 HPC processor for breakthrough AI acceleration. It's unlikely that any GeForce products will be formally announced this month, with rumors pointing to a GeForce "Ampere" product launch at a gaming-focused event in September, close to the "Cyberpunk 2077" launch.

It was earlier believed that NVIDIA had forked its breadwinning IP into two lines: one focused on headless scalar compute, and the other on graphics products through the company's GeForce and Quadro product lines. To that effect, its "Volta" architecture focused on scalar compute (with the exception of the forgotten TITAN V), while the "Turing" architecture focused solely on GeForce and Quadro. It was then believed that "Ampere" would focus on compute, and that the so-called "Hopper" would be this generation's graphics-focused architecture. We now know that won't be the case. We've compiled a selection of GeForce Ampere rumors in this article.

TSMC Secures Orders from NVIDIA for 7nm and 5nm Chips

TSMC has reportedly secured orders from NVIDIA for chips based on its 7 nm and 5 nm silicon fabrication nodes, sources tell DigiTimes. If true, this could confirm rumors of NVIDIA splitting its next-generation GPU manufacturing between TSMC and Samsung. The Korean semiconductor giant is commencing 5 nm EUV mass production within Q2-2020, and NVIDIA is expected to be one of its customers. NVIDIA is expected to shed light on its next-gen graphics architecture at the GTC 2020 online event held later this month. With its "Turing" architecture approaching six quarters of market presence, it's likely that the decks are being cleared for a new architecture not just in the HPC/AI compute segment, but also in GeForce and Quadro consumer graphics cards. Splitting manufacturing between TSMC and Samsung would help NVIDIA spread the risk of yield issues arising from either foundry's EUV node, and give it greater bargaining power with both.

GALAX Extends Pink Edition Treatment to Even RTX 2080 Super

In a quick follow-up to our story from yesterday about the GALAX GeForce RTX 2070 Super EX Pink Edition graphics card, we are learning that the company is ready with a GeForce RTX 2080 Super graphics card based on the same board design. Bearing the model number "28ISL6MD71PE," the card is a cosmetic variant of the company's RTX 2080 Super EX graphics card, featuring a bubblegum-pink paintjob on the cooler shroud and back-plate. The PCB, although of the same design as the EX (1-click OC), is now fully white, like the HOF series. The RGB LED fans glow hot-pink out of the box. The Pink Edition card ships with a factory-overclocked speed of 1845 MHz GPU Boost (vs. 1815 MHz reference), and its software-based 1-click OC feature enables a 1860 MHz boost frequency. The memory is untouched, at 15.5 Gbps (GDDR6-effective).

The GeForce RTX 2080 Super maxes out the 12 nm "TU104" silicon, featuring 3,072 "Turing" CUDA cores, 192 TMUs, 64 ROPs, and a 256-bit wide GDDR6 memory interface holding 8 GB of memory. Much like its RTX 2070 Super sibling, this card pulls power from a combination of 8-pin and 6-pin PCIe power connectors, while its display outputs include three DisplayPorts and one HDMI. Expect an identical product to be launched under the KFA2 brand in certain markets. The company didn't reveal pricing.

NVIDIA Makes GDDR6 an Official GeForce GTX 1650 Memory Option

NVIDIA updated the product page of its GeForce GTX 1650 graphics card to make GDDR6 an official memory option besides the GDDR5 that the SKU launched with back in Q2-2019. NVIDIA now has two product specs for the SKU: the GTX 1650 (G5) and the GTX 1650 (G6). Both feature 896 "Turing" CUDA cores, 56 TMUs, and 32 ROPs, but differ entirely in memory configuration and clock speeds.

The GTX 1650 (G6) features 4 GB of GDDR6 memory clocked at 12 Gbps, across a 128-bit wide memory bus, compared to the original GTX 1650, which uses 4 GB of 8 Gbps GDDR5 across the same bus width. This results in a 50% memory bandwidth gain for the new SKU: 192 GB/s vs. 128 GB/s. On the other hand, the GPU clock speeds are lower than those of the original GTX 1650. The new G6 variant ticks at 1410 MHz base and 1590 MHz GPU Boost, compared to 1485/1665 MHz of the original GTX 1650. This was probably done to ensure that the new SKU fits within the 75 W typical board power envelope of the original, enabling card designs that lack additional power connectors. As for pricing, Newegg recently had an MSI GeForce GTX 1650 GDDR6 Gaming X listed for $159.
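The quoted bandwidth gain is easy to verify; below is a small sketch of the calculation, using the per-pin data rates and bus width from the spec sheet:

```cpp
// Compare the two GTX 1650 memory configurations quoted above.
#include <cstdio>

static double bandwidth_gbs(double gbps_per_pin, int bus_bits) {
    return gbps_per_pin * bus_bits / 8.0;  // bits per second to bytes per second
}

int main() {
    double g5 = bandwidth_gbs(8.0, 128);   // original GDDR5: 128 GB/s
    double g6 = bandwidth_gbs(12.0, 128);  // new GDDR6: 192 GB/s
    printf("G5: %.0f GB/s, G6: %.0f GB/s, gain: %.0f%%\n",
           g5, g6, (g6 / g5 - 1.0) * 100.0);  // 50% gain
    return 0;
}
```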

Next-Generation Laptop Hardware from Intel and NVIDIA Coming April 2nd

Intel and NVIDIA are preparing to refresh their hardware offerings for laptops, and they are planning to do it on April 2nd. According to the Chinese website ITHome, Intel is going to launch its 10th generation Comet Lake-H CPUs for mobile devices on April 2nd. The new models are going to bring improved frequencies and core counts, with top-end models reaching up to 8 cores and 16 threads. NVIDIA, on the other hand, will update its mobile offerings with the arrival of Turing SUPER mobile cards. So far, we have only had a choice of the regular Turing series; however, there will soon be SUPER variants of the existing cards.

Since these cards are also expected to arrive on April 2nd, laptop manufacturers will integrate the new products and showcase their solutions on that date. The availability of these devices, based on the new Intel Comet Lake-H CPUs and NVIDIA Turing SUPER GPUs, is expected to follow soon after, on April 15th. Additionally, laptop manufacturer Mechrevo will hold an online press conference where it will showcase its "Z3" gaming laptop based on the new technologies.
Mechrevo NVIDIA Turing SUPER Laptops

Microsoft DirectX 12 Ultimate: Why it Helps Gamers Pick Future Proof Graphics Cards

Microsoft on Thursday released the DirectX 12 Ultimate logo. This is not a new API with any new features, but rather a differentiator for graphics cards and game consoles that support four key modern features of DirectX 12. It helps consumers recognize the newer and upcoming GPUs, and tell them apart from older DirectX 12-capable GPUs released in the mid-2010s. For a GPU to be eligible for the DirectX 12 Ultimate logo, it must feature hardware acceleration for ray tracing with the DXR API, and must support Mesh Shaders, Variable Rate Shading (VRS), and Sampler Feedback (all four). The upcoming Xbox Series X console features this logo by default. Microsoft made it absolutely clear that the DirectX 12 Ultimate logo isn't meant as a compatibility barrier, and that these games will work on older hardware, too.

As it stands, the "Navi"-based Radeon RX 5000 series is "obsolete," just like some Turing-based cards from the GeForce GTX 16-series. At this time, the only shipping products that carry the logo are NVIDIA's GeForce RTX 20-series and the TITAN RTX, as they support all of the above features.
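For developers, the four logo features can be probed individually with the documented D3D12 capability queries. A minimal sketch follows; the exact tiers required for the logo are our assumptions (DXR tier 1.1 and VRS tier 2, per the feature-level 12_2 coverage above), and error handling is trimmed:

```cpp
// Check the four DirectX 12 Ultimate features on a device.
#include <d3d12.h>

bool SupportsDX12Ultimate(ID3D12Device* device) {
    D3D12_FEATURE_DATA_D3D12_OPTIONS5 opt5 = {};  // ray tracing tier
    D3D12_FEATURE_DATA_D3D12_OPTIONS6 opt6 = {};  // variable-rate shading tier
    D3D12_FEATURE_DATA_D3D12_OPTIONS7 opt7 = {};  // mesh shaders, sampler feedback
    if (FAILED(device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS5, &opt5, sizeof(opt5))) ||
        FAILED(device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS6, &opt6, sizeof(opt6))) ||
        FAILED(device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS7, &opt7, sizeof(opt7))))
        return false;
    // Assumed tier thresholds; Microsoft's logo program defines the exact requirements.
    return opt5.RaytracingTier          >= D3D12_RAYTRACING_TIER_1_1 &&
           opt6.VariableShadingRateTier >= D3D12_VARIABLE_SHADING_RATE_TIER_2 &&
           opt7.MeshShaderTier          >= D3D12_MESH_SHADER_TIER_1 &&
           opt7.SamplerFeedbackTier     >= D3D12_SAMPLER_FEEDBACK_TIER_0_9;
}
```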

NVIDIA GeForce RTX GPUs to Support the DirectX 12 Ultimate API

NVIDIA graphics cards, starting from the current-generation GeForce RTX "Turing" lineup, will support the upcoming DirectX 12 Ultimate API. Thanks to a slide obtained by our friends over at VideoCardz, we have some information about the upcoming iteration of Microsoft's DirectX 12 API. The new revision, called "DirectX 12 Ultimate," brings some enhancements to the standard DirectX 12 API, which the leaked slide shows in the form of a few additions.

The GeForce RTX lineup will support the updated version of the API, with features such as ray tracing, variable-rate shading, mesh shaders, and sampler feedback. While we do not know why Microsoft decided to call this the "Ultimate" version, the name is possibly meant to convey clearer information about which features are supported by the hardware. The leaked slide also mentions consoles, so the API is coming to that platform as well.

NVIDIA's Next-Generation Ampere GPUs to be 50% Faster than Turing at Half the Power

As we approach the release of NVIDIA's Ampere GPUs, which are rumored to launch in the second half of this year, more rumors and information about the upcoming graphics cards are appearing. Today, according to the latest report by the Taipei Times, NVIDIA's next generation of graphics cards based on the "Ampere" architecture is rumored to offer as much as a 50% performance uplift over the previous generation of Turing GPUs, while having half the power consumption.

Built using Samsung's 7 nm manufacturing node, Ampere is poised to be the new king among future GPUs. The rumored 50% performance increase is not impossible, given the improvements the new 7 nm manufacturing node brings. From the density gains alone, NVIDIA could extract at least 50% extra performance from the smaller node. However, performance should increase even further because Ampere brings a new architecture as well. Combining a new manufacturing node and a new microarchitecture, Ampere is claimed to cut power consumption in half, making for a very efficient GPU solution. We still don't know whether the performance will increase mostly in ray-tracing applications, or whether NVIDIA will put the focus on general graphics performance.
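Taken at face value, the two rumored figures compound into a striking efficiency claim; a trivial sketch of the arithmetic:

```cpp
// What the rumored figures imply for performance-per-watt.
#include <cstdio>

int main() {
    const double perf_ratio = 1.5;   // rumored: +50% performance vs. Turing
    const double power_ratio = 0.5;  // rumored: half the power consumption
    // Performance-per-watt scales as performance divided by power.
    printf("Implied perf/W gain: %.1fx\n", perf_ratio / power_ratio);  // 3.0x
    return 0;
}
```

In other words, the rumor implies a 3x jump in performance-per-watt, which is why it should be treated with healthy skepticism.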

UL Benchmarks Outs 3DMark Feature Test for Variable-Rate Shading Tier-2

UL Benchmarks today announced an update to 3DMark, expanding the Variable-Rate Shading (VRS) feature test with support for VRS Tier-2. A component of DirectX 12, VRS Tier-1 is supported by the NVIDIA "Turing" and Intel Gen11 graphics architectures (Ice Lake's iGPU); VRS Tier-2 is currently supported only by NVIDIA "Turing" GPUs. VRS Tier-2 adds capabilities such as lowering the shading rate for areas of the scene with low contrast against their surroundings (think areas under shadow), yielding performance gains. The 3DMark VRS test runs in two passes: pass-1 runs with VRS off to provide a point of reference, and pass-2 with VRS on, to measure the performance gained. The 3DMark update with the VRS Tier-2 test applies to the Advanced and Professional editions.
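To illustrate what Tier-2 enables at the API level, here is a minimal, hypothetical sketch of how a D3D12 renderer could combine a per-draw shading rate with a Tier-2 screen-space shading-rate image. Building that image (e.g. from a contrast analysis of the previous frame) is application-specific and omitted here:

```cpp
// Sketch: apply a VRS tier-2 shading-rate image on a command list.
#include <d3d12.h>

void ApplyVariableRateShading(ID3D12GraphicsCommandList5* cmdList,
                              ID3D12Resource* shadingRateImage) {
    // Combiners decide how the per-draw rate and the image's per-tile
    // rate merge; MAX lets the image coarsen shading where it wants.
    D3D12_SHADING_RATE_COMBINER combiners[2] = {
        D3D12_SHADING_RATE_COMBINER_PASSTHROUGH,  // per-primitive stage
        D3D12_SHADING_RATE_COMBINER_MAX,          // screen-space image stage
    };
    cmdList->RSSetShadingRate(D3D12_SHADING_RATE_1X1, combiners);
    cmdList->RSSetShadingRateImage(shadingRateImage);  // tier-2 only
}
```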

DOWNLOAD: 3DMark v2.11.6846

NVIDIA Develops Tile-based Multi-GPU Rendering Technique Called CFR

NVIDIA remains invested in the development of multi-GPU rendering, specifically SLI over NVLink, and has developed a new multi-GPU rendering technique that appears to be inspired by tile-based rendering. Implemented at the single-GPU level, tile-based rendering has been one of NVIDIA's many secret sauces that have improved performance since its "Maxwell" family of GPUs. 3DCenter.org discovered that NVIDIA is working on a multi-GPU variant of it, called CFR, which could be short for "checkerboard frame rendering" or "checkered frame rendering." The method is already quietly deployed in current NVIDIA drivers, although not documented for developers to implement.

In CFR, the frame is divided into tiny square tiles, like a checkerboard. Odd-numbered tiles are rendered by one GPU, and even-numbered ones by the other. Unlike AFR (alternate frame rendering), in which each GPU's dedicated memory holds a copy of all the resources needed to render the frame, methods like CFR and SFR (split frame rendering) optimize resource allocation. CFR also purportedly exhibits less micro-stutter than AFR. 3DCenter also detailed the features and requirements of CFR. To begin with, the method is only compatible with DirectX (including DirectX 12, 11, and 10), not OpenGL or Vulkan. For now it's "Turing"-exclusive, since NVLink is required (probably because its bandwidth is needed to virtualize the tile buffer). Tools like NVIDIA Profile Inspector allow you to force CFR on, provided the other hardware and API requirements are met. It still has many compatibility problems, and remains practically undocumented by NVIDIA.
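Since NVIDIA hasn't documented CFR, the following is only a toy model of the tiling scheme described above (the 64-pixel tile edge is an arbitrary assumption), showing how a checkerboard split divides a frame's tiles evenly between two GPUs:

```cpp
// Toy model of a checkerboard tile split across two GPUs (not NVIDIA's code).
#include <cstdio>

int main() {
    const int tile = 64;                    // assumed tile edge in pixels
    const int width = 3840, height = 2160;  // a 4K frame
    int counts[2] = {0, 0};

    for (int ty = 0; ty < height / tile; ++ty)
        for (int tx = 0; tx < width / tile; ++tx)
            counts[(tx + ty) % 2]++;  // alternate tiles between the two GPUs

    printf("GPU0: %d tiles, GPU1: %d tiles\n", counts[0], counts[1]);
    return 0;
}
```

Because adjacent tiles go to different GPUs, each GPU shades a fine-grained, evenly distributed half of the frame, which is why CFR is said to balance load better and stutter less than AFR.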

EK Unveils D-RGB Water Blocks for MSI Gaming X Trio Graphics Cards

EK, a premium liquid-cooling gear manufacturer based in Europe, is launching a new addressable D-RGB version of its EK-Vector Trio high-performance water blocks, specially designed for MSI Gaming X Trio GeForce RTX 2080 Ti graphics cards. The block features a D-RGB-lit aesthetic cover over the block terminal, designed to showcase the graphics card model via addressable LEDs visible from the side.

The EK-Quantum Vector Trio RTX D-RGB water blocks are specially designed for multiple MSI Trio GeForce RTX Turing-based graphics cards. These water blocks use the signature EK single-slot slim look and cover the entire PCB length. This sophisticated cooling solution will transform your powerful MSI graphics card into a minimalistic, elegant piece of hardware with accented D-RGB (addressable) LED lighting.

EK Introduces the EK-Quantum Vector Strix RTX D-RGB Series Water Blocks

EK Water Blocks, the Slovenia-based water-cooling gear manufacturer, is introducing its new generation of EK-Vector RTX Strix D-RGB water blocks designed for ROG Strix GeForce RTX series graphics cards based on the Turing graphics processor. The EK-Quantum Vector Strix RTX D-RGB series water blocks feature four integrated addressable LED sources: two located in the terminal cover, and one digital LED strip on each end of the water block.

EK-Quantum Vector Strix RTX D-RGB
The EK-Quantum Vector Strix RTX water blocks are specially designed for multiple ROG Strix GeForce RTX Turing-based graphics cards. The water block itself uses the signature EK single-slot slim look, and it covers the entire PCB length. This sophisticated cooling solution will transform your powerful ROG graphics card into a minimalistic, elegant piece of hardware with rich, addressable D-RGB LED lighting.