News Posts matching #Turing


GPU Hardware Encoders Benchmarked on AMD RDNA2 and NVIDIA Turing Architectures

Video encoding is one of the more demanding tasks modern hardware handles, and new data from tech site Chips and Cheese shows how good AMD's and NVIDIA's GPU hardware encoders really are. The tests cover AMD's Video Core Next (VCN) encoder found in RDNA2 GPUs and NVIDIA's NVENC (short for NVIDIA Encoder), benchmarked on a Radeon RX 6900 XT and a GeForce RTX 2060. The AMD card features VCN 3.0, the company's latest encoder, while the NVIDIA Turing card uses the 6th-generation NVENC design; a newer 7th-generation NVENC exists, but these were the cards the reviewer had on hand.

Encoding quality was scored with Netflix's Video Multimethod Assessment Fusion (VMAF) metric. In addition to the hardware encoders, the site also tested software encoding with libx264, a software library used for encoding video streams into the H.264/MPEG-4 AVC compression format; the libx264 runs were performed on an AMD Ryzen 9 3950X. Benchmark scenarios included streaming, recording, and transcoding in Overwatch and Elder Scrolls Online.
Below, you can find benchmarks of streaming, recording, transcoding, and transcoding speed.
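For readers who want to reproduce this kind of quality comparison at home, a rough approach is to score an encoded clip against its source with ffmpeg's libvmaf filter. The sketch below assumes an ffmpeg build that includes libvmaf; the file names are placeholders, not part of the original test setup.

```python
# Minimal sketch: score an encoded clip against its source with VMAF,
# assuming an ffmpeg build with the libvmaf filter enabled.
# "encoded.mp4" and "reference.mp4" are placeholder file names.
import subprocess

def vmaf_score(reference: str, distorted: str) -> str:
    """Run ffmpeg's libvmaf filter and return its log output."""
    cmd = [
        "ffmpeg", "-hide_banner",
        "-i", distorted,      # first input: the encoded/distorted clip
        "-i", reference,      # second input: the pristine reference
        "-lavfi", "libvmaf",  # compute VMAF over the two inputs
        "-f", "null", "-",    # discard the video output, keep the score in the log
    ]
    result = subprocess.run(cmd, capture_output=True, text=True, check=True)
    return result.stderr      # ffmpeg prints the VMAF score to stderr

if __name__ == "__main__":
    print(vmaf_score("reference.mp4", "encoded.mp4"))
```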

NVIDIA GeForce MX550 Matches Ryzen 9 5900HS Vega iGPU in PassMark

The recently announced entry-level NVIDIA GeForce MX550, a Turing-based discrete mobile graphics card for thin-and-light laptops, has appeared on the PassMark video card benchmark site. The MX550 scores 5014 points in the G3D Mark test, which places it almost exactly level with the integrated Vega 8 iGPU of the Ryzen 9 5900HS, which scores 4968 points in the same benchmark. There is only a single test result available for the MX550, so further benchmarks are needed to confirm its exact performance, but either way it represents a significant improvement over the MX450, which scores just 3724 points. The MX550 is a PCIe 4.0 card featuring the 12 nm TU117 Turing GPU with 1024 shading units, paired with 2 GB of GDDR6 memory.
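For context, here is the relative positioning implied by the quoted PassMark G3D scores; this is a quick back-of-the-envelope calculation, not additional benchmark data.

```python
# Relative positioning implied by the PassMark G3D scores quoted above.
mx550, vega8_5900hs, mx450 = 5014, 4968, 3724
print(f"MX550 vs. Ryzen 9 5900HS Vega 8: {mx550 / vega8_5900hs - 1:+.1%}")  # ~+0.9%
print(f"MX550 vs. MX450:                 {mx550 / mx450 - 1:+.1%}")         # ~+34.6%
```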

Akasa Launches Turing ABX and Newton A50 Fanless Cases for Mini-PCs

Akasa, a manufacturer of cooling solutions and computer cases, today updated two of its compact fanless cases designed to replace the actively-cooled enclosures of mini-PCs. For starters, the new Akasa Turing ABX is a next-generation compact fanless case for GIGABYTE's AMD Ryzen 4000U-series BRIX mini-PCs with Radeon graphics. The Turing ABX case is compatible with the following GIGABYTE Ryzen BRIX models: GB-BRR3-4300, GB-BRR5-4500, GB-BRR7-4700, and GB-BRR7-4800. It brings out all of the I/O ports that come standard with these BRIX models; however, the cooling system is replaced with Akasa's fanless design integrated into the case.

And last but not least, Akasa also launched the Newton A50 fanless case for ASUS PN51 and PN50 mini-PCs. With a 1.3-liter design, this very compact case can house AMD Ryzen 4000 and 5000 Series processors with Radeon Vega 7 graphics. As far as I/O goes, the case exposes everything the ASUS PN51 and PN50 PCs have to offer; however, the cooling system is again replaced by Akasa's fanless design. The cases are expected to become available within the next three weeks from Scan.co.uk, Amazon, Caseking, Jimms PC, and Performance-PCs. Pricing is unknown.

Intel Adds Experimental Mesh Shader Support in DG2 GPU Vulkan Linux Drivers

Mesh shaders are a relatively new take on the programmable geometry pipeline that promises to simplify how the graphics rendering pipeline is organized. NVIDIA introduced the concept with Turing back in 2018, and AMD joined with RDNA2. Today, thanks to findings from Phoronix, we have learned that Intel's DG2 GPUs will support mesh shaders under the Vulkan API. Compared to the traditional graphics rendering pipeline, the task/mesh pipeline is much simpler and offers higher scalability, reduced bandwidth requirements, and greater flexibility in how mesh topology and graphics work are laid out. In Vulkan, mesh shading is currently exposed through NVIDIA's contributed VK_NV_mesh_shader extension. The documentation below explains it in greater detail:
Vulkan API documentation: This extension provides a new mechanism allowing applications to generate collections of geometric primitives via programmable mesh shading. It is an alternative to the existing programmable primitive shading pipeline, which relied on generating input primitives by a fixed function assembler as well as fixed function vertex fetch.

There are new programmable shader types—the task and mesh shader—to generate these collections to be processed by fixed-function primitive assembly and rasterization logic. When task and mesh shaders are dispatched, they replace the core pre-rasterization stages, including vertex array attribute fetching, vertex shader processing, tessellation, and geometry shader processing.
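The extension itself is consumed from C/C++ and SPIR-V, but the core idea is easy to illustrate: instead of a monolithic index buffer fed through fixed-function vertex fetch, applications split geometry into small clusters ("meshlets") that task/mesh shader workgroups then process. The Python sketch below is purely conceptual, a naive CPU-side meshlet builder; the 64-vertex/126-triangle limits are commonly cited guidance, assumed here for illustration, and none of this is the Vulkan API itself.

```python
# Conceptual sketch only: mesh shaders consume small clusters of primitives
# ("meshlets") rather than a monolithic index buffer. This CPU-side helper
# shows one naive way to split a triangle list into meshlets. The limits of
# 64 vertices / 126 triangles per meshlet are assumed for illustration.
from dataclasses import dataclass, field

@dataclass
class Meshlet:
    vertices: list = field(default_factory=list)   # unique global vertex indices
    triangles: list = field(default_factory=list)  # triangles as meshlet-local triplets

def build_meshlets(indices, max_verts=64, max_tris=126):
    meshlets, current, remap = [], Meshlet(), {}
    for i in range(0, len(indices), 3):
        tri = indices[i:i + 3]
        new_verts = [v for v in tri if v not in remap]
        # Start a new meshlet if this triangle would overflow the limits.
        if (len(current.vertices) + len(new_verts) > max_verts
                or len(current.triangles) + 1 > max_tris):
            meshlets.append(current)
            current, remap = Meshlet(), {}
            new_verts = list(tri)
        for v in new_verts:
            remap[v] = len(current.vertices)
            current.vertices.append(v)
        current.triangles.append(tuple(remap[v] for v in tri))
    if current.triangles:
        meshlets.append(current)
    return meshlets

# Example: a quad made of two triangles fits into a single meshlet.
print(build_meshlets([0, 1, 2, 2, 1, 3]))
```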

ASUS Intros GeForce RTX 2060 EVO 12GB DUAL Series

ASUS joined the GeForce RTX 2060 12 GB party with a pair of graphics card models under its DUAL series. NVIDIA earlier this month launched the RTX 2060 12 GB, a new SKU based on the "Turing" graphics architecture. It offers more than just double the memory of the original RTX 2060: the new SKU features 2,176 CUDA cores, compared to 1,920 on the original. NVIDIA is looking to target the Radeon RX 6600 with it.

The ASUS RTX 2060 EVO DUAL and DUAL OC graphics cards feature the company's latest iteration of the DUAL cooling solution, with an aluminium fin-stack heatsink whose heat-pipes make direct contact with the "TU106" GPU at the base, while a pair of the company's latest-generation Axial-Tech fans ventilate it. The DUAL OC SKU runs the GPU at 1680 MHz boost, while the DUAL sticks to the NVIDIA-reference boost clock of 1650 MHz. A software-based OC mode unlocks higher clocks on both SKUs: 1710 MHz for the DUAL OC and 1680 MHz for the standard DUAL. Both cards rely on a single 8-pin PCIe power connector. Display outputs include one DisplayPort 1.4, two HDMI 2.0, and one dual-link DVI-D. The cards are expected to be priced at around €550.

MSI Intros GeForce RTX 2060 12GB Ventus Graphics Card

MSI introduced its first two graphics cards based on the necromanced GeForce RTX 2060 12 GB. The new RTX 2060 12 GB SKU formally launched on December 7, and features 2,176 CUDA cores (compared to 1,920 on the original RTX 2060). It uses 12 GB of GDDR6 memory across a 192-bit wide memory bus, which is what separates it from the RTX 2060 SUPER. MSI pairs it with the latest iteration of its Ventus 2X dual-fan cooling solution. The RTX 2060 12 GB Ventus sticks to NVIDIA-reference clock speeds of 1650 MHz boost, while the factory-overclocked Ventus OC runs the GPU at 1710 MHz boost. The memory is untouched on both cards at 14 Gbps. We have no prices at hand for the RTX 2060 12 GB, since NVIDIA didn't put out an SEP; we've seen these cards go for around $550.

Gainward Unveils GeForce RTX 2060 12GB GHOST Graphics Card

As a leading brand in the enthusiast graphics market, Gainward proudly presents a more powerful GeForce RTX 2060 with 12 GB: the Gainward GeForce RTX 2060 12 GB Ghost Series. The Ghost Series cards are reinvented graphics cards accelerated by NVIDIA's revolutionary Turing architecture. With doubled memory size and enhanced CUDA horsepower, the Gainward GeForce RTX 2060 12 GB Series fuses together real-time ray tracing, artificial intelligence, and programmable shading. You've never enjoyed games like this before.

The Gainward GeForce RTX 2060 12 GB Ghost Series comes with a dual low-noise fan design, providing high thermal performance at very low noise levels even under heavy gaming loads. With the Gainward GeForce RTX 2060 12 GB Ghost, gamers will enjoy a more powerful GPU engine and double the frame buffer of the original GeForce RTX 2060 Series. The compact but powerful design lets users experience a whole new class of performance, even in 4K gaming environments.

Palit Unveils GeForce RTX 2060 12GB Dual Series

Palit Microsystems Ltd, the biggest add-in-board partner of NVIDIA, today launched the GeForce RTX 2060 12 GB Dual Series graphics cards, accelerated by NVIDIA's revolutionary Turing architecture. The GeForce RTX 2060 12 GB is a premium version of its predecessor, the RTX 2060 6 GB. Upgraded with double the memory capacity and more CUDA cores, the new 12 GB variant equips you with ample horsepower to take on the latest graphically demanding games. You will also have complete access to game-changing technologies, including NVIDIA DLSS, NVIDIA Reflex, real-time ray tracing and more.

The Palit GeForce RTX 2060 12 GB comes in the classic dual-fan design, featuring two 90 mm smart fans and an optimized thermal solution to enhance airflow and heat-dissipation efficiency. The model offers cool temperatures, minimal noise, and maximum stability for gamers and creators to enjoy competitive performance.

ZOTAC Launches its GeForce RTX 2060 12GB Graphics Card

ZOTAC today joined several other NVIDIA GeForce board partners in launching its RTX 2060 12 GB graphics card. NVIDIA pulled the RTX 2060 out of retirement, gave it a few more CUDA cores, and doubled its memory to re-launch it, as a possible answer to AMD's recent Radeon RX 6600. The "Turing" graphics architecture can still be considered contemporary, as it offers full DirectX 12 Ultimate support. The chip features 2,176 CUDA cores, 34 RT cores, 272 Tensor cores, and a 192-bit wide GDDR6 memory interface, holding 12 GB of memory. ZOTAC's board design is a cost-effective fare, with a simple aluminium fin-stack heatsink ventilated by a pair of fans. The card draws power from a single 8-pin PCIe power connector. NVIDIA hasn't released an MSRP for the RTX 2060 12 GB, so this card could cost anything.

Inno3D Launches GeForce RTX 2060 12GB Twin X2 OC

INNO3D, a leading manufacturer of pioneering high-end multimedia components and innovations, today announces the upgraded INNO3D NVIDIA GeForce RTX 2060 TWIN X2 OC, now with 12 GB of memory. Improving performance and power efficiency over previous models of the RTX 2060 family, the INNO3D GeForce RTX 2060 12 GB lets gamers enjoy faster, smoother gameplay, with support for the latest DirectX 12 Ultimate features in both classic and current game titles. The new GeForce RTX 2060 12 GB brings the incredible performance and power of real-time ray tracing and AI to the latest games, and to every gamer.

INNO3D was founded in 1998 with the vision of developing pioneering computer hardware products on a global scale. Fast forward to the present day, and INNO3D is now well-established in the gaming community, known for its innovative and daring approach to design and technology. We are Brutal by Nature in everything we do and are 201% committed to you for the best gaming experience in the world.

NVIDIA GeForce RTX 2060 12GB Has CUDA Core Count Rivaling RTX 2060 SUPER

NVIDIA's surprise launch of the GeForce RTX 2060 12 GB graphics card could stir things up in the 1080p mainstream graphics segment. Apparently, there's more to this card than just a doubling in memory amount. Specifications put out by NVIDIA point to the card featuring 2,176 CUDA cores, compared to 1,920 on the original RTX 2060 (6 GB). 2,176 is the same number of CUDA cores that the RTX 2060 SUPER was endowed with. What sets the two cards apart is the memory configuration.

While the original RTX 2060 is based on the "TU106" silicon, the RTX 2060 12 GB could be based on the larger "TU104" in order to achieve its higher CUDA core count. The RTX 2060 SUPER features 8 GB of memory across a 256-bit wide memory bus, whereas the RTX 2060 12 GB uses a narrower 192-bit wide bus, leaving a quarter of the "TU104" bus width disabled. The memory data-rate on both SKUs is the same, at 14 Gbps. Segmentation between the two in GPU clock speeds appears negligible: the original RTX 2060 ticks at 1680 MHz boost, while the new RTX 2060 12 GB does 1650 MHz boost. The typical board power is increased to 185 W, compared to 160 W for the original RTX 2060 and 175 W for the RTX 2060 SUPER.
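The quoted memory bandwidth differences follow directly from bus width and data rate; a quick sanity check:

```python
# Back-of-the-envelope check of the memory bandwidth figures quoted above:
# bandwidth (GB/s) = bus width (bits) / 8 * data rate (Gbps per pin).
def gddr6_bandwidth(bus_width_bits: int, data_rate_gbps: float) -> float:
    return bus_width_bits / 8 * data_rate_gbps

print(gddr6_bandwidth(256, 14))  # RTX 2060 SUPER: 448.0 GB/s
print(gddr6_bandwidth(192, 14))  # RTX 2060 12 GB: 336.0 GB/s
```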

Update 15:32 UTC: NVIDIA has updated their website to remove the "Founders Edition" part from their specs page (3rd screenshot below). We confirmed with NVIDIA that there will be no RTX 2060 12 GB Founders Edition, only custom designs by their various board partners.

Gigabyte Registers Four NVIDIA GeForce RTX 2060 12 GB Graphics Cards With the EEC

The on-again, off-again relationship between NVIDIA and its Turing-based RTX 2060 graphics card seems to be heading towards a new tipping point. As previously reported, NVIDIA is expected to be preparing another release cycle for its RTX 2060 graphics card - this time paired with an as-puzzling-as-it-is-gargantuan (for its shader performance) 12 GB of GDDR6 memory. Gigabyte has given us yet another hint at the card's expected launch by the end of this year or early 2022 by registering four different card models with the EEC (Eurasian Economic Commission). Gigabyte's four registered cards carry the model numbers GV-N2060OC-12GD, GV-N2060D6-12GD, GV-N2060WF2OC-12GD, and GV-N2060WF2-12GD. Do remember, however, that not all registered graphics cards actually make it to market.

NVIDIA's revival of the RTX 2060 speaks volumes about current market conditions. While NVIDIA is producing as many 8 nm cards as it can with foundry partner Samsung, the current state of graphics card pricing leaves little doubt as to how well supply has coped with the logistics and materials constraints currently experienced by the semiconductor market. The 12 nm manufacturing process certainly has more available capacity than Samsung's 8 nm; at the same time, the RTX 2060's mining capabilities have been overtaken by graphics cards from the Ampere family, meaning that miners most likely will not look at these as viable options for mining, thus improving availability for consumers as well. If the card does keep close to its expected $300 price-point upon release, of course.

NVIDIA Reportedly Readies RTX 2060 12 GB SKUs for Early 2022 Launch

Videocardz, citing its own sources in the industry, claims that NVIDIA is readying a resurrection of sorts for the popular RTX 2060 graphics card. One of the hallmarks of the raytracing era, the Turing-based RTX 2060 routinely stands as the second most popular graphics card in Steam's hardware survey. Considering the still-ongoing semiconductor shortages and overwhelming demand stretching logistics and supply lines thin, NVIDIA would thus be looking at a slight specs bump (double the GDDR6 memory, to 12 GB) as a marketing point for the revised RTX 2060. This would also help the company deliver mainstream-performance graphics cards in high enough volume while it keeps reaping the benefits of the current Ampere line-up's higher ASP (Average Selling Price) across the board.

Videocardz' sources claim the revised RTX 2060 will make use of the PG116 board, recycled from the original GTX 1660 Ti design it was born unto. Apparently, NVIDIA has already told board partners that the final design and specifications should be ready by year's end, with a potential re-release in January 2022. While the usefulness of a 12 GB memory footprint on an RTX 2060 graphics card is debatable, NVIDIA has to have some marketing flair to add to such a release. Remember that the RTX 2060 was already given a second lease of life earlier this year as a stopgap solution towards getting more gaming-capable graphics cards on the market; NVIDIA had allegedly moved its RTX 2060 manufacturing allocation back to Ampere, but now it seems we'll witness a doubling-down on the RTX 2060. Now we just have to wait for secondary-market pricing to come down from its current $500 average... for a graphics card that launched in 2019 with a $349 MSRP.

Data is Beautiful: 10 Years of AMD and NVIDIA GPU Innovation Visualized

Using our GPU database, which is managed by our very own T4CFantasy, reddit user u/Vito_ponfe_Andariel created some basic charts mapping out data points from the expansive, industry-leading data set. In these charts, the user compares technological innovation across AMD's and NVIDIA's GPUs of the last ten years, plotting the performance evolution of the "best available GPU" per year in terms of performance, performance per dollar (using the database's launch-price metric), energy consumption, performance per transistor, and a whole lot of other data correlations.

It's interesting to note technological changes in these charts and how they relate to the overall values. For example, if you look at the performance-per-transistor graph, you'll notice that performance per transistor actually declined by roughly 20% in the transition from NVIDIA's Pascal (GTX 1080 Ti) to the Turing (RTX 20-series) architecture. At the same time, AMD's performance per transistor jumped around 40% from the Vega 64 to the RX 5700 XT. This happens, in part, due to the introduction of raytracing-specific hardware in NVIDIA's Turing, which adds transistors without aiding general shading performance, while AMD benefited from the new RDNA architecture as well as the process transition from 14 nm to 7 nm. We see this declining behavior again with AMD's introduction of the RX 6800 XT, which loses some 40% in the performance-per-transistor metric, likely due to the introduction of RT cores and other architectural changes. There are of course other variables to the equation, but it is nonetheless interesting to note. Look after the break for the rest of the charts.
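The metric itself is straightforward to reproduce from the database: divide a card's relative performance by its transistor count. The sketch below uses published transistor counts but placeholder performance values, purely to illustrate the calculation rather than to restate the chart's data.

```python
# Sketch of the "performance per transistor" metric used in the charts:
# relative performance divided by transistor count. The performance values
# below are hypothetical placeholders; the transistor counts are the
# published figures for each GPU.
def perf_per_transistor(relative_perf: float, transistors_billion: float) -> float:
    return relative_perf / transistors_billion

gpus = {
    # name: (hypothetical relative performance, transistors in billions)
    "GTX 1080 Ti (GP102)": (100.0, 11.8),
    "RTX 2080 Ti (TU102)": (130.0, 18.6),  # RT/Tensor cores add transistors
    "RX 5700 XT (Navi 10)": (90.0, 10.3),
}
for name, (perf, xtors) in gpus.items():
    print(f"{name}: {perf_per_transistor(perf, xtors):.2f} perf per billion transistors")
```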

Grab the Stunning "Attic" NVIDIA RTX + DLSS Unreal Engine Interactive Demo, Works Even on AMD

We are hosting the NVIDIA "Attic" RTX + DLSS interactive tech-demo in our Downloads section. Developed on Unreal Engine 4, the demo puts you in the bunny-slippers of a little girl playing around in her attic. This is no normal attic, it's her kingdom, complete with stuff to build a pillow fort, an old CRT TV playing retro NVIDIA commercials, a full-length mirror, really cool old stuff, and decorations. You can explore the place in a first-person perspective.

The interactive demo is brought to life with on-the-fly controls for RTX real-time raytracing and its various features, DLSS performance enhancement, a frame-rate counter, and controls for time-of-day, which alters lighting in the room. The demo shows off raytraced reflections, translucency, global-illumination, direct-illumination, and DLSS. You also get cool gadgets such as the "light cannon" or a reflective orb, that let you play around with dynamic lighting some more. To use this demo, you'll need a machine with an RTX 20-series "Turing" or RTX 30-series "Ampere" graphics card, and Windows 10. The demo also works on Radeon RX 6000 series GPUs. Grab it from the link below.

DOWNLOAD: NVIDIA Unreal Engine 4 RTX & DLSS Demo

First NVIDIA Palit CMP 30HX Mining GPU Available at a Tentative $723

NVIDIA's recently-announced CMP (Cryptocurrency Mining Processor) products seem to already be hitting the market - at least in some parts of the world. Microless, a retailer in Dubai, has listed the cryptocurrency-geared graphics card for $723 - a price that buys some 26 MH/s, as per NVIDIA, before any optimizations at the clock/voltage/BIOS level, which more serious miners will undoubtedly perform.

The CMP 30HX is a re-released TU116 chip (Turing, sans RT hardware), which powered the likes of the GeForce GTX 1660 Super in NVIDIA's previous generation of graphics cards. The card features a 1,530 MHz base clock and a 1,785 MHz boost clock, alongside 6 GB of GDDR6 memory clocked at 14 Gbps (a capacity that could soon stop being enough to hold the entire mining workload in memory). Leveraging a 192-bit memory interface, the graphics card supplies a memory bandwidth of up to 336 GB/s. It's also a "headless" GPU, meaning that it has no display outputs, which would only add cost to such a specifically-geared product. It's unclear how representative Microless' pricing is of NVIDIA's MSRP for the 30HX products, but considering current graphics card pricing worldwide, it seems in line with GeForce offerings capable of the same hash rates. Its ability to draw demand away from NVIDIA's mainstream GeForce offerings will therefore depend solely on the prices set by NVIDIA and charged by retailers.
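Taking the listing and NVIDIA's quoted hash rate at face value, the cost efficiency and the quoted memory bandwidth work out as follows; this is a rough check using only the figures in this post, not an endorsement of the pricing.

```python
# Rough cost-efficiency check using the figures quoted in this post.
price_usd = 723
hashrate_mhs = 26              # NVIDIA's quoted rate, before any tuning
bus_width_bits = 192
data_rate_gbps = 14

print(price_usd / hashrate_mhs)             # ~27.8 USD per MH/s
print(bus_width_bits / 8 * data_rate_gbps)  # 336.0 GB/s memory bandwidth
```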

NVIDIA's New 30HX & 40HX Crypto Mining Cards Are Based on Turing Architecture

We have recently discovered that NVIDIA's newly announced 30HX and 40HX Crypto Mining Processors are based on the last-generation Turing architecture. This news will come as a pleasant surprise to gamers as the release shouldn't affect the availability of Ampere RTX 30 Series GPUs. The decision to stick with Turing for these new devices is reportedly due to the more favorable power-management of the architecture which is vital for profitable cryptocurrency mining operations. The NVIDIA CMP 40HX will feature a custom TU106 processor while the 30HX will include a custom TU116. This information was discovered in the latest GeForce 461.72 WHQL drivers which added support for the two devices.

NVIDIA to Re-introduce GeForce RTX 2060 and RTX 2060 SUPER GPUs

We are just a few weeks away from the launch of NVIDIA's latest GeForce RTX 3060 graphics cards based on the new Ampere architecture, and there is already news that could shake up the lineup's positioning. According to multiple sources over at Overclocking.com, NVIDIA is set to re-introduce its previous-generation GeForce RTX 2060 and RTX 2060 SUPER graphics cards to the market. Once again. The source claims that NVIDIA is already pushing stock over to its board partners and system integrators to make use of the last-generation product. So far, it is not clear why the company is doing this, and we can only speculate.

The source also claims that the pricing for the old cards will be 300 EUR for the RTX 2060 and 400 EUR for the RTX 2060 SUPER in Europe. The latter price point directly competes with the supposed 399 EUR price tag of the upcoming GeForce RTX 3060 Ti model, which is based on the newer Ampere architecture instead of last-gen Turing. A possible reason for such a move is a scarcity of the GA106/GA104 silicon needed for the new cards, and the company could be aiming to satisfy the market with left-over stock of previous-generation cards.

Intel Launches Phantom Canyon NUCs: Tiger Lake and NVIDIA GPU Join Forces

Intel has today quietly launched its newest generation of Next Unit of Computing (NUC) devices with some nice upgrades over the prior generation. Codenamed "Phantom Canyon", the latest NUC generation brings a major improvement for the "enthusiast" crowd, aimed mostly at gamers who would like to use a small form-factor machine and still get decent framerates. This is where the Enthusiast NUC 11 comes in. With its 28-Watt Intel Core i7-1165G7 "Tiger Lake" CPU, which features four cores and eight threads clocked at a maximum of 4.70 GHz, the Enthusiast NUC 11 mini-PC packs the latest technologies inside it.

To pair with the CPU, Intel has added a discrete GPU alongside the integrated Xe graphics to deliver the needed frames. The dGPU in question is NVIDIA's GeForce RTX 2060 with 6 GB of GDDR6 VRAM, based on the last-generation "Turing" architecture. For I/O, Intel has equipped these machines with quite a lot of ports. There is an Intel AX201 Wi-Fi 6 plus Bluetooth 5 module, and a quad-mic array with beam-forming, far-field capabilities, and support for Alexa. There is a 2.5 Gb Ethernet port, along with two Thunderbolt 4 ports for external connectivity and other purposes (the TB ports support fast charging). When it comes to display output, the Enthusiast NUC 11 has an HDMI 2.0b and a mini DisplayPort 1.4 port, and you can run four monitors in total when using the Thunderbolt ports. On the front side, there is also an SD card reader, and the PC has six USB 3.1 Gen2 ports in total.

NVIDIA Could Give a SUPER Overhaul to its GeForce RTX 3070 and RTX 3080 Graphics Cards

According to kopite7kimi, a famous leaker of NVIDIA graphics card information, NVIDIA is planning to bring back its SUPER series of graphics cards. SUPER cards first appeared in the GeForce RTX 20-series "Turing" generation with the GeForce RTX 2080 SUPER and RTX 2070 SUPER, followed later by the RTX 2060 SUPER. According to the source, NVIDIA plans to give its newest "Ampere" GeForce RTX 30-series a SUPER overhaul as well. Specifically, the company allegedly plans to introduce GeForce RTX 3070 SUPER and RTX 3080 SUPER SKUs to its offerings.

While there is no concrete information about the possible specifications of these cards, we can speculate that, just like with the previous SUPER refresh, the new cards would receive an increase in CUDA core count and possibly a memory improvement. Last time, NVIDIA added more cores to each GPU and ran the GDDR6 memory at higher clocks, increasing memory bandwidth. We have to wait and see how the company plans to position these alleged cards, and whether we get them at all, so take this information with a grain of salt.
NVIDIA GeForce RTX 3080 SUPER Mock-Up
This is only a mock-up image and does not represent a real product.

Akasa Rolls Out Turing QLX Fanless Case for Intel NUC 9 Pro

Akasa today rolled out the Turing QLX, a fanless case for the Intel NUC 9 Pro "Quartz Canyon" desktop platform, which consists of an Intel NUC 9 Pro Compute Element and a PCIe backplane. This form-factor is essentially a modern re-imagining of the SBC+backplane desktops from the i486 era. The Turing QLX case is made almost entirely of anodized aluminium, and its body doubles up as a heatsink for the 9th Gen Core or Xeon SoC. You're supposed to replace the cooling assembly of your NUC 9 Pro Compute Element with the cold-plate + heat-pipe assembly of the case. NUC 9 Pro series SBCs compatible with the Turing QLX include the BXNUC9i9QNB, BXNUC9i7QNB, BXNUC9i5QNB, BKNUC9VXQNB, and the BKNUC9V7QNB. The case doesn't include a power supply; you're supposed to use a compatible power brick with the SBC+backplane combo. The Turing QLX measures 212 mm x 150 mm x 220 mm (DxWxH). The company didn't reveal pricing.

NVIDIA's Next-Gen Big GPU AD102 Features 18,432 Shaders

The rumor mill has begun grinding with details about NVIDIA's next-gen graphics processors based on the "Lovelace" architecture, with Kopite7kimi (a reliable source for NVIDIA leaks) predicting a 71% increase in shader units for the "AD102" GPU that succeeds the "GA102," with 12 GPCs holding 6 TPCs (12 SMs) each. 3DCenter.org extrapolates from this to predict a CUDA core count of 18,432 spread across 144 streaming multiprocessors, which at a theoretical 1.80 GHz core clock could put out an FP32 compute throughput of around 66 TFLOP/s.
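The math behind those headline numbers is straightforward if you assume Ampere-style SMs with 128 FP32 lanes carry over to "Lovelace" (an assumption, since the architecture is unannounced):

```python
# Reproducing the rumored AD102 numbers quoted above.
gpcs = 12
tpcs_per_gpc = 6
sms_per_tpc = 2
fp32_per_sm = 128          # Ampere-style SM; assumed to carry over to "Lovelace"
clock_ghz = 1.80           # theoretical clock used by 3DCenter.org

sms = gpcs * tpcs_per_gpc * sms_per_tpc       # 144 streaming multiprocessors
cuda_cores = sms * fp32_per_sm                # 18,432 shaders
tflops = cuda_cores * 2 * clock_ghz / 1000    # 2 FLOPs per FMA per clock
print(sms, cuda_cores, round(tflops, 1))      # 144 18432 66.4
```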

The timing of this leak is interesting, as we are only 3 months into the market cycle of "Ampere." NVIDIA appears unsettled by AMD RDNA2 being competitive with "Ampere" in the enthusiast segment, and is probably bringing out its successor, "Lovelace" (after Ada Lovelace), sooner than expected. The previous-generation "Turing" architecture saw market presence for close to two years. "Lovelace" could leverage the 5 nm silicon fabrication process and its significantly higher transistor density to step up performance.

NVIDIA Updates Cyberpunk 2077, Minecraft RTX, and 4 More Games with DLSS

NVIDIA's Deep Learning Super Sampling (DLSS) technology uses advanced methods to offload sampling in games to the Tensor Cores, dedicated AI processors present on all GeForce RTX cards, including the prior Turing generation and now Ampere. NVIDIA promises that enabling DLSS can deliver a performance boost of up to 40%, or even more. Today, the company announced that DLSS support is coming to Cyberpunk 2077, Minecraft RTX, Mount & Blade II: Bannerlord, CRSED: F.O.A.D., Scavengers, and Moonlight Blade. These additions bring NVIDIA's DLSS technology to a total of 32 titles, which is no small feat for a new technology.
Below, you can see company-provided charts showing DLSS performance in the new titles, except for Cyberpunk 2077.
Update: The Cyberpunk 2077 performance numbers were leaked (thanks to kayjay010101 on TechPowerUp Forums), and you can check them out as well.

NVIDIA GeForce RTX 3060 Ti Confirmed, Beats RTX 2080 SUPER

It looks like NVIDIA will launch its 4th GeForce RTX 30-series product ahead of Holiday 2020, the GeForce RTX 3060 Ti, with VideoCardz unearthing a leaked NVIDIA performance guidance slide, as well as pictures of custom-design RTX 3060 Ti cards surfacing on social media. The RTX 3060 Ti is reportedly based on the same 8 nm "GA104" silicon as the RTX 3070, but cut down further. It features 38 out of 48 streaming multiprocessors physically present on the "GA104," amounting to 4,864 "Ampere" CUDA cores, 152 tensor cores, and 38 "Ampere" RT cores. The memory configuration is unchanged from the RTX 3070, which means you get 8 GB of 14 Gbps GDDR6 memory across a 256-bit wide memory interface, with 448 GB/s of memory bandwidth.

According to a leaked NVIDIA performance guidance slide for the RTX 3060 Ti, the company claims the card consistently beats the GeForce RTX 2080 SUPER, a $700 high-end SKU from the previous "Turing" generation. The same slide also shows a roughly 40% performance gain over the previous-generation RTX 2060 SUPER, which is probably the logical predecessor of this card. In related news, PC Master Race (OfficialPCMR) posted pictures on its Facebook page of the box of an ASUS TUF Gaming GeForce RTX 3060 Ti OC graphics card, confirming the existence of this SKU. The picture of the card on the box reveals a design similar to other TUF Gaming RTX 30-series cards launched by ASUS so far. As for price, VideoCardz predicts a $399 MSRP for the SKU, which would nearly double price-performance over the RTX 2080 SUPER, going by NVIDIA's performance numbers.
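A quick check of the numbers in this post, assuming "Ampere" SMs with 128 FP32 lanes and taking the leaked pricing and performance claims at face value:

```python
# Quick check of the RTX 3060 Ti figures quoted above.
sms = 38
fp32_per_sm = 128                   # "Ampere" SMs carry 128 FP32 lanes
cuda_cores = sms * fp32_per_sm      # 4,864 CUDA cores
bandwidth = 256 / 8 * 14            # 448.0 GB/s (256-bit GDDR6 at 14 Gbps)

# Price-performance vs. the RTX 2080 SUPER, assuming roughly equal performance
# as NVIDIA's slide suggests: the $399 card delivers ~1.75x the perf per dollar.
perf_per_dollar_gain = 700 / 399
print(cuda_cores, bandwidth, round(perf_per_dollar_gain, 2))
```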

NVIDIA RTX IO Detailed: GPU-assisted Storage Stack Here to Stay Until CPU Core-counts Rise

NVIDIA at its GeForce "Ampere" launch event announced the RTX IO technology. Storage is the weakest link in a modern computer from a performance standpoint, and SSDs have had a transformational impact. With modern SSDs leveraging PCIe, consumer storage speeds are now bound to grow with each new PCIe generation doubling per-lane IO bandwidth. PCI-Express Gen 4 enables 64 Gbps of bandwidth per direction for M.2 NVMe SSDs; AMD has already implemented it across its Ryzen desktop platform, and Intel has it on its latest mobile platforms and is expected to bring it to its desktop platform with "Rocket Lake." While more storage bandwidth is always welcome, the storage processing stack (the task of processing ones and zeroes down to the physical layer) is still handled by the CPU. As storage bandwidth rises, the IO load on the CPU rises proportionally, to a point where it can begin to impact performance. Microsoft sought to address this emerging challenge with the DirectStorage API, but NVIDIA wants to build on this.

According to tests by NVIDIA, reading uncompressed data from an SSD at 7 GB/s (the typical maximum sequential read speed of client-segment PCIe Gen 4 M.2 NVMe SSDs) requires the full utilization of two CPU cores. The OS typically spreads this workload across all available CPU cores/threads on a modern multi-core CPU. Things change dramatically when compressed data (such as game resources) is being read in a gaming scenario with a high number of IO requests. Modern AAA games have hundreds of thousands of individual resources crammed into compressed resource-pack files.
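To get a feel for why decompression is the expensive part, the rough sketch below measures single-core zlib inflate throughput and estimates how many cores a 7 GB/s stream of similar data would keep busy. Real game assets, codecs, and NVIDIA's GPU-side path all differ, so treat the output only as an order-of-magnitude illustration of the argument.

```python
# Rough illustration of the CPU cost described above: time single-core zlib
# decompression, then estimate how many such cores a 7 GB/s stream would need.
import time
import zlib

# Synthetic, fairly compressible payload standing in for a game asset.
payload = (b"some moderately compressible game asset data " * 4096) * 8
blob = zlib.compress(payload, 6)

start = time.perf_counter()
for _ in range(50):
    zlib.decompress(blob)
elapsed = time.perf_counter() - start

decompressed_bytes = len(payload) * 50
throughput_gbs = decompressed_bytes / elapsed / 1e9
print(f"single-core decompression: {throughput_gbs:.2f} GB/s")
print(f"cores needed for a 7 GB/s stream: {7 / throughput_gbs:.1f}")
```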