News Posts matching #RTX


NVIDIA GeForce RTX 40 Series "AD104" Could Match RTX 3090 Ti Performance

NVIDIA's upcoming GeForce RTX 40 series Ada Lovelace graphics card lineup is slowly shaping up to be a significant performance uplift over the previous generation. Today, according to the well-known hardware leaker kopite7kimi, a mid-range AD104 SKU could match the performance of the last-generation flagship GeForce RTX 3090 Ti graphics card. The full AD104 SKU is set to feature 7680 FP32 CUDA cores, paired with 12 GB of 21 Gbps GDDR6X memory running on a 192-bit bus. Coming with a large TGP of 400 Watts, it should match the performance of the GA102-350-A1 SKU found in the GeForce RTX 3090 Ti.

As for naming, this complete AD104 SKU should end up as a GeForce RTX 4070 Ti model. Of course, we must wait and see what NVIDIA decides to do with the lineup and what the final models will look like.

GIGABYTE Launches New G5/G7 Gaming Laptop

GIGABYTE TECHNOLOGY Co. Ltd, a global leading PC brand, today launches the GIGABYTE Gaming G5/G7 gaming laptops equipped with 10 nm Intel 12th Gen processors. Built to meet a wide range of needs in multitasking, gaming, and entertainment, the laptops feature the 12th Gen Intel Core i5-12500H CPU, which offers 12 cores, 16 threads, and a maximum clock rate of 4.5 GHz. With the adoption of the Core i5-12500H, a processor powerful enough to effortlessly handle users' daily routines, purchasing a high-performance laptop for telecommuting and online classes has been made easier. Equipped with NVIDIA GeForce RTX 30 series graphics cards, the series also introduces MUX switch technology: the discrete GPU can output directly to the display with just one click, which can easily improve game performance and increase the frame rate in fierce game battles. Offering authentic gaming specifications and flexible hardware expandability, the series can satisfy users' needs for playing multiple roles in life.

First Leaks of Upcoming Graphics Card Model Names From Both AMD and NVIDIA Appear

Once again the Eurasian Economic Commission has been helpful by sharing the model names of multiple upcoming graphics cards from both AMD and NVIDIA, dug up by @harukaze5719. This time around it's AFOX, a fairly minor graphics card manufacturer based out of Hong Kong, that has submitted products for trademark registration. Whether these are the final product names is not clear, and there are some "irregularities" in the submission as well, but we'll get to that in a second. Looking at the AMD cards, all the model names are as expected, ranging from the Radeon RX 7500 to the RX 7900 XT in even steps of 100, with non-XT and XT models for each SKU.

On the NVIDIA side we have the RTX 4050 to the RTX 4090 Ti, again in even steps, but of 10 this time, and Ti models of all cards, which seems a bit odd on the lower end. However, AFOX has also registered trademarks for four RTX 30x0 Super cards, suggesting that NVIDIA might refresh its lineup of Ampere cards before it launches the 4000-series. This is obviously just an indication of things that may happen and should be taken with a fair helping of salt.

NVIDIA GeForce RTX 40 Series Could Be Delayed Due to Flood of Used RTX 30 Series GPUs

NVIDIA's next generation of graphics cards, the RTX 40 series codenamed Ada Lovelace, is expected to arrive sometime in October. However, the latest information from the YouTube channel "Moore's Law Is Dead" suggests that NVIDIA could postpone the arrival of the new GPU generation to December. Why, you might be wondering? The report claims that the current GPU market is flooded with used GeForce RTX 30 series GPUs. Thus, NVIDIA could postpone the availability of the latest GPUs to keep demand high and ensure that the market is searching for additional graphics cards.

Retailers are experiencing lower demand as the used GPU market is full of devices previously used for cryptocurrency mining, and the recent crypto crash has only added to the situation. What we could see is NVIDIA announcing Ada Lovelace GPUs in October, with availability arriving later in December. Of course, these are just the current industry rumors, and we are yet to see how the market and NVIDIA will respond.

NVIDIA RTX 40 Series Could Reach 800 Watts on Desktop, 175 Watt for Mobile/Laptop

Rumors of NVIDIA's upcoming Ada Lovelace graphics cards keep appearing, and with every new update the total power consumption seems to climb higher. Today we are getting information about different SKUs, including mobile and desktop variants. According to the well-known leaker kopite7kimi, we have information about the power limits of the upcoming GPUs. The new RTX 40 series will launch with a few initial SKUs: AD102, AD103, AD104, and AD106. Every SKU except the top AD102 will be available in a mobile variant as well. The first in line, AD102, is the most power-hungry SKU, with a maximum power limit rating of 800 Watts. This will require multiple power connectors and a very beefy cooling solution to keep it running.

Going down the stack, we have an AD103 SKU limited to 450 Watts on desktop and 175 Watts on mobile. The AD104 chip is limited to 400 Watts on desktop, while the mobile version is still 175 Watts. Additionally, the AD106 SKU is limited to 260 Watts on desktop and 140 Watts on mobile.

Alleged NVIDIA AD102 PCB Drawing Reveals NVLink is Here to Stay, Launch Timelines Revealed

An alleged technical drawing of the PCB of the reference-design NVIDIA "Ada" AD102 board was leaked to the web, courtesy of Igor's Lab. It reveals a large GPU pad that's roughly the size of the GA102 (the size of the fiberglass substrate or package only, not the die), surrounded by twelve memory chips, which are likely GDDR6X. There are also provisions for at least 24 power phases, although not all of them are populated with sets of chokes and DrMOS in the final products (a few of them end up vacant).

We also spy the 16-pin ATX 3.0 power connector that's capable of delivering up to 600 W of power, and four display outputs, including a USB-C in lieu of a larger connector (such as DP or HDMI). A curious thing to note is that the card continues to have an NVLink connector. With explicit multi-GPU effectively dead, the NVLink on the reference design will likely be rudimentary in the GeForce RTX product (unless used for implicit multi-GPU). The connector may play a bigger role in the professional-visualization graphics cards (RTX AD-series) based on this silicon.

NVIDIA GeForce RTX 4090 Twice as Fast as RTX 3090, Features 16128 CUDA Cores and 450W TDP

NVIDIA's next-generation GeForce RTX 40 series of graphics cards, codenamed Ada Lovelace, is shaping up to be a powerful lineup. Allegedly, we can expect a mid-July launch of NVIDIA's newest gaming offerings, where customers can expect some impressive performance. According to the reliable hardware leaker kopite7kimi, the NVIDIA GeForce RTX 4090 graphics card will feature the AD102-300 GPU SKU. This model is equipped with 126 Streaming Multiprocessors (SMs), which brings the total number of FP32 CUDA cores to 16128. Compared to the full AD102 GPU with 144 SMs, this leads us to think that an RTX 4090 Ti model will follow up later as well.
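The quoted core count follows directly from the SM count, assuming Ada keeps the 128 FP32 CUDA cores per SM of Ampere's GA10x chips (an assumption implied by the leaked figures, not confirmed by NVIDIA). A quick sanity check:

```python
# Assumption: 128 FP32 CUDA cores per SM, as implied by the leaked numbers
CORES_PER_SM = 128

rtx_4090_sms = 126    # AD102-300, per the leak
full_ad102_sms = 144  # full AD102 die

print(rtx_4090_sms * CORES_PER_SM)    # 16128, matching the leaked RTX 4090 figure
print(full_ad102_sms * CORES_PER_SM)  # 18432, the headroom for a possible RTX 4090 Ti
```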

Paired with 24 GB of 21 Gbps GDDR6X memory, the RTX 4090 graphics card has a TDP of 450 Watts. While this number may suggest a very power-hungry design, bear in mind that the targeted performance improvement over the previous RTX 3090 model is a two-fold uplift. Paired with TSMC's new N4 node and a new architecture design, performance scaling should follow, at the cost of higher TDPs. These claims are yet to be validated by real-world benchmarks from independent tech media, so please take all of this information with a grain of salt and wait for TechPowerUp reviews once the card arrives.

Gigabyte Debuts New Flagship AORUS 17X Gaming Laptop with Extreme Performance

GIGABYTE today launched the AORUS 17X gaming laptop, the brand-new flagship model that combines breakthrough performance with enhanced portability. The AORUS 17X is powered by the latest Intel 12th generation Core i9 HX processor that offers up to 16 cores. Paired with an NVIDIA GeForce RTX 30 series graphics card, the powerful combo in the AORUS 17X can deliver a significant performance gain of up to 32% over its predecessor, allowing enthusiasts to enjoy desktop-class gaming performance on the go. High-performance cores demand superior cooling. The latest generation of AORUS 17X adopts the exclusive WINDFORCE Infinity cooling system. It features a pair of highly efficient fans, 6 heat pipes, and multiple cooling fins that can efficiently remove heat produced by the CPU and the GPU, keeping the laptop cool while maintaining maximum performance throughout the workload.

Different from conventional thin-bezel displays that are generally limited to the sides, the new AORUS 17X, as well as the AORUS 17, are the world's first four-sided super-thin bezel gaming laptops with bent-type technology. Thanks to this innovation, the AORUS 17X screen's bottom border is trimmed by 30%, bringing the overall screen-to-body ratio to 90%. The bent-type panel also makes it possible for the AORUS 17X to pack a big 17" screen in a nearly 15"-class chassis, greatly increasing its portability. The AORUS 17X is also engineered for fast-paced gaming, delivering an ultra-high refresh rate of up to 360 Hz, which is six times faster than conventional laptops. The beastly AORUS 17X gaming laptop reshapes the game by bringing unmatched power and gaming visuals to a more compact form factor, making it a powerhouse laptop for heavy-duty gaming on the go. For more information on the AORUS 17X and other AORUS series gaming laptops, check out the AORUS official website.

NVIDIA GeForce RTX 3090 Ti Gets Custom 890 Watt XOC BIOS

Extreme overclocking is an enthusiast discipline where overclockers try to push their hardware to extreme limits. Combining powerful cooling solutions like liquid nitrogen (LN2), which reaches sub-zero temperatures, with modified hardware, the silicon can draw tremendous power. Today, we are witnessing a custom XOC (eXtreme OverClocking) BIOS for the NVIDIA GeForce RTX 3090 Ti graphics card that can push the GA102 SKU to an impressive 890 Watts of power, representing almost a two-fold increase over the stock TDP. Enthusiasts pursuing high frequencies with their RTX 3090 Ti are the likely users of this XOC BIOS. However, we will most likely see GALAX HOF or EVGA KINGPIN cards with dual 16-pin power connectors utilize it.

As shown below, MEGAsizeGPU, the creator of this BIOS, managed to push his ASUS GeForce RTX 3090 Ti TUF with the XOC BIOS to 615 Watts, so KINGPIN and HOF designs will be needed to draw the full power. The XOC BIOS was uploaded to our VGA BIOS database; however, caution is advised, as it can break your graphics card.

GRAID Supercharges RAID up to 19 Million IOPS, 110 GBps

GRAID Technology has announced the SupremeRAID SR-1010, which it claims is the "world's fastest NVMe and NVMeoF RAID card for PCIe Gen 4." A PCIe 4.0 evolution of the SupremeRAID SR-1000, the new SupremeRAID SR-1010 upgrades the on-board GPU RAID accelerator from the Nvidia T1000 (Turing) to an Nvidia RTX A2000 (Ampere) GPU. The improved hardware leverages the PCIe 4.0 protocol to achieve RAID speeds in excess of anything you've seen before, with sequential reads rated at 110 GBps and sequential write performance of 22 GBps. Read and write IOPS are set at 19M and 1.5M, respectively.

The change in GPU and PCIe interface means the upgraded SupremeRAID SR-1010 offers a 19% read performance boost and an 83% write performance increase compared to the model it replaces. The GRAID SupremeRAID SR-1010 offers support for RAID 0, 1, 5, 6, and 10 arrays with support for a maximum of four groups, and is capable of managing up to 32 NVMe SSDs under Linux and Windows Server 2019 and 2022 (though performance under Windows takes a big haircut). Availability is pegged for May 1st, but GRAID didn't provide a sticker price for its GPU-powered RAID solution.
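The quoted uplifts also let us back-derive the implied throughput of the outgoing SR-1000. A small sketch using only the figures in this post (not official SR-1000 specifications):

```python
# Back-derive the SR-1000's implied sequential throughput from the quoted uplifts.
# These are inferences from the numbers in this post, not official SR-1000 specs.
sr1010_read_gbps, sr1010_write_gbps = 110, 22
read_uplift, write_uplift = 0.19, 0.83

sr1000_read = sr1010_read_gbps / (1 + read_uplift)
sr1000_write = sr1010_write_gbps / (1 + write_uplift)
print(f"implied SR-1000: ~{sr1000_read:.0f} GBps read, ~{sr1000_write:.0f} GBps write")
```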

NVIDIA Allegedly Testing a 900 Watt TGP Ada Lovelace AD102 GPU

With the release of Hopper, NVIDIA's cycle of new architecture releases is not yet over. Later this year, we expect to see the next-generation gaming architecture codenamed Ada Lovelace. According to @kopite7kimi on Twitter, a well-known leaker of NVIDIA products, the green team is reportedly testing a potent variant of the upcoming AD102 SKU. As the leak indicates, we could see an Ada Lovelace AD102 SKU with a Total Graphics Power (TGP) of 900 Watts. While we don't know where this SKU is supposed to sit in the Ada Lovelace family, it could be the most powerful, Titan-like design making a comeback. Alternatively, this could be a GeForce RTX 4090 Ti SKU. It carries 48 GB of GDDR6X memory running at 24 Gbps alongside the monstrous TGP. Feeding the card are two 16-pin connectors.

Another confirmation from the leaker is that the upcoming RTX 4080 GPU uses the AD103 SKU variant, while the RTX 4090 uses AD102. For further information, we have to wait a few more months and see what NVIDIA decides to launch in the upcoming generation of gaming-oriented graphics cards.

GPU Hardware Encoders Benchmarked on AMD RDNA2 and NVIDIA Turing Architectures

Encoding video is one of the significant tasks that modern hardware performs, and today we have some data showing how good the GPU hardware encoders from AMD and NVIDIA are. Thanks to the tech media outlet Chips and Cheese, we have information about AMD's Video Core Next (VCN) encoder found in RDNA2 GPUs and NVIDIA's NVENC (short for NVIDIA Encoder). The site benchmarked AMD's Radeon RX 6900 XT and NVIDIA's GeForce RTX 2060 GPUs. The AMD card features VCN 3.0, while the NVIDIA Turing card features the 6th-generation NVENC design. Team red is thus represented by its latest work, while a newer 7th generation of NVENC exists; C&C tested these cards because they are what the reviewer had on hand.

The metric used to judge video encoding quality was Netflix's Video Multimethod Assessment Fusion (VMAF), developed by the media giant. In addition to hardware encoding, the site also tested software encoding done by libx264, a software library used for encoding video streams into the H.264/MPEG-4 AVC compression format. The libx264 software encoding ran on an AMD Ryzen 9 3950X. Benchmark runs included streaming, recording, and transcoding in Overwatch and Elder Scrolls Online.
Below, you can find benchmarks of streaming, recording, transcoding, and transcoding speed.

NVIDIA GeForce RTX 4090/4080 to Feature up to 24 GB of GDDR6X Memory and 600 Watt Board Power

After the launch of the data center-oriented Hopper architecture, NVIDIA is slowly preparing to transition the consumer segment to new, gaming-focused designs codenamed Ada Lovelace. Thanks to the authorities over at Igor's Lab, we have some additional information about the upcoming lineup, including a sneak peek at a few features of the top-end GeForce RTX 4080 and RTX 4090 GPU SKUs. For starters, the source claims that NVIDIA is using the upcoming GeForce RTX 3090 Ti GPU as a test run for the next-generation Ada Lovelace AD102 GPU: according to Igor, NVIDIA is testing the PCIe Gen 5 power connector and wants to see how it fares with the biggest GA102 SKU, the GeForce RTX 3090 Ti.

Additionally, we find that the AD102 GPU is supposed to be pin-compatible with GA102, meaning that the number of pins on GA102 is the same as what we are going to see on AD102. There are 12 places for memory modules on the AD102 reference design board, resulting in up to 24 GB of GDDR6X memory. As many as 24 voltage converters surround the GPU; NVIDIA will likely implement the uP9512 voltage controller, which can drive eight phases, resulting in three voltage converters per phase and ensuring proper power delivery. The total board power (TBP) is likely rated at up to 600 Watts, meaning that the GPU, memory, and power delivery combined can draw up to 600 Watts. Igor notes that board partners will bundle 12+4-pin (12VHPWR) to four 8-pin (legacy PCIe) adapters to ensure PSU compatibility.

NVIDIA Unveils Grace CPU Superchip with 144 Cores and 1 TB/s Bandwidth

NVIDIA has today announced its Grace CPU Superchip, a monstrous design focused on heavy HPC and AI processing workloads. Previously, team green had teased an in-house developed CPU that is supposed to go into servers and create an entirely new segment for the company. Today, we got a more detailed look at the plan with the Grace CPU Superchip, a package of two Grace processors, each containing 72 cores. These cores are based on the Arm v9 instruction set architecture, and the two CPUs combine for 144 cores in the Superchip module. The cores are surrounded by a yet-unknown amount of LPDDR5X ECC memory, running at 1 TB/s total bandwidth.

NVIDIA Grace CPU Superchip uses the NVLink-C2C cache-coherent interconnect, which delivers 900 GB/s of bandwidth, seven times more than the PCIe 5.0 protocol. The company targets a two-fold performance-per-Watt improvement over today's CPUs and wants to bring efficiency and performance together. We have some preliminary benchmark information provided by NVIDIA: in the SPECrate2017_int_base integer benchmark, the Grace CPU Superchip scores over 740 points, a number that is, for now, only simulated. This means that the performance target is not finalized yet, teasing a possibly higher number in the future. The company expects to ship the Grace CPU Superchip in the first half of 2023, with an already supported ecosystem of software, including the NVIDIA RTX, HPC, NVIDIA AI, and NVIDIA Omniverse software stacks and platforms.
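The "seven times more than PCIe 5.0" claim lines up with a PCIe 5.0 x16 link, which moves roughly 64 GB/s in each direction (~128 GB/s bidirectional). A quick check, where the PCIe baseline is our assumption rather than NVIDIA's stated reference:

```python
nvlink_c2c_gbps = 900    # NVLink-C2C bandwidth, as quoted by NVIDIA
pcie5_x16_gbps = 64 * 2  # PCIe 5.0 x16: ~64 GB/s per direction (assumed baseline)

ratio = nvlink_c2c_gbps / pcie5_x16_gbps
print(f"~{ratio:.1f}x")  # ~7.0x, matching the seven-fold claim
```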

NVIDIA Provides a Statement on MIA RTX 3090 Ti GPUs

NVIDIA's RTX 3090 Ti graphics card could very well be a Spartan from 343 Industries' Halo, in that it too is missing in action. Originally announced at CES 2022 for a January 27th release, the new halo product for the RTX 30-series family even had some of its specifications announced in a livestream. However, the due date has come and gone by more than half a month, and NVIDIA still hasn't said anything about the why and the how of it - or when gamers hoping to snag the best NVIDIA graphics card of this generation should ready their F5 keys (and bank accounts). Until now - in a statement to The Verge, NVIDIA spokesperson Jen Andersson said that "We don't currently have more info to share on the RTX 3090 Ti, but we'll be in touch when we do". Disappointed? So are we.

While the reasons surrounding the RTX 3090 Ti's delayed launch still aren't clear - and with NVIDIA's response, we're left wondering if they ever will be - there were some warning signs that not all the grass was green on the RTX 3090 Ti's launch. The consensus seems to be that NVIDIA found some last-minute production issues with the RTX 3090 Ti, which prompted an emergency delay on the cards' launch. The purported problems range from issues with the card's PCB, BIOS, and even GDDR6X 21 Gbps memory modules - but it's unclear which of these (or perhaps which combination) truly prompted the very real delay on the product launch.

Adobe Premiere Pro 22.2 Update Brings HEVC 10-Bit Encoding with Major Performance Increase for NVIDIA and Intel Graphics Cards

Adobe's Premiere Pro, one of the most common video editing tools in the industry, has received its February update today with version 22.2. The new version brings a wide array of features like Adobe Remix, an advanced audio retiming tool. Alongside that, the latest update accelerates offline text-to-speech capabilities by as much as three times. However, that is not the most significant feature: Adobe has also enabled 10-bit 4:2:0 HDR HEVC hardware encoding on Windows with Intel and NVIDIA graphics. This feature allows the software to use the advanced hardware encoders built into NVIDIA Quadro RTX and Intel Iris Xe graphics.

The company managed to run some preliminary tests, and you can see the charts below. Export times improve significantly with the latest 22.2 software version and HEVC 10-bit hardware encoding enabled. For Intel GPUs, no special drivers need to be installed; for NVIDIA GPUs, however, Adobe recommends the official Studio drivers in combination with Quadro RTX GPUs.

Elevate Your Vision with Gigabyte's AERO Laptop

GIGABYTE, the world's leading computer brand, has released its new generation AERO creator series laptops featuring Intel's latest 12th generation processors and RTX 30 series graphics cards. The dual-chip approach improves processing performance by 28%, vastly improving time efficiency for creators. The newly released AERO 16 utilizes a 16:10 golden-ratio 4K OLED screen and a four-sided super-narrow bezel design, revolutionizing the field of vision for creators. In addition to boasting the film industry standard of 100% DCI-P3 wide color gamut, each laptop has undergone correction and certification by the color authorities X-Rite and Pantone before leaving the factory, ensuring the most accurate color display.

The AERO 16, designed specifically for creators, also captures their essence with an aesthetic chassis carefully crafted from CNC-machined aluminium alloy, every detail skillfully refined by GIGABYTE designers. Coming along with the AERO 16 and AERO 17, the AERO HUB is a must-have mobile workstation companion for content creators.

Gigabyte's AORUS Gaming Laptops Evolve, Reshaping the Game

GIGABYTE, the world's leading computer brand, has launched its new generation of AORUS professional gaming laptops featuring Intel's 12th generation processors and NVIDIA's RTX 30 Ti series graphics cards. The upgraded dual-chip approach has resulted in a great leap in performance and a smoother gaming experience, with an 11% increase in gaming performance. In addition to the upgraded performance, AORUS has pushed the limits of notebook dimensions in order to improve visual enjoyment for gamers.

The AORUS 17, GIGABYTE's flagship model laptop that debuted at CES 2022, changed the rules of the game by introducing a 17-inch screen panel inside a 15-inch-class laptop chassis. By adopting an extremely narrow four-sided frame design, AORUS increased the screen-to-body ratio to 90%, so you see more while carrying less. Without compromising any performance, the screen still boasts a 360 Hz refresh rate, allowing players to see more frames and win more games.

MAINGEAR Launches New NVIDIA GeForce RTX 3050 Desktops, Offering Next-Gen Gaming Features

MAINGEAR—an award-winning PC system integrator of custom gaming desktops, notebooks, and workstations—today announced that new NVIDIA GeForce RTX 3050 graphics cards are now available to configure within MAINGEAR's product line of award-winning custom gaming desktop PCs and workstations. Featuring support for real-time ray tracing effects and AI technologies, MAINGEAR PCs equipped with the NVIDIA GeForce RTX 3050 offer gamers next-generation ray-traced graphics and performance comparable to the latest consoles.

Powered by Ampere, the NVIDIA GeForce RTX 3050 features NVIDIA's 2nd generation Ray Tracing Cores and 3rd generation Tensor Cores. Combined with new streaming multiprocessors and high-speed G6 memory, the NVIDIA GeForce RTX 3050 can power the latest and greatest games. NVIDIA RTX on 30 Series GPUs delivers real-time ray tracing effects—including shadows, reflections, and Ambient Occlusion (AO). The groundbreaking NVIDIA DLSS (Deep Learning Super Sampling) 2.0 AI technology utilizes Tensor Core AI processors to boost frame rates while producing sharp, uncompromised visual fidelity comparable to high native resolutions.

Alphacool Launches Aurora Vertical GPU Mount & Eisblock Acryl GPX for Zotac RTX 3070 Ti

With the new Aurora Vertical GPU Mount, Alphacool now offers the possibility to install the graphics card vertically inside compatible PC cases, whether they're air or liquid cooled. In addition, the mount features 11 digitally addressable 5V RGB LEDs that create a unique, very classy looking illumination. The digital aRGB LED lighting can be controlled either with a digital RGB controller or a digital RGB capable motherboard.

There is more new blood within the Eisblock range! The Zotac RTX 3070 Ti AMP Holo, Trinity, and Trinity OC can now also be water-cooled with Alphacool's Eisblock Aurora Acryl GPX custom cooler.

NVIDIA "GA103" GeForce RTX 3080 Ti Laptop GPU SKU Pictured

When NVIDIA announced the GeForce RTX 3080 Ti mobile graphics card, we were left with a desire to see just what the GA103 silicon powering the GPU looks like. Thanks to the Chinese YouTuber Geekerwan, we now have the first pictures of the GPU. Pictured below is the GA103S/GA103M SKU with GN20-E8-A1 labeling. It features 58 SMs, which make up 7424 CUDA cores in total. The number of Tensor cores for this SKU is 232, while there are 58 RT cores. NVIDIA has decided to pair this GPU with a 256-bit memory bus and 16 GB of GDDR6 memory.

As it turns out, the full GA103 silicon has a total of 7680 CUDA cores and a 320-bit memory bus, so this mobile version is a slightly cut-down variant. It sits neatly between the GA104 and GA102 SKUs, providing a significant improvement to the core count of GA104 silicon. Power consumption of the GA103 SKU for the GeForce RTX 3080 Ti mobile is set to a variable 80-150 Watt range, which can be adjusted according to the system's cooling capacity. An interesting thing to point out is the die size of 496 mm², which is about a quarter larger than GA104, in line with the roughly quarter-higher CUDA core count.
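The die-size comparison can be checked with commonly cited GA104 figures (roughly 392 mm² and 6144 full-die CUDA cores), which are our assumptions here rather than numbers from this post:

```python
# GA103 vs. GA104: die area and full-die CUDA core counts.
# GA104 figures (~392 mm2 die, 6144 cores) are commonly cited and assumed here.
ga103_die_mm2, ga104_die_mm2 = 496, 392
ga103_cores, ga104_cores = 7680, 6144

print(f"die area: {ga103_die_mm2 / ga104_die_mm2:.2f}x")  # ~1.27x
print(f"CUDA cores: {ga103_cores / ga104_cores:.2f}x")    # 1.25x
```

Both ratios land near "a quarter larger," as the text suggests.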

Intel Arc Alchemist Xe-HPG Graphics Card with 512 EUs Outperforms NVIDIA GeForce RTX 3070 Ti

Intel's Arc Alchemist discrete lineup of graphics cards is scheduled for launch this quarter, and we are now getting some performance benchmarks of the DG2-512EU silicon, representing the top-end Xe-HPG configuration. Thanks to a discovery by the famous hardware leaker TUM_APISAK, we have a measurement from the SiSoftware database that shows an Intel Arc Alchemist GPU with 4096 cores and, according to the benchmark report, just 12.8 GB of GDDR6 VRAM. This is simply an error in the report, as this GPU SKU should be coupled with 16 GB of GDDR6 VRAM. The card was reportedly running at a 2.1 GHz frequency; however, we don't know if this represents base or boost speeds.

When it comes to actual performance, the DG2-512EU GPU managed to score 9017.52 Mpix/s, while NVIDIA's GeForce RTX 3070 Ti managed 8369.51 Mpix/s in the same test group. Comparing these two cards in floating-point operations, Intel has an advantage in the half-float, double-float, and quad-float tests, while NVIDIA holds the single-float crown. Overall this represents a roughly 7.7% advantage for Intel's GPU, meaning that Arc Alchemist has the potential to stand up against NVIDIA's offerings.
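For what it's worth, the overall advantage can be recomputed directly from the two reported scores:

```python
intel_dg2_512 = 9017.52  # Mpix/s, DG2-512EU as reported in the SiSoftware database
nvidia_3070_ti = 8369.51 # Mpix/s, GeForce RTX 3070 Ti in the same test group

advantage_pct = (intel_dg2_512 / nvidia_3070_ti - 1) * 100
print(f"{advantage_pct:.1f}%")  # ~7.7%
```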

The Power of AI Arrives in Upcoming NVIDIA Game-Ready Driver Release with Deep Learning Dynamic Super Resolution (DLDSR)

NVIDIA yesterday announced the feature list of its upcoming game-ready GeForce driver scheduled for public release on January 14th, and among the broad range of new game titles getting support, we are in for a surprise. According to a new blog post on NVIDIA's website, the forthcoming game-ready driver release will feature an AI-enhanced version of Dynamic Super Resolution (DSR), a feature that has been available in GeForce drivers for a while. The new AI-powered tech is what the company calls Deep Learning Dynamic Super Resolution, or DLDSR for short. It uses neural networks that require fewer input pixels to produce stunning image quality on your monitor.
NVIDIA: "Our January 14th Game Ready Driver updates the NVIDIA DSR feature with AI. DLDSR (Deep Learning Dynamic Super Resolution) renders a game at a higher, more detailed resolution before intelligently shrinking the result back down to the resolution of your monitor. This downsampling method improves image quality by enhancing detail, smoothing edges, and reducing shimmering.

DLDSR improves upon DSR by adding an AI network that requires fewer input pixels, making the image quality of DLDSR 2.25X comparable to that of DSR 4X, but with higher performance. DLDSR works in most games on GeForce RTX GPUs, thanks to their Tensor Cores."
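DSR and DLDSR factors describe total pixel count, so each axis scales by the square root of the factor. A small sketch of what the quoted 2.25X and 4X factors mean for a 1080p monitor:

```python
def dsr_resolution(width: int, height: int, factor: float) -> tuple:
    """DSR factors scale the total pixel count; each axis scales by sqrt(factor)."""
    scale = factor ** 0.5
    return round(width * scale), round(height * scale)

print(dsr_resolution(1920, 1080, 2.25))  # (2880, 1620) - DLDSR 2.25X render resolution
print(dsr_resolution(1920, 1080, 4.0))   # (3840, 2160) - DSR 4X render resolution
```

DLDSR 2.25X thus renders a little over half the pixels of DSR 4X, which is where the claimed performance advantage comes from.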

NVIDIA GeForce RTX 3080 12 GB Edition Rumored to Launch on January 11th

During its CES 2022 keynote, we witnessed NVIDIA update its GeForce RTX 30 series family with the GeForce RTX 3050 and RTX 3090 Ti. However, this is not the end of NVIDIA's updates to the Ampere generation, as industry sources speaking to Wccftech now suggest that we could see a GeForce RTX 3080 GPU with 12 GB of GDDR6X VRAM launched as a separate product. Compared to the regular RTX 3080 that carries only 10 GB of GDDR6X, the new 12 GB version is supposed to bring a slight bump to the specification list. The GA102-220 GPU SKU found inside the 12 GB variant will feature 70 SMs with 8960 CUDA cores, 70 RT cores, and 280 TMUs.

This represents a minor improvement over the regular GA102-200 silicon inside the 10 GB model. However, the significant difference is the memory organization. The new 12 GB model has a 384-bit memory bus, allowing the GDDR6X modules to achieve a bandwidth of 912 GB/s while running at 19 Gbps. The overall TDP also receives a bump, to 350 Watts, compared to the 320 Watts of the regular RTX 3080 model. For more information regarding final clock speeds and pricing, we have to wait for the alleged launch date of January 11th.
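The 912 GB/s figure follows from the standard GDDR bandwidth formula: bus width in bits divided by eight, times the per-pin data rate. A quick sketch, where the regular RTX 3080's 320-bit bus is the commonly cited spec rather than a figure from this post:

```python
def gddr_bandwidth_gbps(bus_width_bits: int, data_rate_gbps: float) -> float:
    """Memory bandwidth in GB/s: (bus width in bits / 8) * per-pin data rate in Gbps."""
    return bus_width_bits / 8 * data_rate_gbps

print(gddr_bandwidth_gbps(384, 19))  # 912.0 GB/s - rumored RTX 3080 12 GB
print(gddr_bandwidth_gbps(320, 19))  # 760.0 GB/s - regular RTX 3080 (320-bit bus)
```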

GAINWARD Releases GeForce RTX 3050 Ghost and Pegasus Graphics Cards

As a leading brand in the enthusiast graphics market, Gainward proudly presents the new Gainward GeForce RTX 3050 Ghost and Gainward GeForce RTX 3050 Pegasus series. The GeForce RTX 3050 brings the performance and efficiency of the NVIDIA Ampere architecture to more gamers than ever before and is the first 50-class desktop GPU to power the latest ray-traced games at over 60 FPS. The RTX 3050 comes equipped with 2nd generation RT cores for ray tracing and 3rd generation Tensor cores for DLSS and AI. Ray tracing is the new standard in gaming, and the RTX 3050 makes it more accessible than ever before.

Like all RTX 30 Series GPUs, the RTX 3050 supports the trifecta of GeForce gaming innovations: NVIDIA DLSS, NVIDIA Reflex and NVIDIA Broadcast, which accelerate performance and enhance image quality. Together with real-time ray tracing, these technologies are the foundation of the GeForce gaming platform, which brings unparalleled performance and features to games and gamers everywhere.
