News Posts matching #NVIDIA

Next-Generation NVIDIA DGX Systems Could Launch Soon with Liquid Cooling

During the 2024 SIEPR Economic Summit, NVIDIA CEO Jensen Huang acknowledged that the company's next-generation DGX systems, designed for AI and high-performance computing workloads, will require liquid cooling due to their immense power consumption. Huang also hinted that these new systems are set to be released in the near future. The revelation comes as no surprise, given the increasing power of GPUs needed to satisfy AI and machine learning applications. As computational requirements continue to grow, so does the need for more powerful hardware. However, with great power comes great heat generation, necessitating advanced cooling solutions to maintain optimal performance and system stability. Liquid cooling has long been a staple in high-end computing systems, offering superior thermal management compared to traditional air cooling methods.

By implementing liquid cooling in the upcoming DGX systems, NVIDIA aims to push the boundaries of performance while ensuring the hardware remains reliable and efficient. Although Huang did not provide a specific release date for the new DGX systems, his statement suggests that they are on the horizon. Whether the next generation of DGX systems uses the current NVIDIA H200 or the upcoming Blackwell B100 GPU as its primary accelerator, it will undoubtedly deliver on the performance front. As the AI and high-performance computing landscape continues to evolve, NVIDIA's position continues to strengthen, and liquid-cooled systems will certainly play a crucial role in shaping the future of these industries.

NVIDIA RTX 20-series and GTX 16-series "Turing" GPUs Get Resizable BAR Support Through NVStrapsReBAR Mod

February saw community mods bring resizable BAR support to several older platforms, and now we come across a mod that brings it to some older GPUs. The NVStrapsReBAR mod by terminatorul, forked from the ReBarUEFI mod by xCurio, brings resizable BAR support to NVIDIA GeForce RTX 20-series and GTX 16-series GPUs based on the "Turing" graphics architecture. This mod is intended for power users, and can potentially brick your motherboard. NVIDIA officially implemented resizable BAR support beginning with its RTX 30-series "Ampere" GPUs, in response to AMD's Radeon RX 6000 RDNA 2 GPUs implementing the tech under the marketing name Smart Access Memory. While AMD would go on to retroactively enable the tech for even the older RX 5000 series RDNA GPUs, NVIDIA didn't do so for "Turing."

NVStrapsReBAR is a motherboard UEFI firmware mod. It modifies the way your system firmware negotiates BAR size with the GPU on boot. There are only two ways to enable resizable BAR on an unsupported platform—by modding the motherboard firmware, or the video BIOS. Signature checks by security processors in NVIDIA GPUs make the video BIOS modding route impossible for most users; thankfully, motherboard firmware modding isn't as difficult. The author provides extensive documentation on how to use the mod. The author has tested the mod to work with "Turing" GPUs; however, it doesn't work with older NVIDIA GPUs, including "Pascal." Resizable BAR enables the CPU (software) to see video memory as a single contiguously addressable block, rather than through 256 MB apertures.
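
For readers who want to confirm whether resizable BAR is actually active after flashing the mod, the BAR sizes a GPU exposes can be read straight from sysfs on Linux. Below is a minimal sketch, assuming a Linux host; the PCI address is a placeholder, and on NVIDIA cards BAR1 is the VRAM aperture (256 MB with resizable BAR off, several gigabytes with it on).

```python
# Minimal sketch: list PCI BAR sizes of a GPU on Linux via sysfs.
# The device address is a placeholder; find yours with "lspci | grep -i vga".
# On NVIDIA GPUs, BAR1 is the VRAM aperture: 256 MB means resizable BAR is
# disabled, while a multi-gigabyte size means it is active.

from pathlib import Path

def bar_sizes(bdf: str = "0000:01:00.0"):
    """Yield (BAR index, size in MB) for the device's six possible BARs."""
    resource = Path(f"/sys/bus/pci/devices/{bdf}/resource").read_text()
    for index, line in enumerate(resource.splitlines()[:6]):
        start, end, _flags = (int(field, 16) for field in line.split())
        if end > start:  # unpopulated BARs read as all zeros
            yield index, (end - start + 1) / 2**20

if __name__ == "__main__":
    for index, size_mb in bar_sizes():
        print(f"BAR{index}: {size_mb:,.0f} MB")
```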

NVIDIA Calls for Global Investment into Sovereign AI

Nations have long invested in domestic infrastructure to advance their economies, control their own data and take advantage of technology opportunities in areas such as transportation, communications, commerce, entertainment and healthcare. AI, the most important technology of our time, is turbocharging innovation across every facet of society. It's expected to generate trillions of dollars in economic dividends and productivity gains. Countries are investing in sovereign AI to develop and harness such benefits on their own. Sovereign AI refers to a nation's capabilities to produce artificial intelligence using its own infrastructure, data, workforce and business networks.

Why Sovereign AI Is Important
The global imperative for nations to invest in sovereign AI capabilities has grown since the rise of generative AI, which is reshaping markets, challenging governance models, inspiring new industries and transforming others—from gaming to biopharma. It's also rewriting the nature of work, as people in many fields start using AI-powered "copilots." Sovereign AI encompasses both physical and data infrastructures. The latter includes sovereign foundation models, such as large language models, developed by local teams and trained on local datasets to promote inclusiveness with specific dialects, cultures and practices. For example, speech AI models can help preserve, promote and revitalize indigenous languages. And LLMs aren't just for teaching AIs human languages, but for writing software code, protecting consumers from financial fraud, teaching robots physical skills and much more.

NVIDIA Introduces Generative AI Professional Certification

NVIDIA is offering a new professional certification in generative AI to enable developers to establish technical credibility in this important domain. Generative AI is revolutionizing industries worldwide, yet there's a critical skills gap and need to uplevel employees to more fully harness the technology. Available for the first time from NVIDIA, this new professional certification enables developers, career professionals, and others to validate and showcase their generative AI skills and expertise. Our new professional certification program introduces two associate-level generative AI certifications, focusing on proficiency in large language models and multimodal workflow skills.

"Generative AI has moved to center stage as governments, industries and organizations everywhere look to harness its transformative capabilities," NVIDIA founder and CEO Jensen Huang recently said. The certification will become available starting at GTC, where in-person attendees can also access recommended training to prepare for a certification exam. "Organizations in every industry need to increase their expertise in this transformative technology," said Greg Estes, VP of developer programs at NVIDIA. "Our goals are to assist in upskilling workforces, sharpen the skills of qualified professionals, and enable individuals to demonstrate their proficiency in order to gain a competitive advantage in the job market."

NVIDIA Data Center GPU Business Predicted to Generate $87 Billion in 2024

Omdia, an independent analyst and consultancy firm, has bestowed the title of "Kingmaker" on NVIDIA—thanks to impressive 2023 results in the data server market. The research firm predicts very buoyant numbers for the financial year of 2024—its February Cloud and Datacenter Market snapshot/report guesstimates that Team Green's data center GPU business group has the potential to rake in $87 billion of revenue. Omdia's forecast is based on last year's numbers—Jensen & Co. managed to pull in $34 billion, courtesy of an unmatched/dominant position in the AI GPU industry sector. Analysts have estimated a 150% rise in revenues for 2024—the majority of popular server manufacturers are reliant on NVIDIA's supply of chips. Super Micro Computer Inc. CEO Charles Liang disclosed that his business is experiencing strong demand for cutting-edge server equipment, but complications have slowed down production: "once we have more supply from the chip companies, from NVIDIA, we can ship more to customers."

Demand for AI inference in 2023 accounted for 40% of NVIDIA data center GPU revenue—according to Omdia's expert analysis—and the firm predicts further growth this year. Team Green's comfortable AI-centric business model could expand to a greater extent—2023 market trends indicated that enterprise customers had spent less on acquiring/upgrading traditional server equipment. Instead, they prioritized the channeling of significant funds into "AI heavyweight hardware." Omdia's report discussed these shifted priorities: "This reaffirms our thesis that end users are prioritizing investment in highly configured server clusters for AI to the detriment of other projects, including delaying the refresh of older server fleets." Late February reports suggest that NVIDIA H100 GPU supply issues are largely resolved—with much improved production timeframes. Insiders at unnamed AI-oriented organizations have admitted that leadership has resorted to selling off excess stock. The Omdia forecast proposes—somewhat surprisingly—that H100 GPUs will continue to be "supply-constrained" throughout 2024.

HP Unveils Industry's Largest Portfolio of AI PCs

HP Inc. today announced the industry's largest portfolio of AI PCs leveraging the power of AI to enhance productivity, creativity, and user experiences in hybrid work settings.

In an ever-changing hybrid work landscape, workers are still struggling with disconnection and digital fatigue. HP's 2023 Work Relationship Index reveals that only 27% of knowledge workers have a healthy relationship with work, and 83% believe it's time to redefine our relationships with work. Most employees believe AI will open new opportunities to enjoy work and make their jobs easier, but they need the right AI tools and technology to succeed.

NVIDIA and HP Supercharge Data Science and Generative AI on Workstations

NVIDIA and HP Inc. today announced that NVIDIA CUDA-X data processing libraries will be integrated with HP AI workstation solutions to turbocharge the data preparation and processing work that forms the foundation of generative AI development.

Built on the NVIDIA CUDA compute platform, CUDA-X libraries speed data processing for a broad range of data types, including tables, text, images and video. They include the NVIDIA RAPIDS cuDF library, which accelerates the work of the nearly 10 million data scientists using pandas software by up to 110x using an NVIDIA RTX 6000 Ada Generation GPU instead of a CPU-only system, without requiring any code changes.
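
The "without requiring any code changes" claim refers to cuDF's pandas accelerator mode, which is loaded before pandas and transparently dispatches supported operations to the GPU, falling back to CPU pandas otherwise. A minimal sketch of how it is typically enabled follows; it assumes the RAPIDS cuDF package and a supported NVIDIA GPU are installed, and the CSV file and column names are placeholders.

```python
# Sketch of cuDF's pandas accelerator mode: enable it before importing pandas,
# then run unmodified pandas code; supported operations execute on the GPU and
# the rest fall back to regular CPU pandas.
# Assumes RAPIDS cuDF is installed alongside a supported NVIDIA GPU; the file
# and column names below are placeholders.

import cudf.pandas
cudf.pandas.install()  # must run before "import pandas"

import pandas as pd

df = pd.read_csv("transactions.csv")                 # backed by GPU memory where possible
summary = df.groupby("customer_id")["amount"].sum()  # group-by runs on the GPU
print(summary.head())
```

The same effect can also be had without editing a script at all, by launching it as "python -m cudf.pandas script.py".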

NVIDIA Reportedly Sampling SK Hynix 12-layer HBM3E

South Korean tech insiders believe that SK Hynix has sent "12-layer DRAM stacked HBM3E (5th generation HBM)" prototype samples to NVIDIA—according to a ZDNET.co.kr article, initial examples were shipped out last month. Reports from mid-2023 suggested that Team Green had sampled 8-layer HBM3E units around summer time—with SK Hynix receiving approval notices soon after. Another South Korean media outlet, DealSite, reckons that NVIDIA's memory qualification process has exposed HBM yield problems across a number of manufacturers. SK Hynix, Samsung and Micron are competing fiercely on the HBM3E front—with hopes of getting their respective products attached to NVIDIA's H200 AI GPU. DigiTimes Asia proposed that SK Hynix is ready to "commence mass production of fifth-generation HBM3E" at some point this month.

SK Hynix is believed to be leading the pack—insiders believe that yield rates are good enough to pass early NVIDIA certification, and advanced 12-layer samples are expected to be approved in the near future. ZDNET reckons that SK Hynix's forward momentum has placed it in an advantageous position: "(They) supplied 8-layer HBM3E samples in the second half of last year and passed recent testing. Although the official schedule has not been revealed, mass production is expected to begin as early as this month. Furthermore, SK Hynix supplied 12-layer HBM3E samples to NVIDIA last month. This sample is an extremely early version and is mainly used to establish standards and characteristics of new products. SK Hynix calls it UTV (Universal Test Vehicle)... Since Hynix has already completed the performance verification of the 8-layer HBM3E, it is expected that the 12-layer HBM3E test will not take much time." SK Hynix's Vice President recently revealed that his company's 2024 HBM production volumes were already sold out, and leadership is already preparing innovations for 2025 and beyond.

First GPUs Implementing GDDR7 Memory Could Stick with 16 Gbit Chips, 24 Gbit Possible

Some of the first gaming GPUs that implement the next-generation GDDR7 memory standard will stick to 16 Gbit memory chip densities (2 GB), according to kopite7kimi, a reliable source of NVIDIA GeForce leaks. 16 Gbit is the standard density for the current RTX 40-series graphics cards, which ensures that a GPU with a 256-bit memory bus gets 16 GB of video memory; the ones with 192-bit get 12 GB; and the ones with 128-bit get 8 GB. The flagship RTX 4090 uses twelve of these chips over its 384-bit memory bus for 24 GB.
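
The capacity figures follow directly from each GDDR memory device presenting a 32-bit interface: chip count equals the bus width divided by 32, and capacity is chip count times density. A quick back-of-the-envelope check (plain arithmetic, not part of the leak itself):

```python
# Sanity check of the capacities quoted above: every GDDR device has a 32-bit
# interface, so chip count = bus width / 32, and capacity = chips x density
# (16 Gbit = 2 GB per chip).

def vram_gb(bus_width_bits: int, chip_density_gbit: int = 16) -> int:
    chips = bus_width_bits // 32
    return chips * chip_density_gbit // 8  # Gbit -> GB

for bus in (128, 192, 256, 384):
    print(f"{bus}-bit bus -> {bus // 32} chips -> {vram_gb(bus)} GB")
# 128-bit -> 8 GB, 192-bit -> 12 GB, 256-bit -> 16 GB, 384-bit -> 24 GB (RTX 4090)
```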

Kopite7kimi's leak could carry a different implication: much like with the RTX 30-series "Ampere" and RTX 40-series "Ada," NVIDIA might not use JEDEC-standard GDDR7 on all product segments, and might co-engineer an exclusive standard with a DRAM company, with memory bus signaling and power management technologies most optimal to its graphics architecture. It co-developed GDDR6X with Micron Technology to do exactly this. GDDR7 comes with data rates as high as 32 Gbps, which will be the top speed for the first round of GDDR7 chips that come out toward the end of 2024, heading into 2025. The second round of GDDR7 chips, slated for late-2025 going into 2026, could go as fast as 36 Gbps. This is similar to how the first GDDR6 chips were 14-16 Gbps, and the next round did 18-20 Gbps.
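
For reference, peak memory bandwidth scales the same way regardless of memory type: per-pin data rate times bus width, divided by eight. A short sketch for the data rates mentioned above, using a 256-bit bus purely as an example width:

```python
# Peak bandwidth = per-pin data rate (Gbps) x bus width (bits) / 8.
# The 256-bit bus below is just an example width, not a leaked specification.

def bandwidth_gbs(data_rate_gbps: float, bus_width_bits: int = 256) -> float:
    return data_rate_gbps * bus_width_bits / 8

for rate in (32, 36):
    print(f"{rate} Gbps GDDR7 on a 256-bit bus -> {bandwidth_gbs(rate):,.0f} GB/s")
# 32 Gbps -> 1,024 GB/s; 36 Gbps -> 1,152 GB/s
```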

NVIDIA GeForce NOW Gets G-SYNC and Reflex Support

Gamers can now seize the day with Day Passes, available to purchase for 24-hour continuous access to powerful cloud gaming with all the benefits of a GeForce NOW Ultimate or Priority membership - no commitment required. Publisher Cygames brings its next triple-A title to the cloud. Granblue Fantasy: Relink leads eight new games joining the GeForce NOW library this week. Plus, an update for GeForce NOW Windows and macOS adds support for G-SYNC in the cloud. By pairing it with new NVIDIA Reflex support for 60 and 120 frames per second streaming options, Ultimate members can experience ultra-low-latency streaming that's nearly indistinguishable from using a local PC.

Seize the Day
Day Passes offer access to 24 hours of GeForce RTX-powered cloud gaming. Users can get all the benefits of Ultimate and Priority memberships for a day without committing to longer-term monthly memberships, and choose how and when they access the cloud. Ultimate Day Pass users can stream at up to 4K 120 FPS, at up to 240 FPS, or at ultrawide resolutions. Plus, they get all the same benefits as gamers using NVIDIA GeForce RTX 40 Series GPUs, with access to NVIDIA DLSS 3 and NVIDIA Reflex technologies for the smoothest gameplay and lowest latency, even on underpowered devices. Both Ultimate and Priority Day Pass users can turn RTX ON in supported games for immersive, cinematic gameplay.

Aetina to Unveil Fanless Edge AI Systems Powered by NVIDIA Jetson Orin NX and Orin Nano

The global edge artificial intelligence (AI) market is booming, which has accelerated the urgent demand for reliable, high-performance, and highly flexible edge AI inference devices. Aetina, a leading global edge AI solution provider, announces the release of its cutting-edge fanless versatile expansion edge AI systems - the AIE-PN33/43 series and AIE-PO23/33 series. These innovative systems are powered by NVIDIA Jetson Orin NX and Orin Nano modules, offering outstanding AI computing performance of up to 100 TOPS and 40 TOPS respectively, along with flexible I/O interface options. Designed to seamlessly handle the connectivity and real-time data processing needs of various sensors and devices at the edge, these solutions help empower industries worldwide to efficiently deploy an extensive range of AI applications.

According to Statista's 'Number of IoT connected devices worldwide 2019-2030' report, the number of Internet of Things (IoT) devices worldwide is forecasted to exceed 29 billion by 2030. In response to the growing number of IoT devices and the diverse I/O and storage needs of different AI applications, Aetina has unveiled a range of fanless edge AI systems. This lineup includes the AIE-PN33/43-2PSE and AIE-PN33/43-4PSE, boasting 2 or 4 sets of PSE ports respectively; the AIE-PN33/43-6USB and AIE-PO23/33-6USB, each integrating 6 sets of USB ports; and the AIE-PN33/43-3M and AIE-PO23/33-3M, equipped with 3 sets of M.2 M-Key slots. Users can choose the most suitable product model based on their specific AI application requirements, whether it involves addressing wiring challenges in wide-ranging environments like ports or traffic roadside monitoring, fulfilling multi-sensor connectivity needs for smart factory defect detection or automated production lines, or expanding storage capacity for real-time image recognition tasks. These solutions are poised to accelerate the deployment of advanced AI intelligence applications across diverse industries.

Palit and Gainward Announce RTX 4060 Infinity 2 and Python 2 Graphics Cards

Palit, and its sister brand Gainward, announced the GeForce RTX 4060 Infinity 2 and GeForce RTX 4060 Python 2 custom-design graphics cards, respectively. Both cards feature an identical board design, differing only in their badging and outer boxes. Palit and Gainward seem to sell in the same markets, so you could pick either of the two. The card features identical dimensions to the Palit RTX 4060 DUAL V1 and the Gainward RTX 4060 Ghost V1 (which at least differ in their cooler shroud design).

Both cards get the same pair of 92 mm fans with idle fan-off. So why did Palit/Gainward come up with these? Apparently, the cooler shrouds of the Infinity 2 and Python 2 lack the RGB LED lighting element found on the Palit DUAL V1 and Gainward Ghost V1. The heatsink designs are changed, too. While the Palit DUAL V2 and Gainward Ghost V2 use an aluminium fin-stack heatsink, the newer cards come with an extruded aluminium monoblock heatsink that uses a copper heatpipe to spread heat. The shroud features an S-shaped design element going around the fan intakes; Palit sees the infinity symbol in it, while Gainward sees a python. Both cards stick to NVIDIA-reference clock speeds for the RTX 4060, of 2460 MHz boost and 17 Gbps (GDDR6-effective) memory. Palit and Gainward may price the Infinity 2 and Python 2 at the ever-shifting baseline price for the RTX 4060, which is now nearing $250 in some places.

NVIDIA Cracks Down on CUDA Translation Layers, Changes Licensing Terms

NVIDIA's Compute Unified Device Architecture (CUDA) has long been the de facto standard programming interface for developing GPU-accelerated software. Over the years, NVIDIA has built an entire ecosystem around CUDA, cementing its position as the leading GPU computing and AI manufacturer. However, rivals AMD and Intel have been trying to make inroads with their own open API offerings—ROCm from AMD and oneAPI from Intel. Translation layers were meant to give developers an easier path to running existing CUDA code on non-NVIDIA GPUs: projects like ZLUDA translate CUDA to ROCm, and Intel's CUDA-to-SYCL tooling aims to do the same for oneAPI. However, with the release of CUDA 11.5, NVIDIA appears to have cracked down on these translation efforts by modifying its terms of use, according to developer Longhorn on X.

"You may not reverse engineer, decompile or disassemble any portion of the output generated using Software elements for the purpose of translating such output artifacts to target a non-NVIDIA platform," says the CUDA 11.5 terms of service document. The changes don't seem to be technical in nature but rather licensing restrictions. The impact remains to be seen, depending on how much code still requires translation versus running natively on each vendor's API. While CUDA gave NVIDIA a unique selling point, its supremacy has diminished as more libraries work across hardware. Still, the move could slow the adoption of AMD and Intel offerings by making it harder for developers to port existing CUDA applications. As GPU-accelerated computing grows in fields like AI, the battle for developer mindshare between NVIDIA, AMD, and Intel is heating up.

NVIDIA DLSS Comes to Sons Of The Forest, Hellbreach: Vegas and REVEIL

Over 500 games and applications feature RTX technologies, and barely a week goes by without new blockbuster games and incredible indie releases integrating NVIDIA DLSS, NVIDIA Reflex, and advanced ray-traced effects to deliver the definitive PC experience for GeForce RTX gamers. Following last week's launch of Brothers: A Tale of Two Sons Remake and Myth of Empires, this week we're highlighting the launch of REVEIL, Hellbreach: Vegas, and Sons Of The Forest, 3 new DLSS games. Additionally, we're showcasing the work of several screenshot pros, who've snapped jaw-dropping pics from DLSS and RTX games.

Sons Of The Forest Available Now With DLSS 2
Endnight Games' The Forest was an incredibly popular open-world survival and crafting game. Their critically acclaimed follow-up, Sons Of The Forest, recently exited Early Access with a massive content update. In Sons Of The Forest, you're sent to locate a missing billionaire on a remote island, but instead find yourself in a cannibal-infested hellscape. Craft, build, and struggle to survive, alone or with friends, in this terrifying new open-world survival horror simulator.

Simply NUC to Expand Onyx Family Product Line with Onyx Pro

Simply NUC, Inc., a leading custom computing company, proudly announced the expansion of the Onyx product line with Onyx Pro, a revolutionary mini workstation designed to cater to diverse computing needs. Powered by the Intel Core i9 vPro "Raptor Lake" 13900H CPU with Intel Iris Xe integrated graphics, and featuring optional graphics cards such as the NVIDIA T1000, Radeon Pro WX 3200, and Intel Arc A40, the Onyx Pro redefines computing in a compact form factor.

"Introducing the Onyx Pro marks a pivotal moment for Simply NUC." said Jonny Smith, CEO of Simply NUC "Powered by Intel Core i9 vPro "Raptor Lake", it's our first product with 10G fiber (x2) and standard quad network ports. With unmatched performance, AI-readiness, and advanced networking, Onyx Pro sets a new standard for compact workstations."

AMD Stalls on Instinct MI309 China AI Chip Launch Amid US Export Hurdles

According to the latest report from Bloomberg, AMD has hit a roadblock in offering its top-of-the-line AI accelerator in the Chinese market. The newest AI chip is called Instinct MI309, a lower-performance Instinct MI300 variant tailored to meet the latest US export rules for selling advanced chips to China-based entities. However, the Instinct MI309 still appears too powerful to gain unconditional approval from the US Department of Commerce, leaving AMD in need of an export license. The US Department of Commerce's rule states that the Total Processing Performance (TPP) score must not exceed 4,800, effectively capping AI performance at 600 FP8 TFLOPS. The rule leaves room for processors with somewhat lower performance to still be sold to Chinese customers, provided their performance density (PD) is sufficiently low.
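
As a point of reference on those numbers, TPP is the chip's peak throughput multiplied by the operand bit-width, which is how the 4,800 ceiling translates into 600 FP8 TFLOPS. A small sketch of that arithmetic follows; the MI309's actual throughput has not been disclosed, so the example figure below is a placeholder.

```python
# Sketch of the export-rule arithmetic cited above:
# TPP = peak throughput (TFLOPS) x operand bit width, so a 4,800 TPP cap
# equals 600 TFLOPS at FP8 (8-bit operands). The example throughput is a
# placeholder; the MI309's real figure has not been disclosed.

TPP_LIMIT = 4800

def tpp(tflops: float, bit_width: int) -> float:
    return tflops * bit_width

print(f"FP8 ceiling: {TPP_LIMIT / 8:.0f} TFLOPS")   # 600, as stated above

hypothetical_fp8_tflops = 700                        # illustration only
over_limit = tpp(hypothetical_fp8_tflops, 8) > TPP_LIMIT
print(f"TPP = {tpp(hypothetical_fp8_tflops, 8):.0f} -> export license needed: {over_limit}")
```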

However, AMD's latest creation, the Instinct MI309, is anything but slow. Based on the powerful Instinct MI300, it apparently hasn't been cut down enough for AMD to acquire a US export license from the Department of Commerce. It is still unknown which Chinese customer was trying to acquire AMD's Instinct MI309; however, it could be one of the Chinese AI labs trying to get ahold of more training hardware for their domestic models. NVIDIA employed a similar tactic, selling A800 and H800 chips to China, until the US ended the export of those chips as well. AI labs located in China can otherwise only use domestic hardware, including accelerators from Alibaba, Huawei, and Baidu. Cloud services hosting GPUs in the US can still be accessed by Chinese companies, but that practice is currently on US regulators' watchlist.

JEDEC Publishes GDDR7 Graphics Memory Standard

JEDEC Solid State Technology Association, the global leader in the development of standards for the microelectronics industry, is pleased to announce the publication of JESD239 Graphics Double Data Rate (GDDR7) SGRAM. This groundbreaking new memory standard is available for free download from the JEDEC website. JESD239 GDDR7 offers double the bandwidth over GDDR6, reaching up to 192 GB/s per device, and is poised to meet the escalating demand for more memory bandwidth in graphics, gaming, compute, networking and AI applications.

JESD239 GDDR7 is the first JEDEC standard DRAM to use the Pulse Amplitude Modulation (PAM) interface for high frequency operations. Its PAM3 interface improves the signal to noise ratio (SNR) for high frequency operation while enhancing energy efficiency. By using 3 levels (+1, 0, -1) to transmit 3 bits over 2-cycles versus the traditional NRZ (non-return-to-zero) interface transmitting 2 bits over 2-cycles, PAM3 offers higher data transmission rate per cycle resulting in improved performance.
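
The 3-bits-per-2-cycles figure works out because a pair of three-level symbols offers nine combinations, enough to carry all eight 3-bit values, a 1.5x gain over NRZ's 2 bits in the same two cycles. A small illustration follows; the bit-to-symbol mapping shown is arbitrary, not the encoding defined in JESD239.

```python
# Illustration of the PAM3 arithmetic above: two consecutive three-level
# symbols give 3^2 = 9 combinations, enough to encode 2^3 = 8 three-bit
# values, i.e. 3 bits per 2 cycles versus NRZ's 2 bits per 2 cycles.
# The mapping below is arbitrary; JESD239 defines its own encoding.

from itertools import product

LEVELS = (-1, 0, +1)
symbol_pairs = list(product(LEVELS, repeat=2))                    # 9 available pairs
codebook = {format(v, "03b"): symbol_pairs[v] for v in range(8)}  # 8 codes needed

print(f"{len(symbol_pairs)} symbol pairs available for {len(codebook)} 3-bit codes")
for bits, pair in codebook.items():
    print(bits, "->", pair)
```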

NVIDIA Releases GeForce 551.76 WHQL Game Ready Drivers

NVIDIA today released the latest version of its GeForce Game Ready drivers. Version 551.76 WHQL comes with optimization for "The Thaumaturge." This includes support for Reflex and DLSS 3 Frame Generation in the game. Among the gaming bugs fixed with this release are game stability issues seen in "The Talos Principle 2" with DLSS 3 Frame Generation enabled; general bugs fixed include NVENC-accelerated video encoding on GeForce GTX 16-series GPUs causing video corruption or error messages (an issue that was expeditiously fixed in a recent Hotfix). A bug where the Steam component steamwebhelper.exe blocked notebook display mode switching has also been fixed.

DOWNLOAD: NVIDIA GeForce 551.76 WHQL

NVIDIA GeForce GTX 16-series Finally Discontinued

NVIDIA has finally laid to rest the last GeForce GPUs to feature the "GTX" brand extension, the GTX 16-series "Turing." Although two generations older than the current RTX 40-series "Ada," the GTX 16-series formed the entry level for NVIDIA, with certain SKUs continuing to ship to graphics card manufacturers and, more importantly, notebook ODMs as popular GeForce MX and GTX 16-series SKUs. With NVIDIA introducing further cut-down variants of its "Ampere" based GA107 silicon, such as the desktop RTX 3050 6 GB, the company has reportedly discontinued the GTX 16-series. All inventories are drained on NVIDIA's end, and the channel is expected to consume the last remaining chips in the next 1-3 months, according to a source on the Chinese forum Board Channels.

NVIDIA had originally conceived the GTX 16-series to form the lower half of its 2018 product stack, with the upper half driven by the RTX 20-series. Both are based on the "Turing" graphics architecture, but the GTX 16-series has a reduced feature set, namely the lack of RT cores and Tensor cores. The idea at the time was that, at the performance levels of these GPUs, ray tracing would be prohibitively slow at any resolution, so they could be left with just the CUDA cores of "Turing" and made to power games with pure raster 3D graphics, letting gamers at least benefit from the higher IPC and 12 nm efficiency of "Turing" over the 16 nm "Pascal." Popular GPU models include the GTX 1650 and the GTX 1660 Super.

Dell Exec Confirms NVIDIA "Blackwell" B100 Doesn't Need Liquid Cooling

NVIDIA's next-generation AI GPU, the B100 "Blackwell," is now in the hands of the company's biggest ecosystem partners and customers for evaluation, and one of them is Dell. Jeff Clarke, the OEM giant's chief operating officer, speaking to industry analysts in an investor teleconference, said that he is excited about the upcoming B100 and B200 chips from NVIDIA. B100 is the codename for the AI GPU NVIDIA is designing for PCIe add-in cards and the SXM socket, meant for systems powered by x86 CPUs such as the AMD EPYC or Intel Xeon Scalable. The B200 is its variant meant for machines powered by NVIDIA's in-house Arm-based processors, such as the successor to its Grace CPU and its combination with an AI GPU, called Grace Hopper (GH200).

Perhaps the most interesting remark by Clarke about the B100 is that he doesn't think it needs liquid cooling, and that it can make do with high-airflow cooling like the H100. "We're excited about what happens at the B100 and the B200, and we think that's where there's actually another opportunity to distinguish engineering confidence. Our characterization on the thermal side, you really don't need direct-liquid cooling to get to the energy density of 1000 W per GPU. That happens next year with the B200," he said. NVIDIA is planning a 2024 debut for "Blackwell" in the AI GPU space with the B100, with the B200 slated for 2025, possibly alongside a new CPU.

MSI Preparing New RTX 4070 Ti Super Expert and RTX 4070 Ti Super Aero

MSI is apparently preparing to unveil two more GeForce RTX 4070 Ti Super graphics cards, the MSI RTX 4070 Ti Super Expert and the MSI RTX 4070 Ti Super Aero. The new MSI Expert series was unveiled at the CES show back in January, and currently, MSI only uses the design on the MSI RTX 4080 Super graphics card. On the other hand, the Aero series has been around for quite some time, but we haven't seen many of those cards, unless you count the Aero ITX version of the GeForce RTX 4060. As Videocardz.com notes, MSI actually has a total of 12 different RTX 4070 Ti Super designs, and that did not stop it from adding two more, despite the fact that the RTX 4070 Ti Super was only announced just over a month ago.

The MSI Expert lineup, announced back at the CES 2024 show, is quite an interesting design: it uses a triple-slot cooler with a rather unusual fan placement taken from NVIDIA's Founders Edition playbook. It features two fans, one at the front and one at the back of the card, pushing air in opposite directions. MSI has decided to call the new cooling solution "MSI Flow Frozr." The front also features a metal mesh cover, which looks quite interesting.

NVIDIA GeForce GTX 16-series NVENC Issues Fixed with Hotfix Driver

NVIDIA released a Hotfix driver update to fix certain issues with the NVENC hardware encoder of GeForce GTX 16-series "Turing" GPUs, such as the popular GTX 1660 and GTX 1650 Ti. Apparently, applications using the hardware acceleration provided by the GPU's NVENC unit would produce corrupted videos or throw error messages. The Hotfix driver is based on GeForce 551.68, and is not WHQL-certified. NVIDIA may include the fixes contained in the hotfix in one of its upcoming GeForce Game Ready or Studio main-trunk drivers. GeForce GTX 16-series "Turing" GPUs feature an NVENC unit that can accelerate H.264 and H.265 encoding.
DOWNLOAD: NVIDIA GeForce 551.68 Hotfix for GTX 16-series NVENC Issues
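
For anyone wanting to verify the fix, the NVENC unit on these cards is most easily exercised through FFmpeg's hardware encoders. Below is a minimal sketch, assuming an FFmpeg build with NVENC support is on the PATH; the file names are placeholders.

```python
# Minimal sketch to exercise a GTX 16-series NVENC unit after installing the
# hotfix: run a clip through FFmpeg's H.264 NVENC encoder and confirm it
# finishes without the corruption or error messages described above.
# Assumes an FFmpeg build with NVENC support is on PATH; file names are placeholders.

import subprocess

cmd = [
    "ffmpeg", "-y",
    "-i", "input.mp4",       # placeholder source clip
    "-c:v", "h264_nvenc",    # hardware H.264 encode on the NVENC unit
    "output_nvenc.mp4",
]
result = subprocess.run(cmd, capture_output=True, text=True)
print("NVENC encode", "succeeded" if result.returncode == 0 else f"failed:\n{result.stderr[-400:]}")
```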

Microsoft DirectSR Super Resolution API Brings Together DLSS, FSR and XeSS

Microsoft has just announced that its new DirectSR Super Resolution API for DirectX will provide a unified interface for developers to implement super resolution in their games. This means that game studios no longer have to choose between DLSS, FSR, and XeSS, or spend additional resources to implement, bug-test and support multiple upscalers. For gamers this is huge news, too, because they will be able to run upscaling in all DirectSR games—no matter the hardware they own. While AMD FSR and Intel XeSS run on GPUs from all vendors, NVIDIA DLSS is exclusive to Team Green's hardware. In its post, Microsoft also confirms that DirectSR will not replace FSR/DLSS/XeSS with a new upscaler of Microsoft's own, but rather builds on existing technologies that are already available, unifying access to them.

While we have to wait until March 21 for more details to be revealed at GDC 2024, Microsoft's Joshua Tucker stated in a blog post: "We're thrilled to announce DirectSR, our new API designed in partnership with GPU hardware vendors to enable seamless integration of Super Resolution (SR) into the next generation of games. Super Resolution is a cutting-edge technique that increases the resolution and visual quality in games. DirectSR is the missing link developers have been waiting for when approaching SR integration, providing a smoother, more efficient experience that scales across hardware. This API enables multi-vendor SR through a common set of inputs and outputs, allowing a single code path to activate a variety of solutions including NVIDIA DLSS Super Resolution, AMD FidelityFX Super Resolution, and Intel XeSS. DirectSR will be available soon in the Agility SDK as a public preview, which will enable developers to test it out and provide feedback. Don't miss our DirectX State of the Union at GDC to catch a sneak peek at how DirectSR can be used with your games!"

Blizzard's Battle.net Leaps into the Cloud With GeForce NOW

GFN Thursday celebrates this leap day with the addition of a popular game store to the cloud. Stream the first titles from Blizzard Entertainment's Battle.net, including Diablo IV, Overwatch 2, Call of Duty HQ and Hearthstone, now playable across more devices than ever. They're all part of the 30 new games coming to GeForce NOW in March, with eight available this week. Plus, Day Passes, announced at CES, are coming to the cloud next week, enabling gamers to experience the benefits of GeForce NOW Ultimate and Priority memberships for 24 hours at a time.

Welcome to the Cloud
Battle.net is Blizzard's digital storefront, a gateway to adventures in the Blizzard universe and home to a vibrant gaming community. Members who own Diablo IV, Overwatch 2, Call of Duty HQ and Hearthstone on Battle.net can now stream these triple-A titles from NVIDIA GeForce RTX-powered servers in the cloud without worrying about hardware specs or long download times. Battle the forces of evil in the dark, treacherous world of Diablo IV's Sanctuary at up to 4K resolution and 120 frames per second with an Ultimate membership, even on under-powered devices. Assemble a deck to cast legendary spells in Hearthstone, and engage in epic firefights in Overwatch 2 and Call of Duty HQ at ultra-low latency thanks to the power of NVIDIA Reflex technology. Read this article and search for Hearthstone for more details on supported devices for this title.

NVIDIA Reflex Comes to More Games, New Gaming Mice Get NVIDIA Reflex Analyzer Support

NVIDIA Reflex is a game-changer, reducing system latency on GeForce graphics cards and laptops so your actions occur quicker, giving you a competitive edge in multiplayer matches, and making single-player titles more responsive and enjoyable. Since its debut in September 2020, NVIDIA Reflex has come to reduce system latency in over 100 games, and over 90% of GeForce gamers enable it. 9 of the top 10 competitive shooters feature Reflex support, including Apex Legends, Call of Duty: Modern Warfare III, Call of Duty: Warzone, Counter-Strike 2, Fortnite and Overwatch 2; Reflex is also found in critically acclaimed smash hits such as Cyberpunk 2077, The Witcher 3: Wild Hunt, Diablo IV, Remnant 2, God of War, Microsoft Flight Simulator, Red Dead Redemption 2, Ratchet & Clank: Rift Apart, and Marvel's Spider-Man Remastered.

In 2023 alone, GeForce gamers played over 10 billion hours of their favorite titles with increased responsiveness thanks to Reflex's innovative system latency reducing technology. Adoption of Reflex continues to accelerate and in 2024 you'll see many more anticipated titles launch with support. Since our last update, Banishers: Ghosts of New Eden and Escape from Tarkov: Arena have added support for Reflex. And the latest seasons of Apex Legends, Call of Duty: Modern Warfare III, Call of Duty: Warzone, and Overwatch 2 are all enjoyed at their best with Reflex reducing system latency and further increasing responsiveness.