News Posts matching #NVIDIA


Supermicro Experiencing NVIDIA Chip Shortage

Charles Liang, CEO of Super Micro Computer Inc. (SMCI), was interviewed by Bloomberg Television last week—press interest is growing as his firm benefits greatly from the massive AI boom. The San Jose, California-based firm specializes in high-performance and high-efficiency servers—its NVIDIA server product lines are particularly popular at the moment. According to BNN Bloomberg: "production capacity can support sales of as much as $25 billion," but Liang admitted that a shortage of AI GPUs is affecting his company's production goals. Wall Street expects Super Micro's revenue to double, projecting $14.5 billion "in the year ending in June," while shares have more than tripled since January 1, 2024: "as investors bet on increasing need for the equipment that powers AI applications and training."

Liang stated in his interview with Bloomberg TV that "market demand is so strong," but a simple supply stumbling block stands in the way. The Super Micro boss pointed the proverbial finger at a neighboring tech partner: "There is a chip shortage—once we have more supply from the chip companies, from NVIDIA, we can ship more to customers." His company has proudly announced on several occasions that it remains an industry-leading supplier of Team Green AI HPC solutions, with recent launches of HGX and MGX-based HPC systems. NVIDIA AI GPU supply woes are nothing new at this point—many industry watchers believe that AMD is benefiting from a well-timed launch of competing accelerators. Taiwanese supply chain insiders reckon that GIGABYTE's server division is wholeheartedly embracing the Instinct MI300X.

NVIDIA RTX 50-series "Blackwell" to Debut 16-pin PCIe Gen 6 Power Connector Standard

NVIDIA is reportedly looking to change the power connector standard for the fourth successive time in a span of three years with its upcoming GeForce RTX 50-series "Blackwell" GPUs, Moore's Law is Dead reports. NVIDIA began its post 8-pin PCIe journey with the 12-pin Molex MicroFit connector for the GeForce RTX 3080 and RTX 3090 Founders Edition cards. The RTX 3090 Ti would go on to standardize the 12VHPWR connector, which the company debuted across a wider section of its GeForce RTX 40-series "Ada" product stack (all SKUs with TGP over 200 W). In the face of rising complaints about the reliability of 12VHPWR, some partner RTX 40-series cards are beginning to implement the pin-compatible but sturdier 12V-2x6. The implementation of the 16-pin PCIe Gen 6 connector would be the fourth power connector change, if the rumors are true. A different source says that rival AMD has no plans to move away from the classic 8-pin PCIe power connectors.

Update 15:48 UTC: Our friends at Hardware Busters have reliable sources in the power supply industry with the same access to the PCIe CEM specification as NVIDIA, and they say that the story of NVIDIA adopting a new power connector with "Blackwell" is likely false. NVIDIA is expected to debut the new GPU series toward the end of 2024, and if a new power connector were in the offing, the power supply industry would have some clue by now. It doesn't. Read more about this in the Hardware Busters article in the source link below.

Update Feb 20th: In an earlier version of the article, it was incorrectly reported that the "16-pin connector" is fundamentally different from the current 12V-2x6, with 16 pins dedicated to power delivery. We have since been corrected by Moore's Law is Dead: it is in fact the same 12V-2x6, but with an updated PCIe 6.0 CEM specification.

NVIDIA Accelerates Quantum Computing Exploration at Australia's Pawsey Supercomputing Centre

NVIDIA today announced that Australia's Pawsey Supercomputing Research Centre will add the NVIDIA CUDA Quantum platform accelerated by NVIDIA Grace Hopper Superchips to its National Supercomputing and Quantum Computing Innovation Hub, furthering its work driving breakthroughs in quantum computing.

Researchers at the Perth-based center will leverage CUDA Quantum, an open-source hybrid quantum computing platform that features powerful simulation tools and capabilities to program hybrid CPU, GPU, and QPU systems, as well as the NVIDIA cuQuantum software development kit of optimized libraries and tools for accelerating quantum computing workflows. The NVIDIA Grace Hopper Superchip, which combines the NVIDIA Grace CPU and Hopper GPU architectures, provides extreme performance to run high-fidelity and scalable quantum simulations on accelerators and seamlessly interface with future quantum hardware infrastructure.
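
As a concrete illustration of the hybrid programming model, here is a minimal Bell-state sketch written against CUDA Quantum's public C++ API and compiled with NVIDIA's nvq++ toolchain. It is an illustrative example based on the platform's documentation, not code from the Pawsey deployment.

```cpp
// Build with the CUDA Quantum toolchain: nvq++ bell.cpp -o bell && ./bell
#include <cudaq.h>

// A two-qubit Bell-state kernel. The __qpu__ marker designates quantum code
// that can run on a GPU-accelerated simulator today and on QPU backends later.
struct bell {
  void operator()() __qpu__ {
    cudaq::qvector q(2);
    h(q[0]);                     // put qubit 0 into superposition
    x<cudaq::ctrl>(q[0], q[1]);  // entangle via a controlled-NOT
    mz(q);                       // measure both qubits
  }
};

int main() {
  // Sample the kernel; by default this executes on a simulator, which
  // cuQuantum accelerates on NVIDIA GPUs such as the Grace Hopper Superchip.
  auto counts = cudaq::sample(bell{});
  counts.dump();  // expect roughly 50/50 "00" and "11" outcomes
  return 0;
}
```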

GIGABYTE Elevates Computing Horizons at SupercomputingAsia 2024

GIGABYTE, a global leader in high-performance computing solutions, collaborates with industry partner Xenon at SupercomputingAsia 2024, held at the Sydney International Convention and Exhibition Centre from February 19 to 22. This collaboration showcases cutting-edge technologies, offering diverse solutions that redefine the high-performance computing landscape.

GIGABYTE's Highlights at SCA 2024
At booth 19, GIGABYTE presents the G593-SD0, our flagship AI server, and the industry's first NVIDIA-certified HGX H100 8-GPU server. Equipped with 4th/5th Gen Intel Xeon Scalable Processors, it incorporates GIGABYTE's thermal design, ensuring optimal performance within its density-optimized 5U server chassis, pushing the boundaries of AI computing. Additionally, GIGABYTE introduces the 2U 4-node H263-S62 server, designed for 4th Gen Intel Xeon Scalable Processors and now upgraded to the latest 5th Gen, tailored for hybrid and private cloud applications. It features a DLC (Direct Liquid Cooling) solution to efficiently manage heat generated by high-performance computing. Also on display is the newly released W773-W80 workstation, supporting the latest NVIDIA RTX 6000 Ada and catering to CAD, DME, research, data and image analysis, and SMB private cloud applications. At SCA 2024, explore our offerings, including rackmount servers and motherboards, reflecting GIGABYTE's commitment to innovative and reliable solutions. This offers a valuable opportunity to discuss your IT infrastructure requirements with our sales and consulting teams, supported by GIGABYTE and Xenon in Australia.

SoftBank Founder Wants $100 Billion to Compete with NVIDIA's AI

Japanese tech billionaire and founder of the SoftBank Group, Masayoshi Son, is embarking on a hugely ambitious new project to build an AI chip company that aims to rival NVIDIA, the current leader in AI semiconductor solutions. For the new venture, codenamed "Izanagi" after the Japanese god of creation, Son aims to raise up to $100 billion in funding. With his company SoftBank having recently scaled back investments in startups, Son is now setting his sights on the red-hot AI chip sector. Izanagi would leverage SoftBank's existing chip design firm, Arm, to develop advanced semiconductors tailored for artificial intelligence computing, using Arm's instruction set for the chip's processing elements. This could pit Izanagi directly against NVIDIA's leadership position in AI chips. Son has a war chest of $41 billion in cash at SoftBank that he can deploy for Izanagi.

Additionally, he is courting sovereign wealth funds in the Middle East to contribute up to $70 billion in additional capital. In total, Son may be seeking up to $100 billion to bankroll Izanagi into a chip powerhouse. AI chips are seeing surging demand as machine learning and neural networks require specialized semiconductors that can process massive datasets. NVIDIA and other names like Intel, AMD, and select startups have capitalized on this trend. However, Son believes the market has room for another major player. Izanagi would focus squarely on developing bleeding-edge AI chip architectures to power the next generation of artificial intelligence applications. It is still unclear if this would be an AI training or AI inference project, but given that the training market is currently bigger as we are in the early buildout phase of AI infrastructure, the consensus might settle on training. With his track record of bold bets, Son is aiming very high with Izanagi. It's a hugely ambitious goal, but Son has defied expectations before. Project Izanagi will test the limits of even his vision and financial firepower.

Jim Keller Offers to Design AI Chips for Sam Altman for Less Than $1 Trillion

In case you missed it, Sam Altman of OpenAI took the Internet by storm late last week with the unveiling of Sora, the generative AI that can conjure up photoreal video clips based on prompts, with deadly accuracy. While Altman and his colleagues in the generative AI industry had a ton of fun generating videos based on prompts from the public on X, it became all too clear that the only thing holding back the democratization of generative AI is the volume of AI accelerator chips. Altman wants to solve this by designing his own AI acceleration hardware from the ground up, for which he initially pitched an otherworldly $7 trillion in investment—something impossible through the financial markets, and possible only by "printing money" or through sovereign wealth fund investments.

Jim Keller needs no introduction—the celebrity VLSI architect has been designing number-crunching devices of all shapes and sizes for some of the biggest tech companies out there for decades, including Intel, Apple, and AMD, just to name a few. When, as part of his "are you not entertained?" victory lap, Altman suggested that his vision for the future needs an even larger $8 trillion investment, Keller responded that he could design an AI chip for less than $1 trillion. Does Altman really need several trillion dollars to build a ground-up AI chip at the costs and volumes needed to mainstream AI?

Play at Ultimate Quality With GeForce NOW, Courtesy of GeForce RTX 4080 SuperPOD

GFN keeps its fourth-anniversary celebrations rolling by bringing Ubisoft's Skull and Bones and Microsoft's Halo Infinite to the cloud this week. They're part of five newly supported games, and thanks to the power of the cloud, members can play them at unrivaled quality across nearly any device.

The Ultimate Upgrade, Instantly
When GeForce NOW launched in 2020, members flocked to take advantage of NVIDIA GeForce RTX 20 Series GPU-powered servers and experience real-time ray tracing on low-powered devices. For the first time, high-performance PC gaming was available to all.

Later, members gained access to the Ultimate upgrade, as NVIDIA cloud gaming servers brought GeForce RTX 3080-class power to users across the globe. Now, with the NVIDIA Ada Lovelace GPU architecture, cloud gaming has taken another leap forward, powered by the GeForce RTX 4080 SuperPOD.

Jensen Huang Believes That Every Country Needs Sovereign AI

"Every country needs to own the production of their own intelligence," NVIDIA founder and CEO Jensen Huang told attendees Monday at the World Governments Summit in Dubai. Huang, who spoke as part of a fireside chat with the UAE's Minister of AI, His Excellency Omar Al Olama, described sovereign AI—which emphasizes a country's ownership over its data and the intelligence it produces—as an enormous opportunity for the world's leaders. "It codifies your culture, your society's intelligence, your common sense, your history - you own your own data," Huang told Al Olama during their conversation, a highlight of an event attended by more than 4,000 delegates from 150 countries.

"We completely subscribe to that vision," Al Olama said. "That's why the UAE is moving aggressively on creating large language models and mobilizing compute." Huang's appearance in the UAE comes as the Gulf State is moving rapidly to transform itself from an energy powerhouse into a global information technology hub. Dubai is the latest stop for Huang in a global tour that has included meetings with leaders in Canada, France, India, Japan, Malaysia, Singapore and Vietnam over the past six months. The Middle East is poised to reap significant benefits from AI, with PwC projecting a $320 billion boost to the region's economy by 2030.

MSI & Ubisoft Sail Together, "Skull and Bones" Hardware Bundles Announced

MSI, a prominent gaming hardware brand, is excited to reveal its renewed collaboration with Ubisoft, a distinguished game publisher. Anticipation builds as MSI and Ubisoft join forces to bring gamers an unparalleled experience in the highly anticipated game—Skull and Bones.

Navigate Treacherous Waters, Unleash Pirate Fury!
Embark on a thrilling maritime journey with Skull and Bones, a captivating co-op pirate open world action-RPG experience. Sail through treacherous seas, engage in epic ship battles, and become the legendary pirate you were destined to be. Beware of every wave, for the open world awaits with dangers and opportunities alike. Are you ready to rule the sea?

Let's begin your pirate journey with MSI's advanced technologies!
Embark on an exciting adventure with MSI's special offer—buy selected MSI products and receive a complimentary copy of Skull and Bones. Elevate your gaming rig with MSI's cutting-edge motherboards, PC cases, power supplies, liquid coolers, and graphics cards. Dive into the thrilling world of Skull and Bones for an immersive gaming experience.

NVIDIA DLSS Coming to Skull and Bones, Banishers: Ghosts of New Eden, and Smalland: Survive the Wilds

Last week, DLSS was available at launch in the Stormgate Steam Next Fest demo, and in The Inquisitor. Additionally, DLSS was also available for players of the new Atomic Heart: Trapped In Limbo expansion, and the newly-released Call of Duty: Warzone Season 2. This week, Banishers: Ghosts of New Eden launches with DLSS 3, while Smalland: Survive the Wilds and Skull and Bones launch with DLSS 2.

Banishers: Ghosts of New Eden Launches Today With DLSS 3 & DLAA
Focus Entertainment and DON'T NOD's Banishers: Ghosts of New Eden launches February 13th. In New Eden, 1695, communities of settlers are plagued by a dreadful curse. As Banishers, step into their lives, investigate the source of evil, unravel chilling mysteries, explore diverse landscapes, and interact with unforgettable characters whose fate lies in your hands. Immerse yourself in an intimate narrative Action-RPG, taking you on an exhilarating journey between life, death, love and sacrifices.

NVIDIA GeForce 551.52 WHQL Game Ready Drivers Released

NVIDIA today released the latest version of its GeForce Game Ready software. Version 551.52 WHQL comes with optimization for "Skull and Bones." Among the gaming-related bugs fixed with this release are an intermittent micro-stutter noticed with V-Sync enabled; a stuttering issue with "Red Dead Redemption 2" on some Advanced Optimus notebooks; and stability issues seen with "Immortals of Aveum" over extended gameplay sessions. Non-gaming issues fixed with this release include a stutter observed with some web browsers in certain system configurations.

DOWNLOAD: NVIDIA GeForce 551.52 WHQL

Nintendo Switch 2 Could Retain Backward Compatibility with the First-Gen Console

Reports are circulating online that Nintendo's upcoming successor to the Switch console, tentatively referred to as the "Switch 2," will offer backward compatibility for physical game cards and digital purchases from the current Switch library. While Nintendo has yet to officially announce the new console, speculation points to a potential reveal as early as next month for a 2024 launch. The backward compatibility claims first surfaced last year when Nintendo of America President Doug Bowser hinted at supporting continuity between console generations to minimize the sales decline when transitioning hardware. New momentum behind the rumors comes from gaming industry insiders Felipe Lima and PH Brazil, who stated during recent podcasts that Switch 2 backward compatibility functionality is already being shared with game developers.

Well-known gaming leakers such as "NateTheHate" have corroborated that testing is underway for playing current Switch games on new hardware. If true, this backward compatibility would be a consumer-friendly move that breaks from Nintendo's past tendency of forcing clean breaks between console ecosystems. While details remain unconfirmed by Nintendo, multiple credible sources point to the upcoming Switch successor allowing gamers to carry forward both their physical and digital libraries and continue enjoying this generation's releases. If compatibility is retained, the hardware platform could stay with the same vendor—NVIDIA—which provided Nintendo with the Tegra X1 SoC. The updated SoC could use a fork of NVIDIA's Orin platform based on the Ampere GPU architecture with DLSS, but official details are yet to be seen.

NVIDIA Introduces NVIDIA RTX 2000 Ada Generation GPU

Generative AI is driving change across industries—and to take advantage of its benefits, businesses must select the right hardware to power their workflows. The new NVIDIA RTX 2000 Ada Generation GPU delivers the latest AI, graphics and compute technology to compact workstations, offering up to 1.5x the performance of the previous-generation RTX A2000 12 GB in professional workflows. From crafting stunning 3D environments to streamlining complex design reviews to refining industrial designs, the card's capabilities pave the way for an AI-accelerated future, empowering professionals to achieve more without compromising on performance or capabilities. Modern multi-application workflows, such as AI-powered tools, multi-display setups and high-resolution content, put significant demands on GPU memory. With 16 GB of memory in the RTX 2000 Ada, professionals can tap the latest technologies and tools to work faster and better with their data.

Powered by NVIDIA RTX technology, the new GPU delivers impressive realism in graphics with NVIDIA DLSS, delivering ultra-high-quality, photorealistic ray-traced images more than 3x faster than before. In addition, the RTX 2000 Ada enables an immersive experience for enterprise virtual-reality workflows, such as for product design and engineering design reviews. With its blend of performance, versatility and AI capabilities, the RTX 2000 Ada helps professionals across industries achieve efficiencies. Architects and urban planners can use it to accelerate visualization workflows and structural analysis, enhancing design precision. Product designers and engineers using industrial PCs can iterate rapidly on product designs with fast, photorealistic rendering and AI-powered generative design. Content creators can edit high-resolution videos and images seamlessly, and use AI for realistic visual effects and content creation assistance. And in vital embedded applications and edge computing, the RTX 2000 Ada can power real-time data processing for medical devices, optimize manufacturing processes with predictive maintenance and enable AI-driven intelligence in retail environments.

AMD Develops ROCm-based Solution to Run Unmodified NVIDIA CUDA Binaries on AMD Graphics

AMD has quietly funded an effort over the past two years to enable binary compatibility for NVIDIA CUDA applications on their ROCm stack. This allows CUDA software to run on AMD Radeon GPUs without adapting the source code. The project responsible is ZLUDA, which was initially developed to provide CUDA support on Intel graphics. The developer behind ZLUDA, Andrzej Janik, was contracted by AMD in 2022 to adapt his project for use on Radeon GPUs with HIP/ROCm. He spent two years bringing functional CUDA support to AMD's platform, allowing many real-world CUDA workloads to run without modification. AMD decided not to productize this effort for unknown reasons, but did open-source it once funding ended, per their agreement. Phoronix has tested AMD's ZLUDA implementation across a wide variety of benchmarks.

Benchmarks found that proprietary CUDA renderers and software worked on Radeon GPUs out-of-the-box with the drop-in ZLUDA library replacements. CUDA-optimized Blender 4.0 rendering now runs faster on AMD Radeon GPUs than the native ROCm/HIP port, reducing render times by around 10-20%, depending on the scene. The implementation is surprisingly robust, considering it was a single-developer project. However, there are some limitations—OptiX and PTX assembly code are not yet fully supported. Overall, though, testing showed very promising results. In Geekbench, CUDA-optimized binaries produce up to 75% better results than the generic OpenCL runtimes. With the ZLUDA libraries handling API translation, unmodified CUDA binaries can now run directly on top of ROCm and Radeon GPUs. Strangely, the ZLUDA port targets AMD ROCm 5.7, not the newest 6.x versions. Only time will tell if AMD continues investing in this approach to simplify the porting of CUDA software. However, the open-sourced project now enables anyone to contribute and help improve compatibility. For a complete review, check out Phoronix's tests.
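
To make the mechanism concrete, the sketch below shows the kind of ordinary host-side CUDA runtime code that such a drop-in translation layer targets: the binary links against the CUDA runtime library, and a ZLUDA-style replacement exposes the same entry points while forwarding the calls to HIP/ROCm on a Radeon GPU. This is a hypothetical minimal example for illustration, not code from the ZLUDA project itself.

```cpp
// Plain host-side C++ using the CUDA runtime API; compile and link against
// libcudart as usual. Under a drop-in translation layer, the unmodified
// binary loads a replacement library with the same interface, which maps
// these calls to HIP/ROCm.
#include <cuda_runtime.h>
#include <cstdio>
#include <vector>

int main() {
  cudaDeviceProp prop{};
  if (cudaGetDeviceProperties(&prop, 0) != cudaSuccess) {
    std::fprintf(stderr, "no CUDA (or CUDA-translated) device found\n");
    return 1;
  }
  // On a Radeon GPU behind a translation layer, this reports the AMD device.
  std::printf("device 0: %s, %zu MB\n", prop.name,
              prop.totalGlobalMem >> 20);

  // A round-trip copy through device memory: allocate, upload, download.
  // Nothing here is vendor-specific from the application's point of view.
  std::vector<float> host(1 << 20, 1.0f);
  float* dev = nullptr;
  cudaMalloc(&dev, host.size() * sizeof(float));
  cudaMemcpy(dev, host.data(), host.size() * sizeof(float),
             cudaMemcpyHostToDevice);
  cudaMemcpy(host.data(), dev, host.size() * sizeof(float),
             cudaMemcpyDeviceToHost);
  cudaFree(dev);
  return 0;
}
```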

Widespread GeForce RTX 4080 SUPER Card Shortage Reported in North America

NVIDIA's decision to shave $200 off its GeForce RTX 4080 GPU tier has caused a run on retail since the launch of SUPER variants late last month. VideoCardz has investigated an apparent North American supply shortage. The adjusted $999 base MSRP appears to be an irresistible prospect for discerning US buyers—today's report finds that, "a week after its release," GeForce RTX 4080 SUPER cards "are not available at any major US retailer for online orders." At the time of writing, no $999 models are available to purchase via e-tailers (for delivery)—Best Buy and Micro Center have a smattering of baseline MSRP cards (including the Founders Edition), but for in-store pickup only. Across the pond, AD103 SUPER's supply status is a bit different: "On the other hand, in Europe, the situation appears to be more favorable, with several retailers listing the cards at or near the MSRP of €1109."

The cheapest custom GeForce RTX 4080 SUPER SKU, at $1123, seems to be listed by Amazon.com. Almost all of Newegg's product pages display an "Out of Stock" notice—ZOTAC GAMING's GeForce RTX 4080 SUPER Trinity OC White Edition model is on "back order" for $1049.99, while the only "in stock" option is MSI's GeForce RTX 4080 SUPER Expert card (at $1149.99). VideoCardz notes that GeForce RTX 4070 SUPER and RTX 4070 Ti SUPER models are in plentiful supply, which highlights a big contrast in market conditions across NVIDIA's latest Ada Lovelace families. The report also mentions an ongoing shortage of GeForce RTX 4080 (non-SUPER) cards going back weeks prior to the official January 31 rollout: "Similar to the RTX 4090, finding the RTX 4080 at its $1200 price point has proven challenging." Exact sales figures are not available to media outlets—it is unusual to see official metrics presented a week or two after a product's launch—so we will have to wait a little longer to find out whether demand has far outstripped supply in the USA.

Cisco & NVIDIA Announce Easy to Deploy & Manage Secure AI Solutions for Enterprise

This week, Cisco and NVIDIA have announced plans to deliver AI infrastructure solutions for the data center that are easy to deploy and manage, enabling the massive computing power that enterprises need to succeed in the AI era. "AI is fundamentally changing how we work and live, and history has shown that a shift of this magnitude is going to require enterprises to rethink and re-architect their infrastructures," said Chuck Robbins, Chair and CEO, Cisco. "Strengthening our great partnership with NVIDIA is going to arm enterprises with the technology and the expertise they need to build, deploy, manage, and secure AI solutions at scale." Jensen Huang, founder and CEO of NVIDIA said: "Companies everywhere are racing to transform their businesses with generative AI. Working closely with Cisco, we're making it easier than ever for enterprises to obtain the infrastructure they need to benefit from AI, the most powerful technology force of our lifetime."

A Powerful Partnership
Cisco, with its industry-leading expertise in Ethernet networking and extensive partner ecosystem, together with NVIDIA, the inventor of the GPU that fueled the AI boom, share a vision and commitment to help customers navigate the transitions for AI with highly secure Ethernet-based infrastructure. Cisco and NVIDIA have offered a broad range of integrated product solutions over the past several years across Webex collaboration devices and data center compute environments to enable hybrid workforces with flexible workspaces, AI-powered meetings and virtual desktop infrastructure.

ASRock Dives Into Why it Lacks NVIDIA GeForce Graphics Cards; Doesn't Rule Out Making Them in the Future

ASRock, ODM giant Pegatron's retail channel brand, built its reputation on high cost/performance motherboards, and got into graphics cards rather recently (less than 5 years ago), beginning with AMD Radeon graphics cards before expanding into Intel Arc GPUs. The company has shown with its high-end AMD Radeon cards that it can design complex custom graphics cards with heavy cooling solutions, especially given that AMD Radeon boards tend to have more elaborate power designs than their NVIDIA counterparts. So then, where are the ASRock GeForce RTX graphics cards? Korean tech publication QuasarZone set out to find out from ASRock.

Put simply, ASRock does not rule out making custom-design GeForce RTX graphics cards in the future, but says that getting into that market right now is "challenging." NVIDIA now commands the vast majority of the discrete GPU market, and as such most of the top DIY PC retail channel brands (such as ASUS, MSI, and GIGABYTE) sell both GeForce and Radeon products. They started making GeForce graphics cards decades ago and have built market presence over the years. NVIDIA also has a set of board partners that exclusively sell GeForce—such as PNY, Palit-Gainward, Galax-KFA2, and Colorful—which makes it all the more difficult for ASRock to break in. On the specific question asked by QuasarZone, here was ASRock's answer (machine translated to English by VideoCardz).

Zephyr x VK Valkyrie GeForce RTX 4080 SUPER Revealed

Zephyr and VK Valkyrie have collaborated on a very high-end custom GeForce RTX 4080 SUPER graphics card model, as revealed in a teaser video posted to the former's Bilibili account. VK Valkyrie is a well-regarded DIY brand in the Chinese PC gaming market, while Zephyr is a relatively young manufacturer—their unusual GeForce RTX 3060 Ti Compact ITX design with a pink PCB was introduced last summer. TPU's June 2023 news report is featured prominently within their website's news section—greatly appreciated! The Zephyr x VK Valkyrie GeForce RTX 4080 SUPER will be a limited edition release—the two partners have been working together since last August, but a specific launch date and pricing were not revealed in Zephyr's teaser trailer.

Zephyr has so far only released air-cooled custom graphics cards—the upcoming VK Valkyrie collaborative model will mark its debut entry into liquid-cooled territory. The chunky 3-slot design consists of a substantial heatsink covered by an RGB-adorned silver shroud and metallic backplate, with an AIO liquid cooling solution. A 280 mm radiator (with 2 x 140 mm fans) is hooked up to the card via twin white braided tubes. A rear-firing 12VHPWR connector provides an elegant means of semi-concealing your 90-degree power cable, if need be. The promotional video includes benchmark results generated by the 3DMark Speed Way, Time Spy Extreme, and Fire Strike Ultra suites (check the relevant screenshot below). Zephyr claims that its limited edition GeForce RTX 4080 SUPER did not exceed 52 degrees Celsius during a heavy FurMark session. The company recommends that interested parties check its social media accounts for further announcements. The Zephyr x VK Valkyrie GeForce RTX 4080 SUPER could arrive at some point after the Chinese Spring Festival.

Update Feb 9th: Valkyrie informed us that, for the moment, this collaboration is specific to the Chinese market, but they are discussing internally whether it makes sense to bring the card to the West, too.

NVIDIA GeForce RTX 4070 Ti Drops Down to $699, Matches Radeon RX 7900 XT Price

The NVIDIA GeForce RTX 4070 Ti can now be found for as low as $699, which means it is now selling at the same price as the AMD Radeon RX 7900 XT graphics card. The GeForce RTX 4070 Ti definitely lags behind the Radeon RX 7900 XT and packs less VRAM (12 GB vs. 20 GB), while the faster GeForce RTX 4070 Ti SUPER is selling for around $100 more. The Radeon RX 7900 XT is around 6 to 11 percent faster, depending on the game and the resolution.

The GeForce RTX 4070 Ti card in question comes from MSI: the Ventus 2X OC model is listed over at Newegg.com for $749.99 with a $50-off promotion code. Bear in mind that this is a dual-fan version from MSI, and we are quite sure we'll see similar promotions from other NVIDIA AIC partners.

NVIDIA Releases Hotfix Driver to Fix Stuttering

NVIDIA has released a new GeForce Hotfix Driver Version 551.46 that should fix stuttering issues in some scenarios. According to the release notes, the new hotfix driver fixes micro-stuttering in some games when vertical sync is enabled, as well as stuttering when scrolling in web browsers. It also fixes stuttering issues on Advanced Optimus notebooks when running Red Dead Redemption 2 under the Vulkan API, and stability issues in Immortals of Aveum during extended gameplay.

The new GeForce Hotfix Driver Version 551.46 is based on the latest GeForce WHQL driver, version 551.23. You can download the new GeForce Hotfix Driver Version 551.46 over at NVIDIA's support page.

GeForce NOW Celebrates Four Year Anniversary

The GeForce NOW anniversary celebrations continue with more games and a member-exclusive discount on the Logitech G Cloud. Among the six new titles coming to the cloud this week is The Inquisitor from Kalypso Media, which spotlights the GeForce NOW anniversary with a special shout-out. "Congrats to four years of empowering gamers to play anywhere, anytime," said Marco Nier, head of marketing and public relations at Kalypso Media. "We're thrilled to raise a glass to GeForce NOW for their four-year anniversary and commitment to bringing AAA gaming to gamers—here's to many more chapters in this cloud-gaming adventure!" Stream the dark fantasy adventure from Kalypso Media and more newly supported titles today across a variety of GeForce NOW-capable devices, whether at home, on a gaming rig, TV or Mac, or on the go with handheld streaming.

Gadgets Galore
Gone are the days of only being able to play full PC games on a decked-out gaming rig. GeForce NOW is a cloud gaming service accessible on a range of devices, from PCs and Macs to gaming handhelds, thanks to GeForce RTX-powered servers in the cloud. Dive into the cloud streaming experience with the dedicated GeForce NOW app for Windows and macOS. Even on underpowered PCs, gamers can enjoy stunning visuals and buttery-smooth frame rates streaming at up to 240 frames per second or at ultrawide resolutions for Ultimate members, a cloud-gaming first.

NVIDIA CG100 "Grace" Server Processor Benchmarked by Academics

The Barcelona Supercomputing Center (BSC) and the State University of New York (Stony Brook and Buffalo campuses) have pitted NVIDIA's relatively new CG100 "Grace" Superchip against several rival products in a "wide variety of HPC and AI benchmarks." Team Green marketing material has focused mainly on the overall GH200 "Grace Hopper" package—so it is interesting to see technical institutes concentrate on the company's "first true" server processor (ARM-based), rather than the ever-popular GPU aspect. The Next Platform's article summarized the chip's internal makeup: "(NVIDIA's) Grace CPU has a relatively high core count and a relatively low thermal footprint, and it has banks of low-power DDR5 (LPDDR5) memory—the kind used in laptops but gussied up with error correction to be server class—of sufficient capacity to be useful for HPC systems, which typically have 256 GB or 512 GB per node these days and sometimes less."

Benchmark results were revealed at last week's HPC Asia 2024 conference (in Nagoya, Japan)—Barcelona Supercomputing Center (BSC) and the State University of New York also uploaded their findings to the ACM Digital Library (link #1 & #2). BSC's MareNostrum 5 system contains an experimental cluster portion—consisting of NVIDIA Grace-Grace and Grace-Hopper superchips. We have heard plenty about the latter (in press releases), but the former is a novel concept—as outlined by The Next Platform: "Put two Grace CPUs together into a Grace-Grace superchip, a tightly coupled package using NVLink chip-to-chip interconnects that provide memory coherence across the LPDDR5 memory banks and that consumes only around 500 watts, and it gets plenty interesting for the HPC crowd. That yields a total of 144 Arm Neoverse "Demeter" V2 cores with the Armv9 architecture, and 1 TB of physical memory with 1.1 TB/sec of peak theoretical bandwidth. For some reason, probably relating to yield on the LPDDR5 memory, only 960 GB of that memory capacity and only 1 TB/sec of that memory bandwidth is actually available."
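
For readers curious how figures like the quoted 1.1 TB/sec are typically probed, the sketch below is a minimal STREAM-style "triad" bandwidth test. It is an assumption-laden illustration (array size, OpenMP threading, and single-pass timing are arbitrary choices), not the harness used by BSC or Stony Brook.

```cpp
// STREAM-style triad: a[i] = b[i] + s * c[i]. Compile with OpenMP, e.g.
// g++ -O3 -fopenmp triad.cpp -o triad, and run across all CPU cores.
#include <chrono>
#include <cstdio>
#include <vector>

int main() {
  const std::size_t n = 1u << 27;  // 128M floats, ~512 MiB per array
  std::vector<float> a(n), b(n, 1.0f), c(n, 2.0f);
  const float s = 3.0f;

  auto t0 = std::chrono::steady_clock::now();
  #pragma omp parallel for
  for (std::size_t i = 0; i < n; ++i)
    a[i] = b[i] + s * c[i];  // two reads and one write per element
  auto t1 = std::chrono::steady_clock::now();

  // Bytes moved: three arrays of n floats per pass.
  const double secs = std::chrono::duration<double>(t1 - t0).count();
  const double gbps = 3.0 * n * sizeof(float) / secs / 1e9;
  std::printf("triad bandwidth: %.1f GB/s\n", gbps);
  return 0;
}
```

In practice the loop is repeated several times and the best pass reported, since a single pass is noisy and includes warm-up effects.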

GIGABYTE Highlights its GPU Server Portfolio Ahead of World AI Festival

The World AI Cannes Festival (WAICF) is set to be the epicenter of artificial intelligence innovation, where the globe's top 200 decision-makers and AI innovators will converge for three days of intense discussions on groundbreaking AI strategies and use-cases. Against the backdrop of this premier event, GIGABYTE has strategically chosen to participate, unveiling its exponential growth in the AI and High-Performance Computing (HPC) market segments.

The AI industry has witnessed unprecedented growth, with Cloud Service Providers (CSPs) and data center operators spearheading supercomputing projects. GIGABYTE's decision to promote its GPU server portfolio of more than 70 models at WAICF is a testament to the increasing demand from the French market for sovereign AI cloud solutions. The spotlight will be on GIGABYTE's success stories in enabling GPU cloud infrastructure, seamlessly powered by NVIDIA GPU technologies, as GIGABYTE aims to engage in meaningful conversations with end-users and firms dependent on GPU computing.

Huawei Reportedly Prioritizing Ascend AI GPU Production

Huawei's Ascend 910B AI GPU is reportedly in high demand in China—we last learned that NVIDIA's latest US sanction-busting H20 "Hopper" model is lined up as a main competitor, allegedly in terms of both pricing and performance. A recent Reuters report proposes that Huawei is reacting to native enterprise market trends by shifting its production priorities—in favor of Ascend product ranges, while demoting their Kirin smartphone chipset family. Generative AI industry experts believe that the likes of Alibaba and Tencent have rejected Team Green's latest batch of re-jigged AI chips (H20, L20 and L2)—tastes have gradually shifted to locally developed alternatives.

Huawei leadership is seemingly keen to seize these growth opportunities—their Ascend 910B is supposedly ideal for workloads "that require low-to-mid inferencing power." Reuters has spoken to three anonymous sources—all with insider knowledge of goings-on at a single facility that manufactures Ascend AI chips and Kirin smartphone SoCs. Two of the leakers claim that this unnamed fabrication location faces many "production quality" challenges, namely output being "hamstrung by a low yield rate." The report claims that Huawei has pivoted by deprioritizing Kirin 9000S (7 nm) production, thus creating a knock-on effect for its premium Mate 60 smartphone range.

AMD Radeon RX 7900 XT Now $100 Cheaper Than GeForce RTX 4070 Ti SUPER

Prices of the AMD Radeon RX 7900 XT graphics card have hit new lows, with a Sapphire custom-design card selling for $699 with a coupon discount on Newegg. This puts it a whole $100 (12.5%) cheaper than the recently announced NVIDIA GeForce RTX 4070 Ti SUPER. The most interesting part of the story is that the RX 7900 XT is technically from a segment above. Originally launched at $900, the RX 7900 XT is recommended by AMD for 4K Ultra HD gaming with ray tracing, while the RTX 4070 Ti SUPER is officially recommended by NVIDIA for maxed-out gaming with ray tracing at 1440p, although throughout our testing, we found the card to be capable of 4K Ultra HD gaming.

The Radeon RX 7900 XT offers about the same performance as the RTX 4070 Ti SUPER, averaging 1% higher in our testing at the 4K Ultra HD resolution. At 1440p, the official stomping ground of the RTX 4070 Ti SUPER, the RX 7900 XT comes out 2% faster. These are, of course, pure raster 3D workloads. In our testing with ray tracing enabled, the RTX 4070 Ti SUPER storms past the RX 7900 XT, posting 23% higher performance at 4K Ultra HD and 21% higher at 1440p.