News Posts matching #NVIDIA


Acer Launches New Nitro 14 and Nitro 16 Gaming Laptops Powered by AMD Ryzen 8040 Series Processors

Acer today announced the new Nitro 14 and Nitro 16 gaming laptops, powered by AMD Ryzen 8040 Series processors with Ryzen AI. With up to NVIDIA GeForce RTX 4060 Laptop GPUs supported by DLSS 3.5 technology, both are backed by NVIDIA's RTX AI platform, providing an array of AI-enhanced capabilities in over 500 games and applications. Gamers are immersed in 14- and 16-inch NVIDIA G-SYNC compatible panels with up to WQXGA (2560x1600) resolution.

Whether on a call or streaming in-game, Acer PurifiedVoice 2.0 harnesses the power of AI to block out external noises, while Acer PurifiedView keeps users front and center of all the action. Microsoft Copilot in Windows (with a dedicated Copilot key) helps accelerate everyday tasks on these AI laptops, and with one month of Xbox Game Pass Ultimate included with every device, players will enjoy hundreds of high-quality PC games. To seamlessly take command of device performance and customizations, one click of the NitroSense key directs users to the control center and the library of available AI-related functions through the new Experience Zone.

U.S. Updates Advanced Semiconductor Ban, Actual Impact on the Industry Will Be Insignificant

On March 29th, the United States announced another round of updates to its export controls, targeting advanced computing, supercomputers, semiconductor end-uses, and semiconductor manufacturing products. These new regulations, which took effect on April 4th, are designed to prevent certain countries and businesses from circumventing U.S. restrictions to access sensitive chip technologies and equipment. Despite these tighter controls, TrendForce believes the practical impact on the industry will be minimal.

The latest updates aim to refine the language and parameters of previous regulations, tightening the criteria for exports to Macau and D:5 countries (China, North Korea, Russia, Iran, etc.). They require a detailed examination of all technology products' Total Processing Performance (TPP) and Performance Density (PD). If a product exceeds certain computing power thresholds, it must undergo a case-by-case review. Nevertheless, a new provision, Advanced Computing Authorized (ACA), allows for specific exports and re-exports among selected countries, including the transshipment of particular products between Macau and D:5 countries.
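As a rough illustration of the screening logic described above, the sketch below computes TPP and PD for a hypothetical accelerator and flags it for case-by-case review. The formulas follow the published definitions, but the threshold values and the example chip are placeholder assumptions, not the legal parameters from the actual regulations.

```python
# Illustrative sketch of the export-control screening described above.
# Threshold values below are PLACEHOLDERS, not the legal parameters.

def total_processing_performance(mac_tops: float, bit_length: int) -> float:
    """TPP per the published definition: 2 x MacTOPS x operand bit length."""
    return 2 * mac_tops * bit_length

def performance_density(tpp: float, die_area_mm2: float) -> float:
    """PD is TPP divided by the applicable die area in mm^2."""
    return tpp / die_area_mm2

def needs_case_by_case_review(tpp: float, pd: float,
                              tpp_limit: float = 4800,       # placeholder threshold
                              pd_limit: float = 5.92) -> bool:  # placeholder threshold
    # A product exceeding either computing-power criterion triggers review.
    return tpp >= tpp_limit or pd >= pd_limit

# Hypothetical accelerator: 660 dense INT8 MacTOPS on an 814 mm^2 die.
tpp = total_processing_performance(mac_tops=660, bit_length=8)  # 10560
pd = performance_density(tpp, die_area_mm2=814)                 # ~12.97
print(tpp, round(pd, 2), needs_case_by_case_review(tpp, pd))
```

The exact thresholds and their combinations differ across ECCN entries; the sketch only shows the shape of the review check.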

AIO Workstation Combines 128-Core Arm Processor and Four NVIDIA GPUs Totaling 28,416 CUDA Cores

All-in-one computers are traditionally seen as lower-powered alternatives to desktop workstations. However, a new offering from Alafia AI, a startup focused on medical imaging appliances, aims to shatter that perception. The company's upcoming Alafia Aivas SuperWorkstation packs serious hardware muscle, demonstrating that all-in-one systems can match the performance of their more modular counterparts. At the heart of the Aivas SuperWorkstation lies a 128-core Ampere Altra processor running at 3.0 GHz. This CPU is complemented by not one but three NVIDIA L4 GPUs for compute, and a single NVIDIA RTX 4000 Ada GPU for video output, delivering a combined 28,416 CUDA cores for accelerated parallel computing tasks. The system doesn't skimp on other components, either. It features a 4K touch display with up to 360 nits of brightness, an extensive 2 TB of DDR4 RAM, and storage options up to an 8 TB solid-state drive. This combination of cutting-edge CPU, GPU, memory, and storage is squarely aimed at the demands of medical imaging and AI development workloads.
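The combined CUDA core figure can be checked against NVIDIA's published per-card counts (7,424 cores for the L4 and 6,144 for the RTX 4000 Ada):

```python
# Verifying the combined CUDA core count quoted for the Aivas SuperWorkstation.
l4_cores = 7424           # NVIDIA L4 (Ada Lovelace), published core count
rtx4000_ada_cores = 6144  # NVIDIA RTX 4000 Ada, published core count
total = 3 * l4_cores + rtx4000_ada_cores
print(total)  # 28416, matching the figure in the article
```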

The all-in-one form factor packs this incredible hardware into a sleek, purposefully designed clinical research appliance. While initially targeting software developers, Alafia AI hopes that institutions that can optimize their applications for the Arm architecture can eventually deploy the Aivas SuperWorkstation for production medical imaging workloads. The company is aiming for application integration in Q3 2024 and full ecosystem device integration by Q4 2024. With this powerful new offering, Alafia AI is challenging long-held assumptions about the performance limitations of all-in-one systems. The Aivas SuperWorkstation demonstrates that the right hardware choices can transform these compact form factors into true powerhouse workstations. With three NVIDIA L4 compute GPUs alongside an RTX 4000 Ada graphics card, the AIO outmuscles some high-end desktop workstations.

NVIDIA Releases DLSS 3.7.0 With Quality E Preset for Image Quality Improvements

Yesterday, NVIDIA released version 3.7.0 of its Deep Learning Super Sampling (DLSS) technology. The newest version promises to improve image quality. Among the most notable additions is the new default "E" quality preset. This builds upon previous DLSS versions but introduces noticeably sharper images, generally improved fine detail stability, reduced ghosting, and better overall temporal stability compared to DLSS 3.5. It has been tested with Cyberpunk 2077 in a YouTube video comparing DLSS 3.5.10, DLSS 3.6.0, and the newest DLSS 3.7.0. Additionally, some Reddit users reported seeing a noticeable difference in Horizon Forbidden West at 1440p.

Generally, DLSS 3.7.0 can serve as a drop-in replacement for older DLSS versions. Using DLSSTweaks, or even manually, users can patch in the latest DLSS 3.7.0 DLL and force it into games that didn't ship with, or haven't been updated to, the latest version. We host the latest DLL in our Downloads section on TechPowerUp, so users can install DLSSTweaks and grab the desired file version from our website.
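For readers who prefer the manual route, a minimal sketch of the swap might look like the following. The nvngx_dlss.dll file name is the standard DLSS Super Resolution library, but the directory paths are hypothetical examples; the actual location varies per game.

```python
# Minimal sketch of a manual DLSS DLL swap: back up the game's bundled
# library, then drop in the newer nvngx_dlss.dll. Paths are examples only.
import shutil
from pathlib import Path

def swap_dlss_dll(game_dir: str, new_dll: str) -> Path:
    game_dll = Path(game_dir) / "nvngx_dlss.dll"  # standard DLSS SR DLL name
    if game_dll.exists():
        # Keep the original so the swap can be reverted if the game misbehaves.
        shutil.copy2(game_dll, game_dll.with_suffix(".dll.bak"))
    shutil.copy2(new_dll, game_dll)
    return game_dll

# Example (hypothetical paths):
# swap_dlss_dll(r"C:\Games\SomeGame", r"C:\Downloads\nvngx_dlss_3.7.0.dll")
```

Tools like DLSSTweaks automate this and can additionally force preset selection per game.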

Grab the latest DLSS 3.7.0 DLL file here.

NVIDIA DLSS Support Coming to More Games, Including Call of Duty Season 3 and Midnight Ghost Hunt

Over 500 games and applications feature RTX technologies, and barely a week goes by without an incredible new game integrating NVIDIA DLSS, NVIDIA Reflex, and advanced ray-traced effects to deliver the definitive PC experience for GeForce RTX players. Last week, Outpost: Infinity Siege launched with DLSS 3, Diablo IV added ray-traced effects, and Alone In The Dark and Lightyear Frontier launched with DLSS 2. This week, we're highlighting the start of Season 3 in Call of Duty: Modern Warfare III and Call of Duty: Warzone, the 1.0 launch of Midnight Ghost Hunt, and Tchia availability on Steam, all enhanced by NVIDIA DLSS. Read on for all the details.

Call of Duty Season 3 Available Now: Enhance Your Experience With DLSS 3, DLAA & Reflex
Call of Duty: Modern Warfare III and Call of Duty: Warzone players can drop into Season 3, available now. Headlining the season is the much-anticipated return of Rebirth Island in Call of Duty: Warzone, including all-new gameplay features that will change the dynamics of your experience on this fan-favorite map. In Modern Warfare III Multiplayer, jump into an unprecedented number of maps and modes that will surely deliver both new and familiar combat experiences, and stay tuned for a massive in-season Zombies update that includes a brand-new Dark Aether Rift and a showdown with the Dark Aether entity face-to-face.

Razer Unveils the New Razer Blade 18 Gaming Laptop

Razer, the leading global lifestyle brand for gamers, today unveiled the Razer Blade 18 (2024) - the pinnacle of high-performance gaming laptops. Building on the success of the award-winning Blade 16 and Blade 14 unveiled at CES 2024, the Blade 18 sets a new benchmark for gaming and creative work with its unrivaled desktop-grade performance, housed within a sleek anodized aluminium unibody form factor.

"The Blade 18 embodies Razer's relentless drive to push the boundaries of what a gaming laptop can be, with a host of world's first innovations," says Travis Furst, Head of Notebook & Accessories Division at Razer. "We are excited to deliver the Blade 18 as it ushers in a new era of Razer products equipped with Thunderbolt 5, including upcoming cutting-edge Thunderbolt accessories. With these innovations, the Blade 18 blurs the line between desktop performance and mobile convenience, ensuring our users are equipped with nothing but the absolute best to dominate in gaming and creativity."

NVIDIA GeForce NOW Gets The Elder Scrolls Online and More Games in April

Rain or shine, a new month means new games. GeForce NOW kicks off April with nearly 20 new games, seven of which are available to play this week. GFN Thursday celebrates the 10-year anniversary of ZeniMax Online Studios' Elder Scrolls Online by bringing the award-winning online role-playing game (RPG) to the cloud this week. Plus, the GeForce NOW Ultimate membership comes to gamers in Japan for the first time, with new GeForce RTX 4080 SuperPODs online today.

The Rising Sun Goes Ultimate
GeForce NOW is rolling out the green carpet to gamers in Japan, expanding next-generation cloud gaming worldwide. The Ultimate membership tier is now available to gamers in the region, delivering up to 4K gaming at up to 120 frames per second, all at ultra-low latency - even on devices without the latest hardware. Gamers in Japan can now stream triple-A titles from some of the world's largest publishers. Capcom's Street Fighter 6 and Resident Evil Village will be coming to GeForce NOW at a later date for members to stream at the highest performance. GeForce NOW will operate in Japan alongside GeForce NOW Alliance partner and telecommunications company KDDI, which currently offers its customers access to GeForce RTX 3080-powered servers, in addition to its mobile benefits. Plus, new GFNA partners in other regions will be announced this year - stay tuned to GFN Thursdays for details.

NVIDIA GeForce 552.12 WHQL Game Ready Drivers Released

NVIDIA today released the latest version of its GeForce Game Ready drivers. Version 552.12 WHQL comes with optimization for the Call of Duty: Warzone Season 3 and Call of Duty: Modern Warfare III Season 3 updates, including support for DLSS 3 Frame Generation and Reflex. Next up, the new drivers add optimization for Diablo IV's ray tracing update. GeForce Experience and NVIDIA App optimal settings are added for nine new game titles. NVIDIA issued just one fix with this driver release: Horizon Forbidden West had an improper Resizable BAR implementation, which this release corrects on compatible RTX 30-series and RTX 40-series GPUs. Grab the driver from the link below.

DOWNLOAD: NVIDIA GeForce 552.12 WHQL

MSI Intros GeForce RTX 4060 Gaming Duke Graphics Card as a China-exclusive

MSI brought back its Gaming Duke line of graphics cards with a GeForce RTX 4060 model meant for the Chinese market. The Duke line has historically held the bottom end of the company's custom-design product stack, and the same appears to be the case with the new card. The RTX 4060 Gaming Duke looks long and premium, thanks to its 2-slot, triple-fan cooling solution that uses a slender aluminium fin-stack heatsink. MSI has given this cooler its webbed TorX fans, which guide all their airflow axially. There are some decent aesthetic touches, such as a carbon fiber-print cooler shroud and a metal backplate. There is no RGB lighting on offer, except a white illuminated MSI logo on the top. The card sticks to NVIDIA reference clock speeds of 2460 MHz boost and 17 Gbps (GDDR6-effective) memory.

Apple M3 Ultra Chip Could be a Monolithic Design Without UltraFusion Interconnect

As Apple rolls out generational updates to its M series of chips, the yet-to-be-announced top-of-the-line M3 Ultra is drawing speculation from industry insiders. The latest round of reports suggests that the M3 Ultra might step away from its predecessor's design, potentially adopting a monolithic architecture without the UltraFusion interconnect technology. In the past, Apple has relied on a dual-chip design for its Ultra variants, using the UltraFusion interconnect to combine two M series Max chips. For example, the second-generation M2 Ultra boasts 134 billion transistors across two 510 mm² chips. However, die-shots of the M3 Max have sparked discussions about the absence of dedicated chip space for the UltraFusion interconnect.

While the absence of visible interconnect space on early die-shots is not conclusive evidence (the M1 Max showed no visible UltraFusion interconnect yet still formed half of the M1 Ultra), industry observers have speculated that the M3 Ultra may indeed feature a monolithic design. Considering that the M3 Max has 92 billion transistors and an estimated die size between 600 and 700 mm², going Ultra with these chips may push manufacturing limits. Given the maximum die size of 848 mm² for the TSMC N3B process used by Apple, there may not be sufficient space for a dual-chip M3 Ultra design. The potential shift to a monolithic design raises questions about how Apple will scale the chip's performance without the UltraFusion interconnect. Competing solutions, such as NVIDIA's Blackwell GPU, use a high-bandwidth chip-to-chip (C2C) interface to connect two 104 billion transistor chips, achieving a bandwidth of 10 TB/s. In comparison, the M2 Ultra's UltraFusion interconnect provided a bandwidth of 2.5 TB/s.
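The die-size arithmetic behind that speculation is straightforward: a single die carrying two M3 Max's worth of logic would far exceed the cited N3B reticle limit, so a monolithic M3 Ultra could not simply be "two Maxes on one die."

```python
# Back-of-the-envelope check of the reticle-limit argument above.
reticle_limit_mm2 = 848          # maximum die size cited for TSMC N3B
m3_max_low, m3_max_high = 600, 700  # estimated M3 Max die-size range (mm^2)

print(2 * m3_max_low, 2 * m3_max_high)  # 1200 1400, both well above 848
# Even the low-end estimate exceeds the reticle limit, so a monolithic
# M3 Ultra would have to be a substantially reworked, denser design.
```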

US Government Wants Nuclear Plants to Offload AI Data Center Expansion

The expansion of AI technology affects not only the production and demand for graphics cards but also the electricity grid that powers them. Data centers hosting thousands of GPUs are becoming more common, and the industry has been building new facilities for GPU-enhanced servers to serve the need for more AI. However, these powerful GPUs often consume over 500 watts per card, and NVIDIA's latest Blackwell B200 GPU has a TGP of 1,000 watts, a full kilowatt. These kilowatt GPUs will populate data centers with tens of thousands of cards, resulting in multi-megawatt facilities. To combat the load on the national electricity grid, US President Joe Biden's administration has been discussing with big tech a re-evaluation of power sources, possibly using smaller nuclear plants. In an Axios interview, Energy Secretary Jennifer Granholm noted that "AI itself isn't a problem because AI could help to solve the problem." The problem is instead the load on the national electricity grid, which can't sustain the rapid expansion of AI data centers.

The Department of Energy (DOE) has reportedly been talking with firms, most notably hyperscalers like Microsoft, Google, and Amazon, about considering nuclear fission and fusion power plants to satisfy the need for AI expansion. We have already discussed Microsoft's plan to embed a nuclear reactor near one of its data center facilities to help manage the load of thousands of GPUs running AI training and inference. This time, however, it is not just Microsoft; other tech giants are reportedly considering nuclear as well, as they all need to offload their AI expansion from the US national power grid. Nuclear power supplies a mere 20% of US electricity, and the DOE is currently financing the restoration and resumption of service of the 800 MW Holtec Palisades nuclear generating station with $1.52 billion in funds. Microsoft is investing in a Small Modular Reactor (SMR) microreactor energy strategy, which could be an example for other big tech companies to follow.

ASUS ROG Strix GeForce RTX 4090 D Tweaked to Match RTX 4090 FE Performance

NVIDIA's GeForce RTX 4090 D GPU was launched late last year in China—this weakened variant (of the standard RTX 4090) was designed with US trade regulations in mind. Chinese media outlets have toyed around with various custom models for several months—January 2024 evaluations indicated a 5% performance disadvantage when lined up against unrestricted models. The GeForce RTX 4090 D GPU is a potent beast despite a reduced core count and restricted TDP limit, but Chinese enthusiasts have continued to struggle with the implementation of worthwhile overclocks. HKEPC—a Hong Kong-situated PC hardware review outlet—has bucked that trend.

The mega-sized flagship ZOTAC RTX 4090 D PGF model has the technical credentials to break beyond the expected overclock increase of "2 to 5%," courtesy of a powerful 28-phase power PCB design and a 530 W maximum TGP limit. The Expreview team pulled a paltry 3.7% of extra performance from ZOTAC China's behemoth. In contrast, HKEPC wrangled out bigger numbers with a sampled ASUS ROG Strix RTX 4090 D GAMING OC graphics card, matching unrestricted variants: "it turns out that NVIDIA only does not allow AIC manufacturers to preset overclocking, but it does not restrict users from overclocking by themselves. After a high degree of overclocking adjustment, the ROG Strix RTX 4090 D actually has a way to achieve the performance level of the RTX 4090 FE."

Latest Dragon's Dogma 2 Update Improves DLSS Quality, Fixes Bugs, and More

Capcom has released the newest update for Dragon's Dogma 2 on PC and PlayStation 5, while Xbox will get it in the next few days. On PC, the update improves image quality when DLSS Super Resolution is enabled and fixes various bugs. The Dragon's Dogma 2 developers had earlier announced an investigation into PC performance and stability issues, specifically frame rate, various crashes and bugs, and the option to start a new game, which this update addresses.

According to the release notes released by Capcom, PC-specific updates include improved quality when NVIDIA DLSS Super Resolution is enabled and a fix for an issue related to the display of models under some specific settings. On both PlayStation 5 and PC, the update adds the option to start a new game when save data already exists, changes the number of "Art of Metamorphosis" items available in Pawn Shops to 99, makes the "dwelling quest" available earlier in the game, and fixes various bugs and text display issues. On PlayStation 5, the update also adds options to switch Motion Blur and Ray Tracing on or off, as well as to set the maximum frame rate to 30 FPS. Capcom notes that the motion blur and ray tracing options should not affect frame rate significantly, and that improvements to frame rate are planned for future updates. The release notes also say that updates to the Xbox Series X/S version of the game are planned in the next few days.

PGL Investigating GeForce RTX 4080 GPU Driver Crash, Following Esports Event Disruption

The Professional Gamers League (PGL) showcased its newly upgraded tournament rig specification prior to the kick-off of their (still ongoing) CS2 Major Copenhagen 2024 esports event. As reported, over a week ago, competitors have been treated to modern systems decked out with AMD's popular gaming-oriented Ryzen 7 7800X3D CPU and NVIDIA GeForce RTX 4080 graphics cards, while BenQ's ZOWIE XL2566K 24.5" 360 Hz gaming monitor delivers a superfast visual feed. A hefty chunk of change has been spent on new hardware, but expensive cutting-edge tech can falter. Virtus.pro team member—Jame—experienced a major software crash during a match against rival group, G2.

PCGamesN noted that this frustrating incident ended the affected team's chance to grab a substantial cash reward. Their report put a spotlight on this unfortunate moment: "in the second round of a best of three, Virtus Pro were a few rounds away from qualifying for the playoffs, only for their aspirations to be squashed through no fault of their own...Jame experiences a graphics card driver crash that irrecoverably steers the round in G2's favor, culminating in Virtus Pro losing the match 11-13. Virtus Pro would then go on to lose the subsequent tie-break match as the round was not replayed. In effect, the graphics card driver crash partly cost the team their chance at winning an eventual $1.25 million prize pool." PGL revealed, via a social media post, that officials are doing some detective work: "we wish to clarify the situation involving Jame during the second map, Inferno, in the series against G2. A technical malfunction occurred due to an NVIDIA driver crash, resulting in a game crash. We are continuing our investigation into the matter." The new tournament rigs were "meticulously optimized" and tested in the weeks leading up to CS2 Major Copenhagen 2024—it is believed that the driver crash was a random anomaly. PGL and NVIDIA are currently working on a way to "identify and fix the issue."

Lenovo Anticipates Great Demand for AMD Instinct MI300X Accelerator Products

Ryan McCurdy, President of Lenovo North America, revealed an ambitious, forward-thinking product roadmap during an interview with CRN magazine. A hybrid strategic approach will create an anticipated AI fast lane on future hardware—McCurdy, a former Intel veteran, stated: "there will be a steady stream of product development to add (AI PC) hardware capabilities in a chicken-and-egg scenario for the OS and for the (independent software vendor) community to develop their latest AI capabilities on top of that hardware...So we are really paving the AI autobahn from a hardware perspective so that we can get the AI software cars to go faster on them." Lenovo—as expected—is jumping on the AI-on-device train, but it will be diversifying its range of AI server systems with new AMD and Intel-powered options. The company has reacted to recent Team Green AI GPU supply issues—alternative units are now in the picture: "with NVIDIA, I think there's obviously lead times associated with it, and there's some end customer identification, to make sure that the products are going to certain identified end customers. As we showcased at Tech World with NVIDIA on stage, AMD on stage, Intel on stage and Microsoft on stage, those industry partnerships are critical to not only how we operate on a tactical supply chain question but also on a strategic what's our value proposition."

McCurdy did not go into detail about upcoming Intel-based server equipment, but seemed excited about AMD's Instinct MI300X accelerator—Lenovo was (previously) announced as one of the early OEM takers of Team Red's latest CDNA 3.0 tech. CRN asked about the firm's outlook for upcoming MI300X-based inventory—McCurdy responded with: "I won't comment on an unreleased product, but the partnership I think illustrates the larger point, which is the industry is looking for a broad array of options. Obviously, when you have any sort of lead times, especially six-month, nine-month and 12-month lead times, there is interest in this incredible technology to be more broadly available. I think you could say in a very generic sense, demand is as high as we've ever seen for the product. And then it comes down to getting the infrastructure launched, getting testing done, and getting workloads validated, and all that work is underway. So I think there is a very hungry end customer-partner user base when it comes to alternatives and a more broad, diverse set of solutions."

GeForce NOW Thursday: Get Cozy With "Palia" & Five New Titles

Ease into spring with the warm, cozy vibes of Palia, coming to the cloud this GFN Thursday. It's part of six new titles joining the GeForce NOW library of over 1,800 games.

Welcome Home
Escape to a cozy world with Palia, a free-to-play massively multiplayer online game from Singularity 6 Corporation. The game, which has made its way onto more than 200,000 wishlists on Steam, has launched in the cloud this week.

Farm, fish, craft and explore with friendly villagers across a stunning variety of different biomes—from sprawling flower fields to hilly forests and rocky beaches—in the world of Palia. Inhabit the land, furnish a dream home, unravel ancient mysteries and interact with a vibrant online community. Get ready for a captivating adventure across devices by streaming Palia from the cloud. GeForce NOW Ultimate and Priority members get faster access to servers and longer gaming sessions over Free members.

NVIDIA Hopper Leaps Ahead in Generative AI at MLPerf

It's official: NVIDIA delivered the world's fastest platform in industry-standard tests for inference on generative AI. In the latest MLPerf benchmarks, NVIDIA TensorRT-LLM—software that speeds and simplifies the complex job of inference on large language models—boosted the performance of NVIDIA Hopper architecture GPUs on the GPT-J LLM nearly 3x over their results just six months ago. The dramatic speedup demonstrates the power of NVIDIA's full-stack platform of chips, systems and software to handle the demanding requirements of running generative AI. Leading companies are using TensorRT-LLM to optimize their models. And NVIDIA NIM—a set of inference microservices that includes inferencing engines like TensorRT-LLM—makes it easier than ever for businesses to deploy NVIDIA's inference platform.

Raising the Bar in Generative AI
TensorRT-LLM running on NVIDIA H200 Tensor Core GPUs—the latest, memory-enhanced Hopper GPUs—delivered the fastest performance running inference in MLPerf's biggest test of generative AI to date. The new benchmark uses the largest version of Llama 2, a state-of-the-art large language model packing 70 billion parameters. The model is more than 10x larger than the GPT-J LLM first used in the September benchmarks. The memory-enhanced H200 GPUs, in their MLPerf debut, used TensorRT-LLM to produce up to 31,000 tokens/second, a record on MLPerf's Llama 2 benchmark. The H200 GPU results include up to 14% gains from a custom thermal solution. It's one example of innovations beyond standard air cooling that systems builders are applying to their NVIDIA MGX designs to take the performance of Hopper GPUs to new heights.

NVIDIA GeForce RTX 4060 Slides Down to $279

With competition in the performance segment of graphics cards heating up, the GeForce RTX 4060 "Ada" finds itself embattled at its $299 price point, with the Radeon RX 7600 XT at $325, the RX 7600 (non-XT) down to $250. This has prompted a retailer-level price-cut for a Zotac-branded RTX 4060 graphics card. The Zotac RTX 4060 Twin Edge OC White is listed on Newegg for $279, which puts it $20 below the NVIDIA MSRP. The RTX 4060 is squarely a 1080p-class GPU, designed for AAA gameplay with maxed out settings, and ray tracing. The one ace the RTX 4060 wields over similarly-priced GPUs from the previous generation has to be DLSS 3 Frame Generation. Our most recent testing puts the RX 7600 within 2% of the RTX 4060 at 1080p raster workloads, although the ray tracing performance of the RTX 4060 is significantly ahead, by around 16%.

Outpost: Infinity Siege Launches With DLSS 3 & New DLSS 2 Games Out Now

Over 500 games and applications feature RTX technologies, and barely a week goes by without new blockbuster games and incredible indie releases integrating NVIDIA DLSS, NVIDIA Reflex, and advanced ray-traced effects to deliver the definitive PC experience for GeForce RTX gamers.

This week, we're highlighting the DLSS 3-accelerated release of Outpost: Infinity Siege, and the launch of Alone In The Dark and Lightyear Frontier, which both feature DLSS 2. This batch of great new RTX releases follows Horizon Forbidden West Complete Edition, which boasted day-one support for NVIDIA DLSS 3, NVIDIA DLAA, and NVIDIA Reflex. Additionally, Diablo IV's ray tracing update is out now—learn more about each new announcement below.

CyberpowerPC Releases New Tracer VIII Gaming Laptops

CyberPowerPC, a leading name in the gaming PC industry, today announced its latest gaming laptops, the Tracer VIII Series. These gaming laptops come in three models with different feature sets for all levels of gaming. The first features a crisp 17.4" WQXGA (2560x1600) 240 Hz screen complemented by 14th Gen Intel mobile processors and NVIDIA GeForce RTX 40 Series graphics. Its mechanical keyboard with RGB backlighting makes for a comfortable gaming and typing experience, and it comes ready for an optional detachable liquid cooler.

The second features a clear 16" WQXGA (2560x1600) 240 Hz 100% sRGB display with 14th Gen Intel Core mobile processors and NVIDIA GeForce RTX 40 Series graphics, plus a mechanical RGB-backlit keyboard for a pleasant tactile typing experience. The third model gives you the best of the Edge and Gaming models, such as 14th Gen Intel mobile processors and NVIDIA GeForce RTX 40 Series graphics, in a smaller, thinner package. A minimalistic design keeps things lightweight and portable, and a 180-degree hinge allows the laptop to lay completely flat for ultimate viewing flexibility. The charger also gets a makeover, with a 240 W ultra-slim adapter for better charging on the go. It is available with 15.3" WQXGA (2560x1600) 120 Hz or 16" WQXGA (2560x1600) 165 Hz 100% sRGB displays.

NVIDIA Modulus & Omniverse Drive Physics-informed Models and Simulations

A manufacturing plant near Hsinchu, Taiwan's Silicon Valley, is among facilities worldwide boosting energy efficiency with AI-enabled digital twins. A virtual model can help streamline operations, maximizing throughput for its physical counterpart, say engineers at Wistron, a global designer and manufacturer of computers and electronics systems. In the first of several use cases, the company built a digital copy of a room where NVIDIA DGX systems undergo thermal stress tests (pictured above). Early results were impressive.

Making Smart Simulations
Using NVIDIA Modulus, a framework for building AI models that understand the laws of physics, Wistron created digital twins that let them accurately predict the airflow and temperature in test facilities that must remain between 27 and 32 degrees C. A simulation that would've taken nearly 15 hours with traditional methods on a CPU took just 3.3 seconds on an NVIDIA GPU running inference with an AI model developed using Modulus, a whopping 15,000x speedup. The results were fed into tools and applications built by Wistron developers with NVIDIA Omniverse, a platform for creating 3D workflows and applications based on OpenUSD.
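The quoted speedup is easy to sanity-check from the two runtimes given in the article:

```python
# Sanity-checking the simulation speedup: ~15 hours on CPU vs 3.3 s on GPU.
cpu_seconds = 15 * 3600   # "nearly 15 hours" expressed in seconds
gpu_seconds = 3.3
speedup = cpu_seconds / gpu_seconds
print(round(speedup))  # ~16364; consistent with the quoted "15,000x" once
                       # the CPU run is a bit under a full 15 hours
```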

Product Pages of Samsung 28 Gbps and 32 Gbps GDDR7 Chips Go Live

Samsung is ready with a GDDR7 memory chip rated at an oddly specific 28 Gbps. This speed aligns with the reported default memory speeds of next-generation NVIDIA GeForce RTX "Blackwell" GPUs. The Samsung GDDR7 memory chip bearing model number K4VAF325ZC-SC28, pictured below, ticks at 3500 MHz, yielding 28 Gbps (GDDR7-effective) memory speeds, and comes in a density of 16 Gbit (2 GB). This isn't Samsung's only GDDR7 chip at launch; the company also has a 32 Gbps high-performance part built in hopes that certain high-end SKUs or professional graphics cards may implement it. The 32 Gbps chip, bearing model number K4VAF325ZC-SC32, offers the same 16 Gbit density at a higher 4000 MHz clock. Samsung's part-identification pages list both chips as sampling to customers, the stage usually just before mass production, and mark them as "shipping."
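The listed clocks and data rates imply eight data transfers per clock per pin; note that this multiplier is inferred from the two part listings above, not taken from a Samsung datasheet:

```python
# Clock-to-data-rate relationship implied by the two GDDR7 part listings.
def gddr7_effective_gbps(clock_mhz: int) -> float:
    # 8 transfers per clock per pin, inferred from 3500 MHz -> 28 Gbps
    return clock_mhz * 8 / 1000

print(gddr7_effective_gbps(3500))  # 28.0 -> the K4VAF325ZC-SC28 part
print(gddr7_effective_gbps(4000))  # 32.0 -> the K4VAF325ZC-SC32 part
```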

Tiny Corp. Prepping Separate AMD & NVIDIA GPU-based AI Compute Systems

George Hotz and his startup operation (Tiny Corporation) appeared ready to completely abandon AMD Radeon GPUs last week, after experiencing a period of firmware-related headaches. The original plan involved the development of a pre-orderable $15,000 TinyBox AI compute cluster that housed six XFX Speedster MERC310 RX 7900 XTX graphics cards, but software/driver issues prompted experimentation via alternative hardware routes. A lot of media coverage has focused on the unusual adoption of consumer-grade GPUs—Tiny Corp.'s struggles with RDNA 3 (rather than CDNA 3) were maneuvered further into public view, after top AMD brass pitched in.

The startup's social media feed is very transparent about showcasing everyday tasks, problem-solving and important decision-making. Several Acer Predator BiFrost Arc A770 OC cards were purchased and promptly integrated into a colorfully-lit TinyBox prototype, but Hotz & Co. swiftly moved on to Team Green pastures. Tiny Corp. has begrudgingly adopted NVIDIA GeForce RTX 4090 GPUs. Earlier today, it was announced that work on the AMD-based system has resumed—although customers were forewarned about anticipated teething problems. The surprising message arrived in the early hours: "a hard to find 'umr' repo has turned around the feasibility of the AMD TinyBox. It will be a journey, but it gives us an ability to debug. We're going to sell both, red for $15,000 and green for $25,000. When you realize your pre-order you'll choose your color. Website has been updated. If you like to tinker and feel pain, buy red. The driver still crashes the GPU and hangs sometimes, but we can work together to improve it."

Samsung Introduces "Petabyte SSD as a Service" at GTC 2024, "Petascale" Servers Showcased

Leaked Samsung PBSSD presentation material popped up online a couple of days prior to the kick-off day of NVIDIA's GTC 2024 conference (March 18)—reports (at the time) jumped on the potential introduction of a "petabyte (PB)-level SSD solution," alongside an enterprise subscription service for the US market. Tom's Hardware took the time to investigate this matter in person on the showroom floor up in San Jose, California. It turns out that interpretations of pre-event information were slightly off—according to on-site investigations: "despite the name, PBSSD is not a petabyte-scale solid-state drive (Samsung's highest-capacity drive can store circa 240 TB), but rather a 'petascale' storage system that can scale-out all-flash storage capacity to petabytes."

Samsung showcased a Supermicro Petascale server design, but a lone unit is nowhere near capable of providing a petabyte of storage—the Tom's Hardware reporter found out that the demonstration model housed: "sixteen 15.36 TB SSDs, so for now the whole 1U unit can only pack up to 245.76 TB of 3D NAND storage (which is pretty far from a petabyte), so four of such units will be needed to store a petabyte of data." Company representatives also had another Supermicro product at their booth: "(an) H13 all-flash petascale system with CXL support that can house eight E3.S SSDs (with) four front-loading E3.S CXL bays for memory expansion."
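The capacity figures in the quote can be reproduced with back-of-the-envelope arithmetic:

```python
# Back-of-the-envelope check of the Petascale server capacity figures.
drives_per_1u = 16
drive_tb = 15.36                       # TB per SSD in the demo unit
unit_tb = drives_per_1u * drive_tb     # capacity of one 1U unit
print(f"one 1U unit: {unit_tb:.2f} TB")      # 245.76 TB
print(f"four units:  {4 * unit_tb:.2f} TB")  # 983.04 TB, roughly a petabyte
```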

NVIDIA's Bryan Catanzaro Discusses Future of AI Personal Computing

Imagine a world where you can whisper your digital wishes into your device, and poof, it happens. That world may be coming sooner than you think. But if you're worried about AI doing your thinking for you, you might be waiting for a while. In a fireside chat Wednesday (March 20) at NVIDIA GTC, the global AI conference, Kanjun Qiu, CEO of Imbue, and Bryan Catanzaro, VP of applied deep learning research at NVIDIA, challenged many of the clichés that have long dominated conversations about AI. Launched in October 2022, Imbue made headlines with its Series B fundraiser last year, raising over $200 million at a $1 billion valuation.

The Future of Personal Computing
Qiu and Catanzaro discussed the role that virtual worlds will play in this, and how they could serve as interfaces for human-technology interaction. "I think it's pretty clear that AI is going to help build virtual worlds," said Catanzaro. "I think the maybe more controversial part is virtual worlds are going to be necessary for humans to interact with AI." People have an almost primal fear of being displaced, Catanzaro said, but what's much more likely is that our capabilities will be amplified as the technology fades into the background. Catanzaro compared it to the adoption of electricity. A century ago, people talked a lot about electricity. Now that it's ubiquitous, it's no longer the focus of broader conversations, even as it makes our day-to-day lives better.