News Posts matching #Performance


Biwin Announces New OC Lab Gold Edition DW100 192 GB DDR5-6400 and DDR5-6000 Memory Kits

Biwin, a world-renowned innovator in cutting-edge memory and storage solutions, proudly introduces the Biwin Black Opal OC Lab Gold Edition DW100 RGB DDR5 192 GB Memory Kit (48 GB x 4), available in DDR5-6400 CL30-39-39-108 1.4 V and DDR5-6000 CL28-36-36-102 1.4 V specifications. Breaking the capacity limits of traditional consumer memory, this ultra-large 192 GB kit offers the performance boost needed for AI computing, large-scale data processing, and next-gen computing.

Push Memory Performance with Revolutionary 192 GB Memory Kit
Biwin Black Opal DW100, delivering an ultra-high-capacity 192 GB (48 GB x4) configuration, redefines what's possible with desktop memory, combining exceptional memory bandwidth, stability, and efficiency. This breakthrough enables users to take full advantage of DDR5's enhanced data throughput to power fast, out-of-the-box speeds for AI computing, large language models (LLMs), generative AI, edge computing, and other data-rich workloads.

Saber Interactive Addresses RoadCraft Demo Performance & Gameplay Issues

Hello everyone, RoadCraft's demo has been rolling for the past week, and seeing so many of you give our game a try filled us with joy! Thousands of you downloaded and tried out the demo, and together you covered more than 2 billion km, which is absolutely awesome! We've custom-made this demo to give you a sample of the gameplay and vehicles, but rest assured that you're far from having seen everything! Now, we are focused on carefully reading your feedback, and we want to address some of it before we release the full game on May 20th.

First, the elephant in the room, the absence of fuel consumption
We know that fuel consumption is a core element in SnowRunner and Expeditions. But RoadCraft is a totally new experience for us in the simulation field. It has been developed on a new iteration of our engine, and its technical specifications differ from those of our other games. For those reasons, it doesn't offer the same gameplay features: map exploration and traversal are not as central; rebuilding and reshaping terrain are. When we started development, we initially had a fuel consumption system; however, this mechanic was tedious and didn't bring anything interesting to the core gameplay. That doesn't mean we want to throw away the idea without any reflection, especially after seeing all your feedback on it. We might consider integrating fuel consumption later on, in a new game mode for example.

Apple's A18 4-core iGPU Benched Against Older A16 Bionic, 3DMark Results Reveal 10% Performance Deficit

Apple's new budget-friendly iPhone 16e model was introduced earlier this month; potential buyers were eyeing a device (starting at $599) that houses a selectively "binned" A18 mobile chipset. The more expensive iPhone 16 and iPhone 16 Plus models were launched last September with A18 chips on board, featuring six CPU cores and five GPU cores. Apple's brand-new 16e smartphone seems to utilize an A18 sub-variant—tech boffins have highlighted this package's reduced GPU core count of four. The so-called "binned A18" reportedly posted inferior performance figures—15% slower—when lined up against its standard 5-core sibling in Geekbench 6 Metal tests. The iPhone 16e was released at retail today (February 28), with review embargoes lifted earlier in the week.

A popular portable tech YouTuber—Dave2D (aka Dave Lee)—decided to pit his iPhone 16e sample unit against older technology contained within the iPhone 15 (2023). The binned A18's 4-core iGPU competed with the A16 Bionic's 5-core integrated graphics solution in a 3DMark Wild Life Extreme Unlimited head-to-head. Respective tallies—of 2882 and 3170 points—were recorded for posterity's sake. The more mature chipset (introduced in 2022) managed to surpass its younger sibling by ~10%, according to the scores presented on Dave2D's comparison chart. The video reviewer reckoned that the iPhone 16e's SoC offers "killer performance," despite reservations expressed about the device not offering great value for money. Other outlets have questioned the prowess of Apple's latest step-down model. Referencing current-gen 3DMark benchmark results, Wccftech observed: "for those wanting to know the difference between the binned A18 and non-binned variant; the SoC with a 5-core GPU running in the iPhone 16 finishes the benchmark run with an impressive 4007 points, making it a massive 28.04 percent variation between the two (pieces of) silicon. It is an eye-opener to witness such a mammoth performance drop, which also explains why Apple resorted to chip-binning on the iPhone 16e as it would help bring the price down substantially."

Colorful Launches New Budget-friendly X15 XS Gaming Laptop

Colorful Technology Company Limited, a leading brand in gaming PC components, gaming laptops, and Hi-Fi audio products, introduces the COLORFUL X15 XS - a budget-friendly yet powerful gaming laptop designed for gamers and creators seeking performance and value. Building on the success of the X15 XS series gaming laptops, the latest model features a cleaner, simpler design and new packaging.

Powered by up to a 12th Gen Intel Core i5-12500H processor and an NVIDIA RTX 3050 laptop GPU, the COLORFUL X15 XS delivers the speed and efficiency needed to tackle a wide range of tasks. Whether you're working on demanding projects, studying for exams, or diving into immersive gaming sessions, the X15 XS ensures smooth multitasking, responsive performance, and stunning graphics. With advanced cooling and a high-refresh-rate display, the X15 XS is built to keep up with your productivity and entertainment needs without compromise.

Thermal Grizzly Offers Delidded CPUs with Warranty

Thermal Grizzly introduces the TG Delidded CPU series: delidded processors for which Thermal Grizzly assumes the manufacturer's warranty. As a provider of delidding tools and products for delidded processors, Thermal Grizzly is a well-known name among hardware enthusiasts and overclockers. With the TG Delidded CPU, even beginners now have a gateway into the world of maximum cooling performance.

With the TG Delidded CPU, Thermal Grizzly offers PC enthusiasts and gamers the ability to use direct-die cooling solutions—without the risk of delidding the CPU themselves. Delidding, or removing the heat spreader, allows coolers to be mounted directly where heat is generated: on the chips/chiplets of the CPU. However, since CPU dies are highly susceptible to damage, direct-die cooling solutions such as the Mycro Direct-Die water coolers or High Performance Heatspreaders should be used.

NVIDIA GeForce RTX 50 Cards Spotted with Missing ROPs, NVIDIA Confirms the Issue, Multiple Vendors Affected

TechPowerUp has discovered that there are NVIDIA GeForce RTX 5090 graphics cards in retail circulation that come with too few render units, which lowers performance. Zotac's GeForce RTX 5090 Solid comes with fewer ROPs than it should—168 are enabled, instead of the 176 that are part of the RTX 5090 specifications. This loss of 8 ROPs has a small, but noticeable impact on performance. During recent testing, we noticed our Zotac RTX 5090 Solid sample underperformed slightly, falling behind even the NVIDIA RTX 5090 Founders Edition card. At the time we didn't pay attention to the ROP count that TechPowerUp GPU-Z was reporting, and instead spent time looking for other reasons, like clocks, power, cooling, etc.

Two days ago, one of our readers, who goes by "Wuxi Gamer," posted a thread on the TechPowerUp Forums, reporting that his retail Zotac RTX 5090 Solid was showing fewer ROPs in GPU-Z than the RTX 5090 should have. The user tried everything from driver and software re-installs to switching between the two video BIOSes the card comes with, all to no avail. By coincidence, we still had this card in our labs, so we dug out our sample. Lo and behold—our sample is missing ROPs, too! GPU-Z is able to read and report these unit counts, in this case through NVIDIA's NVAPI driver interface. The 8 missing ROPs constitute a 4.54% loss in the GPU's raster hardware capability, and to illustrate what this means for performance, we've run a couple of tests.
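For reference, that 4.54% figure is simple arithmetic on the unit counts; here is a quick sketch using the numbers above (not our benchmarking methodology):

    # Share of raster hardware lost to the disabled ROPs (figures from above).
    spec_rops = 176      # ROP count per the RTX 5090 specification
    enabled_rops = 168   # ROP count reported by GPU-Z on affected cards

    loss = 1 - enabled_rops / spec_rops
    print(f"Raster capability lost: {loss:.2%}")  # -> ~4.5%, the 4.54% quoted above

The real-world performance hit is smaller than that, since raster throughput is only one of several potential bottlenecks in a game workload.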

ASUS Announces All-New ExpertBook B5 Notebook Models

ASUS today announced the all-new ExpertBook B5 laptop series (B5405/B5605) - a lightweight, AI-enhanced marvel powered by up to the latest Intel Core Ultra processors (Series 2) with Intel vPro and built-in Intel Arc graphics, ushering in a new era of productivity. Packed with cutting-edge AI capabilities, including a dedicated NPU, ExpertBook B5 is built to streamline complex business tasks and accelerate productivity - especially during video calls, with exclusive ASUS AI ExpertMeet delivering optimized audio, video and real-time transcription for seamless collaboration.

Engineered for professionals who demand both performance and security, ExpertBook B5 is protected by ASUS ExpertGuardian, an enterprise-grade security suite that safeguards data, user identities and system integrity with multi-layered protection. With up to 15 hours of battery life for all-day performance on the go, this durable, MIL-STD-810H-tested business laptop empowers uninterrupted productivity. ExpertBook B5 also underscores the ASUS commitment to sustainability with the integration of a Digital Product Passport (DPP), providing transparent lifecycle tracking and reinforcing responsible environmental practices.

Intel Unveils High-Performance, Power-Efficient Ethernet Solutions

Intel today launched two new Ethernet product lines - the Intel Ethernet E830 Controllers and Network Adapters, and the Intel Ethernet E610 Controllers and Network Adapters - designed to meet the growing demands of enterprise, telecommunications, cloud, edge, high performance computing (HPC) and artificial intelligence (AI) applications. These next-generation solutions provide robust, high-performance connectivity while enhancing energy efficiency and security, and lowering total cost of ownership (TCO).

"In today's interconnected world, networking is essential to the success of business and technology transformation. With the launch of the Intel Ethernet E830 and E610 products, we are helping customers meet the growing demand for high-performance, energy-efficient solutions that optimize network infrastructures, lower operational costs and enhance TCO." -Bob Ghaffari, Intel vice president, Network and Edge Group

Apple M4 MacBook Air Gets Geekbenched, Leaked Results Suggest Near MacBook Pro-level Performance

Apple's unannounced M4 MacBook Air family will likely reach market availability at some point next month. Last December, slimline notebook enthusiasts started hearing about an updated lineup; macOS's Sequoia 15.2 update reportedly referenced upcoming MacBook Air M4 13-inch and 15-inch models. An early sample unit—named "Mac16,12"—has participated in a Geekbench 6.4.0 (macOS AArch64) gauntlet; results appeared online yesterday. The alleged "MacBook Air 13" candidate posted an overall Metal score of 54,806, and an overall OpenCL tally of 36,305. The two separate Geekbench Browser entries confirm that the sampled device makes use of a 10-core M4 processor, with Cluster 1 containing four performance cores and Cluster 2 consisting of six power-efficiency-oriented cores. Base frequency is listed at 4.41 GHz; reportedly the highest recorded for an M4 SoC. Said chip accessed 24 GB of unified memory during its macOS 15.2 (Build 24C2101)-based test session.

Notebookcheck and Wccftech compared the aforementioned data points with slightly older M4-equipped hardware, including a premium model. Both outlets observed a "measly" five percent performance difference. Elaborating on their findings, Notebookcheck stated: "as always, we would recommend taking early benchmark results with a healthy amount of skepticism for the time being. With that being said, the MacBook Air 13 benchmarked falls about 5% short of the median Geekbench OpenCL and Geekbench Metal results we achieved so far when benchmarking the M4 versions of Apple's Mac Mini and MacBook Pro 14." The rumored next-gen MacBook Air is expected to operate with a fan-less cooling system—press outlets reckon that the MacBook Pro's fan-assisted cooling puts it at a slight advantage (in benchmarks).

GIGABYTE Showcases Comprehensive AI Computing Portfolio at MWC 2025

GIGABYTE, a global leader in computing innovation and technology, will showcase its full-spectrum AI computing solutions that bridge development to deployment at MWC 2025, taking place from March 3-6.

"AI+" and "Enterprise-Reinvented" are two of the themes for MWC. As enterprises accelerate their digital transformation and intelligent upgrades, the transition of AI applications from experimental development to democratized commercial deployment has become a critical turning point in the industry. Continuing its "ACCEVOLUTION" initiative, GIGABYTE provides the comprehensive infrastructure products and solutions spanning cloud-based supercomputing centers to edge computing terminals, aiming to accelerate the next evolution and empower industries to scale AI applications efficiently.

Moore Threads Claims 120% Gaming Performance Improvement for MTT S Series GPUs

Moore Threads has released version 290.100 of its MTT S Series Windows desktop driver; today's freshly published patch notes describe "performance and experience optimizations" for multiple modern game titles. Press coverage of the Chinese graphics card manufacturer's hardware portfolio has concentrated mostly on deficiencies, relative to Western offerings. Despite being the first desktop gaming graphics card to arrive with a PCI Express Gen 5 bus interface, Moore Threads' MTT S80 model has consistently struggled to keep up with mainstream competition. Most notably, the company's current 200 W TDP-rated flagship—packing 4096 "MUSA" cores—trailed behind AMD Radeon iGPUs, according to March 2024 benchmarks.

The latest Moore Threads driver improvements were tested internally, prior to public release. Patch notes claim that Infinity Nikki (DirectX 12-only) average frame rates "increased by more than 40%." Another DX12 title was benched—Hideo Kojima's Death Stranding: "average frame rate has increased by more than 50%." The largest upgrade was observed when playing A Plague Tale: Requiem; the MTT engineering team claims that average in-game frame rates climbed by more than 120%. We hope that independent outlets will publish results based on their own testing methodologies in the near future. Going back to September 2023, Moore Threads boasted about driver update 230.40.0.1 producing a 40% gaming performance uplift for MTT S80 and S70 cards. Outside the gaming sphere, Moore Threads has hinted at its MTT S80 GPU being a high achiever with DeepSeek's R1-Distill-Qwen-7B distilled model.

Radeon 8060S Early Reviews: RTX 4070 Laptop-Class Performance in an iGPU

Well, the wait is over and early reviews for AMD's Strix Halo APUs have finally dropped. For those who kept up with the leaks and rumors, the high-end RDNA 3.5 Radeon 8060S iGPU was repeatedly rumored to feature up to 40 CUs, allowing for raw performance that keeps up with several discrete-class mobile GPUs. Now that we have concrete information, it appears that the Strix Halo iGPU does indeed trade blows with mid-range mobile GPUs, which is an undeniably impressive feat for an integrated unit. Some of the fastest x86 iGPUs - the Arc 140V and the Radeon 890M - are left in the dust, although Apple's highest-end offerings are unsurprisingly well ahead.

Starting off with 3DMark Time Spy, the 40-CU Radeon 8060S, housed in the 13-inch ROG Flow Z13, managed an impressive score of 10,200 points according to Notebookcheck. This puts the iGPU in close proximity to other RTX 4070-powered 14-inch gaming laptops, such as the Zephyrus G14, which managed to rake in around 10,300 points. Compared to the previous iteration of the ROG Flow Z13, which boasts a 65-watt RTX 4070, the Radeon 8060S-powered Z13 pulls ahead by around 5%. Laptops with more substantial power envelopes do race ahead significantly, such as the 140-watt RTX 4070 Laptop-powered Razer Blade 14, which managed over 13,000 points. In the Steel Nomad benchmark, however, the Radeon 8060S appears less impressive, trailing behind not only the RTX 4070 Laptop but also systems with the RTX 4060 Laptop GPU (110 W).

Micron Unveils Its First PCIe Gen5 NVMe High-Performance Client SSD

Micron Technology, Inc., today announced the Micron 4600 PCIe Gen 5 NVMe SSD, an innovative client storage drive for OEMs that is designed to deliver exceptional performance and user experience for gamers, creators and professionals. Leveraging Micron G9 TLC NAND, the 4600 SSD is Micron's first Gen 5 client SSD and doubles the performance of its predecessor.

The Micron 4600 SSD showcases sequential read speeds of 14.5 GB/s and write speeds of 12.0 GB/s. These capabilities allow users to load a large language model (LLM) from the SSD to DRAM in less than one second, enhancing the user experience with AI PCs. For AI model loading times, the 4600 SSD reduces load times by up to 62% compared to Gen 4 performance SSDs, ensuring rapid deployment of LLMs and other AI workloads. Additionally, the 4600 SSD provides up to 107% improved energy efficiency (MB/s per watt) compared to Gen 4 performance SSDs, enhancing battery life and overall system efficiency.
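The sub-second LLM claim checks out on paper. A minimal back-of-the-envelope sketch, assuming a hypothetical 7-billion-parameter model stored as FP16 weights (roughly 14 GB) and the drive's rated sequential read speed:

    # Rough time to stream an LLM's weights from SSD into DRAM.
    # The model size is an illustrative assumption, not a Micron test case.
    params = 7e9                  # hypothetical 7B-parameter model
    bytes_per_param = 2           # FP16 weights
    model_gb = params * bytes_per_param / 1e9   # ~14 GB
    seq_read_gb_s = 14.5          # Micron 4600 rated sequential read

    print(f"Best-case load time: {model_gb / seq_read_gb_s:.2f} s")  # ~0.97 s

Real-world loading adds file-system and memory-allocation overhead, so treat this as a best-case figure.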

Osaka Scientists Unveil 'Living' Electrodes That Can Enhance Silicon Devices

Shrinking components was (and still is) the main way to boost the speed of all electronic devices; however, as devices get tinier, making them becomes trickier. A group of scientists from SANKEN (The Institute of Scientific and Industrial Research), at Osaka University has discovered another method to enhance performance: putting a special metal layer known as a metamaterial on top of a silicon base to make electrons move faster. This approach shows promise, but the tricky part is managing the metamaterial's structure so it can adapt to real-world needs.

To address this, the team looked into vanadium dioxide (VO₂). When heated, VO₂ changes from non-conductive to metallic, allowing it to carry electric charge like small adjustable electrodes. The researchers used this effect to create 'living' microelectrodes, which made silicon photodetectors better at spotting terahertz light. "We made a terahertz photodetector with VO₂ as a metamaterial. Using a precise method, we created a high-quality VO₂ layer on silicon. By controlling the temperature, we adjusted the size of the metallic regions—much larger than previously possible—which affected how the silicon detected terahertz light," says lead author Ai I. Osaka.

Intel Core Ultra 7 255H "Arrow Lake-H" Delivers 32% Single-Core Performance Improvement Over "Meteor Lake" Predecessor

Intel's Core Ultra 7 255H "Arrow Lake" processor has demonstrated impressive performance improvements in recent PassMark benchmarks, achieving a 32% higher single-core score compared to its "Meteor Lake" predecessor. The Arrow Lake-H chip recorded 4,631 points in single-threaded tests, significantly outpacing the Core Ultra 7 155H's 3,500 points while delivering a 15% overall improvement in CPU Mark ratings. The performance leap comes from Intel's architectural overhaul, implementing "Lion Cove" performance cores alongside "Skymont" efficiency cores on TSMC's N3B process node. This combination enables the 255H to achieve higher boost frequencies while maintaining the same core configuration as its predecessor—six P-cores, eight E-cores, and two Low Power Efficiency (LPE) cores.

Notable in this iteration is the absence of Hyper-Threading, resulting in 16 threads compared to the 155H's 22 threads. Arrow Lake-H maintains Intel's heterogeneous structure, incorporating up to eight Xe-LPG+ graphics cores derived from the Alchemist architecture. The neural processing unit (NPU) capabilities remain consistent with Meteor Lake, delivering 13 TOPS of INT8 performance, which positions the chip below Lunar Lake's 45 TOPS. Despite the performance improvements, market success will largely depend on system integrators' ability to deliver compelling devices at competitive price points, particularly as AMD's Strix Point platforms maintain strong positioning in the $1,000 range. The battle for laptop chip supremacy is poised to be a good one in the coming quarters, especially as more Arm-based entries force both Intel and AMD to compete harder.

UL Solutions Adds Support for DLSS 4 and DLSS Multi Frame Generation to the 3DMark NVIDIA DLSS Feature Test

We're excited to announce that in today's update to 3DMark, we're adding support for DLSS 4 and DLSS Multi Frame Generation to the NVIDIA DLSS feature test. The NVIDIA DLSS feature test and this update were developed in partnership with NVIDIA. The 3DMark NVIDIA DLSS feature test lets you compare the performance and image quality brought by enabling DLSS processing. If you have a new GeForce RTX 50 Series GPU, you'll also be able to compare performance with and without the full capabilities of DLSS 4.

You can choose to run the NVIDIA DLSS feature test using DLSS 4, DLSS 3 or DLSS 2. DLSS 4 includes the new DLSS Multi Frame Generation feature, and you can choose between several image quality modes—Quality, Balanced, Performance, Ultra Performance and DLAA. These modes are designed for different resolutions, from Full HD up to 8K. DLSS Multi Frame Generation uses AI to boost frame rates with up to three additional frames generated per traditionally rendered frame. In the 3DMark NVIDIA DLSS feature test, you are able to choose between 2x, 3x and 4x Frame Generation settings if you have an NVIDIA GeForce RTX 50 series GPU.

AMD Radeon 9070 XT Rumored to Outpace RTX 5070 Ti by Almost 15%

It would be fair to say that the GeForce RTX 5080 has been quite disappointing, being roughly 16% faster in gaming than the RTX 4080 Super. Unsurprisingly, this gives AMD a lot of opportunity to offer excellent price-to-performance with its upcoming RDNA 4 GPUs, considering that the RTX 5070 and RTX 5070 Ti aren't really expected to pull off any miracles. According to a recent tidbit shared by the renowned leaker Moore's Law is Dead, the Radeon RX 9070 XT is expected to be around 3% faster than the RTX 4080, if AMD's internal performance goals are anything to go by. MLID also notes that RDNA 4's performance is improving by roughly 1% each month, which makes it quite likely that the RDNA 4 cards will exceed the targets.

If it does turn out that way, the Radeon RX 9070 XT, according to MLID, should be roughly 15% faster than its competitor from the Green Camp, the RTX 5070 Ti, and roughly match the RTX 4080 Super in gaming performance. The Radeon RX 9070, on the other hand, is expected to be around 12% faster than the RTX 5070. Of course, these performance improvements are limited to rasterization, and when ray tracing enters the scene, the gains are expected to be substantially more modest, as per tradition. Citing our data for Cyberpunk 2077 at 4K with RT, MLID stated that his sources indicate that the RX 9070 XT falls somewhere between the RTX 4070 Ti Super and RTX 3090 Ti, whereas the RX 9070 should likely trade blows with the RTX 4070 Super. Considering AMD's track record with ray tracing, this sure does sound quite enticing.

AMD Believes EPYC CPUs & Instinct GPUs Will Accelerate AI Advancements

If you're looking for innovative use of AI technology, look to the cloud. Gartner reports, "73% of respondents to the 2024 Gartner CIO and Tech Executive Survey have increased funding for AI." And IDC says that AI "will have a cumulative global economic impact of $19.9 trillion through 2030." But end users aren't running most of those AI workloads on their own hardware. Instead, they are largely relying on cloud service providers and large technology companies to provide the infrastructure for their AI efforts. This approach makes sense, since most organizations are already heavily reliant on the cloud. According to O'Reilly, more than 90% of companies are using public cloud services. And they aren't moving just a few workloads to the cloud. That same report shows a 175% growth in cloud-native interest, indicating that companies are committing heavily to the cloud.

As a result of this demand for infrastructure to power AI initiatives, cloud service providers are finding it necessary to rapidly scale up their data centers. IDC predicts: "the surging demand for AI workloads will lead to a significant increase in datacenter capacity, energy consumption, and carbon emissions, with AI datacenter capacity projected to have a compound annual growth rate (CAGR) of 40.5% through 2027." While this surge creates massive opportunities for service providers, it also introduces some challenges. Providing the computing power necessary to support AI initiatives at scale, reliably and cost-effectively, is difficult. Many providers have found that deploying AMD EPYC CPUs and Instinct GPUs can help them overcome those challenges. Here's a quick look at three service providers who are using AMD chips to accelerate AI advancements.

AMD Teases Ryzen AI Max+ 395 "Strix Halo" APU 1080p Gaming Performance, Claims 68% Faster than RTX 4070M

AMD has just published its "How to Sell" Ryzen AI Max series guide—several news outlets have pored over the "claimed" gaming performance charts contained within this two-page document. Team Red appears to be in a boastful mood—their 1080p benchmark results reveal compelling numbers, as produced by their flagship Zen 5 "Strix Halo" processor (baseline 55 W TDP). According to Team Red's marketing guidelines, the Ryzen AI Max+ 395 APU "competes with a GeForce RTX 4070 Mobile GPU at similar TDP and form factor." The first-party comparison points to the Radeon 8060S integrated graphics solution being up to 68% faster—in modern gaming environments at 1080p settings—than the competing Team Green dedicated laptop-oriented GPU, limited to 65 W TGP due to form factor restrictions. Overall, the AMD test unit does better by 23.2% on average (per Wccftech's calculations).

According to the document, AMD's reference system was lined up against an ASUS ROG Flow Z13 (2023) gaming laptop specced with an Intel Core i9-13900H processor and a GeForce RTX 4070 mobile graphics card. The Ryzen AI Max+ 395's "massive iGPU" can unleash the full force of forty RDNA 3.5 compute units, paired with up to 96 GB of unified on-board memory (from a total pool of 128 GB). Non-gaming benchmarks place the flagship Team Red processor above Intel Core Ultra 9 288V and Apple M4 Pro (12-core) CPUs—as always, it is best to wait for verification from independent evaluators. That said, the "Strix Halo" APU family has generated a lot of excitement—even going back to early leaks—and the latest marketed performance figures could drum up further interest.

Ubisoft Unveils Assassin's Creed Shadows Recommended PC Specs

Hi everyone, Assassin's Creed Shadows is launching March 20, inviting you to experience the intertwined stories of Naoe, an adept shinobi Assassin, and Yasuke, a powerful African samurai. Today, you can pre-order the game on console and PC, and read up on Shadows' upcoming expansion, Claws of Awaji, which brings 10 hours of additional content free with your pre-order.

For those of you playing on PC, we've got all of Assassin's Creed Shadows' recommended PC specs listed in this article. Assassin's Creed Shadows will support ray-traced global illumination and reflections, and will feature an in-game benchmark tool for performance analysis, ultra-wide resolutions, an uncapped framerate, and more. Check out the full specs chart below.

AMD Ryzen 9 9950X3D & 9900X3D Gaming Performance Akin to Ryzen 7 9800X3D

AMD's Ryzen 9 9950X3D and Ryzen 9 9900X3D "Zen 5" processors are scheduled for launch around March, with many a hardcore PC enthusiast salivating at the prospect of an increase in core counts over already-released hardware—the ever-popular Ryzen 7 9800X3D CPU makes do with eight cores (and sixteen threads). Under normal circumstances, higher core counts do not provide a massive advantage in gaming applications—over the years, Team Red's 8-core 3D V-Cache-equipped models have reigned supreme in this so-called "sweet spot." Many have wondered whether the new-gen 12- and 16-core SKU siblings had any chance of stealing some gaming performance thunder—a recently published VideoGamer article provides a definitive answer for the "Granite Ridge" generation.

The publication managed to extract key quotes from Martijn Boonstra—a Team Red product and business development manager—providing a slightly tepid outlook for the incoming Ryzen 9 9950X3D and 9900X3D models. The company executive stated: "(our) new chips will provide similar overall gaming performance to the Ryzen 7 9800X3D. There will be some games that perform a bit better—if the game engine utilizes more cores and threads—and some games will perform a little worse (if the game engine favors a single CCD configuration), but on the whole, the experience is comparable." Boonstra did not reveal any details regarding forthcoming prices—the Ryzen 7 9800X3D has an MSRP of $479 (if you are lucky enough to find one)—but he hinted that finalized digits will be announced "closer to launch." He signed off with standard marketing spiel: "Ryzen 9000X3D Series desktop processors are perfect for gamers and content creators alike...whether you are already on the AM5 platform, on AM4 or another platform, these products are sure to impress."

NVIDIA GeForce RTX 5090 3DMark Performance Reveals Impressive Improvements

The RTX 50-series gaming GPUs have the gaming community divided. While some appreciate the DLSS 4 and MFG technologies driving impressive improvements in FPS through AI wizardry, others are left disappointed by the seemingly poor improvements in raw performance. For instance, when DLSS and MFG are taken out of the equation, the RTX 5090, RTX 5080, and RTX 5070 are around 33%, 15%, and 20% faster than their respective predecessors in gaming performance. That said, VideoCardz has tapped into its sources and revealed the 3DMark scores for the RTX 5090 GPU, and the results certainly do appear to exceed expectations.

In the non-ray-traced Steel Nomad test at 4K, the RTX 5090 scored around 14,133 points, putting it roughly 53% ahead of its predecessor. In the Port Royal test, which does utilize ray tracing, the RTX 5090 raked in 36,667 points - a 40% improvement over the RTX 4090. The results are much the same in the older Time Spy and Fire Strike tests as well, indicating roughly a 31% and 38% jump in performance respectively. Moreover, according to the benchmarks, the RTX 5090 appears to be roughly twice as powerful as the RTX 4080 Super. Of course, synthetic benchmarks do not entirely dictate gaming performance, and VideoCardz clearly mentions that gaming performance (without MFG) will witness a substantially more modest improvement. There is no denying that Blackwell's vastly superior memory bandwidth is helping a lot in the synthetic tests, with the 33% extra shaders doing the rest of the work.

NVIDIA RTX 5090 Geekbench Leak: OpenCL and Vulkan Tests Reveal True Performance Uplifts

The RTX 50-series fever continues to rage on, with independent reviews for the RTX 5080 and RTX 5090 dropping towards the end of this month. That does not stop benchmarks from leaking out, unsurprisingly, and a recent lineup of Geekbench listings has revealed the raw performance uplifts that can be expected from NVIDIA's next-generation GeForce flagship. A sizeable chunk of the tech community was certainly rather disappointed with NVIDIA's reliance on AI-powered frame generation for much of the claimed improvements in gaming. Now, it appears we can finally figure out how much raw improvement NVIDIA was able to squeeze out of consumer Blackwell, and the numbers, for the most part, appear decent enough.

Starting off with the OpenCL tests, the highest score that we have seen so far from the RTX 5090 puts it around 367,000 points, which marks an acceptable jump from the RTX 4090, which manages around 317,000 points according to Geekbench's official average data. Of course, there are a plethora of cards that may easily exceed the average scores, which must be kept in mind. Added to that, we are not aware of the details of the RTX 5090 that was tested, so pitting it against average scores is not an entirely fair comparison. Moving to Vulkan, the performance uplift is much more satisfying, with the RTX 5090 managing a minimum of 331,000 points and a maximum of around 360,000 points, compared to the RTX 4090's 262,000 - a sizeable 37% improvement at the highest end. Once again, we are comparing the best results posted so far against last year's averages, so expect slightly more modest gains in the real world. Once more reviews start appearing after the embargo lifts, the improvement figures should become much more reliable.
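For clarity, the percentages above come straight from the listed scores; a quick sketch of the math, which also shows why the comparison flatters the new card:

    # Uplift math behind the quoted figures. Note the apples-to-oranges caveat:
    # the RTX 5090 numbers are the best leaked runs, while the RTX 4090
    # baselines are Geekbench averages, so real-world gaps should be narrower.
    scores = {
        "OpenCL": (367_000, 317_000),   # (best RTX 5090 leak, RTX 4090 average)
        "Vulkan": (360_000, 262_000),
    }
    for api, (new, old) in scores.items():
        print(f"{api}: +{(new - old) / old:.1%}")
    # OpenCL: +15.8%, Vulkan: +37.4%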

NVIDIA GeForce RTX 5090 Performance in Cyberpunk 2077 With and Without DLSS 4 Detailed

It is no secret that NVIDIA's RTX 50-series launch was welcomed with a mixed reception. On one hand, DLSS 4 with Multi-Frame Generation has allowed for obscene jumps in performance, much to the dismay of purists who would rather do away with AI-powered wizardry. A recent YouTube video has detailed what the RTX 5090 is capable of in Cyberpunk 2077 with Path Tracing at 4K, both with and without the controversial AI features. With DLSS set to performance mode and 4x frame generation (three generated frames), the RTX 5090 managed around 280 FPS. Pretty good, especially when considering the perfectly acceptable latency of around 52 ms, albeit with occasional spikes.

Turning DLSS to quality, the frame rate drops to around 230 FPS, with latency continuing to hover around 50 ms. Interestingly, with frame generation set to 3x or even 2x, the difference in latency was borderline negligible between the two, right around 44 ms or so. However, the FPS takes a massive nosedive when frame generation is turned off entirely. With DLSS set to quality mode and FG turned off, the RTX 5090 barely managed around 70 FPS in the game. Taking things a step further, the presenter turned off DLSS as well, resulting in the RTX 5090 struggling to hit 30 FPS in the game, with latency spiking to 70 ms. Clearly, DLSS 4 and MFG allow for an incredible uplift in performance with minimal artefacting—at least in Cyberpunk 2077—unless one really goes looking for it.
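As a sanity check on these numbers: with 4x MFG, every rendered frame yields up to four displayed frames, so the reported 70 FPS base (DLSS Quality, FG off) sets a theoretical ceiling, and the measured 230 FPS shows the overhead that frame generation adds. A minimal sketch using the figures above:

    # Ideal displayed FPS with Multi Frame Generation is base FPS x setting,
    # since each rendered frame is followed by generated ones. Comparing that
    # ceiling against the measured result exposes the frame-generation overhead.
    base_fps = 70        # DLSS Quality, frame generation off (as reported)
    measured_4x = 230    # DLSS Quality, 4x MFG (as reported)

    ceiling = base_fps * 4
    print(f"Ideal 4x ceiling: {ceiling} FPS")                     # 280 FPS
    print(f"Effective multiplier: {measured_4x / base_fps:.1f}x")  # ~3.3x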

Nintendo Switch 2 Docked and Handheld Performance Revealed By Tipster

It is a known fact that the Switch 2 is by no means aiming to be a performance beast. Nintendo's focus has always been on its ecosystem, not on raw performance, and that will continue to be the case. As such, the Switch 2 is widely expected to sport an NVIDIA Tegra SoC paired with 12 GB of LPDDR5 system memory and an Ampere-based GPU. Now, a fresh leak has detailed the docked and handheld mode performance that can be expected from the widely anticipated Switch successor, and the numbers seem to fall right around what was initially expected.

The leak, sourced from a Nintendo forum, reveals that in docked mode the Nintendo Switch 2's GPU will be clocked at 1000 MHz, up from 768 MHz for the soon-to-be previous-generation Switch, allowing for 3.1 TFLOPS of performance. In handheld mode, unsurprisingly, the GPU clock will be limited to 561 MHz, allowing for 1.71 TFLOPS of raw performance. These numbers are far from impressive for 2025, although Nintendo will likely make up for the lack of raw horsepower using upscaling technologies similar to DLSS, allowing for a vastly better experience than its otherwise unimpressive hardware could afford on its own.
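The leaked clocks and TFLOPS figures are internally consistent, and they also imply a shader count; the derivation below is our own inference from the leaked numbers, not part of the leak itself:

    # FP32 throughput for a GPU: TFLOPS = 2 ops (FMA) x shader count x clock (GHz).
    # Solving the docked figure for the shader count is our inference, not leaked data.
    docked_tflops = 3.1
    docked_ghz = 1.000
    shaders = docked_tflops * 1e12 / (2 * docked_ghz * 1e9)
    print(f"Implied shaders: ~{shaders:.0f}")   # ~1550, i.e. likely 1536 (12 Ampere SMs)

    handheld_ghz = 0.561
    print(f"Handheld: {2 * 1536 * handheld_ghz / 1e3:.2f} TFLOPS")  # ~1.72, vs. 1.71 leaked

The handheld figure falls out of the same formula at the lower clock, which is a good sign that both leaked numbers describe the same GPU configuration.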