News Posts matching #Benchmarks


AMD Ryzen 7 8700G & Ryzen 5 8600G APUs Geekbenched

AMD announced its Ryzen 8000G series of Zen 4-based desktop APUs earlier this month, with an official product launch date: January 31. The top models within this range are the "Hawk Point" Ryzen 7 8700G and Ryzen 5 8600G processors—Olrak29_ took to social media after spotting pre-release examples popping up on the Geekbench Browser database. It is highly likely that evaluation samples are in the hands of reviewers, and more benchmarked results are expected to be uploaded over the next week and a half. The Ryzen 7 8700G (w/ Radeon 780M Graphics) was benched on an ASUS ROG STRIX B650-A GAMING WIFI board with 32 GB (6398 MT/s) of DDR5 system memory. Leaked figures appeared online last weekend, originating from a Ryzen 5 8600G (w/ Radeon 760M Graphics) paired with an MSI B650 GAMING PLUS WIFI (MS-7E26) motherboard and 32 GB (6400 MT/s) of DDR5 RAM.

The Geekbench 6 results reveal that the Ryzen 7 8700G and Ryzen 5 8600G APUs are slightly less performant than "Raphael" Ryzen 7000 non-X processors—not a massive revelation, given the underlying technological similarities between these AMD product lines. Evaluations could change with the publication of official review data, but the 8000G series is at a natural disadvantage here—lower core clock frequencies and smaller L3 cache allocations are the likely culprits. The incoming APUs are also somewhat hobbled by PCIe support that only reaches the 4.0 standard. VideoCardz, Tom's Hardware and Wccftech have taken the time to compile the leaked Geekbench 6 results into handy comparison charts—very much worth checking out.

AMD Ryzen Threadripper Pro 7995WX & 7975WX Specs Leaked

A pair of Dell Precision workstations have been tested in SiSoftware's Sandra benchmark suite—database entries for two Precision 7875 Tower (Dell 00RP38) systems reveal specifications of next-generation AMD Ryzen Threadripper Pro CPUs. The 32-core 7975WX model was outed a couple of weeks ago, but the Sandra benchmark database has been updated with additional scores. Its newly leaked sibling is getting a lot of attention—the recently benchmarked 7995WX sample appears to possess 96 Zen 4 cores and 192 threads (via SMT), with a 5.14 GHz maximum single-core boost clock. Tom's Hardware is intrigued by benchmark data showing that the CPU has: "a 3.2 GHz all-core turbo frequency."

There are 12 CCDs onboard, with a combined total of 384 MB of L3 cache (each CCD has access to 32 MB of L3)—therefore Wccftech believes that "this chip is based on the Genoa SP5 die and will adopt the top 8-channel and SP5 socket platform. The chip also features 96 MB of L2 cache and the top clock speed was reported at 5.14 GHz." The repeat-benchmarked Ryzen Threadripper Pro 7975WX CPU is slightly less exciting—with 32 Zen 4 cores, 64 threads, 128 MB of L3 cache, and 32 MB of L2 cache. According to older information, this model is believed to have a TDP rating of 350 W and apparent clock speeds peaking at 4.0 GHz—Wccftech reckons that this frequency reflects an all-core boost. They have produced a bunch of comparative performance charts and further analysis—well worth checking out.
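The leaked cache totals line up with Zen 4's known per-CCD and per-core figures. A quick sanity check of the arithmetic, assuming 32 MB of L3 per CCD and 1 MB of private L2 per core:

```python
# Sanity-check the leaked Threadripper Pro 7995WX cache totals.
# Assumes Zen 4's known cache layout: 32 MB of L3 per CCD and
# 1 MB of private L2 per core.
ccds = 12
cores = 96
l3_per_ccd_mb = 32
l2_per_core_mb = 1

total_l3 = ccds * l3_per_ccd_mb    # 384 MB, matching the leak
total_l2 = cores * l2_per_core_mb  # 96 MB, matching the leak
print(f"L3: {total_l3} MB, L2: {total_l2} MB")
```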

NVIDIA GH200 Superchip Aces MLPerf Inference Benchmarks

In its debut on the MLPerf industry benchmarks, the NVIDIA GH200 Grace Hopper Superchip ran all data center inference tests, extending the leading performance of NVIDIA H100 Tensor Core GPUs. The overall results showed the exceptional performance and versatility of the NVIDIA AI platform from the cloud to the network's edge. Separately, NVIDIA announced inference software that will give users leaps in performance, energy efficiency and total cost of ownership.

GH200 Superchips Shine in MLPerf
The GH200 links a Hopper GPU with a Grace CPU in one superchip. The combination provides more memory, bandwidth and the ability to automatically shift power between the CPU and GPU to optimize performance. Separately, NVIDIA HGX H100 systems that pack eight H100 GPUs delivered the highest throughput on every MLPerf Inference test in this round. Grace Hopper Superchips and H100 GPUs led across all MLPerf's data center tests, including inference for computer vision, speech recognition and medical imaging, in addition to the more demanding use cases of recommendation systems and the large language models (LLMs) used in generative AI.

Intel Arc Linux Gaming Performance Boosted by Vastly Improved Vulkan Drivers

Intel's Alchemist engineering team has been working on improving its open-source Vulkan drivers for Linux—recent coverage from Phoronix shows that Team Blue's hard work is paying off, especially in the area of gaming performance. The site's founder, Michael Larabel, approves of the latest Mesa work produced by Intel engineers, and has commended them on their efforts to better the Arc Graphics family. His mid-month testing—on a Linux 6.4-based system running an Intel Arc A770 GPU—demonstrated a "~10% speed-up for the Intel Arc Graphics on Linux." He benchmarked this system again over the past weekend, following the release of a new set of optimizations for Mesa 23.3-devel: "The latest performance boost for Intel graphics on Linux is by supporting the I915_FORMAT_MOD_4_TILED_DG2_RC_CCS modifier. Indeed it's panning out nicely for furthering the Intel Arc Graphics Vulkan performance."

He apologized for the limited selection of games, due to: "the Intel Linux graphics driver still not having sparse support in place, but at least that will hopefully be here in the coming months when the Intel Xe kernel driver is upstreamed. Another recent promising development for the Intel open-source graphics driver support is fake sparse support to at least help some games and that code will hopefully be merged soon." First up was Counter-Strike: Global Offensive—thanks to the optimized Vulkan drivers it: "enjoyed another nice boost to the performance as a result of this latest code. For CS Linux gamers, it's great seeing the 21% boost just over the past month."

Intel N100 Quad E-Core Gaming Performance Assessed

Team Pandory has tested the gaming potential of an Intel Alder Lake-N SoC—not many outlets have bothered to give the N100 much coverage in this respect, since the chip's makeup is E-core only and it only offers single-channel memory support. Team Blue has emphasized power efficiency rather than raw performance with this super-low-budget successor to its old Pentium and Celeron processor product lines. The utilization of modern Gracemont CPU cores does it some favors—notably granting L3 cache support, but the chip has been designed with entry-level productivity in mind.

Naturally, in-game testing focuses attention on the N100's integrated GPU, based on Team Blue's Xe-LP architecture—it features 24 execution units (EUs), AV1 decode support, and 8K 60 FPS video playback. Arc Alchemist offers roughly double the performance of the Xe-LP iGPU, so we are not expecting a big "wow factor" from the plucky Alder Lake-N SoC (6 W TDP). Team Pandory benchmarked a laptop sporting a single stick of 8 GB DDR5 RAM and the N100 quad E-core CPU (capable of 3.4 GHz turbo boosting), with 6 MB of L3 cache. The ultra-portable device was able to hit 60 FPS in a couple of older games, but the majority of tested titles ran at 20 to 30 FPS (on average). Graphics settings were universally set to minimum, with a resolution of 1280 x 720 (720p) across ten games: CS:GO, Dota 2, Forza Horizon 4, Genshin Impact, GTA V, Grid Autosport, Minecraft, Resident Evil 5, Skyrim, and Sleeping Dogs.

Geekbench Leak Suggests NVIDIA GeForce RTX 4060 Nearly 20% Faster than RTX 3060

NVIDIA is launching its lower-end GeForce RTX 4060 graphics card series next week, but has kept schtum about the smaller Ada Lovelace AD107 GPU's performance level. This more budget-friendly offering (MSRP $299) is rumored to have 3,072 CUDA cores, 24 RT cores, 96 Tensor cores, 96 TMUs, and 32 ROPs. It will likely sport 8 GB of GDDR6 memory on a 128-bit wide memory bus. Benchleaks discovered the first set of test results via a database leak, and posted the details on social media earlier today. Two Geekbench 6 runs were conducted on a test system comprised of an Intel Core i5-13600K CPU, ASUS Z790 ROG APEX motherboard, DDR5-6000 memory and the aforementioned GeForce card.

The GPU Compute test utilizing the Vulkan API resulted in a score of 99419, and another using OpenCL achieved 105630. We are looking at a single sample here, so expect variations when other units get tested in Geekbench prior to the June 29 launch. The RTX 4060 is about 12% faster (in Vulkan) than its direct predecessor—the RTX 3060. The gap widens in OpenCL, where it offers an almost 20% jump over the older card. The RTX 3060 Ti remains around 3-5% faster than the RTX 4060. We hope to see actual in-game benchmarking carried out soon.

Moore Threads MTT S80 GPU Benchmarked by PC Watch Japan

The Moore Threads MTT S80 gaming-oriented graphics card has mostly been tested by Chinese hardware publications, but Japan's PC Watch has managed to get hold of a sample unit configured with 16 GB of GDDR6 (14 Gbps) for evaluation purposes, and has published its findings in a "HotHot REVIEW!" The MTT S80 GPU appears to be based on PowerVR architecture (developed by Imagination Technologies), but official Moore Threads literature boasts that their own Chunxiao design is behind all proceedings, with 4096 "MUSA" cores. The GPU's clock speed is set at 1.8 GHz, and maximum compute performance has been measured at 14.2 TFLOPS. A 256-bit memory bus grants a bandwidth of 448 GB/s. PC Watch notes that the card's support for PCIe Gen 5 x16 (offering up to 128 GB/s of bandwidth) is quite surprising, given the early nature of this connection standard.
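Those headline figures are easy to reproduce from the listed specifications. A back-of-the-envelope sketch, assuming each "MUSA" core retires one FP32 FMA (two FLOPS) per clock (our assumption, not a Moore Threads claim):

```python
# Back-of-the-envelope peak figures for the MTT S80, from the specs
# PC Watch lists. The FP32 number assumes one FMA (2 FLOPS) per
# core per clock -- an assumption, not a Moore Threads claim.
cores = 4096
clock_ghz = 1.8
fp32_tflops = cores * clock_ghz * 2 / 1000  # ~14.7 TFLOPS theoretical,
                                            # close to the 14.2 measured

bus_width_bits = 256
data_rate_gbps = 14  # GDDR6 at 14 Gbps per pin
bandwidth_gb_s = bus_width_bits / 8 * data_rate_gbps  # 448 GB/s
print(f"{fp32_tflops:.1f} TFLOPS, {bandwidth_gb_s:.0f} GB/s")
```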

Moore Threads has claimed in the past that their cards support DirectX, but PC Watch has discovered that the S80 does not work with DX12, and their tests also demonstrated significant compatibility issues under DX11—with plenty of system freezes and error messages logged. The reviewer(s) had to downshift to DX9 game environments in some cases, in order to gather reliable/stable data. TPU's GPU-Z utility has no registration information for the S80, and it cannot read the GPU's clocks. PC Watch compared their sample unit to an NVIDIA GeForce GTX 1050 Ti graphics card—the entry-level 2016-era GPU managed to best the newer competition in terms of both in-game performance and power efficiency.

NVIDIA RTX 4000 Ada Lovelace GPU Roughly Equivalent to GeForce RTX 3060 Ti, Consumes 65% Less Power

The NVIDIA RTX 4000 SFF Ada Generation graphics card was released to the public in late April, but very few reviews and benchmarks have emerged since then. Jisaku Hibi, a Japanese hardware site, has published an in-depth evaluation that focuses mostly on gaming performance. The RTX 4000 Ada SFF has been designed as a compact workstation graphics card, but its usage of an AD104 die makes it a sibling of NVIDIA's GeForce RTX 4070 and 4070 Ti gaming-oriented cards. Several PC hardware sites have posited that the 70 W RTX 4000 Ada SFF would "offer GeForce RTX 3070-like performance," but Jisaku Hibi's investigation points to the RTX 3060 Ti being the closest equivalent card (in terms of benchmark results).

According to the TPU GPU database: "NVIDIA has disabled some shading units on the RTX 4000 SFF Ada Generation to reach the product's target shader count. It features 6144 shading units, 192 texture mapping units, and 80 ROPs. Also included are 192 tensor cores which help improve the speed of machine learning applications. The card also has 48 ray tracing acceleration cores. NVIDIA has paired 20 GB GDDR6 memory with the RTX 4000 SFF Ada Generation, which are connected using a 160-bit memory interface. The GPU is operating at a frequency of 1290 MHz, which can be boosted up to 1565 MHz, memory is running at 1750 MHz (14 Gbps effective)." The SKU's 70 W TGP and limited memory interface are seen as the card's main weak points, resulting in average clock speeds and a maximum memory bandwidth of only 280 GB/s.

AMD Ryzen 7040HS and 7040H "Phoenix" Laptop CPUs Get Tested

AMD is late in releasing its Phoenix Zen 4 lineup of mobile APUs: the original April launch window was missed, and laptops bearing Ryzen 7000HS and H-series chips are expected to arrive at some point this month. Preview hardware has made its way into the hands of testers, and one particular outlet, Golden Pig Upgrade (a content creator on the Chinese Bilibili video site), has performed benchmark tests. He seems to be the first reviewer to get hands-on time with AMD Ryzen 7040 Phoenix APUs, and his findings point to class-leading results in terms of graphical capabilities: the 7840HS (packing a Radeon 780M RDNA 3 iGPU) is compared to the Rembrandt-based 7735H, as well as a pair of Intel Raptor Lake CPUs, the 13700H and 13500H models.

AMD's newest Phoenix APU is the group leader in the GPU performance stakes, but the jump up from the last-gen Rembrandt (RDNA 2 iGPU) chip is not all that significant. VideoCardz reckons that the Radeon 780M integrated GPU is roughly equivalent to an NVIDIA GeForce MX550 dGPU, and not far off from a GeForce GTX 1650 Max-Q graphics card (in terms of benchmark performance). According to AMD's internal documentation, the RDNA 3 core architecture utilized in Phoenix APUs is referred to as "2.5," which perhaps explains why the 780M is not running laps around its older sibling(s).

Samsung Exynos 2400 SoC Performance Figures Leaked, Prototype Betters Next Gen Snapdragon GPU

Samsung's unannounced Exynos 2400 mobile chipset has been linked to the upcoming Galaxy S24 smartphone family, but internet tipsters have speculated that the in-house SoC will be reserved for the baseline model only. The more expensive Plus and Ultra variants could be the main targets for flagship smartphone devotees: it is possible that Qualcomm's upper-echelon Snapdragon 8 Gen 3 chipset is set to feature within these premium devices. Samsung's Exynos processors are not considered fan favorites, but industry insiders reckon that the latest performance figures indicate that Samsung's up-and-comer has the potential to turn some heads. Exact specifications for the Exynos 2400 are not public knowledge; one of the tipsters suggests that Samsung has settled on a 10-core layout, as well as a recent bump in GPU core count from 6 to 12. The company's own 4 nm SF4P process appears to be the choice for production.

A leaker has posted benchmark scores generated by an unknown device running an Exynos 2400 SoC: the Geekbench 5 results indicate an average single-core score of 1530 with a peak of 1711. The multi-core average score is shown to be 6210, and the highest number achieved is 6967. The Exynos 2400 is therefore 31% faster (in multi-core performance) than the refreshed Snapdragon 8 Gen 2 variant currently found in Galaxy S23 Ultra smartphones, but the divide between the two in terms of single-core performance is not so great. The 2400 manages to outpace (by 30%) the average multi-core score of Apple's present-generation A16 Bionic, although the latter beats the presumed engineering sample's single-core result by 20%. The Exynos 2400 will face a new lineup of rival mobile processors in 2024, namely Apple's next-generation A17 Bionic and Qualcomm's Snapdragon 8 Gen 3, so it is difficult to extrapolate today's leaked figures into a future scenario.

3DMark Gets AMD FidelityFX Super Resolution 2 (FSR 2) Feature Test

UL Benchmarks today released an update to 3DMark that adds a Feature Test for AMD FidelityFX Super Resolution 2 (FSR 2), the company's popular upscaling-based performance enhancement. This was long overdue, as 3DMark has had a Feature Test for DLSS for years now; and as of October 2022, it even got one for Intel XeSS. The new FSR 2 Feature Test uses a scene from the Speed Way DirectX 12 Ultimate benchmark, where it compares fine details of a vehicle and a technic droid between native resolution with TAA and FSR 2, and highlights the performance uplift. To use the feature test, you'll need any GPU that supports DirectX 12 and FSR 2 (that covers AMD, NVIDIA, and Intel Arc). Owners who purchased 3DMark before October 12, 2022 will need to buy the Speed Way upgrade to unlock the FSR 2 feature test.
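For context on where the uplift comes from: FSR 2 renders internally at a reduced resolution and reconstructs the frame at the output size. A minimal sketch of the render-resolution math, using AMD's published per-axis scale factors for the FSR 2 presets:

```python
# FSR 2 renders at a lower internal resolution and upscales to the
# output size. Scale factors below are AMD's published per-axis
# values for the FSR 2 quality presets.
FSR2_SCALE = {
    "Quality": 1.5,
    "Balanced": 1.7,
    "Performance": 2.0,
    "Ultra Performance": 3.0,
}

def render_resolution(out_w: int, out_h: int, preset: str) -> tuple[int, int]:
    """Internal render resolution for a given FSR 2 preset."""
    scale = FSR2_SCALE[preset]
    return round(out_w / scale), round(out_h / scale)

# A 4K output in Performance mode is rendered internally at 1920 x 1080.
print(render_resolution(3840, 2160, "Performance"))
```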

Intel Xeon W-3400/2400 "Sapphire Rapids" Processors Run First Benchmarks

Thanks to Puget Systems, we have a preview of Intel's latest Xeon W-3400 and Xeon W-2400 workstation processors based on Sapphire Rapids core technology. Delivering up to 56 cores and 112 threads, these CPUs can be paired with up to eight terabytes of eight-channel DDR5-4800 memory. For expansion, they offer up to 112 PCIe 5.0 lanes and come with up to a 350 W TDP; some models are unlocked for overclocking. This interesting HEDT family for workstation usage comes at a premium, with an MSRP of $5,889 for the top-end SKU, and motherboard prices are also on the pricey side. However, all of this should come as no surprise given the performance professionals expect from these chips. Puget Systems has published test results that include: Photoshop, After Effects, Premiere Pro, DaVinci Resolve, Unreal Engine, Cinebench R23.2, Blender, and V-Ray. Note that Puget Systems said that: "While this post has been an interesting preview of the new Xeon processors, there is still a TON of testing we want to do. The optimizations Intel is working on is of course at the top, but there are several other topics we are highly interested in." So we expect better numbers in the future.
Below, you can see the comparison with AMD's competing Threadripper Pro HEDT SKUs, along with power usage using different Windows OS power profiles:

Alleged NVIDIA AD106 GPU Tested in 3DMark and AIDA64

Benchmarks and specifications of an alleged NVIDIA AD106 GPU have turned up on Chiphell, although the original poster has since removed all the details. Thanks to @harukaze5719 on Twitter, who re-posted the details, we still get an insight into what we might be able to expect from NVIDIA's upcoming mid-range cards. All these details should be taken with a pinch of salt, as the original source isn't exactly what we'd call trustworthy. Based on the data in the TPU GPU database, the GPU in question should be the GeForce RTX 4070 Mobile with much higher clock speeds, or an equivalent desktop part that offers more CUDA cores than the RTX 4060 Ti. Whatever the specific AD106 GPU is, it's being compared to the GeForce RTX 2080 Super and the RTX 3070 Ti.

The GPU was tested in AIDA64 and 3DMark and it beats the RTX 2080 Super in all of the tests, while drawing some 55 W less power at the same time. In some of the benchmarks the wins are within the margin of testing error, for example when it comes to the memory performance in AIDA64. However, we're looking at a GPU with only half the memory bus width here: the AD106 GPU has a 128-bit memory bus, compared to 256-bit for the RTX 2080 Super. Although the AD106's memory clocks are much higher, overall memory bandwidth is still nearly 36 percent higher on the RTX 2080 Super. Yet the AD106 GPU manages to beat the RTX 2080 Super in all of the memory benchmarks in AIDA64.

First Alleged AMD Radeon RX 7900-series Benchmarks Leaked

With only a couple of days to go until official AMD RX 7900-series reviews go live, some alleged benchmarks of both the RX 7900 XTX and RX 7900 XT have leaked on Twitter. The two cards are being compared to an NVIDIA RTX 4080 card in no fewer than seven different game titles, all running at 4K resolution. The games are God of War, Cyberpunk 2077, Assassin's Creed Valhalla, Watch Dogs: Legion, Red Dead Redemption 2, Doom Eternal and Horizon Zero Dawn. The cards were tested on a system with a Core i9-12900K CPU paired with 32 GB of RAM of unknown type.

It's too early to draw any real conclusions from this test, but in general, the RX 7900 XTX comes out on top, ahead of the RTX 4080, so no surprises here. The RX 7900 XT is either tied with the RTX 4080 or a fair bit slower, the exception being Red Dead Redemption 2, where the RTX 4080 is the slowest card, although it also appears to have some issues, since its one percent lows are hitting 2 FPS. Soon the reviews will be out and everything will become clearer, but it appears that AMD's RX 7900 XTX will give NVIDIA's RTX 4080 a run for its money, if these benchmarks are anything to go by.

Update Dec 11th: The original tweet has been removed, for unknown reasons. It could be because the numbers were fake, or because they were in breach of AMD's NDA.

UL Launches New 3DMark Feature Test for Intel XeSS

We're excited to release a new 3DMark feature test for Intel's new XeSS AI-enhanced upscaling technology. This new feature test is available in 3DMark Advanced and Professional Editions. 3DMark feature tests are special tests designed to highlight specific techniques, functions, or capabilities. The Intel XeSS feature test shows you how XeSS affects performance.

The 3DMark Intel XeSS frame inspector tool helps you compare image quality with an interactive side-by-side comparison of XeSS and native-resolution rendering. Check out the images below to see an example comparison of native resolution rendering and XeSS in the new 3DMark feature test.

Intel Outs First Xeon Scalable "Sapphire Rapids" Benchmarks, On-package Accelerators Help Catch Up with AMD EPYC

On the second day of its InnovatiON event, Intel turned attention to its next-generation Xeon Scalable "Sapphire Rapids" server processors, and demonstrated on-package accelerators. These are fixed-function hardware components that accelerate specific kinds of popular server workloads (i.e. run them faster than a CPU core can). With these, Intel hopes to close the CPU core-count gap with AMD EPYC, as the upcoming "Zen 4" EPYC chips are expected to launch with up to 96 cores per socket in the conventional variant, and up to 128 cores per socket in the cloud-optimized variant.

Intel's on-package accelerators include AMX (advanced matrix extensions), which accelerate recommendation-engines, natural language processing (NLP), image-recognition, etc; DLB (dynamic load-balancing), which accelerates security-gateway and load-balancing; DSA (data-streaming accelerator), which speeds up the network stack, guest OS, and migration; IAA (in-memory analysis accelerator), which speeds up big-data (Apache Hadoop), IMDB, and warehousing applications; a feature-rich implementation of the AVX-512 instruction-set for a plethora of content-creation and scientific applications; and lastly, the QAT (QuickAssist Technology), with speed-ups for data compression, OpenSSL, nginx, IPsec, etc. Unlike "Ice Lake-SP," QAT is now implemented on the processor package instead of the PCH.
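On Linux, the instruction-set portions of that list (AMX, AVX-512) surface as CPU feature flags, while DSA, IAA and QAT enumerate as separate PCIe devices. A minimal sketch for checking the flags from userspace:

```python
# Check which of Sapphire Rapids' ISA-level features the kernel
# reports. AMX and AVX-512 show up as CPU feature flags; DSA, IAA
# and QAT are separate on-package devices and won't appear here.
FLAGS_OF_INTEREST = ("amx_tile", "amx_bf16", "amx_int8", "avx512f")

cpu_flags: set[str] = set()
with open("/proc/cpuinfo") as f:
    for line in f:
        if line.startswith("flags"):
            cpu_flags = set(line.split(":", 1)[1].split())
            break

for flag in FLAGS_OF_INTEREST:
    print(f"{flag}: {'yes' if flag in cpu_flags else 'no'}")
```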

Puget Systems Publishes Windows 11 Content Creation Benchmarks

Puget Systems has recently performed a variety of tests to determine whether Windows 11 or Windows 10 is faster for content-creation tasks such as photo and video editing. The tests were conducted on four systems: an AMD Threadripper Pro 5995WX, AMD Threadripper Pro 5975WX, AMD Ryzen 9 5950X, and Intel Core i9 12900K, each paired with an RTX 3080 and 64 or 128 GB of memory. The benchmarks were primarily taken from the PugetBench suite of tests, with each test run multiple times.

The video editing tests were conducted using Premiere Pro, After Effects, and DaVinci Resolve, where Premiere Pro saw a small performance improvement in Windows 10 while the other programs performed similarly on both operating systems. The photo editing tests used Photoshop and Lightroom Classic, with average performance equal across Windows 10 and 11. The CPU rendering benchmarks featured Cinebench, V-Ray, and Blender, where once again the results were all within the margin of error. The GPU rendering tests using Octane, V-Ray, and Blender showed some differences, with V-Ray and Blender both performing best in Windows 11. The final section was Game Development in Unreal Engine, where a small advantage could be had by using Windows 11.

Intel's Upcoming Core i9-13900K Appears on Geekbench

New week, new leak, as an engineering sample of Intel's upcoming Raptor Lake-based Core i9-13900K has appeared in the infamous Geekbench database. It seems to be one of the ES samples that have been making the rounds over the past few weeks, but this is the first time we get an indication of what the performance might be like. There are no real surprises in terms of the specifications: we're looking at a base clock of 3 GHz, with a boost clock of 5.5 GHz, which has already been reported for these chips. The 24-core, 32-thread CPU was paired with 32 GB of 6400 MHz DDR5 memory and an ASUS ROG Maximus Z690 Extreme motherboard. Unfortunately the test results are reported as invalid, due to "an issue with the timers" on the system.

That said, we can still compare the results with a similar system using a Core i9-12900K on an ASUS ROG Strix Z690-F Gaming board, also paired with 32 GB of 6400 MHz DDR5 memory. The older Alder Lake system is actually somewhat faster in the single-core tests, where it scores 2,142 points versus 2,133 points for the Raptor Lake-based system, despite having a maximum frequency of 5.1 GHz. The Raptor Lake system is faster in the multi-core test, at 23,701 vs. 21,312 points. However, there is little point in doing any deeper analysis here, as the Raptor Lake results are all over the place, with it beating the Alder Lake CPU by a significant amount in some tests and losing in others where it shouldn't be falling behind, based simply on the higher clock speed and additional efficiency cores. At least this shows that Raptor Lake runs largely as intended on current 600-series motherboards, so for those considering upgrading to the 13th generation of Intel CPUs, there shouldn't be any big hurdles to overcome.

Samsung RDNA2-based Exynos 2200 GPU Performance Significantly Worse than Snapdragon 8 Gen1, Both Power Galaxy S22 Ultra

The Exynos 2200 SoC powering the Samsung Galaxy S22 Ultra in some regions, such as the EU, posts some less-than-stellar graphics performance numbers, for all the hype around its AMD-sourced RDNA 2 graphics solution, according to an investigative report by Erdi Özüağ, aka "FX57." Samsung brands this RDNA 2-based GPU as the Samsung Xclipse 920. Further, Özüağ's testing found that the Exynos 2200 is considerably slower than the Qualcomm Snapdragon 8 Gen 1 powering the S22 Ultra in certain other regions, including the US and India. He has access to both variants of the S22 Ultra.

In the UL Benchmarks 3DMark Wild Life test, the Exynos 2200 posted a score of 6684 points, compared to 9548 points for the Snapdragon 8 Gen 1 (a difference of almost 43 percent). What's even more interesting is that the Exynos 2200 is barely 7 percent faster than the previous-gen Exynos 2100 (Arm Mali GPU) powering the S21 Ultra, which scored 6256 points. The story repeats with the GFXBench "Manhattan" off-screen render benchmark. Here, the Snapdragon 8 Gen 1 is 30 percent faster than the Exynos 2200, which performs on par with the Exynos 2100. Find a plethora of other results in the complete review comparing the two flavors of the S22 Ultra.
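Every percentage gap quoted in pieces like this reduces to the same relative-delta calculation. A quick sketch reproducing the Wild Life figures above:

```python
# Reproduce the relative performance gaps from the 3DMark Wild Life
# scores quoted above.
def pct_faster(a: float, b: float) -> float:
    """How much faster score `a` is than score `b`, in percent."""
    return (a - b) / b * 100

snapdragon_8_gen_1 = 9548
exynos_2200 = 6684
exynos_2100 = 6256

print(f"{pct_faster(snapdragon_8_gen_1, exynos_2200):.1f}%")  # ~42.8%
print(f"{pct_faster(exynos_2200, exynos_2100):.1f}%")         # ~6.8%, "barely 7 percent"
```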

Steam Deck Developer Unit Benchmarks Leak, Shows 60 FPS is Doable

Remember those early developer units of the Steam Deck that Valve was shipping out to developers? Well, one of them ended up in the hands of someone in China, who decided to share a few benchmarks on a local forum. Judging by the pictures posted, Valve still has a lot of work to do when it comes to the translation of the UI, but this shouldn't affect anyone using it in English.

The hardware appears to function according to the announced specs, so there were no surprises here, good or bad. Only four games were tested, consisting of Shadow of the Tomb Raider, Doom, Cyberpunk 2077 and DOTA 2. Let's just say that Cyberpunk 2077 isn't going to be what you want to play on the Steam Deck, as it fluctuated between 20 and 30 FPS, although this was at the high quality setting.

Intel Core i9-12900K Beats AMD Ryzen 9 5950X in Leaked Geekbench Score

We recently saw the Intel Core i7-12700 appear on Geekbench 5, where it traded blows with the AMD Ryzen 7 5800X; we have now discovered the flagship Core i9-12900K also making an appearance. The benchmarked Intel Core i9-12900K features a hybrid design with 8 high-performance cores, 8 high-efficiency cores, and 24 threads, running at a base clock of 3.2 GHz. The test was performed on a Windows 11 Pro machine, allowing for full use of the Intel Thread Director technology, paired with 32 GB of DDR5 memory. The processor achieved single-core scores of 1834/1893 in the two tests, which gives it the highest single-core result in the official Geekbench charts, coming in 12% faster than the Ryzen 9 5950X. The processor also achieved an impressive multi-core score of 17299/17370, which places it 3% above the Ryzen 9 5950X and 57% above the previous-generation flagship 8-core i9-11900K. These leaked benchmarks highlight the impressive potential of Intel's upcoming 12th Generation Core series, which is expected to launch in November.

3DMark Updated with New CPU Benchmarks for Gamers and Overclockers

UL Benchmarks is expanding 3DMark today by adding a set of dedicated CPU benchmarks. The 3DMark CPU Profile introduces a new approach to CPU benchmarking that shows how CPU performance scales with the number of cores and threads used. The new CPU Profile benchmark tests are available now in 3DMark Advanced Edition and 3DMark Professional Edition.

Instead of producing a single number, the 3DMark CPU Profile shows how CPU performance scales and changes with the number of cores and threads used. The CPU Profile has six tests, each of which uses a different number of threads. The benchmark starts by using all available threads. It then repeats using 16 threads, 8 threads, 4 threads, 2 threads, and ends with a single-threaded test. These six tests help you benchmark and compare CPU performance for a range of threading levels. They also provide a better way to compare different CPU models by looking at the results from thread levels they have in common.
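The same scaling idea is easy to reproduce outside 3DMark. A minimal sketch, not 3DMark's actual workload: time one fixed CPU-bound task at the CPU Profile's thread levels, with worker processes standing in for threads to sidestep Python's GIL:

```python
# Time a fixed CPU-bound job at 3DMark CPU Profile's thread levels
# (1, 2, 4, 8, 16, max) to see how performance scales. Worker
# processes stand in for threads to sidestep Python's GIL.
import os
import time
from concurrent.futures import ProcessPoolExecutor

def work(n: int) -> int:
    # Arbitrary busywork standing in for a real benchmark workload.
    return sum(i * i for i in range(n))

def run_at(workers: int, jobs: int = 64, n: int = 200_000) -> float:
    start = time.perf_counter()
    with ProcessPoolExecutor(max_workers=workers) as pool:
        list(pool.map(work, [n] * jobs))
    return time.perf_counter() - start

if __name__ == "__main__":
    max_threads = os.cpu_count() or 1
    levels = sorted({t for t in (1, 2, 4, 8, 16) if t <= max_threads} | {max_threads})
    for t in levels:
        print(f"{t:3d} workers: {run_at(t):.2f} s")
```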

Blizzard Benchmarks NVIDIA's Reflex Technology in Overwatch

Blizzard, a popular game developer, has today implemented NVIDIA's latest latency-reduction technology into its first-person shooter, Overwatch. Called NVIDIA Reflex, the technology aims to reduce system latency by combining NVIDIA GPUs with G-SYNC monitors and specially certified peripherals, all of which can be found on the company website. NVIDIA Reflex dynamically reduces system latency by combining GPU and game optimizations, which game developers implement, and the gamer is left with a much more responsive system that can edge out a competitive advantage. Today, we get to see just how much the new technology helps in the latest Overwatch update that brings NVIDIA Reflex with it.

Blizzard has tested three NVIDIA GPUs: the GeForce RTX 3080, RTX 2060 SUPER, and GTX 1660 SUPER. The three GPUs cover three different segments, so they are a good indication of what you can expect from your own system. Starting with the GeForce GTX 1660 SUPER: system latency, measured in milliseconds, was cut by over 50%. The mid-range RTX 2060 SUPER experienced a similar gain, while the RTX 3080 saw the smallest improvement; however, it did achieve the lowest latency of all GPUs tested. You can check out the results for yourself below.

UL Benchmarks Releases Creator Focused Procyon Benchmark Suite

Over the last year, we've seen a lot of interest in a new category of PCs designed for content creators. With high-end specifications and serious styling, these new creator PCs are being marketed to animators, designers, photographers, videographers, musicians and other digital content creators. Today, to meet growing demand from our press and retail partners, we're releasing two new benchmarks for measuring the performance of creator PCs. These two benchmarks use popular Adobe applications to test PC performance for photo editing and video editing work.

Enthusiast creators now have easy access to many of the same software tools used by professionals. Add increasingly powerful, yet affordable PC hardware, and creators have everything they need to develop their talent and unlock their potential. For creators who make a living from their craft, focus and productivity are key. When the process gets in the way, creativity suffers. Even the smallest interruption can break the flow. Longer delays from loading images or exporting video files are even more frustrating. Creator PCs promise to smooth out the wrinkles in the production process. Many manufacturers are now offering dedicated systems for content creators. Benchmark scores offer an easy way for creators to compare the performance of these different systems.

UL Benchmarks Updates 3DMark with Ray-Tracing Feature Test

The launch of AMD Radeon RX 6000 Series graphics cards on November 18 will end NVIDIA's monopoly on real-time raytracing. For the first time, gamers will have a choice of GPU vendors when buying a raytracing-capable graphics card. Today, we're releasing a new 3DMark feature test that measures pure raytracing performance. You can use the 3DMark DirectX Raytracing feature test to compare the performance of the dedicated raytracing hardware in the latest graphics cards from AMD and NVIDIA.

Real-time raytracing is incredibly demanding. The latest graphics cards have dedicated hardware that's optimized for raytracing operations. Despite the advances in GPU performance, the demands are still too high for a game to rely on raytracing alone. That's why games use raytracing to complement traditional rendering techniques. The 3DMark DirectX Raytracing feature test is designed to make raytracing performance the limiting factor. Instead of relying on traditional rendering, the whole scene is ray-traced and drawn in one pass.
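To give a sense of the work those dedicated units accelerate, here is a toy ray-sphere intersection test, the primitive per-ray operation at the heart of any ray tracer. This is purely illustrative and has nothing to do with 3DMark's actual renderer:

```python
# Toy ray-sphere intersection -- the kind of per-ray math that
# dedicated raytracing hardware accelerates. Purely illustrative;
# not 3DMark's renderer.
import math

def ray_sphere_t(origin, direction, center, radius):
    """Distance along a normalized ray to the nearest sphere hit, or None."""
    oc = [o - c for o, c in zip(origin, center)]
    b = 2 * sum(d * x for d, x in zip(direction, oc))
    c = sum(x * x for x in oc) - radius * radius
    disc = b * b - 4 * c  # quadratic 'a' term is 1 for a unit direction
    if disc < 0:
        return None       # the ray misses the sphere entirely
    t = (-b - math.sqrt(disc)) / 2
    return t if t > 0 else None

# A ray fired down +Z from the origin hits a unit sphere centered
# 5 units away at t = 4.0 (the sphere's near surface).
print(ray_sphere_t((0, 0, 0), (0, 0, 1), (0, 0, 5), 1.0))
```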
DOWNLOAD: 3DMark v2.15.7078