News Posts matching #Benchmark


Crytek Releases Hardware-Agnostic Raytracing Benchmark "Neon Noir"

Crytek today released the final build of their hardware-agnostic raytracing benchmark. Dubbed Neon Noir, the benchmark had already been showcased in video form back in March 2019, but it's now finally available for download to all interested parties from the Crytek Marketplace. The benchmark currently doesn't support low-level APIs such as Vulkan or DirectX 12, but support for those - and the performance improvements they should bring - will be implemented in the future.

Neon Noir implements its raytracing via an extension of CRYENGINE's SVOGI rendering tool, which Crytek's current games (including Hunt: Showdown) already use. This should make it easier for developers to explore raytracing implementations that don't require particular hardware support (such as RTX). However, the developer has added that hardware-acceleration support will come in the future; it should only improve performance, without adding any rendering features beyond those that can already be achieved. What are you waiting for? Just follow the link below.

Intel Core i9-10980XE "Cascade Lake-X" Benchmarked

One of the first reviews of Intel's new flagship HEDT processor, the Core i9-10980XE, just hit the web. Lab501.ro got their hands on a freshly minted i9-10980XE and put it through their test bench. Based on the "Cascade Lake-X" silicon, the i9-10980XE offers almost identical IPC to "Skylake-X," but succeeds the older generation with the AI-accelerating DLBoost instruction-set, an improved multi-core boosting algorithm, higher clock speeds, and most importantly, a doubling in price-performance achieved by halving the price-per-core across the board.

Armed with 18 cores, the i9-10980XE is ahead of the 12-core Ryzen 9 3900X in rendering and simulation tests, although not by much for a chip that has 50% more cores. This is probably down to the competing AMD chip being able to sustain higher all-core boost clock speeds. In tests that not only scale with cores but are also hungry for memory bandwidth, such as 7-Zip and media encoding, Intel extends its lead thanks to a quad-channel memory interface that can feed its cores with datasets faster.

Intel Iris Plus Graphics G7 iGPU Beats AMD RX Vega 10: Benchmarks

Intel is taking big strides forward with its Gen11 integrated graphics architecture. Its performance-configured variant, the Intel Iris Plus Graphics G7, featured in the Core i7-1065G7 "Ice Lake" processor, is found to beat the AMD Radeon RX Vega 10 iGPU, found in the Ryzen 7 2700U ("Raven Ridge") processor, by as much as 16 percent in 3DMark 11, and by a staggering 23 percent in 3DMark Fire Strike 1080p. Notebook Check put the two iGPUs through these tests, and a few games, to arrive at an initial verdict that Intel's iGPU has caught up with AMD's RX Vega 10. AMD has since updated its iGPU incrementally with the "Picasso" silicon, providing higher clock speeds and updated display and multimedia engines.

The machines tested here are the Lenovo Ideapad S540-14API for the AMD chip, and the Lenovo Yoga C940-14IIL with the i7-1065G7. The Iris Plus G7 packs 64 Gen11 execution units, while the Radeon RX Vega 10 has 640 stream processors based on the "Vega" architecture. In gaming, the Intel iGPU is 2 percent faster than the RX Vega 10 in BioShock Infinite at 1080p, 12 percent slower in Dota 2 Reborn at 1080p, and 8 percent faster in X-Plane 11.11.

AMD Radeon RX 5500 Gets Benchmarked

AMD is preparing lower-end variants of its Navi GPUs based on the new RDNA graphics architecture, which will replace all the existing cards based on the aging GCN architecture. Today, AMD's upcoming Radeon RX 5500, as it is called, got benchmarked in GFXBench - a cross-platform benchmark featuring various kinds of tests for Windows, macOS, iOS and Android.

The benchmark was run on Windows using the OpenGL API. It only ran the "Manhattan" high-level test, which yielded a result of 5430 frames in total, or about 87.6 frames per second. Compared to something like the RX 5700 XT, which scored 8905 frames in total (143.6 FPS), the RX 5500 clearly seems positioned at the lower end of the Navi GPU stack. Despite the lack of details, we can expect this card to compete against NVIDIA's GeForce GTX 1660/1660 Ti, where AMD has no competing offer so far.
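As a quick sanity check, the frames-versus-FPS figures above are internally consistent; a sketch using only the numbers quoted in this post (nothing else is assumed):

```python
# Back-of-the-envelope check of the GFXBench "Manhattan" numbers quoted
# above; all figures are taken from the article.
rx_5500 = {"frames": 5430, "fps": 87.6}
rx_5700_xt = {"frames": 8905, "fps": 143.6}

# Implied test length in seconds (frames / fps) -- both cards should agree.
length_5500 = rx_5500["frames"] / rx_5500["fps"]        # ~62 s
length_5700 = rx_5700_xt["frames"] / rx_5700_xt["fps"]  # ~62 s

# RX 5500 performance relative to the RX 5700 XT.
relative = rx_5500["fps"] / rx_5700_xt["fps"]  # ~0.61
print(f"run length ~{length_5500:.0f} s; RX 5500 at {relative:.0%} of RX 5700 XT")
```

Both cards imply the same roughly 62-second run, and the RX 5500 lands at about 61% of the RX 5700 XT's performance, consistent with a lower-end positioning.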

3DMark Introduces Variable Rate Shading Benchmark

UL today announced a new benchmarking feature for 3DMark. Developed specifically to test Variable Rate Shading (VRS) performance and image quality, the new feature lets users visualize the performance and image quality differences associated with more (or less) aggressive VRS settings. The algorithm is a smart one - it aims to reduce the number of pixel-shader operations on surfaces where detail isn't as important (such as frame edges, fast-moving objects, darkened areas, etc.) so as to improve performance and shave precious milliseconds off the rendering of each frame.

To run this test, you will need Windows 10 version 1903 or later and a DirectX 12 GPU that supports Tier 1 VRS and the "AdditionalShadingRatesSupported" capability, such as an NVIDIA Turing-based GPU or an Intel "Ice Lake" processor. The VRS feature test is available now as a free update for 3DMark Advanced Edition. From now until September 2, 3DMark is also 75% off when bought from Steam or the UL benchmarks website.

NVIDIA GeForce RTX 2080 Super Appears in FFXV Benchmark Database

Results for NVIDIA's upcoming GeForce RTX 2080 Super graphics card have been revealed in the Final Fantasy XV benchmark database, where the card is compared against other offerings at 2560 x 1440 resolution using high quality settings. The card scored 8736 points, while its predecessor, the RTX 2080, scored 8071 points at the same resolution and settings. That works out to around an 8% improvement for the newer model, which is to be expected given the increase in memory speed from 14 Gbps to 15.5 Gbps and the CUDA core count going from 2944 to 3072. With this improvement, the RTX 2080 Super is now only 105 points (about one percent) behind the TITAN V in the FFXV benchmark. If you wish to compare results for yourself, you can do so here.
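The quoted uplift follows directly from the two scores; a minimal check, using only the numbers reported above:

```python
# Sanity check of the ~8% generational uplift quoted in the article,
# computed from the two reported FFXV benchmark scores.
rtx_2080_super = 8736
rtx_2080 = 8071

uplift = (rtx_2080_super - rtx_2080) / rtx_2080
print(f"RTX 2080 Super over RTX 2080: {uplift:.1%}")  # ~8.2%
```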

GeForce RTX 2070 Super Beats Radeon 5700 XT in FFXV Benchmark

In a recent submission to the Final Fantasy XV benchmark database, NVIDIA's upcoming GeForce RTX 2070 Super GPU has been benchmarked, just a few days before the Super series officially launches. The RTX 2070 Super scored 7479 points at 1440p resolution on high quality settings, an almost 12% increase over the previous-generation RTX 2070, which scored 6679. The gain appears to be attributable to the increased CUDA core count, rumored to be about 11% higher, which makes the result seem realistic.

Compared to AMD's upcoming Radeon 5700 XT, which was also submitted to the FFXV benchmark database and scored 5575 at the same settings, the RTX 2070 Super is about 34% faster.

UL Releases PCI Express Feature Test For 3DMark Ahead of PCIe 4.0 Hardware

With PCI-Express 4.0 graphics cards and motherboards soon to arrive, UL has released their PCI Express feature test for 3DMark. This latest addition is designed to measure the bandwidth available to the GPU over a computer's PCI Express interface. To accomplish this, the test makes bandwidth the limiting factor for performance by uploading a large amount of vertex and texture data to the GPU for each frame - enough data to thoroughly saturate a PCIe 4.0 interface. Once the test is complete, the result reported is the average bandwidth achieved during the test.
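For context on what the test is trying to saturate, the theoretical per-direction ceilings can be derived from the published PCIe signalling rates and line encoding (PCIe 3.0 and 4.0 both use 128b/130b); the bandwidth the feature test actually reports will land somewhat below these figures:

```python
# Theoretical usable per-direction PCIe bandwidth, from the published
# signalling rates (GT/s) and the 128b/130b line encoding.
def pcie_bandwidth_gbs(gt_per_s: float, lanes: int,
                       payload_bits: int = 128, total_bits: int = 130) -> float:
    """Usable bandwidth in GB/s for one direction of a PCIe link."""
    return gt_per_s * lanes * (payload_bits / total_bits) / 8

gen3_x16 = pcie_bandwidth_gbs(8.0, 16)   # ~15.8 GB/s
gen4_x16 = pcie_bandwidth_gbs(16.0, 16)  # ~31.5 GB/s
print(f"PCIe 3.0 x16 ~{gen3_x16:.1f} GB/s, PCIe 4.0 x16 ~{gen4_x16:.1f} GB/s")
```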

Intel Puts Out Benchmarks Showing Minimal Performance Impact of MDS Mitigation

Intel on Tuesday once again shook the IT world by disclosing severe microarchitecture-level security vulnerabilities affecting its processors. The Microarchitectural Data Sampling (MDS) class of vulnerabilities affects Intel CPU architectures older than "Coffee Lake" to a greater extent. Among other forms of mitigation, such as software patches, Intel is recommending that users disable Hyper-Threading Technology (HTT), Intel's simultaneous multithreading (SMT) implementation. This would significantly reduce multi-threaded performance on older processors with lower core-counts, particularly Core i3 2-core/4-thread chips.

On "safer" microarchitectures such as "Coffee Lake," though, Intel expects a minimal impact from software patches, and doesn't see any negative impact from disabling HTT. This may have something to do with the 50-100 percent increased core-counts of the 8th and 9th generations. The company put out a selection of benchmarks relevant to client and enterprise (data-center) use-cases. In the client use-case, which we're more interested in, a Core i9-9900K machine with software mitigation and HTT disabled is negligibly slower (within 2 percent) than a machine without mitigation and with HTT enabled. Intel's selection of benchmarks includes SYSmark 2014 SE, WebXPRT 3, SPECint rate base (1 copy and n copies), and 3DMark "Sky Diver" with the chip's integrated UHD 630 graphics. Comparing machines with mitigations applied but toggling HTT presents a slightly different story.

Announcing DRAM Calculator for Ryzen v1.5.0 with an Integrated Benchmark

Yuri "1usmus" Bubliy, who practically wrote the book on AMD Ryzen memory overclocking, presents DRAM Calculator for Ryzen v1.5.0, the latest version of the most powerful tool available to help you overclock memory on PCs powered by AMD Ryzen processors. The biggest feature-addition is MEMbench, a new internal memory benchmark that tests the performance of your machine's memory sub-system and can be used to test the stability of your memory overclock. Other feature-additions include the "Compare Timings" button, which gives you a side-by-side comparison of your machine's existing settings with what's possible, or with the settings you've arrived at using the app.

Motherboards vary by memory-slot topology, and DRAM Calculator for Ryzen can now be told which topology your board has, so it can better tune settings such as procODT and RTT. The author also de-cluttered the main screen to improve ease of use. Among the under-the-hood changes is improved SoC voltage prediction for each generation of Ryzen. The main timing calculation and prediction algorithms are improved with additions such as GDM prediction. Also added is support for 4-DIMM system configurations, and a bug in which imported HTML profiles were automatically assumed to be specific to Samsung B-die has been fixed. A number of minor changes were made, detailed in the change-log below.

DOWNLOAD: DRAM Calculator for Ryzen by 1usmus

Maxon Releases Cinebench R20 Benchmark

Maxon on Tuesday unveiled its Cinebench R20 benchmark, designed to test CPU performance at photorealistic rendering using the company's Cinema 4D R20 technology. The benchmark runs on any PC with at least 4 GB of memory and SSE3 instruction-set support, scales across any number of cores, and supports modern instruction-sets such as AVX2. Maxon describes Cinebench R20 as using four times the memory and eight times the CPU computational power of Cinebench R15. The benchmark implements the Intel Embree ray-tracing engine. Maxon is distributing Cinebench R20 exclusively through the Microsoft Store on the Windows platform.

Unlike its predecessor, Cinebench R20 lacks a GPU test. The CPU test scales by the number of CPU cores and SMT units available. It consists of a tiled rendering of a studio apartment living room scene by Render Baron, which includes ray-traced elements, high resolution textures, illumination, and reflections. The number of logical processors available determines the number of rendering instances. The benchmark does indeed have a large memory footprint, and rewards HTT or SMT and high clock-speeds, as our own quick test shows. A 4-core/8-thread Core i7-7700K beats our Core i5-9400F 6-core/6-thread processor.

Update (11th March): We have removed the portable version download at Maxon's request.
DOWNLOAD: Maxon Cinebench R20 (Microsoft Store)

UL Corporation Announces Two New Benchmarks Coming to PCMark 10

UL Corporation today announced two new benchmark tests that will soon be coming to PCMark 10. The first is the eagerly awaited PCMark 10 battery life benchmark. The second is a new benchmark test based on Microsoft Office applications.

PCMark 10 Battery Life benchmark
Battery life is one of the most important criteria for choosing a laptop, but consumers and businesses alike find it hard to compare systems fairly. The challenge, of course, is that battery life depends on how the device is used. Unfortunately, manufacturers' claims are often based on unrealistic scenarios that don't reflect typical use. Figures for practical, day-to-day battery life, which are usually much lower, are rarely available.

NVIDIA GTX 1660 Ti to Perform Roughly On-par with GTX 1070: Leaked Benchmarks

NVIDIA's upcoming "Turing" based GeForce GTX 1660 Ti graphics card could carve itself a value proposition between the $250-300 mark, letting it coexist with both the GTX 1060 6 GB and the $350 RTX 2060, according to leaked "Final Fantasy XV" benchmarks scored by VideoCardz. In these benchmarks, the GTX 1660 Ti was found to perform roughly on par with the previous-generation GTX 1070 (non-Ti), which is plausible given that its 1,536 "Turing" CUDA cores, with their higher IPC and higher GPU clocks, are likely to catch up with the 1,920 "Pascal" CUDA cores of the GTX 1070, while 12 Gbps 192-bit GDDR6 serves up more memory bandwidth than 8 Gbps 256-bit GDDR5 (288 GB/s vs. 256 GB/s). The GTX 1070 retains an advantage in memory size, with 8 GB. NVIDIA is expected to launch the GTX 1660 Ti later this month at $279. Unlike the RTX 20-series, these chips lack NVIDIA RTX real-time raytracing technology and DLSS (deep-learning supersampling).
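The bandwidth figures above follow from the stated per-pin speeds and bus widths; a minimal sketch reproducing them (no other specs are assumed):

```python
# Reproducing the article's memory-bandwidth figures from the stated
# per-pin data rates and bus widths.
def mem_bandwidth_gbs(gbps_per_pin: float, bus_width_bits: int) -> float:
    """Peak memory bandwidth in GB/s: data rate per pin times bus width."""
    return gbps_per_pin * bus_width_bits / 8

gtx_1660_ti = mem_bandwidth_gbs(12, 192)  # 12 Gbps GDDR6, 192-bit -> 288 GB/s
gtx_1070 = mem_bandwidth_gbs(8, 256)      # 8 Gbps GDDR5, 256-bit -> 256 GB/s
print(f"GTX 1660 Ti: {gtx_1660_ti:.0f} GB/s vs GTX 1070: {gtx_1070:.0f} GB/s")
```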

Anthem VIP Demo Benchmarked on all GeForce RTX & Vega Cards

Yesterday, EA launched the VIP demo for their highly anticipated title "Anthem". The VIP demo is only accessible to Origin Access subscribers or people who preordered. For the first hours after the demo launched, many players were plagued by server crashes or "servers are full" messages. It looks like EA didn't anticipate the server load correctly, or the inrush of login attempts revealed a software bug that wasn't apparent under light load.

Things are running much better now, and we had time to run some Anthem benchmarks on a selection of graphics cards from both AMD and NVIDIA. We realized too late that even the Anthem demo comes with a five-activation limit, which gets triggered on every graphics-card change. That's why we could only test eight cards so far; we'll add more when the activations reset.

AMD Zen 2 12-Core, 24-Thread Matisse CPU Spotted in UserBenchmark

A new development could shake up our expectations of AMD's upcoming Ryzen CPUs: if true, it would mean previous rumors of much-increased core-counts at the top of AMD's offerings were accurate. User TUM Apisak, who has been involved in multiple information leaks and scouting for the hardware world, has dug up a submitted UserBenchmark result that screams of a 12-core, 24-thread AMD "Matisse" part (an engineering sample at that, so keep your hats on regarding the presented clock speeds).

The benchmark lists the CPU via product code 2D3212BGMCWH2_37 / 34_N (H2 is indicative of a Matisse CPU), with a base clock speed of 3.4 GHz and an average boost clock speed of 3.6 GHz. The rest of the system specs are very, very basic, with 4 GB of 1333 MHz DDR4 memory being used on a new AMD platform based on the Myrtle-MTS chipset. The processor is listed as having a 105 W TDP and 32 MB of L3 cache.

Basemark GPU 1.1 Update Released, Adds DirectX 12 Support

Today Basemark released version 1.1 of its multi-platform graphics hardware evaluation tool, Basemark GPU. Basemark GPU is available as a free download for personal use. Additionally, Basemark provides professional versions for Benchmark Development Program members and for corporate and commercial users.

Basemark GPU 1.1 Benchmark offers unparalleled, objective comparisons between Vulkan, OpenGL, OpenGL ES and now DirectX 12 for graphics performance analysis across both mobile and desktop platforms. Our desktop Linux version of Basemark GPU 1.1 will be available in the next few days utilizing the easily installable universal Flatpak delivery format.

Basemark GPU is available for download now.

Final Fantasy XV Benchmark Gets DLSS Update, GeForce RTX 2080 Performance Tested

Square Enix has just updated their Final Fantasy XV benchmark to version 1.2, adding support for NVIDIA's DLSS (Deep Learning Super-Sampling) technology. The new release still allows users to test any graphics card(s) they have, just as before. That said, owners of NVIDIA's RTX 2070, 2080, and 2080 Ti get the benefit of having access to DLSS for improved image quality and performance. NVIDIA claims that performance will improve by up to 38% with DLSS alone. To verify that claim, we ran a few tests of our own.

Preliminary testing was done using Corsair's Vengeance 5180 Gaming PC, which is equipped with an Intel i7-8700, 16 GB of 2666 MHz DDR4, and an NVIDIA GeForce RTX 2080. At 3840x2160 with the highest possible settings, DLSS offered a 36% increase in performance. This is very close to NVIDIA's specified increase and within the expected margin of error. When compared to the older GTX 1080 Ti, which was paired with a stock Intel i7-8700K and 32 GB of 3466 MHz memory, the GeForce RTX 2080 and GTX 1080 Ti offer roughly the same level of performance. DLSS, therefore, really is the difference-maker here, allowing for better performance and image quality. It should also be noted that both systems used the same NVIDIA 416.94 WHQL drivers.

AMD Vega 20 Possible Performance Spotted in Final Fantasy XV Benchmark

It would appear AMD's 7 nm Vega 20 has been benchmarked in Final Fantasy XV. While details are scarce, what we do know is that the hardware device ID 66AF:C1 can be linked to Vega 20 via Linux patches from back in April. Considering AMD has not confirmed any 7 nm Vega graphics cards for consumers, it is more likely this is an engineering sample for the new Radeon Instinct or Pro series cards.

Alleged AMD RX 590 3D Mark Time Spy Scores Surface

Benchmark scores for 3DMark's Time Spy have surfaced, purported to represent the performance of an unidentified "Generic VGA" - which is being pegged as AMD's new 12 nm Polaris revision. The RX 590 product name makes almost as much sense as it doesn't, though; for one, there's no real reason to release an entire new RX 600 series unless AMD is giving the 12 nm treatment to the whole lineup (which likely won't happen, given the investment in fabrication-process redesign and node capacity required). As such, the RX 590 moniker makes sense if AMD is only looking to increase its competitiveness in the sub-$300 space as a stop-gap until it finally has a new graphics architecture up its shader sleeves.

In Wake of Controversy, Intel-Paid Principled Technologies Retesting AMD Ryzen Processors

Well, that proverbial storm of dirty undies did serve to rile up some people over at Intel and their paid-for stint with Principled Technologies, whose name begs for a change for honesty's sake. In the wake of the controversy regarding its... flawed... testing of AMD's Ryzen 7 2700X performance in gaming workloads, Principled Technologies has now come forward to say it's retesting AMD's processors under less... biased circumstances.

Let's start with the glass-half-full part of this retesting: initial reports of memory timings on AMD's system being set in an almost "whatever" kind of way apparently weren't fair, since Principled Technologies says it used D.O.C.P. settings for AMD's XMP-equivalent memory configuration (not properly disclosed in the initial report, so it's their own fault this happened). The good stuff ends there, though; there were numerous other flaws in the methodology, such as the use of AMD's stock cooling solution against a Noctua cooler on the Intel system (which they'll now fix on retesting), and the use of AMD's Game Mode on a consumer Ryzen processor, which meant the usually 8-core processor was working in a 4-core mode (really, now?). The company will now retest both CPUs on a more even footing. How's that for a change?

Intel's 9th Gen Core Gaming Benchmarks Flawed and Misleading

At its 9th Generation Core processor launch extravaganza earlier this week, Intel posted benchmark numbers to show just how superior its processors are to AMD 2nd generation Ryzen "Pinnacle Ridge." PC enthusiasts worth their salt were quick to point out that Intel's numbers are both flawed and misleading as they misrepresent both test setups - by optimizing Intel processors beyond their out-of-the-box performance, and by running AMD processors with sub-optimal settings.

Intel paid Principled Technologies, a third-party performance-testing agency, to obtain performance numbers comparing the Core i9-9900K with the Ryzen 7 2700X across a spectrum of gaming benchmarks, instead of testing the two chips internally, and posted the test setup data in end-notes, as if to add a layer of credibility/deniability to the charade. The agency's numbers were almost simultaneously re-posted by PCGamesN under the gleaming headline "Up to 50% Faster than Ryzen at Gaming." You could fertilize the Sahara with this data.

Latest 3DMark Update adds Night Raid DX12 Benchmark for Integrated Graphics

With update 2.6.6174, released today, 3DMark now includes a new benchmark dubbed Night Raid. This latest addition to the popular 3DMark suite offers DX12 performance testing for laptops, tablets and other devices with integrated graphics. It also offers full support for ARM-based processors in the latest always-connected PCs running Microsoft's Windows 10 on ARM. Users running the free 3DMark Basic Edition will have access to this latest addition upon installing the update.

The Night Raid benchmark continues the trend of offering two graphics tests and a CPU test. While not as visually stunning as previous entries, this is to be expected considering it is targeted at integrated graphics processors and entry-level systems. Even so, it makes use of numerous graphical features, with graphics test 1 including dynamic reflections, ambient occlusion, and deferred rendering. Graphics test 2 features tessellation, complex particle systems, and depth-of-field effects with forward rendering. Finally, the CPU test measures performance through a combination of physics simulation, occlusion culling, and procedural generation.

NVIDIA RTX 2080 / 2080 Ti Results Appear For Final Fantasy XV

The online results database for the Final Fantasy XV benchmark has been partially updated to include NVIDIA's RTX 2080 and 2080 Ti. Scores for both standard and high quality settings at 2560x1440 and 3840x2160 are available, while data for 1920x1080 and the lite quality tests is not.

The RTX 2080 Ti results show it beating the GTX 1080 Ti by 26% and 28% in the standard and high quality tests respectively at 2560x1440. Increasing the resolution to 3840x2160 again shows the RTX 2080 Ti ahead, this time by 20% and 31% respectively. The RTX 2080 offers a similar improvement over the GTX 1080 at 2560x1440, delivering 28% and 33% better performance in the same standard and high quality tests. Once again, increasing the resolution to 3840x2160 results in performance 33% and 36% better than the GTX 1080. Overall, both graphics cards are shaping up to be around 30% faster than the previous generation without any special features. With Final Fantasy XV getting DLSS support in the near future, it is likely the performance of the RTX series will improve further compared to the previous generation.

First Time Spy Benchmark of Upcoming NVIDIA RTX 2080 Graphics Card Leaks

A Time Spy benchmark score of one of NVIDIA's upcoming RTX 20-series graphics cards has come out swinging in a new leak. We say "one of NVIDIA's" because we can't say for sure which core configuration this graphics card has: the only firm specs are the 8 GB of GDDR6 memory working at 14 Gbps, which matches either NVIDIA's RTX 2070 or RTX 2080. If we were the betting type, we'd say these scores are likely from an RTX 2080, simply because the performance improvement over the last-generation GTX 1080 (which usually scores around 7,300) sits pretty at some 36% - more or less what NVIDIA has been delivering with its new generation introductions.

The 10,030 points scored in Time Spy by this NVIDIA RTX graphics card brings its performance up to GTX 1080 Ti levels, and within spitting distance of the behemoth TITAN Xp. This should put to rest questions regarding improved performance in typical (read: non-raytracing) workloads on NVIDIA's upcoming RTX series. It remains to be seen, given the die size, how much of this improvement stems from actual per-core rasterization improvements, and how much comes simply from an increased number of execution units (NVIDIA says it isn't just the latter, by the way).

UL's Raytracing Benchmark Not Based on Time Spy, Completely New Development

After we covered news of UL's (the company previously known as Futuremark) move to include a raytracing benchmark mode in Time Spy, the company has contacted us and other members of the press to clarify its message and intentions. As it stands, the company will not be updating its Time Spy testing suite with raytracing technologies. Part of the reason is that this would require an immense rewrite of the benchmark itself, which would be counterproductive; the rest of the reason is that such a significant change would invalidate previous results recorded without the raytracing mode.

As such, UL has elected to develop a totally new benchmark, built from the ground up to use Microsoft's DirectX Raytracing (DXR). The new benchmark will be added to the 3DMark app as an update. The new test will produce its own benchmark scores, much like Fire Strike and Time Spy do, and will give users yet another ladder to climb on their way to the top of the benchmarking scene. Other details are scarce - which makes sense - but the test should still be available on or around the launch of NVIDIA's 20-series, come September 20th.