News Posts matching #Benchmark


3DMark Introduces Variable Rate Shading Benchmark

UL, the developers of 3DMark, today announced a new benchmarking feature. Specifically developed to test Variable Rate Shading (VRS) performance and image quality, the new feature lets users visualize the performance and image quality differences associated with more aggressive (or less aggressive) VRS settings. The technique is a smart one - it reduces the number of pixel shader operations on surfaces where detail is less important (such as frame edges, fast-moving objects, and darkened areas) to improve performance and shave some precious milliseconds off the delivery of each frame.
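
For the curious, here is a minimal Python sketch of the general idea behind VRS - purely illustrative and not UL's actual implementation; the per-tile metrics and thresholds are invented for the example:

```python
# Minimal sketch of the idea behind Variable Rate Shading (VRS):
# pick a coarser shading rate for screen tiles where detail matters less.
# Illustrative only - not UL's implementation; metrics/thresholds invented.

def pick_shading_rate(detail: float, motion: float, luminance: float) -> str:
    """Choose a shading rate for a tile from simple per-tile metrics.

    detail    -- contrast/edge measure in [0, 1] (higher = more detail)
    motion    -- screen-space velocity in [0, 1] (higher = faster movement)
    luminance -- average brightness in [0, 1] (lower = darker)
    """
    # Fast-moving or dark tiles tolerate aggressive rate reduction.
    if motion > 0.6 or luminance < 0.15:
        return "4x4"   # shade once per 4x4 pixel block
    if detail < 0.3:
        return "2x2"   # moderate reduction on flat surfaces
    return "1x1"       # full rate where detail is important

# A dim, fast-moving background tile vs. a detailed foreground tile:
print(pick_shading_rate(detail=0.2, motion=0.8, luminance=0.1))  # -> 4x4
print(pick_shading_rate(detail=0.9, motion=0.1, luminance=0.6))  # -> 1x1
```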

To run this test, you will need Windows 10 version 1903 or later, and a DirectX 12 GPU that supports Tier 1 VRS and the "AdditionalShadingRatesSupported" capability, such as an NVIDIA Turing-based GPU or an Intel Ice Lake processor. The VRS feature test is available now as a free update for 3DMark Advanced Edition. From now until September 2, 3DMark is also 75% off when you buy it from Steam or the UL benchmarks website.

NVIDIA GeForce RTX 2080 Super Appears in FFXV Benchmark Database

Results of NVIDIA's upcoming GeForce RTX 2080 Super graphics card have been revealed in the Final Fantasy XV benchmark database, where the card is compared against other offerings at 2560 x 1440 resolution using high quality settings. The card scored 8736 points, while its predecessor, the RTX 2080, scored 8071 points at the same resolution and settings. This shows an improvement of around 8% in favor of the newer model, which is to be expected given the increase in memory speed from 14 Gbps to 15.5 Gbps and in CUDA core count from 2944 to 3072. With this improvement, the RTX 2080 Super is now only 105 points (about one percent) behind the TITAN V in the FFXV benchmark. If you wish to compare results for yourself, you can do so here.
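
The quoted percentages can be reproduced from the raw scores with a quick Python sanity check (the TITAN V score here is simply derived from the reported 105-point gap):

```python
# Reproduce the relative-performance figures from the reported FFXV scores.

def uplift(new: int, old: int) -> float:
    """Percentage improvement of `new` over `old`."""
    return (new / old - 1) * 100

print(f"RTX 2080 Super vs RTX 2080: {uplift(8736, 8071):.1f}%")  # ~8.2%

# 105 points behind the TITAN V works out to roughly one percent:
titan_v = 8736 + 105
print(f"Gap to TITAN V: {105 / titan_v * 100:.1f}%")             # ~1.2%
```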

GeForce RTX 2070 Super Beats Radeon 5700 XT in FFXV Benchmark

In a recent submission to the Final Fantasy XV benchmark database, NVIDIA's upcoming GeForce RTX 2070 Super GPU has been benchmarked. The submission comes just a few days before the Super series officially launches. In the benchmark's tests, the RTX 2070 Super scored 7479 points at 1440p resolution on high quality settings, an almost 12% increase over the previous-generation RTX 2070, which scored 6679. The gain seems attributable to the increased CUDA core count, rumored to be about 11% higher, which makes the result look realistic.

When compared to AMD's upcoming Radeon 5700 XT, which was also submitted to the FFXV benchmark database and scored 5575 at the same settings, the RTX 2070 Super is about 34% faster.

UL Releases PCI Express Feature Test For 3DMark Ahead of PCIe 4.0 Hardware

With PCI-Express 4.0 graphics cards and motherboards soon to arrive, UL has released their PCI Express feature test for 3DMark. This latest addition is designed to verify the bandwidth available to the GPU over a computer's PCI Express interface. To accomplish this, the test makes bandwidth the limiting factor for performance by uploading a large amount of vertex and texture data to the GPU for each frame. The goal is to transfer enough data over the PCIe 4.0 interface to thoroughly saturate it. Once the test is complete, the result is the average bandwidth achieved during the run.
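
As a rough illustration of how such a test can arrive at its headline number, here is a back-of-the-envelope Python sketch; the per-frame data volume, frame rate, and run length are invented for the example and are not UL's actual test parameters:

```python
# Back-of-the-envelope view of a PCIe bandwidth test: push a known amount
# of vertex/texture data per frame and divide by elapsed time.
# All figures below are illustrative assumptions, not UL's parameters.

def avg_bandwidth_gbs(bytes_per_frame: int, frames: int, seconds: float) -> float:
    return bytes_per_frame * frames / seconds / 1e9

# Suppose 450 MB is uploaded per frame over a 20-second run at 60 FPS:
achieved = avg_bandwidth_gbs(450 * 1024**2, frames=20 * 60, seconds=20.0)
print(f"Achieved: {achieved:.1f} GB/s")          # ~28.3 GB/s

# For context, PCIe 4.0 x16 tops out at roughly 31.5 GB/s per direction.
print(f"Utilization: {achieved / 31.5 * 100:.0f}%")  # ~90%
```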

Intel Puts Out Benchmarks Showing Minimal Performance Impact of MDS Mitigation

Intel on Tuesday once again shook the IT world by disclosing severe microarchitecture-level security vulnerabilities affecting its processors. The Microarchitectural Data Sampling (MDS) class of vulnerabilities affects Intel CPU architectures older than "Coffee Lake" to a greater extent. Among other forms of mitigation, such as software patches, Intel is recommending that users disable HyperThreading technology (HTT), Intel's simultaneous multithreading (SMT) implementation. This would significantly reduce multi-threaded performance on older processors with lower core-counts, particularly 2-core/4-thread Core i3 chips.

On "safer" microarchitectures such as "Coffee Lake," though, Intel is expecting a minimal impact of software patches, and doesn't see any negative impact of disabling HTT. This may have something to do with the 50-100 percent increased core-counts with the 8th and 9th generations. The company put out a selection of benchmarks relevant to client and enterprise (data-center) use-cases. On the client use-case that's we're more interested in, a Core i9-9900K machine with software mitigation and HTT disabled is negligibly slower (within 2 percent) of a machine without mitigation and HTT enabled. Intel's selection of benchmarks include SYSMark 2014 SE, WebXprt 3, SPECInt rate base (1 copy and n copies), and 3DMark "Skydiver" with the chip's integrated UHD 630 graphics. Comparing machines with mitigations applied but toggling HTT presents a slightly different story.

Announcing DRAM Calculator for Ryzen v1.5.0 with an Integrated Benchmark

Yuri "1usmus" Bubliy, who practically wrote the book on AMD Ryzen memory overclocking, presents DRAM Calculator for Ryzen v1.5.0, the latest version of the most powerful tool available to help you overclock memory on PCs powered by AMD Ryzen processors. The biggest feature-addition is MEMBench, a new internal memory benchmark that tests performance of your machine's memory sub-system, and can be used to test the stability of your memory overclock. Among the other feature-additions include the "Compare Timings" button, which gives you a side-by-side comparison of your machine's existing settings, with what's possible or the settings you've arrived at using the app.

Motherboards vary by memory slot topology, and DRAM Calculator for Ryzen can now be told what topology your board has, so it can better tune settings such as procODT and RTT. The author also de-cluttered the main screen to improve ease of use. Among the under-the-hood changes is improved SoC voltage prediction for each generation of Ryzen. The main timing calculation and prediction algorithms are improved with additions such as GDM prediction. Also added is support for 4-DIMM system configurations. A bug in which imported HTML profiles were automatically assumed to be specific to Samsung B-die has been fixed. A number of minor changes were made, detailed in the change-log below.

DOWNLOAD: DRAM Calculator for Ryzen by 1usmus

Maxon Releases Cinebench R20 Benchmark

Maxon on Tuesday unveiled its Cinebench R20 benchmark, designed to test CPU performance at photorealistic rendering using the company's Cinema 4D R20 technology. The benchmark runs on any PC with at least 4 GB of memory and SSE3 instruction-set support, scales across any number of cores, and supports newer instruction-sets such as AVX2. Maxon describes Cinebench R20 as using four times the memory and eight times the CPU computational power of Cinebench R15. The benchmark implements Intel's Embree ray-tracing engine. Maxon is distributing Cinebench R20 exclusively through the Microsoft Store on the Windows platform.

Unlike its predecessor, Cinebench R20 lacks a GPU test. The CPU test scales with the number of CPU cores and SMT units available. It consists of a tiled rendering of a studio apartment living room scene by Render Baron, which includes ray-traced elements, high-resolution textures, illumination, and reflections. The number of logical processors available determines the number of rendering instances. The benchmark does indeed have a large memory footprint, and rewards HTT or SMT and high clock-speeds, as our own quick test shows: a 4-core/8-thread Core i7-7700K beats our 6-core/6-thread Core i5-9400F.
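
As a rough illustration of that scaling model - one rendering instance per logical processor, each pulling tiles from a shared pool - here is a minimal Python sketch; the tile count and the stand-in workload are invented for the example and only mimic Cinebench's scheduling in spirit:

```python
# Sketch of the scaling model: one rendering instance per logical
# processor, each working through a queue of scene tiles.
# Illustrative only - the workload is a CPU-burning stand-in.

import os
from multiprocessing import Pool

def render_tile(tile_id: int) -> int:
    # Stand-in for ray-tracing work: burn some CPU deterministically.
    acc = 0
    for i in range(200_000):
        acc = (acc + i * tile_id) % 1_000_003
    return acc

if __name__ == "__main__":
    workers = os.cpu_count()  # logical processors -> rendering instances
    tiles = list(range(96))   # the scene split into tiles (invented count)
    with Pool(processes=workers) as pool:
        results = pool.map(render_tile, tiles)
    print(f"Rendered {len(results)} tiles on {workers} workers")
```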

Update (11th March): We have removed the portable version download at Maxon's request.
DOWNLOAD: Maxon Cinebench R20 (Microsoft Store)

UL Corporation Announces Two New Benchmarks Coming to PCMark 10

UL Corporation today announces two new benchmark tests that will soon be coming to PCMark 10. The first is our eagerly awaited PCMark 10 battery life benchmark. The second is a new benchmark test based on Microsoft Office applications.

PCMark 10 Battery Life benchmark
Battery life is one of the most important criteria for choosing a laptop, but consumers and businesses alike find it hard to compare systems fairly. The challenge, of course, is that battery life depends on how the device is used. Unfortunately, manufacturers' claims are often based on unrealistic scenarios that don't reflect typical use. Figures for practical, day-to-day battery life, which are usually much lower, are rarely available.

NVIDIA GTX 1660 Ti to Perform Roughly On-par with GTX 1070: Leaked Benchmarks

NVIDIA's upcoming "Turing" based GeForce GTX 1660 Ti graphics card could carve itself a value proposition between the $250-300 mark that lets it coexist with both the GTX 1060 6 GB and the $350 RTX 2060, according to leaked "Final Fantasy XV" benchmarks scored by VideoCardz. In these benchmarks, the GTX 1660 Ti was found to perform roughly on par with the previous-generation GTX 1070 (non-Ti), which is plausible given that the 1,536 CUDA cores based on "Turing," architecture, with their higher IPC and higher GPU clocks, are likely to catch up with the 1,920 "Pascal" CUDA cores of the GTX 1070, while 12 Gbps 192-bit GDDR6 serves up more memory bandwidth than 8 Gbps 256-bit GDDR5 (288 GB/s vs. 256 GB/s). The GTX 1070 scores in memory size, with 8 GB of it. NVIDIA is expected to launch the GTX 1660 Ti later this month at USD $279. Unlike the RTX 20-series, these chips lack NVIDIA RTX real-time raytracing technology, and DLSS (deep-learning supersampling).

Anthem VIP Demo Benchmarked on all GeForce RTX & Vega Cards

Yesterday, EA launched the VIP demo for their highly anticipated title "Anthem". The VIP demo is only accessible to Origin Access subscribers or people who preordered. For the first hours after the demo launch, many players were plagued by server crashes or "servers are full" messages. It looks like EA didn't anticipate the server load correctly, or the inrush of login attempts revealed a software bug that wasn't apparent under light load.

Things are running much better now, and we had time to run some Anthem benchmarks on a selection of graphics cards from AMD and NVIDIA. We realized too late that even the Anthem demo comes with a five-activation limit, which gets triggered on every graphics card change. That's why we could only test eight cards so far... we'll add more when the activations reset.

AMD Zen 2 12-Core, 24-Thread Matisse CPU Spotted in UserBenchmark

A new development could shake up our expectations of AMD's next-generation Ryzen CPUs and, if true, could mean that previous rumors of much-increased core-counts at the top of AMD's offerings were accurate. User TUM Apisak, who has been involved in multiple information leaks and scouting for the hardware world, has dug up a submitted UserBenchmark result that screams of a 12-core, 24-thread AMD Matisse part (an engineering sample at that, so keep your hats on regarding the presented clock speeds).

The benchmark lists the CPU via product code 2D3212BGMCWH2_37 / 34_N (H2 is indicative of a Matisse CPU), with a base clock speed of 3.4 GHz and an average boost clock speed of 3.6 GHz. The rest of the system specs are very, very basic, with 4 GB of 1333 MHz DDR4 memory being used on a new AMD platform based on the Myrtle-MTS chipset. The processor is listed as having a 105 W TDP and 32 MB of L3 cache.

Basemark GPU 1.1 Update Released, Adds DirectX 12 Support

Today Basemark releases version 1.1 of its multi-platform graphics hardware evaluation tool, Basemark GPU. Basemark GPU is available as a free download for personal use. Additionally, Basemark provides professional versions for Benchmark Development Program members and for corporate and commercial users.

Basemark GPU 1.1 Benchmark offers unparalleled, objective comparisons between Vulkan, OpenGL, OpenGL ES and now DirectX 12 for graphics performance analysis across both mobile and desktop platforms. Our desktop Linux version of Basemark GPU 1.1 will be available in the next few days utilizing the easily installable universal Flatpak delivery format.

Basemark GPU is available for download now.

Final Fantasy XV Benchmark Gets DLSS Update, GeForce RTX 2080 Performance Tested

Square Enix has just updated their Final Fantasy XV benchmark to version 1.2, adding support for NVIDIA's DLSS (Deep Learning Super-Sampling) technology. The new release will still allow users to test any graphics card(s) they have, just as it did before. That said, owners of NVIDIA's RTX 2070, 2080, and 2080 Ti get the benefit of having access to DLSS for improved image quality and performance. NVIDIA claims that performance will improve by up to 38% with DLSS alone. In order to verify that claim, we ran a few tests of our own.

Preliminary testing was done using Corsair's Vengeance 5180 Gaming PC, which is equipped with an Intel i7-8700, 16 GB of 2666 MHz DDR4, and an NVIDIA GeForce RTX 2080. At 3840x2160 with the highest possible settings, DLSS offered a 36% increase in performance. This is very close to NVIDIA's specified increase and within the expected margin of error. When compared to the older GTX 1080 Ti, which was paired with a stock Intel i7-8700K and 32 GB of 3466 MHz memory, the GeForce RTX 2080 and GTX 1080 Ti offer roughly the same level of performance. DLSS, therefore, really is the difference maker here, allowing for better performance and image quality. It should also be noted that both systems used the same NVIDIA 416.94 WHQL drivers.
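
For those wanting to reproduce the math, the uplift calculation is straightforward; the frame rates below are hypothetical stand-ins (only the percentage was reported), used purely to show how the 36% figure is derived and compared against NVIDIA's claim:

```python
# Sanity check of the DLSS uplift figure. The FPS numbers are hypothetical
# stand-ins - the article reports the percentage, not the raw frame rates.

claimed = 38.0  # NVIDIA's "up to" claim, in percent

def observed_uplift(fps_dlss: float, fps_native: float) -> float:
    return (fps_dlss / fps_native - 1) * 100

fps_native, fps_dlss = 30.0, 40.8  # illustrative 4K numbers
gain = observed_uplift(fps_dlss, fps_native)
print(f"Observed: {gain:.0f}% vs claimed up-to {claimed:.0f}%")  # 36% vs 38%
```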

AMD Vega 20 Possible Performance Spotted in Final Fantasy XV Benchmark

It would appear AMD's 7 nm Vega 20 has been benchmarked in Final Fantasy XV. While details are scarce, what we do know is that the hardware device ID 66AF:C1 can be linked to Vega 20 via Linux patches from back in April. Considering AMD has not confirmed any 7 nm Vega graphics cards for consumers, it is more likely this is an engineering sample for the new Radeon Instinct or Pro series cards.

Alleged AMD RX 590 3D Mark Time Spy Scores Surface

Benchmark scores for 3DMark's Time Spy have surfaced, purported to represent the performance of an unidentified "Generic VGA" - which is being identified as AMD's new 12 nm Polaris revision. The RX 590 product name makes almost as much sense as it doesn't, though; for one, there's no real reason to release an entire new RX 600 series unless AMD is giving the 12 nm treatment to the whole lineup (which likely won't happen, due to the investment in fabrication process redesign and node capacity required). As such, the RX 590 moniker makes sense if AMD is only looking to increase its competitiveness in the sub-$300 space as a stop-gap until they finally have a new graphics architecture up their shader sleeves.

In Wake of Controversy, Intel-Paid Principled Technologies Retesting AMD Ryzen Processors

Well, that proverbial storm of dirty undies did serve to rile up some people over at Intel and their paid-for stint with Principled Technologies, whose name begs for a change for honesty's sake. In the wake of the controversy regarding its... flawed... testing of AMD's Ryzen 7 2700X performance in gaming workloads, Principled Technologies has now come forward to say it's retesting AMD's processors in less... biased circumstances.

Let's start with the glass half-full part of this retesting: initial reports of memory timings on AMD's system being set in an almost "whatever" kind of way apparently weren't fair, since Principled Technologies has said they used D.O.C.P. settings for AMD's XMP-equivalent memory settings (this wasn't properly disclosed in the initial report, so it's their own fault this happened). The good stuff ends there, though; there were numerous other flaws in the methodology, such as the use of AMD's stock cooling solution against a Noctua cooler on the Intel system (which they'll now fix in retesting), and the use of AMD's Game Mode on a consumer Ryzen processor, which meant the usually 8-core processor was working in a 4-core mode (really, now?). The company will now retest both CPUs on a more even footing. How's that for a change?

Intel's 9th Gen Core Gaming Benchmarks Flawed and Misleading

At its 9th Generation Core processor launch extravaganza earlier this week, Intel posted benchmark numbers to show just how superior its processors are to AMD's 2nd generation Ryzen "Pinnacle Ridge." PC enthusiasts worth their salt were quick to point out that Intel's numbers are both flawed and misleading, as they misrepresent both test setups - by optimizing Intel processors beyond their out-of-the-box performance, and by running AMD processors with sub-optimal settings.

Intel paid Principled Technologies, a third-party performance testing agency, to obtain performance numbers comparing the Core i9-9900K with the Ryzen 7 2700X across a spectrum of gaming benchmarks, instead of testing the two chips internally, posting the agency's test setup data in end-notes as if to add a layer of credibility/deniability to the charade. The agency's numbers were almost simultaneously re-posted by PCGamesN, under the gleaming headline "Up to 50% Faster than Ryzen at Gaming." You could fertilize the Sahara with this data.

Latest 3DMark Update adds Night Raid DX12 Benchmark for Integrated Graphics

With update 2.6.6174, released today, 3DMark now includes a new benchmark dubbed Night Raid. This latest addition to the popular 3DMark suite offers DX12 performance testing for laptops, tablets, and other devices with integrated graphics. It also offers full support for ARM-based processors in the latest always-connected PCs running Microsoft's Windows 10 on ARM. Users running the free 3DMark Basic Edition will have access to this latest addition upon installing the update.

The Night Raid benchmark continues the trend of offering two graphics tests and a CPU test. While not as visually stunning as previous entries, this is to be expected considering it is targeted at integrated graphics processors and entry-level systems. Even so, it makes use of numerous graphical features: graphics test 1 includes dynamic reflections, ambient occlusion, and deferred rendering, while graphics test 2 features tessellation, complex particle systems, and depth-of-field effects with forward rendering. Finally, the CPU test measures performance through a combination of physics simulation, occlusion culling, and procedural generation.

NVIDIA RTX 2080 / 2080 Ti Results Appear For Final Fantasy XV

The online results database for the Final Fantasy XV benchmark has been partially updated to include NVIDIA's RTX 2080 and RTX 2080 Ti. Scores for both standard and high quality settings at 2560x1440 and 3840x2160 are available, while data for the 1920x1080 and lite quality tests is not.

Taking a look at the RTX 2080 Ti results shows it beating the GTX 1080 Ti by 26% and 28% in the standard and high quality tests respectively at 2560x1440. Increasing the resolution to 3840x2160 again shows the RTX 2080 Ti ahead, this time by 20% and 31% respectively. The RTX 2080 offers a similar performance improvement over the GTX 1080 at 2560x1440, where it delivers improvements of 28% and 33% in the same standard and high quality tests. Once again, increasing the resolution to 3840x2160 results in performance 33% and 36% better than the GTX 1080. Overall, both graphics cards are shaping up to be around 30% faster than the previous generation without any special features. With Final Fantasy XV getting DLSS support in the near future, the performance of the RTX series will likely improve further compared to the previous generation.

First Time Spy Benchmark of Upcoming NVIDIA RTX 2080 Graphics Card Leaks

A Time Spy benchmark score of one of NVIDIA's upcoming RTX 20-series graphics cards has come out swinging in a new leak. We say "one of NVIDIA's" because we can't say for sure which core configuration this graphics card packs: the only effective specs we have are the 8 GB of GDDR6 memory working at 14 Gbps, which points to either NVIDIA's RTX 2070 or RTX 2080. If we were the betting type, we'd say these scores are likely from an RTX 2080, simply because the performance improvement over the last-generation GTX 1080 (which usually scores around 7,300) sits pretty at some 36% - more or less what NVIDIA has been delivering with its new generation introductions.

The 10,030 points scored in Time Spy by this NVIDIA RTX graphics card brings its performance up to GTX 1080 Ti levels, and within spitting distance of the behemoth Titan Xp. This should put to rest questions regarding improved performance in typical (read, non-raytracing) workloads on NVIDIA's upcoming RTX series. It remains to be seen how much of this improvement stems from actual per-core rasterization performance gains, and how much comes simply from an increased number of execution units (NVIDIA says it's not just the latter, by the way).

UL's Raytracing Benchmark Not Based on Time Spy, Completely New Development

After we covered news of UL's (the company previously known as Futuremark) move to include a raytracing benchmark mode in Time Spy, the company has contacted us and other members of the press to clarify their message and intentions. As it stands, the company will not be updating their Time Spy testing suite with raytracing technologies. Part of the reason is that this would require an immense rewrite of the benchmark itself, which would be counterproductive - and this leads to the rest of the reason: such a significant change would invalidate previous results that didn't have the raytracing mode activated.

As such, UL has elected to develop a totally new benchmark, built from the ground up to use Microsoft's DirectX Raytracing (DXR). The new benchmark will be added to the 3DMark app as an update. The new test will produce its own benchmarking scores, very much like Fire Strike and Time Spy do, and will provide users with yet another ladder to climb on their way to the top of the benchmarking scene. Other details are scarce - which makes sense. But the test should still be available on or around the time of NVIDIA's RTX 20-series launch, come September 20th.

NVIDIA Releases First Internal Performance Benchmarks for RTX 2080 Graphics Card

NVIDIA today released the first official performance numbers for its new generation of GeForce products - particularly, the RTX 2080. The RTX 20 series of graphics cards, according to the company, offers some 50% performance improvement (on average) from architectural improvements alone, on a per-core basis. This number is then built upon by the new DLSS technology, which allows the RTX 2080 to extend its performance advantage over the last-generation GTX 1080 to up to 2x. PUBG, Shadow of the Tomb Raider, and Final Fantasy XV see around 75 percent or more improved performance when using this tech.

NVIDIA is also touting the newfound ability to run games at 4K resolution at over 60 FPS, making the RTX 2080 the card to get if that's your preferred resolution (especially if paired with one of those dazzling OLED TVs...). Of course, image quality settings aren't revealed in the slides, so there's an important piece of the puzzle still missing. But considering NVIDIA's performance claims, and comparing them against achievable performance on last-generation hardware, it's fair to say that these FPS scores refer to the high or highest settings for each game.

Denuvo's Impact on Game Performance Benchmarked

Denuvo's impact on gaming performance has been discussed immensely - as has always been the case for any and all DRM solutions that find their way into games. However, evidence on whether Denuvo really impacts performance always seemed somewhat anecdotal - for a while, the inability to test games with Denuvo both implemented and officially removed (which, unsurprisingly, isn't the same as it being cracked) was a major obstacle to any sort of serious testing.

Now, courtesy of Overlord's YouTube channel, we can see whether or not Denuvo impacts performance. A total of seven games were tested on a platform with a stock Intel Core i7-2600K (to adequately test whether Denuvo impacts the CPU more than any other system component) paired with a stock-clocked GTX 1080 Ti. You really should take a look at the video; it's short and informative, but the gist of it is this: some games revealed performance improvements with Denuvo removed. Mass Effect: Andromeda saw a huge boost from an average of 57 FPS all the way to 64 FPS with the removal of the DRM solution, and Mad Max saw a more meager 54 to 60 FPS increase. The other games (which included Hitman, Abzu, and others) didn't see any performance difference.

Basemark Launches Free Multiplatform GPU Benchmark

Basemark today launched Basemark GPU, a new graphics performance evaluation tool for systems with Vulkan 1.0, OpenGL 4.5, or OpenGL ES 3.1 graphics APIs. The tool enables the industry to objectively and reliably quantify and compare the graphics performance of next-generation mobile, automotive, and desktop processors.

"We have poured all of our soul and expertise in making this product. The work started two and half years ago and this massive project has culminated in today's launch of a true state-of-the-art product," said Arto Ruotsalainen, CEO of Basemark. "We believe Basemark GPU will become an essential tool for anyone tasked to evaluate graphics performance in systems ranging from smart phones, smart TVs and cars to PCs."

First Benchmarks, CPU-Z Screenshots of AMD Ryzen Threadripper 32-core CPU Surface

First benchmarks and CPU-Z screenshots of AMD's upcoming 32-core Ryzen Threadripper monster have surfaced, courtesy of HKEPC. The on-time-for-launch (as AMD puts it) 12 nm "Pinnacle Ridge" processor has apparently been christened the Threadripper 2990X, which does make sense - should AMD be thinking of keeping the 2920X moniker for 12 cores and 2950X for 16 cores, it follows that a 20-core 2960X, a 24-core 2970X, a 28-core 2980X, and the aforementioned 32-core 2990X would round out the lineup. Whether AMD would want to offer such a tiered lineup of HEDT processors, however, is another matter entirely, and certainly open for discussion - too much of a good thing can actually happen, at least where the ASP of the Threadripper portfolio is concerned.

In the CPU-Z screenshot, the 2990X is running at 3.4 GHz base with up to 4.0 GHz XFR, and carries a 250 W TDP - a believable and very impressive achievement, testament to the 12 nm process and the low leakage it apparently delivers. The chip was then overclocked to 4.2 GHz on all cores, which caused some thermal throttling, since performance was lower than when the chip was clocked at just 4 GHz on all cores. Gains on this particular piece of silicon topped out at 4.12 GHz - the jump to 4.2 GHz must have required another bump in voltage that led to the aforementioned throttling. At 4.12 GHz, the chip scored 6,399 points in Cinebench - a remarkable achievement.