News Posts matching #3DMark


UL Benchmarks Announces 3DMark Wild Life Extreme Cross-platform Benchmark

Last year UL Benchmarks released 3DMark Wild Life, a cross-platform benchmark for Android, iOS and Windows. Today we are releasing 3DMark Wild Life Extreme, a more demanding cross-platform benchmark for comparing the graphics performance of mobile computing devices such as Windows notebooks, Always Connected PCs powered by Windows 10 on Arm, Apple Mac computers powered by the M1 chip, and the next generation of smartphones and tablets.

3DMark Wild Life Extreme is a new cross-platform benchmark for Apple, Android and Windows devices. Run Wild Life Extreme to test and compare the graphics performance of the latest Windows notebooks, Always Connected PCs powered by Windows 10 on Arm, Apple Mac computers powered by the M1 chip, and the next generation of smartphones and tablets.

DOWNLOAD: 3DMark for Windows v2.18.7181

Intel Xe HPG Graphics Card Could Compete with Big Navi & Ampere

Intel started shipping its first Xe DG1 graphics card to OEMs earlier this year, featuring 4 GB of LPDDR4X and 80 execution units. While these initial cards aren't high-performance parts and only compete with entry-level products like the NVIDIA MX350, they demonstrated Intel's ability to produce a discrete GPU. We have recently received some more potential performance information about Intel's next card, the DG2, from chief GPU architect Raja Koduri. Koduri posted an image of himself with the team at Intel's Folsom lab working on the Intel Iris Pro 5200 iGPU back in 2012, and noted that now, 9 years later, their new GPU is over 20 times faster. The Intel Iris Pro 5200 scores 1,015 on Videocard Benchmarks and ~1,400 points in 3DMark Fire Strike; if we scale these scores by 20x, we get values of ~20,300 in Videocard Benchmarks and ~28,000 points in Fire Strike. While these values don't extrapolate perfectly, they provide a good indication of potential performance, placing the GPU in the same realm as the NVIDIA RTX 3070 and AMD RX 6800.
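
To put the "20 times faster" remark in perspective, the scaling arithmetic above can be spelled out in a few lines of Python; the baseline scores are the ones quoted for the Iris Pro 5200, and the 20x factor is Koduri's figure rather than a measured DG2 result:

```python
# Rough extrapolation of the "over 20 times faster" remark, using the Iris Pro
# 5200 baseline scores quoted above. This is illustrative arithmetic only;
# actual DG2 performance will not be a simple 20x scaling of old scores.
baseline_scores = {
    "Videocard Benchmarks": 1015,
    "3DMark Fire Strike": 1400,
}
SCALE = 20

for test, score in baseline_scores.items():
    print(f"{test}: {score} x {SCALE} = {score * SCALE}")
# Videocard Benchmarks: 1015 x 20 = 20300
# 3DMark Fire Strike: 1400 x 20 = 28000
```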

NVIDIA GeForce RTX 3070 Memory-Modded with 16GB

PC enthusiast and overclocker VIK-on pulled off a daring memory chip mod on his Palit GeForce RTX 3070 GamingPro OC graphics card, swapping its 8 GB of 14 Gbps GDDR6 memory for 16 GB, using eight replacement 16 Gbit chips. The modded card recognizes the 16 GB of memory, can utilize it like other 16 GB graphics cards (such as the Radeon RX 6800), and is fairly stable in benchmarks and stress tests, although it wasn't stable initially and threw up some black screens. VIK-on later discovered that locking the clock speeds using EVGA Precision-X stabilizes the card, so it performs as expected.

The mod involves a physical replacement of the card's stock 8 Gbit memory chips with 16 Gbit ones, and shorting certain straps on the PCB that let it recognize the desired memory chip brand and density. After the mod, the GeForce driver and GPU-Z are able to read 16 GB of video memory, and the card is able to handle stress tests such as FurMark. The card was initially underperforming in 3DMark, putting out a Time Spy score of just 8356 points; following the clock-speed lock fix, it scores around 13000 points. The video presentation can be watched from the source link below. Kudos to VIK-on!

UL Benchmarks Announces DirectX 12 3DMark Mesh Shader Test

DirectX 12 Ultimate adds powerful new features and capabilities to DirectX 12 including DirectX Raytracing Tier 1.1, Mesh Shaders, Sampler Feedback, and Variable Rate Shading (VRS). After DirectX 12 Ultimate was announced, we started adding new tests to 3DMark to show how games can benefit from these new features. Our latest addition is the 3DMark Mesh Shader feature test, a new test that shows how game developers can boost frame rates by using mesh shaders in the graphics pipeline.

Kosin Demonstrates RTX 3090 Running Via Ryzen Laptop's M.2 Slot

Kosin, a Chinese subsidiary of Lenovo, has recently published a video showing how they modded a Ryzen notebook to run an RTX 3090 from the NVMe M.2 slot. Kosin used their Xiaoxin Air 14 laptop with a Ryzen 5 4600U processor for the demonstration. The system's internal M.2 NVMe SSD was removed and an M.2-to-PCIe expansion cable was attached, allowing them to connect the RTX 3090. Finally, the laptop housing was modified to allow the PCIe cable to exit the chassis, and a desktop power supply was attached to the RTX 3090 for power.

The system booted and correctly detected and utilized the attached RTX 3090. The system performed admirably, scoring 14,008 points in 3DMark Time Spy; for comparison, the RTX 3090 paired with a desktop Ryzen 5 3600 scores 15,552, and when paired with a Ryzen 7 5800X it scores 17,935. While this is an extreme example, pairing an RTX 3090 with a mid-range mobile processor, it goes to show the amount of performance achievable over the NVMe M.2 connector. The x4 PCIe 3.0 link of the laptop's M.2 slot can handle a maximum of roughly 4 GB/s, while the x16 PCIe 3.0 slot on previous-generation processors offers roughly 16 GB/s, and the new x16 PCIe 4.0 connector doubles that, providing about 32 GB/s of available bandwidth.
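
Those bandwidth figures follow directly from the per-lane rate of each PCIe generation. A minimal sketch, assuming roughly 0.985 GB/s per PCIe 3.0 lane and double that for PCIe 4.0 (the usual figures after 128b/130b encoding):

```python
# Approximate usable PCIe link bandwidth, using ~0.985 GB/s per PCIe 3.0 lane
# and ~1.969 GB/s per PCIe 4.0 lane (8 GT/s and 16 GT/s with 128b/130b encoding).
PER_LANE_GBPS = {"3.0": 0.985, "4.0": 1.969}

def link_bandwidth(gen: str, lanes: int) -> float:
    """Return the theoretical one-way bandwidth of a PCIe link in GB/s."""
    return PER_LANE_GBPS[gen] * lanes

print(f"PCIe 3.0 x4  (laptop M.2 slot): {link_bandwidth('3.0', 4):.1f} GB/s")   # ~3.9 GB/s
print(f"PCIe 3.0 x16 (desktop slot)   : {link_bandwidth('3.0', 16):.1f} GB/s")  # ~15.8 GB/s
print(f"PCIe 4.0 x16 (desktop slot)   : {link_bandwidth('4.0', 16):.1f} GB/s")  # ~31.5 GB/s
```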

NVIDIA RTX 2070 Modded to Support 16GB Memory

PC enthusiast VIK-on pulled off a sophisticated memory mod for the GeForce RTX 2070, doubling its memory amount to 16 GB. In a detailed video presentation (linked below), VIK-on demonstrated how he carefully replaced the 8 Gbit Micron-made GDDR6 memory chips of his card with 16 Gbit Samsung-made chips he bought off AliExpress for $200. Memory replacement mods are extremely difficult to pull off, as you first have to de-solder the existing chips using a hot air gun while keeping the contacts on the PCB intact (ensuring no pins short), and then solder the replacement BGA memory chips in place.

In addition, a set of "jumpers" on the PCB needs to be modified to make the card recognize the Samsung memory. The resulting card booted to the desktop successfully, with GPU-Z reading its full 16 GB memory size. The card successfully made it through 3DMark Time Spy, albeit with around 30% lower performance than a normal RTX 2070 (6176 points vs. ~9107 points). The card would also crash in FurMark. Still, it's mighty impressive that the "TU106" recognizes 16 GB of addressable memory (which means all its memory channels are intact) without the need for any BIOS mods, which would be impossible to pull off anyway.
Watch the VIK-on video presentation here.

NVIDIA GeForce RTX 3060 Ti Fire Strike and Time Spy Scores Surface

3DMark scores of the upcoming NVIDIA GeForce RTX 3060 Ti were leaked to the web by VideoCardz. The RTX 3060 Ti was put through standard 3DMark Fire Strike and Time Spy benchmark runs. In the DirectX 11-based Fire Strike benchmark, the card allegedly scores 30706 points, with 146.05 FPS in GT1 and 122 FPS in GT2. With the newer DirectX 12-based Time Spy test, it allegedly scores 12175 points, with 80.82 FPS in GT1, and 68.71 FPS in GT2. There are no system specs on display, but the scores put the RTX 3060 Ti slightly ahead of the previous-generation high-end GeForce RTX 2080 Super.

The GeForce RTX 3060 Ti, bound for a December 2 launch, is an upcoming performance-segment graphics card based on the "Ampere" architecture, and is carved out of the same 8 nm "GA104" silicon as the RTX 3070. It reportedly packs 4,864 "Ampere" CUDA cores, 38 second-gen RT cores, 152 third-gen Tensor cores, and the same memory configuration as the RTX 3070—8 GB of 14 Gbps GDDR6 across a 256-bit wide bus. NVIDIA is targeting a "<$399" price-point, making the card at least 43% cheaper than the RTX 2080 Super.

UL Benchmarks Updates 3DMark with Estimated Game Performance Data

UL Benchmarks late Wednesday released a major update to 3DMark. The new version 2.16.7094 adds a redesigned Results tab, and introduces the Estimated Game Performance feature, which lets you extrapolate your 3DMark score onto real-world game performance. The feature debuts with game performance estimations for five popular titles: "Apex Legends," "Battlefield V," "Fortnite," "Grand Theft Auto V," and "Red Dead Redemption 2." After a benchmark run, simply select one of these games from a drop-down menu, and a resolution-plus-preset combo from another, and 3DMark will give you an estimated frame-rate number, taking into account your overall 3DMark score and the performance numbers of sub-systems (graphics, physics, etc.). In this article, UL Benchmarks explains the correlation between a 3DMark score and corresponding estimated frame-rates for "Fortnite."
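
UL hasn't published the exact model behind the Estimated Game Performance feature, so purely as an illustration of the idea, mapping a benchmark score to a frame-rate estimate, here is a minimal sketch that fits a linear relation to made-up (score, FPS) pairs; both the sample data and the linear form are assumptions, not UL's method:

```python
# Hypothetical illustration only: fit a simple linear relation between 3DMark
# score and measured FPS for one game/preset, then predict FPS for a new score.
# UL's actual model is not public and also weighs sub-scores; the sample data
# below is made up.
def fit_linear(samples):
    """Least-squares fit y = slope * x + intercept over (x, y) pairs."""
    n = len(samples)
    mean_x = sum(x for x, _ in samples) / n
    mean_y = sum(y for _, y in samples) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in samples)
             / sum((x - mean_x) ** 2 for x, _ in samples))
    return slope, mean_y - slope * mean_x

samples = [(5000, 55.0), (8000, 90.0), (12000, 135.0)]   # (score, FPS) placeholders
slope, intercept = fit_linear(samples)
print(f"Estimated FPS at a score of 10000: {slope * 10000 + intercept:.0f}")
```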

DOWNLOAD: 3DMark 2.16.7094

Radeon RX 6800 XT Overclocked to 2.80 GHz on LN2, Crushes 3DMark Fire Strike Record

One of the secret sauces of AMD's new "Big Navi" Radeon RX 6800 series GPUs is their ability to sustain high engine clock speeds, with the company leveraging boost frequencies above 2.00 GHz in certain scenarios. It was only a matter of time before professional overclockers got their hands on an RX 6800 XT and paired it with an LN2 evaporator. TecLab_Takukou is the new king of the 3DMark Fire Strike leaderboard thanks to their skill in running an RX 6800 XT at an insane 2.80 GHz engine clock. Takukou overclocked the RX 6800 XT to 2.80 GHz core, and 2150 MHz (17.2 Gbps) memory. This particular configuration yielded a Fire Strike score of 48890 points.

In a separate feat, with the RX 6800 XT running at 2.75 GHz, Takukou chased down the HWBot 3DMark Fire Strike leaderboard with 49456 points. For both the 2.80 GHz and 2.75 GHz feats, the rest of the system included an LN2-cooled AMD Ryzen 9 5950X processor running at 5.60 GHz all-core, 32 GB of DDR4-3800 memory, and an MSI MEG X570 GODLIKE motherboard. From the looks of it, Takukou is using a reference-design RX 6800 XT board, so we can only imagine what else can be accomplished with custom-design RX 6800 XT boards, such as the Sapphire NITRO+, PowerColor Red Devil, or ASUS ROG Strix O16G. Takukou leads the 3DMark Fire Strike leaderboard, followed by another RX 6800 XT-powered close second, Lucky_n00b (47932 points, 2.65 GHz engine clock on air). Safedisc is third (47725 points, RTX 3090 @ 2.38 GHz).

Overclocked AMD Radeon RX 6800 XT Sets World Record on Air

AMD's Radeon RX 6800 XT graphics card debuted yesterday, and in the overall test results we saw that it runs just a few percent behind NVIDIA's direct competitor, the GeForce RTX 3080. However, when it comes to overclocking and world records, the card has just set one. Popular extreme overclocker Alva Jonathan, aka "LUCKY_NOOB", has overclocked the Radeon RX 6800 XT and set a new world record with the card. Paired with an LN2-cooled Ryzen 9 5950X clocked at 5.4 GHz, the graphics card itself was cooled by... an air cooler? Indeed it was. Lucky managed to clock the RX 6800 XT at 2650 MHz using the reference air cooler, and with that system he scored 47932 points in 3DMark Fire Strike.

The overclocker modified 3DMark's tessellation settings, presumably to give the Radeon card more performance, so the score isn't valid on the official 3DMark charts. What makes it a world record claim is the fact that HWBOT does accept those numbers, which ranks the result higher than the NVIDIA GeForce RTX 3090 graphics card that scored 47725 points. Despite the modifications, it is impressive to see AMD's card rank that high, and as more overclockers get their hands on these cards, the question is whether we will see the 3 GHz barrier broken by a Radeon card.

Intel "Tiger Lake" Gen12 Xe iGPU Compared with AMD "Renoir" Vega 8 in 3DMark "Night Raid"

Last week, reports of Intel's Gen12 Xe integrated graphics solution catching up with AMD's Radeon Vega 8 iGPU, found in its latest Ryzen 4000U processors, in higher-tier 3DMark tests sparked quite some intrigue. AMD's higher CPU core count bailed the processor out in overall 3DMark 11 scores. Thanks to Thai PC enthusiast TUM_APISAK, we now have a face-off between the Core i7-1165G7 "Tiger Lake-U" processor (15 W), the AMD Ryzen 7 4800U (15 W), and the mainstream-segment Ryzen 7 4800HS (35 W) in 3DMark "Night Raid."

The "Night Raid" test is designed to evaluate iGPU performance, and takes advantage of DirectX 12. The Core i7-1165G7 falls behind both the Ryzen 7 4800U and the 4800HS in CPU score, owing to its lower CPU core count, despite higher IPC. The i7-1165G7 is a 4-core/8-thread chip featuring "Willow Cove" CPU cores, facing off against 8-core/16-thread "Zen 2" CPU setups on the two Ryzens. Things get interesting with graphics tests, where the Radeon Vega 8 solution aboard the 4800U scores 64.63 FPS in GT1, and 89.41 FPS in GT2; compared to just 27.79 FPS in GT1 and 32.05 FPS in GT2, by the Gen12 Xe iGPU in the i7-1165G7.

Intel 8-core/16-thread "Rocket Lake-S" Processor Engineering Sample 3DMarked

The "Rocket Lake-S" microarchitecture by Intel sees the company back-port its next-generation "Willow Cove" CPU core to the existing 14 nm++ silicon fabrication process in the form of an 8-core die with a Gen12 Xe iGPU. An engineering sample of one such processor made it to the Futuremark database. Clocked at 3.20 GHz with 4.30 GHz boost frequency, the "Rocket Lake-S" ES was put through 3DMark "Fire Strike" and "Time Spy," with its iGPU in play, instead of a discrete graphics card.

In "Fire Strike," the "Rocket Lake-S" ES scores 18898 points in the physics test, 1895 points in the graphics tests, and an overall score of 1746 points. With "Time Spy," the overall score is 605, with a CPU score of 4963 points, and graphics score of 524. The 11th generation Core "Rocket Lake-S" processor is expected to be compatible with existing Intel 400-series chipset motherboards, and feature a PCI-Express gen 4.0 root complex. Several 400-series chipset motherboards have PCIe gen 4.0 preparation for exactly this. The increased IPC from the "Willow Cove" cores is expected to make the 8-core "Rocket Lake-S" a powerful option for gaming and productivity tasks that don't scale across too many cores.

Intel "Elkhart Lake" Processor Put Through 3DMark

One of the first performance benchmarks of Intel's upcoming low-power processor, codenamed "Elkhart Lake," surfaced on the Futuremark database, courtesy of TUM_APISAK. The chip scores 571 points, with a graphics score of 590 and a physics score of 3801. The graphics score of the Gen11-based iGPU is behind the Intel UHD 630 Gen 9.5 iGPU found in heavier desktop processors since "Kaby Lake," but we predict it's being dragged down by the CPU (3801 physics points vs. roughly 17000 points for a 6-core "Coffee Lake" processor). The chip goes on to score 170 points in Time Spy, with a graphics score of 148 and a physics score of 1131. Perhaps Cloud Gate would've been a more apt test.

The "Elkhart Lake" silicon is built on Intel's 10 nm silicon fabrication process, and will power the next generation of Pentium Silver and Celeron processors. The chip features up to 4 CPU cores based on the "Tremont" low-power architecture, and an iGPU based on the newer Gen11 architecture. It features a single-channel memory controller that supports DDR4 and LPDDR4/x memory types. The chip in these 3DMark tests is a 4-core variant, likely a Pentium Silver engineering sample, with its CPU clocked at 1.90 GHz, and is paired with LPDDR4x memory. The chip comes in 5 W, 9 W, and 12 W TDP variants.

AMD Ryzen 9 3900XT and Ryzen 7 3800XT Benchmarks Surface

AMD's 3rd generation Ryzen "Matisse Refresh" processors surfaced on the Futuremark online database, as dug up by TUM_APISAK, where someone with access to them allegedly posted some performance numbers. Interestingly, the clock speeds as read by the Futuremark SystemInfo module appear very different from those previously reported. The 3800XT is shown featuring a 3.80 GHz nominal clock, boosting up to 4.70 GHz, while the 3900XT has a 3.90 GHz nominal clock, boosting up to the same 4.70 GHz as the 3800XT. APISAK reports that the 3800XT scores 25135 points in the Fire Strike physics test.

A WCCFTech report presents screenshots of Cinebench R20 single-thread scores for the 3900XT, where it is shown beating the i9-10900K. The 3800XT is within striking distance of the i9-10900K in this test, and beats the i7-10700KF. This single-threaded performance figure suggests that AMD's design focus with "Matisse Refresh" has been to shore up single-threaded and less-parallelized application performance, in other words, gaming performance.

Core i3-10100 vs. Ryzen 3 3100 Featherweight 3DMark Showdown Surfaces

AMD's timely announcement of the Ryzen 3 "Matisse" processor series could stir things up in the entry level, as Intel kitted its 10th generation Core i3 processors as 4-core/8-thread. Last week, a head-to-head Cinebench comparison between the i3-10300 and the 3300X ensued, and today we have a 3DMark Fire Strike and Time Spy comparison between their smaller siblings, the i3-10100 and the 3100, courtesy of Thai PC enthusiast TUM_APISAK. The two were benchmarked on Time Spy and Fire Strike on otherwise constant hardware: an RTX 2060 graphics card, 16 GB of memory, and a 1 TB Samsung 970 EVO SSD.

With Fire Strike, the 3100-powered machine leads in the overall 3DMark score (by 0.31%) and the CPU-dependent Physics score (by 13.7%). The i3-10100 is ahead by 1.4% in the Graphics score, thanks to a 1.6% lead in graphics test 1 and a 1.4% lead in graphics test 2. Over to the more advanced Time Spy test, which uses the DirectX 12 API that better leverages multi-core CPUs, we see the Ryzen 3 3100 post a 0.63% higher overall score and a 1.5% higher CPU score, while the i3-10100-powered machine posts a graphics score that's within 1% higher. These numbers may suggest that the i3-10100 and the 3100 are within striking distance of each other, and that either is a good pick for gamers, until you look at pricing. Intel's official pricing for the i3-10100 is $122 (per chip in 1,000-unit trays), whereas AMD lists the SEP of the Ryzen 3 3100 at $99, making the Intel chip about 23% pricier and giving AMD a vast price-performance advantage that's hard to ignore, more so when you take into account value additions such as an unlocked multiplier and PCIe gen 4.0.
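
The pricing gap quoted above is straightforward arithmetic on the two list prices; a quick sketch makes the premium explicit (street prices will of course vary):

```python
# Premium of the i3-10100's $122 tray price over the Ryzen 3 3100's $99 SEP.
# List prices only; retail pricing will differ.
intel_price, amd_price = 122, 99
print(f"i3-10100 premium over Ryzen 3 3100: {(intel_price / amd_price - 1) * 100:.0f}%")  # ~23%
```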

Intel Gen12 Xe iGPU Could Match AMD's Vega-based iGPUs

Intel's first integrated graphics solution based on its ambitious new Xe graphics architecture could match AMD's "Vega" architecture based iGPU solutions, such as the one found in its latest Ryzen 4000 series "Renoir" processors, according to leaked 3DMark Fire Strike numbers put out by @_rogame. Benchmark results of a prototype laptop based on Intel's "Tiger Lake-U" processor surfaced on the 3DMark database. This processor embeds Intel's Gen12 Xe iGPU solution, which is purported to offer significant performance gains over current Gen11 and Gen9.5 based iGPUs.

The prototype 2-core/4-thread "Tiger Lake-U" processor with Gen12 graphics yields a 3DMark Fire Strike score of 2,196 points, with a graphics score of 2,467 and a physics score of 6,488. These scores are comparable to 8 CU Radeon Vega iGPU solutions. "Renoir" tops out at 8 CUs, but shores up performance to 11 CU "Picasso" levels by other means: besides tapping into the 7 nm process to increase engine clocks, it improves the boosting algorithm and modernizes the display and multimedia engines, while the iGPU remains largely based on the same three-year-old "Vega" architecture. Intel Gen12 Xe makes its debut with the "Tiger Lake" microarchitecture slated for 2021.

UL Benchmarks Outs 3DMark Feature Test for Variable-Rate Shading Tier-2

UL Benchmarks today announced an update to 3DMark, expanding the Variable-Rate Shading (VRS) feature test with support for VRS Tier 2. A component of DirectX 12, VRS Tier 1 is supported by the NVIDIA "Turing" and Intel Gen11 graphics architectures (Ice Lake's iGPU). VRS Tier 2 is currently supported only by NVIDIA "Turing" GPUs. VRS Tier 2 adds further capabilities, such as applying lower levels of shading to areas of the scene with low contrast to their surroundings (think areas under shadow), yielding performance gains. The 3DMark VRS test runs in two passes: pass 1 runs with VRS off to provide a point of reference, and pass 2 with VRS on, to test the performance gained. The 3DMark update with the VRS Tier 2 test applies to the Advanced and Professional editions.
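
The headline figure the two-pass design produces is simply the relative frame-rate gain of the VRS-on pass over the VRS-off reference pass; a minimal sketch with placeholder FPS values:

```python
# Relative frame-rate gain of the VRS-on pass over the VRS-off reference pass.
# The FPS values here are placeholders, not real test results.
def vrs_gain_percent(fps_off: float, fps_on: float) -> float:
    return (fps_on - fps_off) / fps_off * 100.0

print(f"VRS Tier-2 gain: {vrs_gain_percent(fps_off=100.0, fps_on=135.0):.1f}%")  # 35.0%
```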

DOWNLOAD: 3DMark v2.11.6846

AMD Radeon RX 5500 (OEM) Tested, Almost As Fast as RX 580

German publication Heise.de got its hands on a Radeon RX 5500 (OEM) graphics card and put it through its test bench. The numbers yielded show exactly what caused NVIDIA to refresh its entry-level lineup with the GeForce GTX 1650 Super and the GTX 1660 Super. The RX 5500, in Heise's testing, was found to match the previous-generation RX 580 and NVIDIA's current-gen GTX 1660 (non-Super). When compared to a factory-overclocked RX 580 NITRO+ and a GTX 1660 OC, the RX 5500 yielded similar 3DMark Fire Strike performance, with 12,111 points, compared to 12,744 points for the RX 580 NITRO+ and 12,525 points for the GTX 1660 OC.

The card was put through two other game tests at 1080p: "Shadow of the Tomb Raider" and "Far Cry 5." In SoTR, the RX 5500 put out 59 fps, which was slightly behind the 65 fps of the RX 580 NITRO+ and the 69 fps of the GTX 1660 OC. In "Far Cry 5," it scored 72 fps, which again is within reach of the 75 fps of the RX 580 NITRO+ and the 85 fps of the GTX 1660 OC. It's important to once again note that the RX 580 and GTX 1660 in this comparison are factory-overclocked cards, while the RX 5500 is ticking at stock speeds. Heise also did some power testing, and found the RX 5500 to have a lower idle power draw than the GTX 1660 OC, at 7 W compared to 10 W for the NVIDIA card and 12 W for the RX 580 NITRO+. Gaming power draw is also similar to the GTX 1660, with the RX 5500 pulling 133 W compared to 128 W for the GTX 1660. This short test shows that the RX 5500 is in the same league as the RX 580 and GTX 1660, and explains why NVIDIA had to make its recent product-stack changes.

Intel Core i9-10980XE "Cascade Lake-X" Benchmarked

One of the first reviews of Intel's new flagship HEDT processor, the Core i9-10980XE, just hit the web. Lab501.ro got their hands on a freshly minted i9-10980XE and put it through their test bench. Based on the "Cascade Lake-X" silicon, the i9-10980XE offers almost identical IPC to "Skylake-X," but succeeds the older generation with the AI-accelerating DLBoost instruction set, an improved multi-core boosting algorithm, higher clock speeds, and most importantly, a doubling in price-performance achieved by cutting the Dollars-per-core metric by half, across the board.

Armed with 18 cores, the i9-10980XE is ahead of the 12-core Ryzen 9 3900X in rendering and simulation tests, although not by much for a chip that has 50% more cores. This is probably attributable to the competing AMD chip being able to sustain higher all-core boost clock speeds. In tests that not only scale with cores but are also hungry for memory bandwidth, such as 7-zip and media encoding, Intel extends its lead thanks to its quad-channel memory interface, which is able to feed its cores with datasets faster.

AMD Ryzen 9 3950X Beats Intel Core i9-10980XE by 24% in 3DMark Physics

AMD's upcoming Ryzen 9 3950X socket AM4 processor beats Intel's flagship 18-core processor, the Core i9-10980XE, by a staggering 24 percent in 3DMark Physics, according to a PC Perspective report citing TUM_APISAK. The 3950X is a 16-core/32-thread processor that's drop-in compatible with any motherboard that can run the Ryzen 9 3900X. The i9-10980XE is an 18-core/36-thread HEDT chip that enjoys double the memory bus width of the AMD chip, and is based on Intel's "Cascade Lake-X" silicon. The AMD processor doesn't have a tangible clock-speed advantage: the 3950X has a maximum boost frequency of 4.70 GHz, while the i9-10980XE isn't far behind at 4.60 GHz, but things differ with all-core boost.

When paired with 16 GB of dual-channel DDR4-3200 memory, the Ryzen 9 3950X-powered machine scores 32,082 points in the CPU-intensive physics test of 3DMark. In comparison, the i9-10980XE, paired with 32 GB of quad-channel DDR4-2667 memory, scores just 25,838 points, as mentioned by PC Perspective. The graphics card is irrelevant to this test. It's pertinent to note here that the 3DMark physics test scales across practically any number of CPU cores/threads, and the AMD processor could be benefiting from a higher all-core boost frequency than the Intel chip. Although AMD doesn't mention a number in its specifications, the 3950X is expected to have an all-core boost frequency that's north of 4.00 GHz, as its 12-core sibling, the 3900X, already offers 4.20 GHz all-core. In contrast, the i9-10980XE has an all-core boost frequency of 3.80 GHz. This difference in boost frequency apparently even negates the additional 2 cores and 4 threads that the Intel chip enjoys, in what is yet another example of AMD having caught up with Intel in the IPC game.
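
A quick back-of-the-envelope check of the figures above: the score ratio gives the roughly 24 percent lead, and multiplying cores by the quoted all-core boost clocks shows the two chips are nearly even in aggregate core-GHz, which is why the remaining gap has to come from per-clock throughput. The 4.2 GHz all-core figure for the 3950X is an assumption borrowed from the 3900X, as noted above:

```python
# Back-of-the-envelope check of the figures above.
amd_score, intel_score = 32082, 25838
print(f"Ryzen 9 3950X physics lead: {(amd_score / intel_score - 1) * 100:.1f}%")  # ~24.2%

# Aggregate "core-GHz" at the quoted all-core boost clocks. The 4.2 GHz figure
# for the 3950X is an assumption borrowed from the 3900X.
amd_core_ghz = 16 * 4.2
intel_core_ghz = 18 * 3.8
print(f"core-GHz: 3950X {amd_core_ghz:.1f} vs i9-10980XE {intel_core_ghz:.1f}")  # 67.2 vs 68.4
```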

Intel Iris Plus Graphics G7 iGPU Beats AMD RX Vega 10: Benchmarks

Intel is taking big strides forward with its Gen11 integrated graphics architecture. Its performance-configured variant, the Intel Iris Plus Graphics G7, featured in the Core i7-1065G7 "Ice Lake" processor, is found to beat the AMD Radeon RX Vega 10 iGPU, found in the Ryzen 7 2700U processor ("Raven Ridge"), by as much as 16 percent in 3DMark 11 and a staggering 23 percent in 3DMark Fire Strike 1080p. Notebook Check put the two iGPUs through these and a few game tests to derive an initial verdict that Intel's iGPU has caught up with AMD's RX Vega 10. AMD has since updated its iGPU incrementally with the "Picasso" silicon, providing it with higher clock speeds and updated display and multimedia engines.

The machines tested here are the Lenovo Ideapad S540-14API for the AMD chip, and the Lenovo Yoga C940-14IIL with the i7-1065G7. The Iris Plus G7 packs 64 Gen11 execution units, while the Radeon RX Vega 10 has 640 stream processors based on the "Vega" architecture. Moving over to gaming performance, we see the Intel iGPU 2 percent faster than the RX Vega 10 in Bioshock Infinite at 1080p, 12 percent slower in Dota 2 Reborn at 1080p, and 8 percent faster in XPlane 11.11.

Leaked 3DMark Time Spy Result shows Radeon RX 5700 XT matching GeForce RTX 2070

Reviewers should have received their Radeon "Navi" review samples by now, so it's only natural that the number of leaks is increasing. WCCFTech has spotted one such leak in the 3DMark Time Spy database. The card, which is just labeled "Generic VGA," achieved a final score of 8575 points, a GPU score of 8719, and a CPU score of 7843, which is almost identical to WCCFTech's own comparison benchmarks for the GeForce RTX 2070 Founders Edition (8901). The Vega 64 scored 7427, which leads WCCFTech to believe this must be the Radeon RX 5700 XT. The result has since been removed from the 3DMark database, which also suggests it's for an unreleased product.

UL Releases PCI Express Feature Test For 3DMark Ahead of PCIe 4.0 Hardware

With PCI-Express 4.0 graphics cards and motherboards soon to arrive, UL has released its PCI Express feature test for 3DMark. This latest addition is designed to verify the bandwidth available to the GPU over a computer's PCI Express interface. To accomplish this, the test makes bandwidth the limiting factor for performance by uploading a large amount of vertex and texture data to the GPU for each frame. The goal is to transfer enough data over the PCIe 4.0 interface to thoroughly saturate it. Once the test is complete, the result reported is the average bandwidth achieved during the test.
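
The reported metric is easy to reason about: total data uploaded to the GPU divided by elapsed time. A minimal sketch with placeholder numbers (the payload size and duration below are assumptions, not details of the actual test):

```python
# Average upload bandwidth: total bytes copied to the GPU divided by elapsed
# time. All values below are placeholders; the real test streams vertex and
# texture data to the GPU every frame.
def average_bandwidth_gbps(bytes_transferred: int, seconds: float) -> float:
    return bytes_transferred / seconds / 1e9

frames = 1000
payload_per_frame = 400 * 1024 * 1024   # assume ~400 MiB uploaded per frame
elapsed_seconds = 30.0                  # assumed duration of the run
print(f"Average bandwidth: {average_bandwidth_gbps(frames * payload_per_frame, elapsed_seconds):.1f} GB/s")
```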

UL Announces New 3DMark Benchmarks for Testing PCIe Performance Across Generations

UL Benchmarks has announced, via its 3DMark product, that it will be introducing a new, comprehensive test that aims to measure PCIe bandwidth across generations. Citing the introduction of PCIe 4.0 to the masses, soon available in the consumer market via AMD's Ryzen 3000 series release, UL wants users to be able to see what difference this makes in allowing for more complex games and scenarios that aren't data-constrained by PCIe 3.0.

The 3DMark PCIe Performance Test will be made available this summer, free of charge, for 3DMark Advanced Edition and for 3DMark Professional Edition customers with a valid annual license.

G.SKILL DDR4 Memory Achieves DDR4-5886 and 23 Overclocking Records

G.SKILL International Enterprise Co., Ltd., the world's leading manufacturer of extreme performance memory and gaming peripherals, is excited to announce that 23 overclocking records in various benchmark categories were broken during the Computex 2019 time frame, including the world record for the fastest memory frequency, all using G.SKILL DDR4 memory kits built with high performance Samsung 8Gb components, the latest Intel processors, and high performance motherboards.

This week at the G.SKILL Computex booth, a new world record for the fastest memory frequency was set by Toppc, a renowned professional extreme overclocker, reaching an incredible DDR4-5886 using Trident Z Royal memory on an MSI MPG Z390I GAMING EDGE AC motherboard and an Intel Core i9-9900K processor. At the end of Computex 2019, the top two results for the fastest memory frequency were set by team MSI using an identical hardware setup.