News Posts matching #3DMark


Intel "Tiger Lake" Gen12 Xe iGPU Compared with AMD "Renoir" Vega 8 in 3DMark "Night Raid"

Last week, reports of Intel's Gen12 Xe integrated graphics solution catching up with the Radeon Vega 8 iGPU found in AMD's latest Ryzen 4000U processors in higher-tier 3DMark tests sparked quite a bit of intrigue. AMD's higher CPU core count bailed its processor out in overall 3DMark 11 scores. Thanks to Thai PC enthusiast TUM_APISAK, we now have a face-off in 3DMark "Night Raid" between the Core i7-1165G7 "Tiger Lake-U" processor (15 W), the AMD Ryzen 7 4800U (15 W), and the mainstream-segment Ryzen 7 4800HS (35 W).

The "Night Raid" test is designed to evaluate iGPU performance, and takes advantage of DirectX 12. The Core i7-1165G7 falls behind both the Ryzen 7 4800U and the 4800HS in CPU score, owing to its lower CPU core count, despite higher IPC. The i7-1165G7 is a 4-core/8-thread chip featuring "Willow Cove" CPU cores, facing off against 8-core/16-thread "Zen 2" CPU setups on the two Ryzens. Things get interesting with graphics tests, where the Radeon Vega 8 solution aboard the 4800U scores 64.63 FPS in GT1, and 89.41 FPS in GT2; compared to just 27.79 FPS in GT1 and 32.05 FPS in GT2, by the Gen12 Xe iGPU in the i7-1165G7.

Intel 8-core/16-thread "Rocket Lake-S" Processor Engineering Sample 3DMarked

The "Rocket Lake-S" microarchitecture by Intel sees the company back-port its next-generation "Willow Cove" CPU core to the existing 14 nm++ silicon fabrication process in the form of an 8-core die with a Gen12 Xe iGPU. An engineering sample of one such processor made it to the Futuremark database. Clocked at 3.20 GHz with 4.30 GHz boost frequency, the "Rocket Lake-S" ES was put through 3DMark "Fire Strike" and "Time Spy," with its iGPU in play, instead of a discrete graphics card.

In "Fire Strike," the "Rocket Lake-S" ES scores 18898 points in the physics test, 1895 points in the graphics tests, and an overall score of 1746 points. With "Time Spy," the overall score is 605, with a CPU score of 4963 points, and graphics score of 524. The 11th generation Core "Rocket Lake-S" processor is expected to be compatible with existing Intel 400-series chipset motherboards, and feature a PCI-Express gen 4.0 root complex. Several 400-series chipset motherboards have PCIe gen 4.0 preparation for exactly this. The increased IPC from the "Willow Cove" cores is expected to make the 8-core "Rocket Lake-S" a powerful option for gaming and productivity tasks that don't scale across too many cores.

Intel "Elkhart Lake" Processor Put Through 3DMark

One of the first performance benchmarks of Intel's upcoming low-power processor codenamed "Elkhart Lake" surfaced on the Futuremark database, courtesy of TUM_APISAK. The chip scores 571 points in Fire Strike, with a graphics score of 590 and a physics score of 3801. The graphics score of the Gen11-based iGPU is behind the Intel UHD 630 Gen 9.5 iGPU found in heavier desktop processors since "Kaby Lake," but we predict it's being dragged down by the CPU (3801 points in physics, versus roughly 17000 points for a 6-core "Coffee Lake" processor). The chip goes on to score 170 points in Time Spy, with a graphics score of 148 and a CPU score of 1131. Perhaps Cloud Gate would've been a more apt test.
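For context on how a weak component bleeds into the headline number: 3DMark's overall scores are weighted harmonic means of the component scores, so the weakest score drags the total down disproportionately. A minimal sketch, using commonly cited Fire Strike weights and a guessed combined score, both of which should be treated as assumptions:

```cpp
#include <cstdio>

// 3DMark overall scores are weighted harmonic means of component scores.
// The weights below are the commonly cited Fire Strike ones, and the
// combined score is a guess (the leak doesn't include it) - both are
// assumptions for illustration only.
double fireStrikeOverall(double graphics, double physics, double combined) {
    return 1.0 / (0.75 / graphics + 0.15 / physics + 0.10 / combined);
}

int main() {
    // With the leaked graphics (590) and physics (3801) scores and a
    // guessed combined score of ~230, the overall lands near the
    // reported 571: the weak graphics component dominates the mean.
    std::printf("overall: %.0f\n", fireStrikeOverall(590.0, 3801.0, 230.0));
    return 0;
}
```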

The "Elkhart Lake" silicon is built on Intel's 10 nm silicon fabrication process, and will power the next generation of Pentium Silver and Celeron processors. The chip features up to 4 CPU cores based on the "Tremont" low-power architecture, and an iGPU based on the newer Gen11 architecture. It features a single-channel memory controller that supports DDR4 and LPDDR4/x memory types. The chip in these 3DMark tests is a 4-core variant, likely a Pentium Silver engineering sample, with its CPU clocked at 1.90 GHz, and is paired with LPDDR4x memory. The chip comes in 5 W, 9 W, and 12 W TDP variants.

AMD Ryzen 9 3900XT and Ryzen 7 3800XT Benchmarks Surface

AMD's 3rd generation Ryzen "Matisse Refresh" processors surfaced on the Futuremark online database, as dug up by TUM_APISAK, where someone with access to them allegedly posted some performance numbers. Interestingly, the clock speeds as read by the Futuremark SystemInfo module appear very different from what was previously reported. The 3800XT is shown featuring a 3.80 GHz nominal clock, boosting up to 4.70 GHz, while the 3900XT has a 3.90 GHz nominal clock, boosting up to the same 4.70 GHz as the 3800XT. APISAK reports that the 3800XT scores 25135 points in the Fire Strike physics test.

A WCCFTech report presents screenshots of Cinebench R20 single-thread scores, in which the 3900XT is shown beating the i9-10900K. The 3800XT is within striking distance of the i9-10900K in this test, and beats the i7-10700KF. This single-threaded performance figure suggests that AMD's design focus with "Matisse Refresh" has been to shore up single-threaded and less-parallelized application performance, in other words, gaming performance.

Core i3-10100 vs. Ryzen 3 3100 Featherweight 3DMark Showdown Surfaces

AMD's timely announcement of the Ryzen 3 "Matisse" processor series could stir things up in the entry-level segment, where Intel has kitted out its 10th generation Core i3 processors as 4-core/8-thread parts. Last week, a head-to-head Cinebench comparison between the i3-10300 and the 3300X ensued, and today we have a 3DMark Fire Strike and Time Spy comparison between their smaller siblings, the i3-10100 and the 3100, courtesy of Thai PC enthusiast TUM_APISAK. The two were benchmarked on otherwise constant hardware: an RTX 2060 graphics card, 16 GB of memory, and a 1 TB Samsung 970 EVO SSD.

With Fire Strike, the 3100-powered machine leads in overall 3DMark score (by 0.31%) and in the CPU-dependent Physics score (by 13.7%). The i3-10100 is ahead by 1.4% in the Graphics score, thanks to a 1.6% lead in graphics test 1 and a 1.4% lead in graphics test 2. Over to the more advanced Time Spy test, which uses the DirectX 12 API that better leverages multi-core CPUs: the Ryzen 3 3100 posts a 0.63% higher overall score and a 1.5% higher CPU score, while the i3-10100-powered machine posts a graphics score that's less than 1% higher. These numbers suggest that the i3-10100 and the 3100 are within striking distance of each other, and that either is a good pick for gamers, until you look at pricing. Intel's official pricing for the i3-10100 is $122 (per chip in 1,000-unit trays), whereas AMD lists the SEP of the Ryzen 3 3100 at $99, making the Intel chip about 23% pricier and handing AMD a price-performance advantage that's hard to ignore, more so when you take into account value additions such as an unlocked multiplier and PCIe gen 4.0.
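With performance this close, the value equation reduces to price. A quick sketch of that arithmetic, treating performance as equal (which the sub-1% deltas above justify):

```cpp
#include <cstdio>

int main() {
    const double intelPrice = 122.0;  // i3-10100, per-chip in 1,000-unit trays
    const double amdPrice   =  99.0;  // Ryzen 3 3100 SEP

    // With performance effectively tied, relative value is inverse price.
    double premium = (intelPrice / amdPrice - 1.0) * 100.0;
    std::printf("i3-10100 premium for ~equal performance: %.1f%%\n", premium);  // ~23.2%
    return 0;
}
```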

Intel Gen12 Xe iGPU Could Match AMD's Vega-based iGPUs

Intel's first integrated graphics solution based on its ambitious new Xe graphics architecture could match AMD's "Vega" architecture based iGPU solutions, such as the one found in its latest Ryzen 4000 series "Renoir" APUs, according to leaked 3DMark Fire Strike numbers put out by @_rogame. Benchmark results of a prototype laptop based on Intel's "Tiger Lake-U" processor surfaced on the 3DMark database. This processor embeds Intel's Gen12 Xe iGPU, which is purported to offer significant performance gains over current Gen11 and Gen9.5 based iGPUs.

The prototype 2-core/4-thread "Tiger Lake-U" processor with Gen12 graphics yields a 3DMark Fire Strike score of 2,196 points, with a graphics score of 2,467 and a physics score of 6,488. These scores are comparable to 8 CU Radeon Vega iGPU solutions. "Renoir" tops out at 8 CUs, but shores up performance to 11 CU "Picasso" levels by other means: besides tapping into the 7 nm process to increase engine clocks, AMD improved the boosting algorithm and modernized the display and multimedia engines, while the iGPU remains largely based on the same 3-year-old "Vega" architecture. Intel Gen12 Xe makes its debut with the "Tiger Lake" microarchitecture slated for 2021.
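The clocks-for-CUs trade is simple arithmetic: shader throughput scales roughly with CU count times engine clock. A rough sketch with illustrative clocks (about 1.75 GHz for "Renoir" Vega 8 and about 1.4 GHz for "Picasso" Vega 11; both figures are assumptions, not measured values):

```cpp
#include <cstdio>

int main() {
    // Throughput proxy: CUs x engine clock (GHz). Clocks are assumed,
    // illustrative figures.
    double renoirVega8   =  8 * 1.75;  // 14.0
    double picassoVega11 = 11 * 1.40;  // 15.4

    // ~91%: higher clocks recover most of the deficit from fewer CUs.
    std::printf("Renoir 8 CU vs. Picasso 11 CU: %.0f%%\n",
                renoirVega8 / picassoVega11 * 100.0);
    return 0;
}
```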

UL Benchmarks Outs 3DMark Feature Test for Variable-Rate Shading Tier-2

UL Benchmarks today announced an update to 3DMark that expands the Variable-Rate Shading (VRS) feature test with support for VRS Tier-2. A component of DirectX 12, VRS Tier-1 is supported by the NVIDIA "Turing" and Intel Gen11 graphics architectures ("Ice Lake's" iGPU). VRS Tier-2 is currently supported only by NVIDIA "Turing" GPUs. Tier-2 enables finer-grained control, such as lowering the shading rate for areas of the scene with low contrast against their surroundings (think areas under shadow), yielding performance gains. The 3DMark VRS test runs in two passes: pass 1 runs with VRS off to provide a point of reference, and pass 2 runs with VRS on, to measure the performance gained. The 3DMark update with the VRS Tier-2 test applies to the Advanced and Professional editions.
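For the developers among us, this is roughly what VRS looks like from the Direct3D 12 side: query the supported tier, set a base shading rate, and on Tier-2 hardware supply a screen-space shading-rate image for regions that can tolerate coarser shading. A minimal sketch of the general API usage, not 3DMark's actual code (device and resource setup omitted):

```cpp
#include <d3d12.h>

// Tier 1 allows a per-draw shading rate; Tier 2 adds per-primitive rates
// and a screen-space shading-rate image.
bool SupportsVrsTier2(ID3D12Device* device) {
    D3D12_FEATURE_DATA_D3D12_OPTIONS6 options = {};
    if (FAILED(device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS6,
                                           &options, sizeof(options))))
        return false;
    return options.VariableShadingRateTier >= D3D12_VARIABLE_SHADING_RATE_TIER_2;
}

// Tier 2 usage: keep full-rate shading by default, and let a shading-rate
// image coarsen low-contrast regions (e.g. areas under shadow).
void ApplyVrs(ID3D12GraphicsCommandList5* cmd, ID3D12Resource* rateImage) {
    const D3D12_SHADING_RATE_COMBINER combiners[2] = {
        D3D12_SHADING_RATE_COMBINER_PASSTHROUGH,  // base vs. per-primitive rate
        D3D12_SHADING_RATE_COMBINER_MAX,          // coarsest of that vs. rate image
    };
    cmd->RSSetShadingRate(D3D12_SHADING_RATE_1X1, combiners);
    cmd->RSSetShadingRateImage(rateImage);
}
```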

DOWNLOAD: 3DMark v2.11.6846

AMD Radeon RX 5500 (OEM) Tested, Almost As Fast as RX 580

German publication Heise.de got its hands on a Radeon RX 5500 (OEM) graphics card and put it through its test bench. The numbers show exactly what prompted NVIDIA to refresh its entry level with the GeForce GTX 1650 Super and the GTX 1660 Super. In Heise's testing, the RX 5500 was found to match the previous-generation RX 580 and NVIDIA's current-gen GTX 1660 (non-Super). Compared to a factory-overclocked RX 580 NITRO+ and a GTX 1660 OC, the RX 5500 yielded similar 3DMark Fire Strike performance, scoring 12,111 points against 12,744 points for the RX 580 NITRO+ and 12,525 points for the GTX 1660 OC.

The card was put through two other game tests at 1080p, "Shadow of the Tomb Raider" and "Far Cry 5." In SoTR, the RX 5500 put out 59 fps, slightly behind the 65 fps of the RX 580 NITRO+ and the 69 fps of the GTX 1660 OC. In "Far Cry 5," it scored 72 fps, again within reach of the 75 fps of the RX 580 NITRO+ and the 85 fps of the GTX 1660 OC. It's important to once again note that the RX 580 and GTX 1660 in this comparison are factory-overclocked cards, while the RX 5500 is ticking at stock speeds. Heise also did some power testing, and found the RX 5500 to have a lower idle power draw than the GTX 1660 OC, at 7 W compared to 10 W for the NVIDIA card and 12 W for the RX 580 NITRO+. Gaming power draw is also similar to the GTX 1660's, with the RX 5500 pulling 133 W compared to 128 W for the GTX 1660. This short test shows that the RX 5500 is in the same league as the RX 580 and GTX 1660, and explains why NVIDIA had to make its recent product-stack changes.

Intel Core i9-10980XE "Cascade Lake-X" Benchmarked

One of the first reviews of Intel's new flagship HEDT processor, the Core i9-10980XE, just hit the web. Lab501.ro got their hands on a freshly minted i9-10980XE and put it through their test bench. Based on the "Cascade Lake-X" silicon, the i9-10980XE offers almost identical IPC to "Skylake-X," but succeeds the older generation with the AI-accelerating DLBoost instruction set, an improved multi-core boosting algorithm, higher clock speeds, and most importantly, a doubling in price-performance achieved by halving the Dollars-per-core metric across the board.

Armed with 18 cores, the i9-10980XE is ahead of the 12-core Ryzen 9 3900X in rendering and simulation tests, although not by much for a chip that has 50% more cores. This is probably attributable to the competing AMD chip being able to sustain higher all-core boost clock speeds. In tests that not only scale with cores but are also hungry for memory bandwidth, such as 7-zip and Media, Intel extends its lead thanks to its quad-channel memory interface, which is able to feed its cores with datasets faster.
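The bandwidth argument is easy to put numbers on: peak DDR4 bandwidth is channels × 8 bytes × transfer rate. A quick sketch using each platform's officially supported memory speeds:

```cpp
#include <cstdio>

// Peak bandwidth in GB/s: channels * 8 bytes per transfer * MT/s / 1000.
double peakGBps(int channels, int mtPerSec) {
    return channels * 8.0 * mtPerSec / 1000.0;
}

int main() {
    std::printf("Ryzen 9 3900X, dual-channel DDR4-3200: %.1f GB/s\n",
                peakGBps(2, 3200));   // 51.2 GB/s
    std::printf("i9-10980XE, quad-channel DDR4-2933:    %.1f GB/s\n",
                peakGBps(4, 2933));   // 93.9 GB/s
    return 0;
}
```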

AMD Ryzen 9 3950X Beats Intel Core i9-10980XE by 24% in 3DMark Physics

AMD's upcoming Ryzen 9 3950X socket AM4 processor beats Intel's flagship 18-core processor, the Core i9-10980XE, by a staggering 24 percent in 3DMark Physics, according to a PC Perspective report citing TUM_APISAK. The 3950X is a 16-core/32-thread processor that's drop-in compatible with any motherboard that can run the Ryzen 9 3900X. The i9-10980XE is an 18-core/36-thread HEDT chip that enjoys double the memory bus width of the AMD chip, and is based on Intel's "Cascade Lake-X" silicon. The AMD processor isn't at a tangible clock-speed advantage: the 3950X has a maximum boost frequency of 4.70 GHz, while the i9-10980XE isn't much behind at 4.60 GHz. Things differ with all-core boost, however.

When paired with 16 GB of dual-channel DDR4-3200 memory, the Ryzen 9 3950X-powered machine scores 32,082 points in the CPU-intensive physics test of 3DMark. In comparison, the i9-10980XE, paired with 32 GB of quad-channel DDR4-2667 memory, scores just 25,838 points, as mentioned by PC Perspective. The graphics card is irrelevant to this test. It's pertinent to note that the 3DMark physics test scales across practically any number of CPU cores/threads, and the AMD processor could be benefiting from a higher all-core boost frequency than the Intel chip. Although AMD doesn't mention a number in its specifications, the 3950X is expected to have an all-core boost frequency north of 4.00 GHz, as its 12-core sibling, the 3900X, already offers 4.20 GHz all-core. In contrast, the i9-10980XE has an all-core boost frequency of 3.80 GHz. This difference in boost frequency apparently negates even the additional 2 cores and 4 threads that the Intel chip enjoys, in what is yet another example of AMD having caught up with Intel in the IPC game.
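The reported numbers allow a quick sanity check: the score gap works out to about 24 percent, yet a naive cores-times-clock estimate slightly favors the Intel chip, underlining that per-clock and per-thread efficiency, not raw core-GHz, is doing the heavy lifting here. A sketch (the 3950X all-core clock is an estimate, as noted above):

```cpp
#include <cstdio>

int main() {
    // Reported 3DMark physics scores
    const double amdScore = 32082.0, intelScore = 25838.0;
    std::printf("score delta: %.1f%%\n",
                (amdScore / intelScore - 1.0) * 100.0);  // ~24.2%

    // Naive throughput proxy: cores x estimated all-core boost (GHz).
    // The 3950X figure assumes ~4.2 GHz, matching its 3900X sibling.
    std::printf("core-GHz: 3950X %.1f vs. i9-10980XE %.1f\n",
                16 * 4.2, 18 * 3.8);  // 67.2 vs. 68.4
    return 0;
}
```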

Intel Iris Plus Graphics G7 iGPU Beats AMD RX Vega 10: Benchmarks

Intel is taking big strides forward with its Gen11 integrated graphics architecture. Its performance-configured variant, the Intel Iris Plus Graphics G7, featured in the Core i7-1065G7 "Ice Lake" processor, is found to beat the AMD Radeon RX Vega 10 iGPU found in the Ryzen 7 2700U processor ("Raven Ridge") by as much as 16 percent in 3DMark 11, and a staggering 23 percent in 3DMark Fire Strike 1080p. Notebook Check put the two iGPUs through these and a few game tests to derive an initial verdict that Intel's iGPU has caught up with AMD's RX Vega 10. AMD has since updated its iGPU incrementally with the "Picasso" silicon, providing it with higher clock speeds and updated display and multimedia engines.

The machines tested here are the Lenovo Ideapad S540-14API for the AMD chip, and the Lenovo Yoga C940-14IIL for the i7-1065G7. The Iris Plus G7 packs 64 Gen11 execution units, while the Radeon RX Vega 10 has 640 stream processors based on the "Vega" architecture. Moving to gaming performance, we see the Intel iGPU 2 percent faster than the RX Vega 10 in Bioshock Infinite at 1080p, 12 percent slower in Dota 2 Reborn at 1080p, and 8 percent faster in X-Plane 11.11.
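Keep in mind that execution units and stream processors aren't comparable one-to-one: each Gen11 EU packs two 4-wide SIMD ALUs (8 FP32 lanes), whereas each "Vega" stream processor is a single lane. A rough peak-FLOPS sketch, with rated maximum clocks taken as assumptions (roughly 1.1 GHz for the Iris Plus G7, 1.3 GHz for the RX Vega 10):

```cpp
#include <cstdio>

int main() {
    // FP32 lane counts: 64 EUs x 8 lanes each vs. 640 stream processors.
    const int intelLanes = 64 * 8;  // 512
    const int amdLanes   = 640;

    // Peak FP32 GFLOPS = lanes * 2 (FMA) * clock in GHz. Clocks assumed.
    std::printf("Iris Plus G7: %.0f GFLOPS\n", intelLanes * 2 * 1.1);  // ~1126
    std::printf("RX Vega 10:   %.0f GFLOPS\n", amdLanes * 2 * 1.3);    // ~1664
    return 0;
}
```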

Leaked 3DMark Time Spy Result shows Radeon RX 5700 XT matching GeForce RTX 2070

Reviewers should have received their Radeon "Navi" review samples by now, so it's only natural that the number of leaks is increasing. WCCFTech has spotted one such leak in the 3DMark Time Spy database. The card, labeled simply "Generic VGA," achieved a final score of 8575 points, a GPU score of 8719, and a CPU score of 7843, which is almost identical to WCCFTech's own comparison benchmarks for the GeForce RTX 2070 Founders Edition (8901). The Vega 64 scored 7427 in the same comparison, which leads WCCFTech to believe this must be the Radeon RX 5700 XT. The result has since been removed from the 3DMark database, which further suggests it belongs to an unreleased product.

UL Releases PCI Express Feature Test For 3DMark Ahead of PCIe 4.0 Hardware

With PCI-Express 4.0 graphics cards and motherboards soon to arrive, UL has released its PCI Express feature test for 3DMark. This latest addition is designed to verify the bandwidth available to the GPU over a computer's PCI Express interface. To accomplish this, the test makes bandwidth the limiting factor for performance by uploading a large amount of vertex and texture data to the GPU for each frame, with the goal of transferring enough data over the PCIe 4.0 interface to thoroughly saturate it. Once the test is complete, the result is the average bandwidth achieved during the run.
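Conceptually, the measurement loop is simple: stream a fixed amount of data toward the GPU every frame, time it, and divide. A self-contained sketch of the idea; the memcpy-style copy below merely stands in for the real graphics-API upload, so this illustrates the method rather than measuring a bus:

```cpp
#include <chrono>
#include <cstddef>
#include <cstdio>
#include <vector>

// Stand-in for the per-frame upload the real test performs through the
// graphics API (vertex and texture data, sized to saturate the link).
static std::vector<char> staging;
void uploadToGpu(const char* data, size_t bytes) {
    staging.assign(data, data + bytes);
}

int main() {
    const size_t bytesPerFrame = 64ull << 20;  // 64 MiB per "frame"
    const int frames = 100;
    std::vector<char> payload(bytesPerFrame, 1);

    auto t0 = std::chrono::steady_clock::now();
    for (int i = 0; i < frames; ++i)
        uploadToGpu(payload.data(), bytesPerFrame);
    std::chrono::duration<double> s = std::chrono::steady_clock::now() - t0;

    // For reference: PCIe 3.0 x16 peaks near 15.8 GB/s, PCIe 4.0 x16 near 31.5 GB/s.
    std::printf("average bandwidth: %.2f GB/s\n",
                double(bytesPerFrame) * frames / s.count() / 1e9);
    return 0;
}
```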

UL Announces New 3DMark Benchmarks for Testing PCIe Performance Across Generations

UL Benchmarks has announced that it will be introducing a new, comprehensive 3DMark test that aims to measure PCIe bandwidth across generations. Citing the introduction of PCIe 4.0 to the masses, soon available in the consumer market via AMD's Ryzen 3000 series release, UL wants users to be able to see what difference this makes in allowing for more complex games and scenarios that aren't data-constrained by PCIe 3.0.

The 3DMark PCIe Performance Test will be made available this summer, free for 3DMark Advanced Edition users and for 3DMark Professional Edition customers with a valid annual license.

G.SKILL DDR4 Memory Achieves DDR4-5886 and 23 Overclocking Records

G.SKILL International Enterprise Co., Ltd., the world's leading manufacturer of extreme performance memory and gaming peripherals, is excited to announce that 23 overclocking records in various benchmark categories were broken during the Computex 2019 time frame, including the world record for the fastest memory frequency, all using G.SKILL DDR4 memory kits built with high performance Samsung 8Gb components, the latest Intel processors, and high performance motherboards.

This week at the G.SKILL Computex booth, a new world record for fastest memory frequency was set by Toppc, a renowned professional extreme overclocker, reaching an incredible DDR4-5886 (5886 MT/s, or a 2943 MHz actual memory clock) using Trident Z Royal memory on an MSI MPG Z390I GAMING EDGE AC motherboard and an Intel Core i9-9900K processor. At the end of Computex 2019, the top two results for fastest memory frequency were set by team MSI using an identical hardware setup.

AMD Announces 3rd Generation Ryzen Desktop Processors

AMD CEO Dr. Lisa Su, at her 2019 Computex keynote address, announced the 3rd generation Ryzen desktop processor family, which leverages the company's "Zen 2" microarchitecture and is built on TSMC's 7 nm silicon fabrication process. Designed for the AM4 CPU socket, with backwards compatibility for older AMD 300-series and 400-series chipset motherboards, these processors are multi-chip modules of up to two 8-core "Zen 2" CPU chiplets and a 14 nm I/O controller die that packs the dual-channel DDR4 memory controller and PCI-Express gen 4.0 root complex, along with some SoC connectivity. AMD claims an IPC increase of 15 percent over Zen 1, and higher clock speeds leveraging 7 nm, which add up to significantly higher performance over the current generation. AMD also bolstered the core's FPU (floating-point unit) and doubled the cache sizes.

AMD unveiled three high-end SKUs for now: the $329 Ryzen 7 3700X, the $399 Ryzen 7 3800X, and the $499 Ryzen 9 3900X. The 3700X and 3800X are 8-core/16-thread parts with a single CPU chiplet. The 3700X is clocked at 3.60 GHz with a 4.40 GHz maximum boost frequency and just a 65 W TDP, and is claimed to beat Intel's Core i7-9700K at both gaming and productivity. The 3800X tops that with 3.90 GHz nominal and 4.50 GHz boost clocks and a 105 W TDP, and is claimed to beat the Core i9-9900K at gaming and productivity. AMD went a step further and launched the new Ryzen 9 brand with the 3900X, a 12-core/24-thread processor clocked at 3.80 GHz with 4.60 GHz boost, 70 MB of total cache, a 105 W TDP, and performance that beats not only the i9-9900K but also the i9-9920X 12-core/24-thread HEDT processor, despite two fewer memory channels. AMD focused on gaming performance with "Zen 2": a wider FPU, improved branch prediction, and several micro-architectural improvements contribute to per-core performance that AMD says is higher than Intel's. The processors go on sale on 7/7/2019.

NVIDIA Extends DirectX Raytracing (DXR) Support to Many GeForce GTX GPUs

NVIDIA today announced that it is extending DXR (DirectX Raytracing) support to several GeForce GTX graphics models beyond its GeForce RTX series. These include the GTX 1660 Ti, GTX 1660, GTX 1080 Ti, GTX 1080, GTX 1070 Ti, GTX 1070, and GTX 1060 6 GB. The GTX 1060 3 GB and lower "Pascal" models don't support DXR, nor do older generations of NVIDIA GPUs. NVIDIA has implemented real-time raytracing on GPUs without specialized components such as RT cores or tensor cores, by essentially implementing the rendering path through shaders, in this case, CUDA cores. DXR support will be added through a new GeForce graphics driver later today.

The GPU's CUDA cores now have to handle BVH traversal, intersection, reflection, and refraction. The GTX 16-series chips have an edge over "Pascal" despite lacking RT cores, as the "Turing" CUDA cores support concurrent INT and FP execution, allowing more work to be done per clock. NVIDIA, in a detailed presentation, listed out the kinds of real-time ray-tracing effects enabled by the DXR API, namely reflections, shadows, advanced reflections and shadows, ambient occlusion, global illumination (unbaked), and combinations of these. The company put out detailed performance numbers for a selection of GTX 10-series and GTX 16-series GPUs, and compared them to RTX 20-series SKUs that have specialized hardware for DXR.
Update: Article updated with additional test data from NVIDIA.
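To picture the work those CUDA cores inherit, here's the kind of routine a shader-based ray tracer evaluates millions of times per frame while walking the BVH: the classic slab test for ray/bounding-box intersection. A generic illustration of the workload, not NVIDIA's implementation:

```cpp
#include <algorithm>

struct Vec3 { float x, y, z; };

// Slab test: does a ray (origin o, precomputed 1/direction invDir) hit
// the axis-aligned box [lo, hi]? BVH traversal runs this per node
// visited; RT cores do it in fixed-function hardware, while "Pascal"
// and GTX "Turing" chips burn shader ALU cycles on it instead.
bool rayHitsBox(Vec3 o, Vec3 invDir, Vec3 lo, Vec3 hi) {
    float t1 = (lo.x - o.x) * invDir.x, t2 = (hi.x - o.x) * invDir.x;
    float tmin = std::min(t1, t2), tmax = std::max(t1, t2);

    t1 = (lo.y - o.y) * invDir.y; t2 = (hi.y - o.y) * invDir.y;
    tmin = std::max(tmin, std::min(t1, t2));
    tmax = std::min(tmax, std::max(t1, t2));

    t1 = (lo.z - o.z) * invDir.z; t2 = (hi.z - o.z) * invDir.z;
    tmin = std::max(tmin, std::min(t1, t2));
    tmax = std::min(tmax, std::max(t1, t2));

    // Hit if the entry/exit intervals overlap ahead of the ray origin.
    return tmax >= std::max(tmin, 0.0f);
}
```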

G.SKILL Announces OC World Cup 2019 Competition

G.SKILL International Enterprise Co., Ltd., the world's leading manufacturer of extreme performance memory and gaming peripherals, is excited to announce the 6th Annual OC World Cup 2019. The online qualifier stage will be held from March 13, 2019 until April 16, 2019 on hwbot.org. The top 9 winners of the online qualifier will qualify to join the live competition at the G.SKILL booth during Computex 2019 week, from May 29th to 31st, and compete for a share of the $25,000 USD cash prize pool.

With the participation of top overclockers from around the world and carefully designed rules, the G.SKILL OC World Cup is considered one of the most challenging overclocking competitions by professional overclockers. The G.SKILL OC World Cup consists of three rounds: Online Qualifier, Live Qualifier, and Grand Final. The top 9 winners of the Online Qualifier will receive eligibility to enter the Live Qualifier stage during Computex 2019 and demonstrate their finest LN2 extreme overclocking skills at the G.SKILL booth.

3DMark Adds NVIDIA DLSS Feature Performance Test to Port Royal

Did you see the NVIDIA keynote presentation at CES this year? For us, one of the highlights was the DLSS demo based on our 3DMark Port Royal ray tracing benchmark. Today, we're thrilled to announce that we've added this exciting new graphics technology to 3DMark in the form of a new NVIDIA DLSS feature test. This new test is available now in 3DMark Advanced and Professional Editions.

3DMark feature tests are specialized tests for specific technologies. The NVIDIA DLSS feature test helps you compare performance and image quality with and without DLSS processing. The test is based on the 3DMark Port Royal ray tracing benchmark. Like many games, Port Royal uses Temporal Anti-Aliasing. TAA is a popular, state-of-the-art technique, but it can result in blurring and the loss of fine detail. DLSS (Deep Learning Super Sampling) is an NVIDIA RTX technology that uses deep learning and AI to improve game performance while maintaining visual quality.
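Performance is straightforward to compare across the two runs; image quality is the subtler half. One common way to put a number on it is PSNR against a reference frame. A minimal sketch using raw 8-bit grayscale buffers; this is purely illustrative and not how UL evaluates the test:

```cpp
#include <cmath>
#include <cstdint>
#include <vector>

// Peak signal-to-noise ratio between two same-sized 8-bit images;
// higher means the test image is closer to the reference render.
double psnr(const std::vector<uint8_t>& ref, const std::vector<uint8_t>& img) {
    double mse = 0.0;
    for (size_t i = 0; i < ref.size(); ++i) {
        double d = double(ref[i]) - double(img[i]);
        mse += d * d;
    }
    mse /= double(ref.size());
    if (mse == 0.0) return INFINITY;  // identical images
    return 10.0 * std::log10(255.0 * 255.0 / mse);
}
```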

NVIDIA Releases GeForce 418.81 WHQL Software

NVIDIA today released its GeForce 418.81 WHQL software. The drivers add support for mobile versions of GeForce RTX 20-series GPUs. The desktop version adds optimization for the 3DMark Port Royal benchmark, in addition to its DLSS (Deep Learning Super Sampling) AA setting. The drivers add or improve NVIDIA SLI support for "Anthem," "Assetto Corsa Competizione," "Battlefleet Gothic: Armada 2," "Life is Strange 2," "NBA 2K19," and "Space Hulk: Tactics." CUDA version 10 is included with these drivers.

Among the issues fixed are HDR not being enabled by default in GameStream when an HDR display is connected to the client PC, 3D performance and frame-rate overlays accidentally appearing in the Twitter UWP app, and random flickering in games with G-Sync enabled. Also fixed is a strange issue in which, when a G-Sync display (one with NVIDIA G-Sync hardware) is hot-plugged while a G-Sync Compatible (read: VESA Adaptive-Sync) display is connected, the right half of the G-Sync display goes blank. Grab the drivers from the link below.
DOWNLOAD: NVIDIA GeForce 418.81 WHQL

UL Corporation Announces 3DMark Port Royal Raytracing Suite is Now Available - Benchmark Mode On!

Gliding through the tech-infused CES week, UL Corporation has just announced that the much-expected Port Royal, the world's first dedicated real-time ray tracing benchmark for gamers, is now available. Port Royal uses DirectX Raytracing to enhance reflections, shadows, and other effects that are difficult to achieve with traditional rendering techniques. It enables not only performance benchmarking for cutthroat competition throughout the internet (and our own TPU forums, of course), but also serves as an example of what to expect from ray tracing in upcoming games: ray-tracing effects running in real time at reasonable frame rates at 2560 × 1440 resolution.

3DFX's Canceled Rampage GPU Pictured, Put Through Its Paces in 3DMark 2001

3DFX is a well-known name for most in our community, I'd wager (I don't have the data to back me up on that, but bear with me). The company is one of the highest-profile defunct companies that vied for a place in the desktop, high-performance GPU space back in the day, and brought its guns to bear on NVIDIA and then ATI. The Rampage was the last GPU ever developed by the company, and looked to compete with NVIDIA's GeForce3. It never saw the light of day, though, with the company shutting its doors before development could be carried through to a market release.

DSOGaming has images of some of the Rampage GPUs that survived 3DFX's closure, though, and the graphics card is shown running Max Payne, Unreal Tournament, and 3DMark 2001. For those of you who ever had a 3DFX graphics card, these should take you right down memory lane. Enjoy.

3DMark Port Royal Ray-tracing Benchmark Release Date and Pricing Revealed

UL Benchmarks released more information on the pricing and availability of the upcoming addition to its 3DMark benchmark suite, named "Port Royal." The company revealed that the benchmark will officially launch on January 8, 2019. The Port Royal upgrade will cost existing 3DMark paid (Advanced and Professional) users $2.99. 3DMark Advanced purchased from January 8 onward at $29.99 will include Port Royal. 3DMark Port Royal is an extreme-segment 3D graphics benchmark leveraging DirectX 12 and DirectX Raytracing (DXR). UL Benchmarks stated that Port Royal was developed with input from industry giants including NVIDIA, AMD, Intel, and Microsoft.

UL Benchmarks Unveils 3DMark "Port Royal" Ray-tracing Benchmark

Port Royal is the name of the latest component of UL Benchmarks' 3DMark. Designed to take advantage of the DirectX Raytracing (DXR) API, this benchmark features an extreme poly-count test scene with real-time ray-traced elements. Screengrabs of the benchmark depict spacecraft entering and leaving mirrored spheres suspended within a planet's atmosphere, which appear to be docks. It's also a shout-out to a number of space-sims such as "Star Citizen," which could up their production values in the future by introducing ray-tracing. The benchmark will debut at the GALAX GOC Grand Final on December 8, where the first public run will be powered by a GALAX GeForce RTX 2080 Ti HOF graphics card. It will go on sale in January 2019.

Latest 3DMark Update adds Night Raid DX12 Benchmark for Integrated Graphics

With update 2.6.6174, released today, 3DMark now includes a new benchmark dubbed Night Raid. This latest addition to the popular 3DMark suite offers DirectX 12 performance testing for laptops, tablets, and other devices with integrated graphics. It also offers full support for ARM-based processors in the latest always-connected PCs running Microsoft's Windows 10 on ARM. Users running the free 3DMark Basic Edition will gain access to this latest addition upon installing the update.

The Night Raid benchmark continues the trend of offering two graphics tests and a CPU test. While not as visually stunning as previous entries, this is to be expected considering it is targeted at integrated graphics processors and entry-level systems. Even so, it makes use of numerous graphical features, with graphics test 1 including dynamic reflections, ambient occlusion, and deferred rendering. Graphics test 2 features tessellation, complex particle systems, and depth-of-field effects with forward rendering. Finally, the CPU test measures performance through a combination of physics simulation, occlusion culling, and procedural generation.