News Posts matching "Benchmarks"


NVIDIA Releases Comparison Benchmarks for DLSS-Accelerated 4K Rendering

NVIDIA released comparison benchmarks for its new AI-accelerated DLSS technology, one of the claims to fame of its new Turing architecture. Using the Infiltrator benchmark with its stunning real-time graphics, NVIDIA showcased the performance benefits of DLSS-improved 4K rendering over the usual 4K rendering + TAA (Temporal Anti-Aliasing). Using a Core i9-7900X 3.3 GHz CPU paired with 16 GB of Corsair DDR4 memory, Windows 10 (v1803) 64-bit, and version 416.25 of the NVIDIA drivers, the company showed the tremendous performance improvements that can be achieved by pairing Turing's architectural strengths with DLSS's ability to put Tensor cores to use in service of more typical graphics processing workloads.

The results speak for themselves: with DLSS at 4K resolution, the upcoming NVIDIA RTX 2070 convincingly beats its previous-generation counterpart, doubling its performance. Under these particular conditions, the new king of the hill, the RTX 2080 Ti, convincingly beats the previous generation's halo product, the Titan Xp, with a 41% performance lead - but so does the new RTX 2070, which is being sold at half the asking price of the original Titan Xp.

Intel's 9th Gen Core Gaming Benchmarks Flawed and Misleading

At its 9th Generation Core processor launch extravaganza earlier this week, Intel posted benchmark numbers to show just how superior its processors are to AMD's 2nd generation Ryzen "Pinnacle Ridge." PC enthusiasts worth their salt were quick to point out that Intel's numbers are both flawed and misleading, as they misrepresent both test setups - optimizing the Intel processors beyond their out-of-the-box performance, and running the AMD processors with sub-optimal settings.

Intel paid Principled Technologies, a third-party performance testing agency, to obtain performance numbers comparing the Core i9-9900K with the Ryzen 7 2700X across a spectrum of gaming benchmarks, instead of testing the two chips internally, and posted the test setup data in end-notes, as if to add a layer of credibility/deniability to the charade. The agency posted its numbers, which were almost simultaneously re-posted by PCGamesN under the gleaming headline "Up to 50% Faster than Ryzen at Gaming." You could fertilize the Sahara with this data.

UL Benchmarks Kicks Huawei Devices from its Database over Cheating

UL Benchmarks de-listed several popular Huawei devices from its database over proof of cheating in its benchmarks. Over the past month, it was found that several of Huawei's devices, such as the P20 Pro, Nova 3, and Play, overclocked their SoCs while ignoring all power and thermal limits to achieve high benchmark scores whenever they detected that a popular benchmark, such as 3DMark, was being run. To bust this, UL Benchmarks tested the three devices with "cloaked" benchmarks, or "private benchmarks" as it calls them. These apps are identical to 3DMark in almost every way, but lack the identification and branding that let Huawei devices know when to overclock themselves to cheat the test.

The results were startling. When a device has no clue that a popular benchmark is being run (or has no way of telling that 3DMark is being run), it chugs along at its "normal" speed, which is 35% to 36% lower. The rules that bind device manufacturers advertising UL's 3DMark scores explicitly state that a device must not detect the app and optimize its hardware on the fly to ace the test. Huawei responded to UL by stating that it will unlock a new "performance mode" that lets users elevate their SoCs to the same high clocks in any application.
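The percentage framing is easy to misread: if normal operation is 35% slower than benchmark mode, then benchmark mode is roughly 54% faster than normal operation, not 35%. A minimal sketch of that arithmetic (the absolute score below is hypothetical, chosen only to illustrate the ratio):

```python
# Hypothetical benchmark-mode score; the actual 3DMark numbers are not quoted here.
cheat_score = 10_000

# UL found "normal" (undetected) operation to be 35% to 36% slower.
normal_score = cheat_score * (1 - 0.35)

# The equivalent inflation factor when the device detects the benchmark:
inflation = cheat_score / normal_score
print(round(normal_score), round(inflation, 2))  # 6500 1.54
```

In other words, a 35% drop when the cheat is defeated corresponds to scores inflated by about 1.5x when it is active.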

NVIDIA Releases First Internal Performance Benchmarks for RTX 2080 Graphics Card

NVIDIA today released the first official performance numbers for its new generation of GeForce products - particularly, the RTX 2080. The RTX 20 series of graphics cards, according to the company, offers some 50% performance improvement (on average) from architectural improvements alone, on a per-core basis. This number is then built upon with the performance added by the new RT and Tensor cores, allowing the RTX 2080 to extend its advantage over the last-generation GTX 1080 to up to 2x while using the new DLSS technology. PUBG, Shadow of the Tomb Raider, and Final Fantasy XV see around 75 percent or more improved performance when using this tech.

NVIDIA is also touting the newfound ability to run games at 4K resolution at over 60 FPS, making the RTX 2080 the card to get if that's your preferred resolution (especially if paired with one of those dazzling OLED TVs). Of course, image quality settings aren't revealed in the slides, so an important piece of the puzzle is still missing. But considering NVIDIA's performance claims, and comparing the achievable performance on last-generation hardware, it's fair to say that these FPS scores refer to the high or highest IQ settings for each game.

Futuremark Sets a Date for UL Rebranding

Futuremark set a date for its re-branding to align with its parent company, UL. April 23, 2018 is when Futuremark products and services will begin to be sold under the new branding scheme. Futuremark became part of UL in 2014. UL is an independent, global company with more than 10,000 professionals in 40 countries, offering a wide range of testing, inspection, auditing, and certification services and solutions to help customers, purchasers, and policymakers manage risk and complexity in modern markets. A set of FAQs associated with the change is answered below.

Intel Releases CPU Benchmarks with Meltdown and Spectre Mitigations

It's safe to say that there's one thing you don't mess around with, and that's performance. Enthusiasts don't spend hundreds of dollars on a processor to watch it underperform. Given the complicated nature of the Meltdown and Spectre vulnerabilities, Microsoft's so-called mitigations were bound to have an impact on processor performance. The million-dollar question was: just how much? The initial estimate was somewhere around 30%, but Intel, optimistic as usual, expected the performance impact to be insignificant for the average user. The company recently provided some preliminary benchmark results that looked quite convincing, too. Well, let's take a look at their findings, shall we?

Intel measured the mitigations' impact on CPU performance using its 6th, 7th, and 8th Generation Intel Core processors - more specifically, the i7-6700K, i7-7920HQ, i7-8650U, and i7-8700K. The operating system used in the majority of the benchmarks was Windows 10, though Windows 7 also made a brief appearance. Intel chose four key benchmarks for its testing. SYSmark 2014 SE evaluated CPU performance on an enterprise level, simulating office productivity, data and financial analysis, and media creation. PCMark 10, on the other hand, tested performance in real-world usage, employing workloads like web browsing, video conferencing, application start-up time, spreadsheets, writing, and digital content creation. 3DMark Sky Diver assessed CPU performance in a DirectX 11 gaming scenario. Lastly, WebXPRT 2015 measured system performance using six HTML5- and JavaScript-based workloads, which include photo enhancement, organize album, stock option pricing, local notes, sales graphs, and explore DNA sequencing.

AMD EPYC 7601 Processors Set Two New World Records on SPEC CPU Benchmarks

AMD today announced that the new Hewlett Packard Enterprise ProLiant DL385 Gen10 server, powered by AMD EPYC processors, set world records in both SPECrate2017_fp_base and SPECfp_rate2006. The secure and flexible 2P 2U HPE ProLiant DL385 Gen10 server joins the HPE Cloudline CL3150 server in featuring AMD EPYC processors. With designs ranging from 8 cores to 32 cores, AMD EPYC delivers industry-leading memory bandwidth across the HPE line-up, with eight channels of memory and unprecedented support for integrated, high-speed I/O with 128 lanes of PCIe 3.0 on every EPYC processor.

"HPE is joining with AMD today to extend the world's most secure industry standard server portfolio to include the AMD EPYC processor. We now give customers another option to optimize performance and security for today's virtualized workloads," said Justin Hotard, vice president and GM, Volume Global Business Unit, HPE. "The HPE ProLiant DL385 featuring the AMD EPYC processor is the result of a long-standing technology engagement with AMD and a shared belief in continuing innovation."

Intel's 18-core Core i9-7980XE Benchmarks Surface

A user on Coolenjoy has apparently gotten his hands on Intel's upcoming i9-7980XE silicon, putting it through its paces in Cinebench and a number of other benchmarks. The 18-core, 36-thread Core i9-7980XE is set to be Intel's most advanced HEDT processor of all time by a wide margin - both in number of cores and in pricing. It seems that even in the face of a competitive AMD, which puts value towards core counts with its $999 Threadripper 1950X 16-core, 32-thread CPU, Intel still sees fit to charge an arm, a leg, and both of your kidneys for a 2-core advantage. Intel's XE processors have become more synonymous with eXtremely Expensive and less with eXtreme Edition over the years, and the i9-7980XE, with its $1999 price tag, does nothing to alleviate the issue. This is a halo product, though - the most advanced HEDT processor in the world. And with it being as niche a product as it is, it actually makes some kind of sense for it to be so expensive - an immoral, "where has the world gone" kind of sense, but still, some measure of it.

Intel Core i7-8700K and i5-8400 SANDRA Benchmarks Surface

Ahead of their launch later this quarter, SiSoft SANDRA benchmarks of Intel's 8th generation Core i7-8700K and Core i5-8400 six-core processors surfaced in benchmark databases, and were promptly compared to their predecessors by HotHardware. The results put to the test Intel's claims of "over 40 percent more performance" compared to the 7th generation Core processors, made in the company's 8th Generation Core launch event presentation. The bulk of these performance increases is attributed to the increased core count over the previous generation, which directly yields higher multi-threaded performance; a small but significant portion is attributed to increases in single-threaded performance. Since the "Coffee Lake" micro-architecture is essentially a refresh of the "Skylake" architecture, single-threaded performance increases can be attributed to higher clock speeds.

The Core i7-8700K is the top-dog of the 8th generation Core mainstream-desktop processor family. This six-core chip was compared to the product it succeeds in Intel's MSDT product-stack, the quad-core Core i7-7700K. There is a 45 percent increase in performance, in the "processor arithmetic" test; and a 47 percent increase in the "processor multimedia" test. These two test-suites are multi-threaded, and hence benefit from the two added cores, which in turn add four additional logical CPUs, thanks to HyperThreading. "Processor cryptography" sees a 12 percent increase. The single-precision and double-precision "Scientific Analysis" tests, which again are multi-threaded, see 26 percent and 32 percent performance gains over the i7-7700K, respectively.

New Performance Benchmarks of AMD's Vega Frontier Edition Surface

You probably took a long, hard look at our article covering a single-minded user's experience with his new Vega Frontier Edition. Now, courtesy of PCPer and charitable soul Ekin at Linus Tech Tips, we have some more performance benchmarks of AMD's latest (non-gaming-specific) graphics card.

Starting with 2560x1440, let's begin with the good news: in what seems to be the best performance scenario we've seen until now, the Vega Frontier Edition stands extremely close to NVIDIA's GTX 1080 Ti in Fallout 4. It trails it by about 10 FPS for most of the test, and even surpasses it at some points. These numbers should be taken with a grain of salt where the RX Vega consumer cards are concerned: performance on those models will probably be higher than the Frontier Edition's results. And for AMD's sake, it had better be, because in all other tests the Frontier Edition somewhat disappoints. It's beaten by NVIDIA's GTX 1070 in Grand Theft Auto V, mirrors its performance in The Witcher 3, and delivers slightly higher performance than the GTX 1070 in Hitman and Dirt Rally (albeit lower than the GTX 1080).

Intel's Core i7-7740K Kaby Lake-X Benchmarks Surface

Two days, two leaks on an upcoming Intel platform (the accelerated-release-date gods are working hard with the blue giant, it would seem). Now it's Intel's own i7-7740K, a Kaby Lake-X HEDT processor that packs 4 cores and 8 threads - which is interesting when one considers that AMD's latest mainstream Ryzen processors already pack double the cores and threads on a non-HEDT platform. An interesting detail about the Kaby Lake-X processors is that they are rumored to carry 16 PCIe 3.0 lanes from the CPU (which can be configured as a single x16 port, or as three ports in a x8/x4/x4 arrangement). Since these parts are reported as being based on consumer, LGA 1151 Kaby Lake processors, it would seem they eschew Intel's integrated graphics, thus saving die space. They do seem to deliver a quad-channel memory controller as well, though we've seen with Ryzen 7 reviews how much of a difference that makes for some use cases.

AMD Radeon RX 580 Overclocking and Benchmarks Surface

Some photos, screenshots, and benchmarks of what appears to be an XFX RX 580 graphics card are doing the rounds, courtesy of overclocker Lau Kin Lam, who shared them (alongside a three-hour-long video) on his Facebook page. Apparently, this is a special, China-only edition of the card, which is a shame, considering the great-looking waterblock that is smiling for the camera. The fact that this card is using a reference board with a single 8-pin power connector may prove relevant to its overclocking efforts (and those of other, non-reference boards that we've seen carry both the 8-pin and an extra 6-pin power connector).

AMD's Ryzen 7 1700X Glorious Benchmarks Leak; IHS, Pin Layout Photographed

Another day, another leak: the folks at XFastest have indeed been the fastest to leak images of an actual Ryzen 7 1700X processor, with pictures of the processor's IHS and pin area running rampant throughout the Internet (the Ryzen chip is located on the right in both pictures, with a sample of AMD's previous-generation FX CPUs on the left for comparison's sake).

While revealing shots may have their appeal, it's the benchmarking portion that most of us are expectant about. Until actual reviews are out, we're left with nothing more than these leaks (which should be taken with appropriate amounts of salt). In this case, benchmarks of AMD's upcoming Ryzen 7 1700X have been released, showing how the upcoming CPU delivers in 3DMark Fire Strike, CPU Mark, and Cinebench R15.

AMD Ryzen 1700X, 1600X & 1300 Benchmarks Leaked

A number of sites have been reporting on some leaked (as in, captured from Futuremark's database) scores for AMD's upcoming CPUs. Now, some benchmarks seem to have surfaced regarding not only the company's 8-core, 16-thread monsters, but also its sweet-spot 6-core, 12-thread CPUs and its more mundane 4-core offerings.

Taking into account some metrics (which you should, naturally, take with some grains of salt), and comparing Intel's and AMD's offerings on 3DMark's Fire Strike Physics scores, we can see that the $389 Ryzen 7 1700X (8 cores, 16 threads) at its base clock of 3.4 GHz manages to surpass Intel's competing (in thread count alone, since it retails for $1089) 6900K running at its base 3.2 GHz frequency - the Ryzen processor scores 17,878 points versus the 6900K's 17,100. Doing some fast and hard maths, normalizing those scores by base clock actually puts the two chips in a near tie clock for clock, with the 6900K marginally ahead. We don't know whether Turbo was disabled on these tests, for either AMD's or Intel's processor, so that caveat applies. If Turbo were enabled, the R7 1700X's maximum clockspeed would only be 100 MHz higher than the 6900K's (3.8 GHz max vs. 3.7 GHz max on the Intel CPU), which would tilt the clock-for-clock comparison slightly in AMD's favor.
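The clock-for-clock comparison is easy to check with a quick normalization of the quoted numbers. A sketch in Python (this assumes the Physics score scales linearly with frequency, which is only a rough approximation):

```python
# Fire Strike Physics scores and clocks (GHz) as quoted in the leak.
r7_1700x = {"score": 17878, "base": 3.4, "boost": 3.8}
i7_6900k = {"score": 17100, "base": 3.2, "boost": 3.7}

def per_ghz(chip, clock_key):
    """Points per GHz, assuming linear scaling with clock speed."""
    return chip["score"] / chip[clock_key]

# At base clocks, the 6900K comes out marginally ahead per clock...
print(round(per_ghz(r7_1700x, "base")), round(per_ghz(i7_6900k, "base")))    # ~5258 vs ~5344
# ...while at maximum boost clocks, the 1700X edges ahead instead.
print(round(per_ghz(r7_1700x, "boost")), round(per_ghz(i7_6900k, "boost")))  # ~4705 vs ~4622
```

Which chip "wins" per clock thus hinges entirely on whether Turbo was active during the runs, which the leak does not reveal.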

Intel Core i3-7350K "Kaby Lake" Benchmarks Surface

Intel's third overclocker-friendly SKU from its 7th generation Core "Kaby Lake" processor family, the Core i3-7350K, is shaping up to be an interesting option for gaming PC builds. It may be a dual-core chip, which at $177 is within striking range of the current $189 Core i5-6400 quad-core chip, but the lack of two cores appears to be more than made up for by its significantly higher clock speeds, even in multi-threaded benchmarks. The i3-7350K is clocked at 4.00 GHz out of the box. It's also the only Core i3 desktop SKU to feature Turbo Boost, with a frequency of 4.20 GHz. The chip comes with an unlocked base-clock multiplier, letting you overclock it with ease.

A PC enthusiast with access to a Core i3-7350K sample put it through Geekbench, where it scored 5,137 points in the single-threaded tests, and 10,048 in the multi-threaded tests. Here's where it gets interesting - the Core i5-6400 quad-core chip scores 3,686 points (single-threaded), and 9,982 points (multi-threaded). The i5-6400 is clocked at a mere 2.70 GHz, with 3.30 GHz Turbo Boost. Even the higher-clocked i5-4670K "Haswell" quad-core chip (3.40 GHz core, 3.80 GHz Turbo Boost) doesn't manage to beat the i3-7350K, with 4,361 points (single-threaded) and 10,036 points (multi-threaded).
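Since the three chips run at very different clocks, it helps to normalize the single-threaded scores by their maximum Turbo frequencies. A rough sketch (scores as quoted above; this assumes the single-threaded test holds the maximum Turbo clock, which is only an approximation):

```python
# Geekbench single-threaded score and maximum Turbo clock (GHz), as quoted.
chips = {
    "i3-7350K": (5137, 4.2),
    "i5-6400":  (3686, 3.3),
    "i5-4670K": (4361, 3.8),
}

for name, (score, turbo) in chips.items():
    # Points per GHz: a crude proxy for per-clock throughput.
    print(name, round(score / turbo))
```

By this crude measure the i3-7350K's lead isn't just clock speed: it also scores more points per GHz than either quad-core (roughly 1,223 vs. 1,117 and 1,148), consistent with "Kaby Lake" being the newest core of the three.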

NVIDIA Calls Intel's Bluff on Xeon Phi vs. GPU Benchmarks

NVIDIA accused Intel of cheating in its ISC 2016 presentation, particularly in a performance-related slide claiming that its "Knights Landing" Xeon Phi HPC processor provides faster training of neural nets than NVIDIA "Maxwell" GPUs. In the slide, Intel claimed that a Xeon Phi HPC processor card is 2.3 times faster at training deep-learning neural nets, with 38 percent better scaling across nodes, than NVIDIA "Maxwell" GPUs. This triggered a swift response from the GPU maker, which has made significant investments in deep-learning technologies over the past three years.

NVIDIA argued that Intel is using the classic technique of running outdated benchmarks to test its neural net training speeds. The company says that if Intel used the latest version of the same benchmark (Caffe AlexNet), the "Maxwell" GPU would be found to be 30 percent faster than the Xeon Phi at training neural nets. NVIDIA also notes that "Maxwell" is only its previous-generation part, and a "Pascal" based HPC processor would easily be 90 percent faster than the Xeon Phi. More importantly, NVIDIA notes that Intel compared 32 of its new Xeon Phi servers against four-year-old NVIDIA Kepler K20 servers used in ORNL's Titan supercomputer. The latest "Pascal" GPUs leverage NVLink to scale up to 128 GPUs, providing the fastest deep-learning solutions money can buy.

FinalWire Announces AIDA64 v5.70 with Ray-tracing Benchmarks and Vulkan Support

Today FinalWire released an update to the PC editions of its award-winning system information software. Version 5.70 comes with new multi-threaded ray tracing benchmarks, which fully utilize the latest instruction set extensions (AVX, AVX2, FMA). The new floating point tests use more realistic workloads and put more intensive load on the CPU.

The new AIDA64 edition is now capable of displaying hardware monitoring information on RGB LED backlit mice and provides information on the Vulkan capabilities of the latest AMD and NVIDIA graphics processors. Additionally, it can now read and display sensor readings for Corsair AXi power supply units.
DOWNLOAD: FinalWire AIDA64 v5.70

Intel "Skylake" to be 6th Generation Core Series, First i7-6700K Benchmarks

Intel's next major CPU architecture, codenamed "Skylake," could be classified as the company's 6th generation Core processor family. It will succeed Core "Broadwell" after its brief stint on the market, which offered no major chips for PC enthusiasts to look forward to. The Core i7-6700K appears to be the flagship product based on the Skylake-D silicon, succeeding the i7-4770K and i7-4790K. The Core i5-6600K will succeed the i5-4670K and i5-4690K.

The i7-6700K is a quad-core chip, with HyperThreading enabling 8 logical CPUs. Its nominal clock will be 4.00 GHz, with a rather shallow 4.20 GHz Turbo Boost frequency. It will feature an 8 MB L3 cache, and an integrated memory controller that supports both DDR4 and DDR3 memory types. This makes Skylake a transition point for the mainstream PC market's gradual upgrade to DDR4: you'll have some motherboards with DDR3 memory slots, some with DDR4 slots, and some with both kinds. The resulting larger uncore component, and perhaps a bigger integrated GPU, mean quad-core Skylake parts will have TDPs rated as high as 95 W - higher than current Haswell quad-core parts, with their 88 W TDP.

First Alleged GTX TITAN-X Benchmarks Surface

Here are some of the first purported benchmarks of NVIDIA's upcoming flagship graphics card, the GeForce GTX TITAN-X. Someone with access to four of these cards installed them on a system driven by a Core i7-5960X eight-core processor, and compared their single-GPU and 4-way SLI performance on 3DMark 11 with its "extreme" (X) preset. The card scored X7994 points going solo - comparable to Radeon R9 290X 2-way CrossFire, and a single GeForce GTX TITAN-Z. With four of these cards in play, you get X24064 points. Sadly, there's nothing to compare that score with.
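Even without a reference point, the two scores are enough to estimate multi-GPU scaling. A quick sketch (scores as quoted; 3DMark composite scores don't scale perfectly linearly with GPU performance, so treat this as a rough estimate):

```python
single = 7994   # X preset score, one card
quad = 24064    # X preset score, 4-way SLI

scaling = quad / single   # how many "single cards" the 4-way setup is worth
efficiency = scaling / 4  # per-card scaling efficiency
print(round(scaling, 2), f"{efficiency:.0%}")  # 3.01 75%
```

In other words, four cards deliver roughly 3x the single-card score, or about 75% scaling efficiency per card - a plausible figure for 4-way SLI in this benchmark.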

NVIDIA unveiled the GeForce GTX TITAN-X at the Game Developers Conference (GDC) 2015. It was just that - an unveiling, with no specs, performance numbers, or launch date announced. The card is rumored to be based on the GM200 silicon - NVIDIA's largest based on the "Maxwell" architecture - featuring 3072 CUDA cores, 192 TMUs, 96 ROPs, and a 384-bit wide GDDR5 memory interface, holding 12 GB of memory. The benchmark screenshots reveal core clock speeds to be around 1.00 GHz, and the memory clock at 7.00 GHz.

Galaxy GeForce GTX 970 Pictured, Specs Confirmed, Early Benchmarks Surface

Here are some of the first pictures of an AIC partner branded NVIDIA GeForce GTX 970 graphics card, the Galaxy GTX 970 GC. Spotted across Chinese PC enthusiast forums and social networks, the latest set of leaks covers not just pictures of what the GTX 970 looks like, but also what's under its hood. To begin with, Galaxy's card appears to be built for the high-end market segment. A meaty twin-fan aluminium fin-stack heatsink, coupled with a spacey backplate, covers a signature Galaxy blue PCB holding NVIDIA's new GTX 970 GPU and 4 GB of GDDR5 memory. The card appears to feature a high-grade VRM that draws power from a combination of 8-pin and 6-pin PCIe power connectors.

Eurocom Equips M4 with Core i7-4940MX, GeForce GTX 860M, 3200x1800 Display

Eurocom has benchmarked the M4, the world's most powerful 13.3" QHD+ notebook. It is equipped with a breathtaking 3200x1800 QHD+ display, an Intel Core i7 Extreme processor, and NVIDIA GeForce GTX 860M graphics. Eurocom has benchmarked and stress-tested the M4 to exemplify the performance squeezed into its ultra-portable chassis. The 5,760,000 pixels of the 13.3" 3200x1800 QHD+ display are powered by the NVIDIA GeForce GTX 860M graphics with 2 GB of GDDR5 VRAM and 640 CUDA cores, with a GPU core able to run at up to 1029 MHz plus boost.

The M4 is powered by a full line of 4th Generation Intel Core i7 processors utilizing the Intel HM87 Express Chipset, including the Intel Core i7-4940MX Processor Extreme with 4 cores and 8 threads running at 3.1 GHz (up to 4 GHz max turbo frequency) with 8 MB of L3 cache. The integrated Trusted Platform Module 1.2 from Infineon Technologies ensures that digital certificates, passwords, and keys are better protected from software attacks and physical theft. TPM gives a computing system the ability to run applications more securely, and allows secured remote access to perform electronic transactions and communications more safely. The increased security brought on by TPM 1.2 can save organizations IT management costs.

Eurocom adds AMD Radeon HD 8970M to Ivy Bridge based Racer 2.0

Eurocom has added the AMD Radeon HD 8970M to the Ivy Bridge based Racer 2.0 VGA-upgradeable laptop. The addition of the MXM 3.0b spec AMD Radeon HD 8970M offers gamers and enthusiasts a new level of ultra-high-performance graphics to upgrade their existing Racer 2.0 or configure into a new system. Eurocom strives to keep legacy systems upgradeable, giving customers the opportunity to improve their system's performance and lifespan. Eurocom engineers have recently finished stress-testing and verifying the performance and operation of the 8970M in the Racer 2.0.

The AMD Radeon HD 8970M GPU has 1,280 compute cores running at 850 MHz, with 4 GB of GDDR5 memory running at 1200 MHz. With full DirectX 11.1 support, the 8970M offers a great gaming experience in all PC titles. AMD App Acceleration uses GPU compute to accelerate and improve image quality, video playback, and overall computing performance. "At Eurocom, we have offered fully upgradeable notebooks to our clients for many years because we believe very strongly in the long-term benefits it offers our clients; VGA upgradeability allows users to extend the lifespan of their equipment without compromising performance," said Mark Bialic, Eurocom President.

Intel Haswell 35W Dual-Core i5-4570T Benchmarks Surface Online

While Intel's quad-core Haswell architecture based 4th generation Core processors are already out there dominating benchmark charts (even if only by a small margin) and scoring new accolades in power efficiency, the dual-core Core processors have been left in the dark - at least in the desktop segment. Not anymore, though, as PC Games Hardware managed to score a sample of the energy-efficient Core i5-4570T, a dual-core Haswell-based desktop processor with a very impressive and low TDP of just 35 W.

No More Monthly AMD Catalyst Updates?

It's the 31st, and Catalyst 12.5 WHQL is nowhere in sight. According to an article by Benchmarks3D, you should give up on it, for AMD is calling it quits with the monthly driver update cycle. Instead, AMD will focus on a staggered driver update cycle that addresses issues with games as they crop up. In other words, end-users could see new drivers as hotfixes more often, depending on launches of games and new Radeon hardware, but that could also mean long periods of lull when there are no major issues to address, or no new hardware to improve drivers for.

GeForce GTX 680 SLI Performance Surfaces

NVIDIA's big GeForce GTX 680 launch is just around the corner, but performance figures are already trickling in. Last week, we were treated to a wide range of benchmarks covering a single GeForce GTX 680. Today, VR-Zone posted a performance preview of the GeForce GTX 680 in 2-way SLI configuration. A set of two GTX 680 cards was put through 3DMark 11 in the Entry, Performance, and eXtreme presets. It should be noted here that the GTX 680 cards were clocked at 1150 MHz core, and 1803 MHz (7.20 GHz effective) memory.

In the Entry preset, GTX 680 2-way SLI scored E22878; it scored P16860 in Performance preset; and X6243 in eXtreme. 2-way SLI of GTX 680 should be fit for 2560x1440/1600 resolution gaming. The rest of the test-bench consisted of Intel Core i7-3930K six-core processor clocked at 5.00 GHz, with 16 GB of quad-channel DDR3-2133 MHz memory, and ASUS ROG Rampage IV Extreme motherboard.