News Posts matching "Benchmarks"


UL Announces New 3DMark Benchmarks for Testing PCIe Performance Across Generations

UL Benchmarks has announced that its 3DMark suite will gain a new, comprehensive test for measuring PCIe bandwidth across generations. Citing the arrival of PCIe 4.0 in the consumer market - soon, via AMD's Ryzen 3000 series - UL wants users to be able to see what difference the added bandwidth makes in enabling more complex games and scenarios that would otherwise be data-constrained by PCIe 3.0.

The 3DMark PCIe Performance Test will be made available this summer for free for 3DMark Advanced Edition and 3DMark Professional Edition customers with a valid annual license.

Intel Puts Out Benchmarks Showing Minimal Performance Impact of MDS Mitigation

Intel on Tuesday once again shook the IT world by disclosing severe microarchitecture-level security vulnerabilities affecting its processors. The Microarchitectural Data Sampling (MDS) class of vulnerabilities affects Intel CPU architectures older than "Coffee Lake" to a greater extent. Among other mitigation measures, such as software patches, Intel recommends that users disable HyperThreading technology (HTT), Intel's simultaneous multithreading (SMT) implementation. This would significantly reduce multi-threaded performance on older processors with lower core-counts, particularly Core i3 2-core/4-thread chips.

On "safer" microarchitectures such as "Coffee Lake," though, Intel expects a minimal impact from software patches, and sees no need to disable HTT. This may have something to do with the 50-100 percent increased core-counts of the 8th and 9th generations. The company put out a selection of benchmarks relevant to client and enterprise (data-center) use-cases. On the client use-case that we're more interested in, a Core i9-9900K machine with software mitigation and HTT disabled is negligibly slower (within 2 percent) than a machine without mitigation and with HTT enabled. Intel's selection of benchmarks includes SYSmark 2014 SE, WebXPRT 3, SPECint rate base (1 copy and n copies), and 3DMark "Sky Diver" with the chip's integrated UHD 630 graphics. Comparing machines with mitigations applied but toggling HTT presents a slightly different story.

UL Corporation Announces Two New Benchmarks Coming to PCMark 10

UL Corporation today announces two new benchmark tests that will soon be coming to PCMark 10. The first is the eagerly awaited PCMark 10 battery life benchmark. The second is a new benchmark test based on Microsoft Office applications.

PCMark 10 Battery Life benchmark
Battery life is one of the most important criteria for choosing a laptop, but consumers and businesses alike find it hard to compare systems fairly. The challenge, of course, is that battery life depends on how the device is used. Unfortunately, manufacturers' claims are often based on unrealistic scenarios that don't reflect typical use. Figures for practical, day-to-day battery life, which are usually much lower, are rarely available.

NVIDIA GTX 1660 Ti to Perform Roughly On-par with GTX 1070: Leaked Benchmarks

NVIDIA's upcoming "Turing" based GeForce GTX 1660 Ti graphics card could carve itself a value proposition at the $250-300 mark, letting it coexist with both the GTX 1060 6 GB and the $350 RTX 2060, according to leaked "Final Fantasy XV" benchmarks scored by VideoCardz. In these benchmarks, the GTX 1660 Ti was found to perform roughly on par with the previous-generation GTX 1070 (non-Ti), which is plausible given that its 1,536 "Turing" CUDA cores, with their higher IPC and higher GPU clocks, are likely to catch up with the 1,920 "Pascal" CUDA cores of the GTX 1070, while 12 Gbps 192-bit GDDR6 serves up more memory bandwidth than 8 Gbps 256-bit GDDR5 (288 GB/s vs. 256 GB/s). The GTX 1070 retains an advantage in memory size, with 8 GB. NVIDIA is expected to launch the GTX 1660 Ti later this month at USD $279. Unlike the RTX 20-series, these chips lack NVIDIA RTX real-time ray-tracing technology and DLSS (deep-learning supersampling).
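The bandwidth figures in the parenthetical above follow from the standard peak-bandwidth formula: per-pin data rate times bus width, divided by eight bits per byte. A minimal sketch (the function name is illustrative):

```python
def gddr_bandwidth_gbs(data_rate_gbps: float, bus_width_bits: int) -> float:
    """Peak memory bandwidth in GB/s: per-pin data rate (Gbps)
    times bus width (bits), divided by 8 bits per byte."""
    return data_rate_gbps * bus_width_bits / 8

# GTX 1660 Ti: 12 Gbps GDDR6 on a 192-bit bus
print(gddr_bandwidth_gbs(12, 192))  # 288.0 GB/s
# GTX 1070: 8 Gbps GDDR5 on a 256-bit bus
print(gddr_bandwidth_gbs(8, 256))   # 256.0 GB/s
```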

UL Corporation Announces 3DMark Port Royal Raytracing Suite is Now Available - Benchmark Mode On!

Perhaps gliding through the tech-infused CES week, UL Corporation has just announced that the much-expected Port Royal, the world's first dedicated real-time ray tracing benchmark for gamers, is now available. Port Royal uses DirectX Raytracing to enhance reflections, shadows, and other effects that are difficult to achieve with traditional rendering techniques. It enables performance benchmarking for cutthroat competition throughout the internet (and our own TPU forums, of course), and also serves as an example of what to expect from ray tracing in upcoming games - ray tracing effects running in real-time at reasonable frame rates at 2560 × 1440 resolution.

UL Benchmarks Unveils 3DMark "Port Royal" Ray-tracing Benchmark

Port Royal is the name of the latest component of UL Benchmarks' 3DMark. Designed to take advantage of the DirectX Raytracing (DXR) API, this benchmark features an extreme poly-count test-scene with real-time ray-traced elements. Screengrabs of the benchmark depict spacecraft entering and leaving mirrored spheres suspended within a planet's atmosphere, which appear to be docks. It's also a shout-out to a number of space-sims such as "Star Citizen," which could up their production values in the future by introducing ray-tracing. The benchmark will debut at the GALAX GOC Grand Final on December 8, where the first public run will be powered by a GALAX GeForce RTX 2080 Ti HOF graphics card. It will start selling in January 2019.

NVIDIA Releases Comparison Benchmarks for DLSS-Accelerated 4K Rendering

NVIDIA released comparison benchmarks for its new AI-accelerated DLSS technology, which is part of its new Turing architecture's claim to fame. Using the Infiltrator benchmark with its stunning real-time graphics, NVIDIA showcased the performance benefits of using DLSS-improved 4K rendering instead of the usual 4K rendering + TAA (Temporal Anti-Aliasing). Using a Core i9-7900X 3.3 GHz CPU paired with 16 GB of Corsair DDR4 memory, Windows 10 (v1803) 64-bit, and version 416.25 of the NVIDIA drivers, the company showed the tremendous performance improvements that can be achieved by pairing Turing's architectural strengths with the prowess of DLSS in putting Tensor cores to use in service of more typical graphics processing workloads.

The results speak for themselves: with DLSS at 4K resolution, the upcoming NVIDIA RTX 2070 convincingly beats its previous-gen counterpart by doubling its performance. Under these particular conditions, the new king of the hill, the RTX 2080 Ti, convincingly beats the previous gen's halo product, the Titan Xp, with a 41% performance lead - but so does the new RTX 2070, which is being sold at half the asking price of the original Titan Xp.

Intel's 9th Gen Core Gaming Benchmarks Flawed and Misleading

At its 9th Generation Core processor launch extravaganza earlier this week, Intel posted benchmark numbers to show just how superior its processors are to AMD 2nd generation Ryzen "Pinnacle Ridge." PC enthusiasts worth their salt were quick to point out that Intel's numbers are both flawed and misleading as they misrepresent both test setups - by optimizing Intel processors beyond their out-of-the-box performance, and by running AMD processors with sub-optimal settings.

Intel paid Principled Technologies, a third-party performance testing agency, to obtain performance numbers comparing the Core i9-9900K with the Ryzen 7 2700X across a spectrum of gaming benchmarks, instead of testing the two chips internally, and posting their test setup data in end-notes, as if to add a layer of credibility/deniability to their charade. The agency posted its numbers, which were almost simultaneously re-posted by PCGamesN under the gleaming headline "Up to 50% Faster than Ryzen at Gaming." You could fertilize the Sahara with this data.

UL Benchmarks Kicks Huawei Devices from its Database over Cheating

UL Benchmarks de-listed several popular Huawei devices from its database over proof of cheating in its benchmarks. Over the past month, it was found that several of Huawei's devices, such as the P20 Pro, Nova 3, and Play, overclocked their SoCs while ignoring all power and thermal limits to achieve high benchmark scores whenever they detected that a popular benchmark such as 3DMark was being run. To bust this, UL Benchmarks tested the three devices with "cloaked" benchmarks, or "private benchmarks" as it calls them. These apps are identical in almost every way to 3DMark, but lack the identification or branding that lets Huawei devices know when to overclock themselves to cheat the test.

The results were startling. When a device has no clue that a popular benchmark is being run (or has no way of telling that 3DMark is being run), it chugs along at its "normal" speed, which is 35% to 36% lower. The rules that bind device manufacturers advertising UL's 3DMark scores explicitly state that the device must not detect the app and optimize its hardware on the fly to ace the test. Huawei responded to UL by stating that it will unlock a new "performance mode" that lets users elevate their SoCs to the same high clocks in any application.
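Put another way, if the honest score is 35-36 percent lower than the boosted one, the cheated runs overstated sustained performance by more than 50 percent. A quick sanity check (the helper name is ours, not UL's):

```python
def overstatement(drop: float) -> float:
    """If the honest score is `drop` (a fraction) lower than the cheated
    score, return how much the cheated score overstates the honest one."""
    return 1 / (1 - drop) - 1

print(round(overstatement(0.35) * 100, 1))  # 53.8 -> cheated score ~54% higher
print(round(overstatement(0.36) * 100, 1))  # 56.2 -> cheated score ~56% higher
```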

NVIDIA Releases First Internal Performance Benchmarks for RTX 2080 Graphics Card

NVIDIA today released the first official performance numbers for its new generation of GeForce products - particularly, the RTX 2080. The RTX 20 series of graphics cards, according to the company, offers some 50% performance improvement (on average) from architectural improvements alone, on a per-core basis. This number is then built upon with the added RTX performance of the new RT cores, which allows the RTX 2080 to extend its performance advantage over the last-generation GTX 1080 to as much as 2x - while using the new DLSS technology. PUBG, Shadow of the Tomb Raider, and Final Fantasy XV are seeing around 75 percent or more improved performance when using this tech.

NVIDIA is also touting the newfound ability to run games at 4K resolution at over 60 FPS, making the RTX 2080 the card to get if that's your preferred resolution (especially if paired with one of those dazzling OLED TVs...). Of course, IQ settings aren't revealed in the slides, so there's an important piece of the puzzle still missing. But considering NVIDIA's performance claims, and comparing the achievable performance on last-generation hardware, it's fair to say that these FPS scores refer to the high or highest IQ settings for each game.

Futuremark Sets a Date for UL Rebranding

Futuremark set a date for its re-branding to align with its parent company UL. April 23, 2018 is when Futuremark products and services will be sold under the new branding scheme. Futuremark became part of UL in 2014. UL is an independent, global company with more than 10,000 professionals in 40 countries. UL offers a wide range of testing, inspection, auditing, and certification services and solutions to help customers, purchasers and policymakers manage risk and complexity in modern markets. A set of FAQs associated with the change are answered below.

Intel Releases CPU Benchmarks with Meltdown and Spectre Mitigations

It's safe to say that there's one thing that you don't mess around with, and that's performance. Enthusiasts don't spend hundreds of dollars on a processor to watch it underperform. Given the complicated nature of the Meltdown and Spectre vulnerabilities, Microsoft's so-called mitigations were bound to have an impact on processor performance. The million dollar question was: Just how much? The initial estimate was somewhere around 30%, but Intel, being optimistic as usual, expected the performance impact to be insignificant for the average user. They recently provided some preliminary benchmark results that looked quite convincing too. Well, let's take a look at their findings, shall we?

Intel measured the mitigations' impact on CPU performance using its 6th, 7th, and 8th Generation Intel Core processors - more specifically, the i7-6700K, i7-7920HQ, i7-8650U, and i7-8700K. The preferred operating system used in the majority of the benchmarks was Windows 10; however, Windows 7 also made a brief appearance. Intel chose four key benchmarks for its testing. SYSmark 2014 SE evaluated CPU performance on an enterprise level, simulating office productivity, data and financial analysis, and media creation. PCMark 10, on the other hand, tested performance in real-world usage, employing different workloads like web browsing, video conferencing, application start-up time, spreadsheets, writing, and digital content creation. 3DMark Sky Diver assessed CPU performance in a DirectX 11 gaming scenario. Lastly, WebXPRT 2015 measured system performance using six HTML5- and JavaScript-based workloads, which include photo enhancement, organize album, stock option pricing, local notes, sales graphs, and explore DNA sequencing.

AMD EPYC 7601 Processors Set Two New World Records on SPEC CPU Benchmarks

AMD today announced that the new Hewlett Packard Enterprise ProLiant DL385 Gen10 server, powered by AMD EPYC processors, set world records in both SPECrate2017_fp_base and SPECfp_rate2006. The secure and flexible 2P 2U HPE ProLiant DL385 Gen10 server joins the HPE Cloudline CL3150 server in featuring AMD EPYC processors. With designs ranging from 8 to 32 cores, AMD EPYC delivers industry-leading memory bandwidth across the HPE line-up, with eight channels of memory and unprecedented support for integrated, high-speed I/O with 128 lanes of PCIe 3.0 on every EPYC processor.

"HPE is joining with AMD today to extend the world's most secure industry standard server portfolio to include the AMD EPYC processor. We now give customers another option to optimize performance and security for today's virtualized workloads," said Justin Hotard, vice president and GM, Volume Global Business Unit, HPE. "The HPE ProLiant DL385 featuring the AMD EPYC processor is the result of a long-standing technology engagement with AMD and a shared belief in continuing innovation."

Intel's 18-core Core i9-7980XE Benchmarks Surface

A user on Coolenjoy has apparently gotten his hands on Intel's upcoming i9-7980XE silicon, putting it through its paces in Cinebench and a number of other benchmarks. The 18-core, 36-thread Core i9-7980XE is set to be Intel's most advanced HEDT processor of all time by a wide margin - both in number of cores and in pricing. It seems that even in the face of a competitive AMD, which puts value towards core counts with its $999 Threadripper 1950X 16-core, 32-thread CPU, Intel still sees fit to charge an arm, a leg, and both of your kidneys for a 2-core advantage. Intel's XE processors have become more synonymous with eXtremely Expensive and less with eXtreme Edition over the years, and the i9-7980XE, with its $1999 price-tag, does nothing to alleviate the issue. This is a halo product, though - the most advanced HEDT processor in the world. And with it being as niche a product as it is, it actually makes some kind of sense for it to be so expensive - an immoral, "where has the world gone" kind of sense, but still, some measure of it.

Intel Core i7-8700K and i5-8400 SANDRA Benchmarks Surface

Ahead of their launch later this quarter, SiSoft SANDRA benchmarks of Intel's 8th generation Core i7-8700K and Core i5-8400 six-core processors have surfaced in benchmark databases, and were promptly compared to their predecessors by HotHardware. The results put to the test Intel's claims of "over 40 percent more performance" compared to the 7th generation Core processors, which the company made in its 8th Generation Core launch event presentation. The bulk of these performance increases is attributed to the generational increase in core-count, which directly yields higher multi-threaded performance; a smaller but significant portion is attributed to increases in single-threaded performance. Since the "Coffee Lake" microarchitecture is essentially a refresh of "Skylake," single-threaded performance increases can be attributed to higher clock speeds.

The Core i7-8700K is the top-dog of the 8th generation Core mainstream-desktop processor family. This six-core chip was compared to the product it succeeds in Intel's MSDT product-stack, the quad-core Core i7-7700K. There is a 45 percent increase in performance, in the "processor arithmetic" test; and a 47 percent increase in the "processor multimedia" test. These two test-suites are multi-threaded, and hence benefit from the two added cores, which in turn add four additional logical CPUs, thanks to HyperThreading. "Processor cryptography" sees a 12 percent increase. The single-precision and double-precision "Scientific Analysis" tests, which again are multi-threaded, see 26 percent and 32 percent performance gains over the i7-7700K, respectively.
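Those multi-threaded gains line up closely with ideal core scaling: going from four to six cores is a 50 percent increase, so 45-47 percent gains imply roughly 90-94 percent scaling efficiency. A rough sketch, with an illustrative helper that deliberately ignores clock-speed differences between the two chips:

```python
def scaling_efficiency(observed_gain: float, core_gain: float) -> float:
    """Fraction of the ideal linear core-count speedup actually realized.
    Ignores clock-speed differences, so treat it as a rough guide only."""
    return observed_gain / core_gain

# i7-8700K vs i7-7700K: 6 cores vs 4 = 50% more cores
print(round(scaling_efficiency(0.45, 0.50), 2))  # 0.9  (processor arithmetic)
print(round(scaling_efficiency(0.47, 0.50), 2))  # 0.94 (processor multimedia)
```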

New Performance Benchmarks of AMD's Vega Frontier Edition Surface

You probably took a long, hard look at our article covering a single-minded user's experience with his new Vega Frontier Edition. Now, courtesy of PCPer and charitable soul Ekin at Linus Tech Tips, we have some more performance benchmarks of AMD's latest (non-gaming-specific) graphics card.

Starting with 2560x1440, let's begin with the good news: in what seems to be the best performance scenario we've seen until now, the Vega Frontier Edition stands extremely close to NVIDIA's GTX 1080 Ti video card in Fallout 4. It trails it by about 10 FPS for most of the test, and even surpasses it at some points. These numbers should be taken with a grain of salt regarding the RX Vega consumer cards: performance on those models will probably be higher than the Frontier Edition's results. And for the sake of AMD, they better be, because in all other tests, the Frontier Edition somewhat disappoints. It's beaten by NVIDIA's GTX 1070 in Grand Theft Auto V, mirrors its performance in The Witcher 3, and delivers slightly higher performance than the GTX 1070 in Hitman and Dirt Rally (albeit lower than the GTX 1080).

Intel's Core i7-7740K Kaby Lake-X Benchmarks Surface

Two days, two leaks on an upcoming Intel platform (the accelerated-release-date gods are working hard with the blue giant, it would seem). Now, it's Intel's own i7-7740K, a Kaby Lake-X HEDT processor that packs 4 cores and 8 threads, which is interesting when one considers that AMD's latest mainstream processors, Ryzen, already pack double the cores and threads on a non-HEDT platform. An interesting thing about the Kaby Lake-X processors is that they are rumored to carry 16 PCIe 3.0 lanes from the CPU (which can be configured as a single x16 slot, or as one x8 and two x4 ports). Since these parts are reported to be based on consumer, LGA-1151 Kaby Lake processors, it would seem they eschew Intel's integrated graphics, thus saving die space. They do seem to deliver a quad-channel memory controller as well, though we've seen with Ryzen 7 reviews how much of a difference that makes for some use cases.

AMD Radeon RX 580 Overclocking and Benchmarks Surface

Some photos, screenshots and benchmarks of what appears to be an XFX RX 580 graphics card are doing the rounds, courtesy of overclocker Lau Kin Lam, who shared them (alongside a three-hour-long video) on his Facebook page. Apparently, this is a special, China-only edition of the card, which is a shame, considering the great-looking waterblock that is smiling for the camera. The fact that this card is using a reference board with one 8-pin power connector may prove relevant to its overclocking efforts (and those of other, non-reference boards that we've seen, which carry both the 8-pin and an extra 6-pin power connector).

AMD's Ryzen 7 1700X Glorious Benchmarks Leak; IHS, Pin Layout Photographed

Another day, another leak: the folks at XFastest have indeed been the fastest to leak images of an actual Ryzen 7 1700X processor, with pictures of the processor's IHS and pin area running rampant throughout the Internet (the Ryzen chip is located to the right in both pictures, with a sample of AMD's previous-generation FX CPUs on the left side for comparison's sake).

While revealing shots may have their appeal, it's the benchmarking portion that most of us are expectant about. Until actual reviews are out, we're left with nothing more than these leaks (which should be taken with appropriate amounts of salt). In this case, benchmarks of AMD's upcoming Ryzen 7 1700X have been released, showing just how the upcoming CPU delivers in 3DMark Fire Strike, CPU Mark and Cinebench R15.

AMD Ryzen 1700X, 1600X & 1300 Benchmarks Leaked

A number of sites have been reporting on some leaked (as in, captured from Futuremark's database) scores on AMD's upcoming CPUs. Now, some benchmarks seem to have surfaced regarding not only the company's 8-core, 16-thread monsters, but also towards its sweet-spot 6-core, 12-thread CPUs and its more mundane 4-core offerings.

Taking into account some metrics (which you should, naturally, take with some grains of salt), and comparing Intel's and AMD's offerings on 3DMark's Fire Strike Physics scores, we can see that a $389 Ryzen 7 1700X (8 cores, 16 threads) at its base clock of 3.4 GHz manages to surpass Intel's competing (in thread count alone, since it retails for $1089) 6900K running at its base 3.2 GHz frequency - with the Ryzen processor scoring 17,878 points versus the 6900K's 17,100. Doing some fast and hard maths, and assuming scores scale linearly with frequency, that works out to roughly 5,258 points per GHz for the R7 1700X against roughly 5,344 for the 6900K - essentially even clock for clock, with a slight per-clock edge to Intel, while the far cheaper Ryzen chip leads outright at stock clocks. We don't know whether Turbo was disabled or not on these tests, for either AMD's or Intel's processor, so we have to consider that. However, if Turbo were enabled, the R7 1700X's clock speed would only be 100 MHz higher than the 6900K's (3.8 GHz max, vs. 3.7 GHz max on the Intel CPU).
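The clock-for-clock comparison can be checked by normalizing each Physics score by its base clock, under the (rough) assumption that scores scale linearly with frequency:

```python
def points_per_ghz(score: int, base_clock_ghz: float) -> float:
    """Naive clock-normalized score. Assumes linear frequency scaling,
    which Fire Strike Physics only approximates - a rough guide only."""
    return score / base_clock_ghz

print(round(points_per_ghz(17878, 3.4)))  # 5258 - Ryzen 7 1700X
print(round(points_per_ghz(17100, 3.2)))  # 5344 - Core i7-6900K
```

On this naive metric the two chips land within about 2 percent of each other per clock; the 1700X's outright lead comes from its higher base clock (and far lower price).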

Intel Core i3-7350K "Kaby Lake" Benchmarks Surface

Intel's third overclocker-friendly SKU from its 7th generation Core "Kaby Lake" processor family, the Core i3-7350K, is shaping up to be an interesting option for gaming PC builds. It may be a dual-core chip, which at $177 is within striking range of the current $189 Core i5-6400 quad-core chip, but the two missing cores appear to be more than made up for by its significantly higher clock speeds, even in multi-threaded benchmarks. The i3-7350K is clocked at 4.00 GHz out of the box. It's also the only Core i3 desktop SKU to feature Turbo Boost, with a frequency of 4.20 GHz. The chip comes with an unlocked base-clock multiplier, letting you overclock it with ease.

A PC enthusiast with access to a Core i3-7350K sample put it through Geekbench, where it scored 5,137 points in the single-threaded tests, and 10,048 in multi-threaded tests. Here's where it gets interesting - the Core i5-6400 quad-core chip scores 3,686 points (single-threaded), and 9,982 points (multi-threaded). The i5-6400 is clocked at a mere 2.70 GHz, with 3.30 GHz Turbo Boost. Even the higher-clocked i5-4670K "Haswell" quad-core chip (3.40 GHz core, 3.80 GHz Turbo Boost) doesn't manage to beat the i3-7350K, with 4,361 points (single-threaded), and 10,036 points (multi-threaded).
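The relative leads implied by those Geekbench numbers are easy to work out (the helper name is illustrative):

```python
def pct_lead(a: int, b: int) -> float:
    """How much faster score `a` is than score `b`, in percent."""
    return (a / b - 1) * 100

# i3-7350K vs i5-6400
print(round(pct_lead(5137, 3686), 1))   # 39.4 - single-threaded lead
print(round(pct_lead(10048, 9982), 1))  # 0.7  - multi-threaded lead
```

In other words, the 4.00 GHz dual-core holds a roughly 39 percent single-threaded lead while reaching near parity multi-threaded, which is how it edges out the quad-cores in these runs.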

NVIDIA Calls Intel's Bluff on Xeon Phi vs. GPU Benchmarks

NVIDIA accused Intel of cheating in its ISC 2016 presentation, particularly in a performance-related slide claiming that its "Knights Landing" Xeon Phi HPC processor provides faster training of neural nets than NVIDIA "Maxwell" GPUs. In the slide, Intel claimed that a Xeon Phi HPC processor card is 2.3 times faster at training deep-learning neural nets, with 38 percent better scaling across nodes, than NVIDIA "Maxwell" GPUs. This triggered a swift response from the GPU maker, which has made significant investments in deep-learning technologies over the past three years.

NVIDIA argued that Intel is using the classic technique of running outdated benchmarks to test its neural net training speeds. The company says that if Intel had used the latest version of the same benchmark (Caffe AlexNet), the "Maxwell" GPU would be found to be 30 percent faster than the Xeon Phi at training neural nets. NVIDIA also notes that "Maxwell" is only its previous-generation part, and a "Pascal" based HPC processor would easily be 90 percent faster than the Xeon Phi. More importantly, NVIDIA notes that Intel compared 32 of its new Xeon Phi servers against four-year-old NVIDIA Kepler K20 servers being used in ORNL's Titan supercomputer. The latest "Pascal" GPUs leverage NVLink to scale up to 128 GPUs, providing the fastest deep-learning solutions money can buy.

FinalWire Announces AIDA64 v5.70 with Ray-tracing Benchmarks and Vulkan Support

Today FinalWire released an update to the PC editions of its award-winning system information software. Version 5.70 comes with new multi-threaded ray tracing benchmarks, which fully utilize the latest instruction set extensions (AVX, AVX2, FMA). The new floating point tests use more realistic workloads and put more intensive load on the CPU.

The new AIDA64 edition is now capable of displaying hardware monitoring information on RGB LED backlit mice and provides information on the Vulkan capabilities of the latest AMD and NVIDIA graphics processors. Additionally, it can now read and display sensor readings for Corsair AXi power supply units.
DOWNLOAD: FinalWire AIDA64 v5.70

Intel "Skylake" to be 6th Generation Core Series, First i7-6700K Benchmarks

Intel's next major CPU architecture, codenamed "Skylake," could be classified as the company's 6th generation Core processor family. It will succeed Core "Broadwell," whose stint on the market will be brief, with no major chips for PC enthusiasts to look forward to. The Core i7-6700K appears to be the flagship product based on the Skylake-D silicon, succeeding the i7-4770K and i7-4790K. The Core i5-6600K will succeed the i5-4670K and i5-4690K.

The i7-6700K is a quad-core chip, with HyperThreading enabling 8 logical CPUs. Its nominal clock will be 4.00 GHz, with a rather shallow 4.20 GHz Turbo Boost frequency. It will feature an 8 MB L3 cache, and an integrated memory controller that supports both DDR4 and DDR3 memory types. This makes Skylake a transition point for the mainstream PC market to gradually upgrade to DDR4. You'll have some motherboards with DDR3 memory slots, some with DDR4 slots, and some with both kinds of slots. The resulting large uncore component, and perhaps a bigger integrated GPU, will result in quad-core Skylake parts having TDP rated as high as 95W, higher than current Haswell quad-core parts, with their 88W TDP.

First Alleged GTX TITAN-X Benchmarks Surface

Here are some of the first purported benchmarks of NVIDIA's upcoming flagship graphics card, the GeForce GTX TITAN-X. Someone with access to four of these cards installed them in a system driven by a Core i7-5960X eight-core processor, and compared their single-GPU and 4-way SLI performance in 3DMark 11, with its "extreme" (X) preset. The card scored X7994 points going solo - comparable to Radeon R9 290X 2-way CrossFire, and a single GeForce GTX TITAN-Z. With four of these cards in play, you get X24064 points. Sadly, there's nothing to compare that score with.
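The 4-way figure also lets us estimate SLI scaling efficiency, assuming the solo and 4-way runs were otherwise identical:

```python
def multi_gpu_efficiency(multi_score: int, single_score: int, n_gpus: int) -> float:
    """Fraction of ideal linear multi-GPU scaling achieved."""
    return multi_score / (n_gpus * single_score)

# GTX TITAN-X: X7994 solo vs X24064 in 4-way SLI
print(round(multi_gpu_efficiency(24064, 7994, 4) * 100, 1))  # 75.3 percent
```

Roughly three quarters of ideal linear scaling, then, across the four cards.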

NVIDIA unveiled the GeForce GTX TITAN-X at the Game Developers Conference (GDC) 2015. It was just that - an unveiling, with no specs, performance numbers, or launch date announced. The card is rumored to be based on the GM200 silicon - NVIDIA's largest based on the "Maxwell" architecture - featuring 3072 CUDA cores, 192 TMUs, 96 ROPs, and a 384-bit wide GDDR5 memory interface, holding 12 GB of memory. The benchmark screenshots reveal core clock speeds to be around 1.00 GHz, and the memory clock at 7.00 GHz.