News Posts matching #Maxwell


Audeze Releases Limited Edition Maxwell "Wasabi" Gaming Headset

Audeze, the leading premium headphone manufacturer, is excited to announce the launch of the latest limited edition of its wildly popular Maxwell gaming headset, the Maxwell Wasabi Edition, exclusively for Xbox. Known for its superior audio quality and class-leading technology, the Maxwell headset now arrives in a vibrant new colorway featuring an iridescent, chameleon-like electroplated finish, offering gamers a new way to show off their style along with Maxwell's superior performance.

The Maxwell Wasabi Edition follows the success of last year's Maxwell Ultraviolet Edition, part of Microsoft's "Shimmer" campaign of limited edition Xbox accessories, which captivated gamers with unique, eye-catching designs. The new finish on the Maxwell Wasabi features shimmering hues that shift from green to gold, adding a spicy, dazzling aesthetic to Audeze's critically acclaimed gaming headset, which will be produced only once, in limited quantities.

NVIDIA Shifts Gears: Open-Source Linux GPU Drivers Take Center Stage

Just a few months after hiring Ben Skeggs, a lead maintainer of the open-source NVIDIA GPU driver for the Linux kernel, NVIDIA has announced a complete transition to open-source GPU kernel modules in its upcoming R560 driver release for Linux. This decision comes two years after the company's initial foray into open-source territory with the R515 driver in May 2022. At first, the tech giant focused on data center compute GPUs, while GeForce and workstation GPU support remained in the alpha stage. Now, after extensive development and optimization, NVIDIA reports that its open-source modules have achieved performance parity with, and in some cases surpassed, their closed-source counterparts. This transition brings a host of new capabilities, including heterogeneous memory management support, confidential computing features, and compatibility with the coherent memory architecture of NVIDIA's Grace platform.

The move to open-source is expected to foster greater collaboration within the Linux ecosystem and potentially lead to faster bug fixes and feature improvements. However, not all GPUs will be compatible with the new open-source modules. While cutting-edge platforms like NVIDIA Grace Hopper and Blackwell will require the open-source drivers, older GPUs from the Maxwell, Pascal, and Volta architectures must stick with the proprietary drivers. NVIDIA has developed a detection helper script to guide driver selection for users who are unsure about compatibility. The shift also brings changes to NVIDIA's installation processes: the default driver version for most installation methods will now be the open-source variant. This affects package managers using the CUDA meta-package, runfile installations, and even Windows Subsystem for Linux.
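NVIDIA's actual detection helper is not reproduced here; the following is a minimal, purely hypothetical sketch of the kind of check such a helper could perform. It assumes nvidia-smi is installed, and the name-based family matching (the LEGACY_FAMILIES list) is an illustrative heuristic, not NVIDIA's logic.

# Hypothetical sketch: choose between open-source and proprietary NVIDIA
# kernel modules based on the GPU names reported by nvidia-smi.
# This is NOT NVIDIA's detection script; the family list is an assumption.
import subprocess

# Families that, per the announcement, stay on the proprietary driver
# (Maxwell, Pascal, Volta era products), matched crudely by product name.
LEGACY_FAMILIES = ("GTX 9", "GTX 10", "TITAN X", "Quadro P", "Tesla P", "Tesla V")

def recommended_module() -> str:
    names = subprocess.check_output(
        ["nvidia-smi", "--query-gpu=name", "--format=csv,noheader"],
        text=True,
    ).strip().splitlines()
    for name in names:
        if any(tag in name for tag in LEGACY_FAMILIES):
            return f"proprietary (pre-Turing GPU detected: {name})"
    return "open-source kernel modules"

if __name__ == "__main__":
    print("Recommended driver flavor:", recommended_module())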

NVIDIA GeForce Drivers 546.65 WHQL Released, Supports RTX 4070 SUPER

NVIDIA today released the latest version of its GeForce drivers. Version 546.65 WHQL comes with support for the new GeForce RTX 4070 SUPER, which is available for purchase from today. The driver also comes with Game Ready optimizations for "Palworld." GeForce Experience optimal settings are added for a handful of new games, including Apocalypse Party, Ash Echoes, Escape from Tarkov: Arena, House Flipper 2, and Pioneers of Pagonia. The drivers also fix a bug with GeForce Experience 1-click optimization for the Cyberpunk 2077 v2.1 patch, as well as a bug with the 10-year-old GeForce GTX "Maxwell" GPUs that caused monitors to not wake up after multiple sleep-wake cycles. Grab the driver from the link below.

DOWNLOAD: NVIDIA GeForce 546.65 WHQL

Nintendo Switch 2 to Feature NVIDIA Ampere GPU with DLSS

The rumors of Nintendo's next-generation Switch handheld gaming console have been piling up ever since competition in the handheld console market intensified. Since the release of the original Switch, Valve has released the Steam Deck, ASUS has made the ROG Ally, and others are also exploring the market. The next-generation Nintendo Switch 2 is drawing closer, and we now have information about the chipset that will power the device. Thanks to Kepler_L2 on Twitter/X, we have the codenames of the upcoming processors. The first-generation Switch came with NVIDIA's Tegra X1 SoC built on a 20 nm node; later on, NVIDIA supplied Nintendo with a Tegra X1+ SoC made on a 16 nm node. There were no performance increases recorded, just improved power efficiency. Both used four Cortex-A57 and four Cortex-A53 cores with GM20B Maxwell GPUs.

For the Nintendo Switch 2, NVIDIA is said to utilize a customized variant of the NVIDIA Jetson Orin SoC designed for automotive applications. The reference Orin SoC carries the codename T234, while this alleged adaptation carries the codename T239; the variant is most likely optimized for power efficiency. The reference Orin design is a considerable uplift over the Tegra X1, as it boasts 12 Cortex-A78AE cores, LPDDR5 memory, and the Ampere GPU microarchitecture. Built on Samsung's 8 nm node, the chip's efficiency would likely yield better battery life and position the second-generation Switch well in the now-expanded handheld gaming console market. Including the Ampere architecture would also bring technologies like DLSS, which would benefit the low-power SoC.

Audeze and Microsoft Collaborate for Limited Edition Maxwell Headset

Audeze, the leading premium headphone manufacturer, today announced the release of a limited edition gaming headset developed in partnership with Microsoft for the Designed for Xbox Summer product collection. This stylish, limited edition "Ultraviolet" version of its highly lauded Maxwell gaming headset features a new finish that adds a head-turning, unique look to one of the best gaming headsets on the market. The Ultraviolet edition of the Maxwell is available on the Audeze website for pre-order in limited quantities.

The partnership between Audeze and Microsoft began in 2021 with the Microsoft certification of Audeze's Penrose gaming headset, and continued recently with Microsoft's certification of the new Maxwell gaming headset. This partnership has expanded access to Audeze's industry-leading planar magnetic headsets to Microsoft's vast audience of console users. The epic, life-like audio quality delivered by Audeze's legendary planar magnetic drivers is a powerful tool for gamers, and with the Ultraviolet limited edition of Maxwell, they can now use that sonic advantage in style.

NVIDIA Enables More Encoding Streams on GeForce Consumer GPUs

NVIDIA has quietly removed some video encoding limitations on its consumer GeForce graphics processing units (GPUs), allowing encoding of up to five simultaneous streams. Previously, NVIDIA's consumer GeForce GPUs were limited to three simultaneous NVENC encodes. The same limitation did not apply to professional GPUs.

According to NVIDIA's own Video Encode and Decode GPU Support Matrix document, the number of concurrent NVENC encodes on consumer GPUs has been increased from three to five. This includes certain GeForce GPUs based on the Maxwell 2nd Gen, Pascal, Turing, Ampere, and Ada Lovelace GPU architectures. While the number of concurrent NVDEC decodes was never limited, there is still a limit on how many streams a given GPU can encode, depending on the resolution of the stream and the codec.
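As a rough illustration of what the session limit means in practice, the sketch below launches several parallel h264_nvenc encodes with ffmpeg; on a GeForce card governed by the old limit, sessions beyond the third would typically fail to initialize. It assumes an ffmpeg build with NVENC support; the synthetic test source and output file names are arbitrary choices for the example.

# Illustrative sketch (not an official NVIDIA or ffmpeg tool): start several
# parallel NVENC encodes to observe the concurrent-session limit on a GeForce GPU.
# Assumes ffmpeg was built with NVENC support; uses a synthetic lavfi test source.
import subprocess

NUM_SESSIONS = 6  # above both the old (3) and new (5) GeForce limits

procs = []
for i in range(NUM_SESSIONS):
    procs.append(subprocess.Popen([
        "ffmpeg", "-y",
        "-f", "lavfi", "-i", "testsrc2=size=1920x1080:rate=30:duration=30",
        "-c:v", "h264_nvenc",
        f"out_{i}.mp4",
    ]))

# Sessions beyond the GPU's concurrent NVENC limit typically exit with an
# encoder-initialization error, while the remaining sessions encode normally.
for i, p in enumerate(procs):
    print(f"session {i}: return code {p.wait()}")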

Forspoken Simply Doesn't Work with AMD Radeon RX 400 and RX 500 "Polaris" GPUs

AMD Radeon RX 400 series and RX 500 series graphics cards based on the "Polaris" graphics architecture are simply unable to run "Forspoken," as users on Reddit report. The game requires DirectX 12 feature-level 12_1, which the architecture does not meet. Interestingly, NVIDIA's "Maxwell" graphics architecture, which predates AMD "Polaris" by almost a year, supports FL 12_1 and is able to play the game. Popular GPUs from the "Maxwell" generation include the GeForce GTX 970 and GTX 960. Making matters much worse, AMD has yet to release an update to its Adrenalin graphics drivers for the RX Vega, RX 5000, and RX 6000 series that comes with "Forspoken" optimization. Its latest 23.1.2 beta drivers that include these optimizations only support the RX 7000 series RDNA 3 graphics cards. It has now been over 50 days since the vast majority of AMD discrete GPUs last received a driver update.

AMD ROCm 4.5 Drops "Polaris" Architecture Support

AMD's ROCm compute programming platform, a competitor to NVIDIA's CUDA, dropped support for the "Polaris" graphics architecture with the latest version 4.5 update. Users raised this as an issue on the official ROCm Git repository, assuming it was a bug, to which an official AMD support handle confirmed that the original poster's Radeon RX 480 graphics card is no longer supported. Another user tested his "Polaris 20" based RX 570, and it isn't supported, either. It's conceivable that the "Polaris 30" based RX 590, a GPU launched in November 2018, isn't supported either. Cutting a 3-year-old graphics architecture out of the compute platform sends the wrong message, especially to the CUDA users AMD wants to win over with ROCm. With contemporary GPUs priced out of reach, IT students are left with used older-generation graphics cards, such as those based on "Polaris." NVIDIA CUDA supports GPUs as far back as "Maxwell" (September 2014).

NVIDIA Enables GPU Passthrough for Virtual Machines on Consumer-Grade GeForce GPUs

Editor's note: This is not a part of April Fools.

NVIDIA has long separated professional users and regular gamers with its graphics card offerings. The GeForce lineup represents the gaming-oriented option, and its main task is to simply play games, display graphics, and run some basic CUDA-accelerated software. However, what happens if you start experimenting with your GPU? For example, if you are running Linux and want to spin up a virtual machine with Windows on it for gaming, you would have to use your integrated GPU, as the GeForce card doesn't allow virtual GPU passthrough. For these purposes, NVIDIA has its professional graphics card lineups, like Quadro and Tesla.

However, this specific feature is about to arrive even in the GeForce lineup. NVIDIA has announced that the company is finally bringing basic virtual machine passthrough functionality to its gaming GPUs. While the feature represents a step in the right direction, it is still limited. For example, GeForce GPU passthrough supports only one virtual machine, and the SR-IOV feature is still not supported on GeForce. "If you want to enable multiple virtual machines to have direct access to a single GPU or want the GPU to be able to assign virtual functions to multiple virtual machines, you will need to use NVIDIA Tesla, Quadro, or RTX enterprise GPUs," says the NVIDIA FAQ. GeForce virtualization, which is still in beta, is supported on R465 and later drivers.
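As an illustrative aside (not part of NVIDIA's material), Linux passthrough setups usually start by checking which IOMMU group the GPU sits in; a minimal sketch follows, assuming a host with the IOMMU (Intel VT-d or AMD-Vi) enabled and using standard sysfs paths only.

# Illustrative sketch only: enumerate IOMMU groups on a Linux host to find
# which group a GPU belongs to before attempting GPU passthrough via VFIO.
# Assumes the IOMMU is enabled in firmware and on the kernel command line.
from pathlib import Path

GROUPS = Path("/sys/kernel/iommu_groups")

for group in sorted(GROUPS.iterdir(), key=lambda p: int(p.name)):
    for dev in (group / "devices").iterdir():
        # PCI class 0x0300xx denotes a VGA-compatible display controller.
        pci_class = (dev / "class").read_text().strip()
        if pci_class.startswith("0x0300"):
            print(f"GPU {dev.name} is in IOMMU group {group.name}")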
The full content from NVIDIA's website is written below.

NVIDIA Updates Video Encode and Decode Matrix with Reference to Ampere GPUs

NVIDIA has today updated its video encode and decode matrix with references to the latest Ampere GPU family. The video encode/decode matrix is a table of the video encoding and decoding standards supported by different NVIDIA GPUs. The matrix goes back as far as the Maxwell generation of NVIDIA graphics cards, showing which video codecs are supported by each generation. That makes it a useful reference, as customers can check whether their existing or upcoming GPUs support a specific codec standard they need for video playback. The update to the matrix comes in the form of the Ampere GPUs, which are now present there.

For example, the table shows that, while supporting all of the previous generations' encoding standards, the Ampere-based GPUs add support for the HEVC B-frame standard. For decoding, the Ampere lineup now includes support for AV1 in 8-bit and 10-bit formats, while also supporting all of the previous generations' formats. For a more detailed look at the table, please go to NVIDIA's website here.
NVIDIA Encoding and Decoding Standards

NVIDIA Develops Tile-based Multi-GPU Rendering Technique Called CFR

NVIDIA is invested in the development of multi-GPU rendering, specifically SLI over NVLink, and has developed a new multi-GPU rendering technique that appears to be inspired by tile-based rendering. Implemented at the single-GPU level, tile-based rendering has been one of NVIDIA's many secret sauces that have improved performance since its "Maxwell" family of GPUs. 3DCenter.org discovered that NVIDIA is working on a multi-GPU adaptation of it, called CFR, which could be short for "checkerboard frame rendering" or "checkered frame rendering." The method is already quietly deployed in current NVIDIA drivers, although it is not documented for developers to implement.

In CFR, the frame is divided into tiny square tiles, like a checkerboard. Odd-numbered tiles are rendered by one GPU, and even-numbered ones by the other. Unlike AFR (alternate frame rendering), in which each GPU's dedicated memory holds a copy of all the resources needed to render the frame, methods like CFR and SFR (split frame rendering) optimize resource allocation. CFR also purportedly exhibits less micro-stutter than AFR. 3DCenter also detailed the features and requirements of CFR. To begin with, the method is only compatible with DirectX (including DirectX 12, 11, and 10), not OpenGL or Vulkan. For now it's "Turing" exclusive, since NVLink is required (probably because its bandwidth is needed to virtualize the tile buffer). Tools like NVIDIA Profile Inspector allow you to force CFR on, provided the other hardware and API requirements are met. It still has many compatibility problems and remains practically undocumented by NVIDIA.
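To visualize the reported scheme, the sketch below assigns square tiles of a frame to two GPUs in a checkerboard pattern, which is essentially what CFR is described as doing. It is purely illustrative and not based on NVIDIA code; the tile size, frame resolution, and two-GPU split are assumptions taken from 3DCenter's description.

# Illustrative sketch of checkerboard frame rendering (CFR) tile assignment:
# the frame is split into square tiles and alternating tiles go to each GPU.
# Tile size and resolution are assumed example values, not NVIDIA's parameters.
TILE = 32                      # assumed tile edge in pixels
WIDTH, HEIGHT = 3840, 2160     # example 4K frame

def gpu_for_tile(tx: int, ty: int) -> int:
    """Checkerboard split: neighbouring tiles always land on different GPUs."""
    return (tx + ty) % 2

assignments = {}
for ty in range(HEIGHT // TILE):
    for tx in range(WIDTH // TILE):
        assignments[(tx, ty)] = gpu_for_tile(tx, ty)

per_gpu = [sum(1 for g in assignments.values() if g == i) for i in (0, 1)]
print(f"{len(assignments)} tiles total, split {per_gpu[0]} / {per_gpu[1]} between the two GPUs")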

NVIDIA Releases GeForce 441.20 WHQL Drivers

NVIDIA today posted its GeForce 441.20 WHQL software. The drivers come game-ready for "Star Wars Jedi: Fallen Order" and "Stormland," including SLI support for the latest Star Wars game. NVIDIA also introduced support for the CUDA 10.2 compute API. The list of G-Sync Compatible monitors has been expanded with three more displays. Among the issues addressed with this release are geometry corruption on GTX 900-series "Maxwell" GPUs with "Red Dead Redemption 2" in Vulkan, and G-Sync getting disengaged when V-Sync is disabled in "Red Dead Redemption 2." Vulkan-related errors with "The Surge" have also been fixed. There's also a surprising bug-fix: "Quake 3 Arena" appearing washed out when color depth is set to 16-bit. Performance drops with "CS: GO" have also been addressed. Grab the driver from the link below.
DOWNLOAD: NVIDIA GeForce 441.20 WHQL Drivers

NVIDIA Has No Plans for Adaptive Sync Support on Maxwell, Prior GPUs

In case anyone's been living under a rock (and in these times, if you can do that, I probably envy you), NVIDIA at CES 2019 announced it was opening up G-Sync support to non-G-Sync-toting monitors. Via adoption of VESA's open VRR standard (Adaptive Sync, on which FreeSync is based), the company will now add support for monitors that usually only support FreeSync. The company also vowed to test all configurations and monitors, with a whitelist of automatically enabled panels and a manual override for those that don't pass the certification process or haven't yet been subjected to it.

Now, via a post on NVIDIA's GeForce forums, ManuelGuzmanNV, with a Customer Care badge, has said, in answer to a user's question on Variable Refresh Rate support for NVIDIA's 900 series, that "Sorry but we do not have plans to add support for Maxwell and below". This means that only NVIDIA's 1000- and 2000-series GPUs will be getting said support, thus reducing the number of users for whom VRR support on NVIDIA graphics cards is relevant. At the same time, this might serve as a reason for those customers to finally make the jump to one of NVIDIA's more recent graphics card generations, in case they don't already own a VRR-capable monitor and want some of that smoothness.

NVIDIA GPUs Can be Tricked to Support AMD FreeSync

Newer generations of NVIDIA GPUs such as "Pascal" and "Maxwell" meet or exceed the hardware requirements of AMD FreeSync, as they feature DisplayPort 1.4 connectors that include the features of DisplayPort 1.2a, required for VESA Adaptive Sync. In a bid to promote its own G-SYNC technology, NVIDIA doesn't expose this feature to monitors or software that support FreeSync. Redditor "bryf50" may have found a way around this. The trick is deceptively simple; however, you'll need games that support on-the-fly switching of rendering GPUs, and an AMD Radeon graphics card at hand.

When poking around with system settings in "World of Warcraft: Battle for Azeroth," bryf50 discovered that you can switch the "rendering GPU" on the fly, without having to physically connect your display to the newly selected GPU. You can start the game with your display connected to VGA1 (an AMD Radeon GPU), and switch the renderer in-game to VGA2 (an NVIDIA GPU). FreeSync should continue to work while you enjoy the performance of that NVIDIA GPU. In theory, this should allow you to pair your high-end GTX 1080 Ti with a $50 RX 550 that supports FreeSync, instead of paying the $200+ G-SYNC tax.

USB Type-C with DisplayPort+USB Wiring Could Get a Big Push by NVIDIA

With its GeForce "Maxwell" family, NVIDIA, riding on the multi-monitor fad, began equipping its graphics cards with up to three DisplayPort connectors, besides an HDMI, and optionally, a legacy DVI connector. Prior to that generation, AMD dabbled with equipping its cards with two mini-DisplayPorts, besides two DVI and an HDMI.

With the latest GeForce RTX "Turing" family, NVIDIA could push for the adoption of USB Type-C connectors with DisplayPort wiring, and perhaps even USB-PD standards compliance, pushing up to 60 Watts of power from the same port. This USB+DP+power connector is called VirtualLink. It could make it easier for VR HMD manufacturers to design newer generations of their devices with a single USB Type-C connection for display and audio input from the GPU, USB input from the system, and power. We reckon 60 W is plenty of power for a VR HMD.

NVIDIA Has a DisplayPort Problem Which Only a BIOS Update Can Fix

NVIDIA "Maxwell" and "Pascal" graphics architectures introduced support for modern display connectivity to keep up with the breakneck pace at which display resolutions are scaling up. The two introduce support for DisplayPort 1.4 and 1.3, however the implementation is less than perfect. Some of the newer monitors that leverage DisplayPort 1.4 or 1.3 standards don't function as designed on "Maxwell" (GeForce GTX 900 series) and "Pascal" (GeForce 10-series) graphics cards, with users reporting a range of bugs from blank screens until the operating system loads, to frozen boot sequences.

Unfortunately, these issues cannot be fixed by driver updates and require graphics card BIOS updates. Luckily, you won't be at the mercy of lethargic AIC partners looking to limit their warranty claims by going slow on BIOS updates, or of NVFlash rocket science. NVIDIA has released a tool which detects whether your graphics card needs the update, and then updates the BIOS for you from within Windows. The app first unloads your driver, then flashes your graphics card BIOS (a process which must not be interrupted, lest you end up with an expensive brick).

Update: We have confirmation that the tool is intended for both reference-design and custom-design graphics cards.
DOWNLOAD: NVIDIA Graphics Firmware Update Tool for DisplayPort 1.3 and 1.4 Displays

NVIDIA Quietly Rolls Out GeForce MX130 and MX110 Notebook GPUs

NVIDIA quietly rolled out two entry-level notebook graphics processors, the GeForce MX130 and GeForce MX110. The two continue to be based on the "Maxwell" GPU architecture and, from the looks of it, appear to be based on the "GM108" silicon. With a core configuration of 384 CUDA cores, 16 TMUs, and 8 ROPs, plus GDDR5 memory, the MX130 is claimed to be "2.5x" as fast as Intel UHD 620 graphics, while the MX110, with slower DDR3 memory, is "1.5x" as fast. Both support NVIDIA Optimus technology, besides the feature-set of the "Maxwell" architecture, and complete the MX100-series entry-level notebook GPU family led by the GP108-based MX150, which offers 4x UHD 620 performance.

NVIDIA Working on a Major DirectX 12 Focused Driver Update

NVIDIA is reportedly working on a major driver update that improves performance across the board on games that take advantage of DirectX 12. The company is working with major game studios to improve DirectX 12 performance by up to 16 percent on recent AAA titles, such as "Gears of War 4," "Hitman" (2016), "Rise of the Tomb Raider" and "Tom Clancy's The Division." Even "Ashes of the Singularity" is getting a performance improvement.

At its GeForce GTX 1080 Ti launch event, NVIDIA revealed that its "Pascal" and "Maxwell" GPU architectures are capable of tile-based rendering, a feature that significantly improves video memory bandwidth utilization. With the GTX 1080 Ti going on sale later this month, one can expect NVIDIA to release new drivers.

On NVIDIA's Tile-Based Rendering

Looking back on NVIDIA's GDC presentation, perhaps one of the most interesting aspects approached was the implementation of tile-based rendering on NVIDIA's post-Maxwell architectures. This is an adaptation of approaches typically found in mobile graphics, which keep that segment's specific need for power efficiency in mind - and if you'll "member," "Maxwell" was NVIDIA's first graphics architecture publicly touted for its "mobile first" design.

This approach essentially divides the screen into tiles, and then rasterizes the frame on a per-tile basis. 16×16 and 32×32 pixels are the usual tile sizes, but both Maxwell and Pascal can dynamically assess the required tile size for each frame, changing it on the fly according to the complexity of the scene. This ensures that the data being processed has a much smaller footprint than that of the full frame - small enough that NVIDIA can keep it in a much smaller pool of memory (essentially, the L2 cache), dynamically filling and flushing that cache as needed until the full frame has been rendered. This means the GPU doesn't have to access larger, slower memory pools as much, which primarily reduces the load on the VRAM subsystem (freeing up memory bandwidth for other tasks) while simultaneously accelerating rendering. At the same time, a tile-based approach lends itself pretty well to the nature of GPUs: these are easily parallelized operations, and the GPU can tackle many independent tiles simultaneously, depending on the available resources.
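As a hedged, highly simplified illustration of the binning idea described above (not NVIDIA's actual implementation, which is undisclosed), the sketch below buckets triangles into screen tiles using their bounding boxes, so each tile could later be shaded from a small on-chip working set. The tile size, resolution, and triangle data are arbitrary example values.

# Simplified illustration of tile binning, the first step of tile-based
# rasterization: each triangle is assigned to every screen tile its bounding
# box overlaps, so shading can later proceed one small tile at a time.
from collections import defaultdict

TILE = 16
WIDTH, HEIGHT = 1920, 1080

# Each triangle is three (x, y) screen-space vertices (example data).
triangles = [
    [(100, 100), (300, 120), (180, 400)],
    [(1500, 900), (1700, 950), (1600, 1050)],
]

bins = defaultdict(list)  # (tile_x, tile_y) -> list of triangle indices
for idx, tri in enumerate(triangles):
    xs = [v[0] for v in tri]
    ys = [v[1] for v in tri]
    x0, x1 = max(min(xs), 0), min(max(xs), WIDTH - 1)
    y0, y1 = max(min(ys), 0), min(max(ys), HEIGHT - 1)
    for ty in range(y0 // TILE, y1 // TILE + 1):
        for tx in range(x0 // TILE, x1 // TILE + 1):
            bins[(tx, ty)].append(idx)

# Each non-empty bin would then be rasterized and shaded from on-chip (L2)
# storage before the finished tile is written out to VRAM.
print(f"{len(bins)} tiles touched out of {(WIDTH // TILE) * (HEIGHT // TILE)}")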

NVIDIA Announces the GeForce GTX 1080 Ti Graphics Card at $699

NVIDIA today unveiled the GeForce GTX 1080 Ti graphics card, its fastest consumer graphics card based on the "Pascal" GPU architecture, positioned to be more affordable than the flagship TITAN X Pascal at USD $699, with market availability from the first week of March 2017. Based on the same "GP102" silicon as the TITAN X Pascal, the GTX 1080 Ti is slightly cut down. While it features the same 3,584 CUDA cores as the TITAN X Pascal, the memory amount is lower, at 11 GB, over a slightly narrower 352-bit wide GDDR5X memory interface. This translates to 11 memory chips on the card. On the bright side, NVIDIA is using newer memory chips than the ones it deployed on the TITAN X Pascal, which run at 11 Gbps (GDDR5X-effective), so the memory bandwidth is 484 GB/s.

Besides the narrower 352-bit memory bus, the ROP count is lowered to 88 (from 96 on the TITAN X Pascal), while the TMU count is unchanged at 224. The GPU core is clocked at a boost frequency of up to 1.60 GHz, with the ability to overclock beyond the 2.00 GHz mark. It gets better: the GTX 1080 Ti features memory advancements not found on other "Pascal" based graphics cards, namely newer memory chips and an optimized memory interface running at 11 Gbps. NVIDIA's tiled rendering technology has also finally been announced publicly; a feature NVIDIA has kept quiet since the GeForce "Maxwell" architecture, it is one of the secret sauces that enable NVIDIA's lead.
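The quoted 484 GB/s figure follows directly from the bus width and per-pin data rate; a quick sanity check, assuming the standard formula of bus width in bytes multiplied by the data rate:

# Quick check of the quoted memory bandwidth figures.
# bandwidth (GB/s) = (bus width in bits / 8) * per-pin data rate in Gbps
def bandwidth_gbps(bus_width_bits: int, data_rate_gbps: float) -> float:
    return bus_width_bits / 8 * data_rate_gbps

print(bandwidth_gbps(352, 11.0))   # GTX 1080 Ti: 484.0 GB/s
print(bandwidth_gbps(384, 10.0))   # TITAN X Pascal: 480.0 GB/s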

NVIDIA Rumored to Launch the GTX 1050 M at CES 2017

New reports have surfaced indicating that NVIDIA is all set to launch a new mid-range mobile graphics solution, the GTX 1050 M, at CES 2017 in Las Vegas. While NVIDIA has already released mobile variants of the GTX 1080, GTX 1070, and GTX 1060, people not willing to spend big on higher-tier products from the green camp are limited to the "Maxwell" based GTX 960 M or GTX 950 M offerings. Reports also indicate there has been somewhat of a surge in demand for GTX 1060 M-equipped laptops, where the new GTX 1050 M could be nicely positioned to further augment consumer appetite.

As we reported in November, we can expect the new GTX 1050 M, in line with the existing "Pascal" based mobile solutions, to sport the same (or better) core configuration as its desktop counterpart, but with significantly higher clock speeds. This should make the GTX 1050 M an attractive option, as it would endow a laptop with graphical horsepower exceeding that of the very capable GTX 970 M. All in all, with new Intel "Kaby Lake" processors set to take the stage at CES 2017 too, we could see quite an array of new or reconfigured laptops scattered throughout the show.

NVIDIA Announces Xavier, Volta-based Autonomous Transportation SoC

At the inaugural European edition of its GPU Technology Conference (GTC), NVIDIA announced Xavier, an "AI supercomputer for the future of autonomous transportation." It is an evolution of the Drive PX2 board, which leverages a pair of "Maxwell" GPUs with some custom logic and an ARM CPU to provide cars with the compute power necessary to deep-learn their surroundings and self-drive or assist-drive; Xavier refines Drive PX2 by merging its three chips, two GPUs and one control logic chip, into a single SoC.

You'd think that NVIDIA had refined its deep-learning tech enough to not need a pair of "Maxwell" SoCs, but Xavier is more than that. The 7 billion-transistor chip, built on a 16 nm FinFET process, offers more raw compute performance thanks to NVIDIA's next-generation "Volta" architecture, one more advanced than even its current "Pascal" architecture. The chip features a "Volta" GPU with 512 CUDA cores. The CVA makes up the vehicle I/O, while an image processor capable of handling 8K HDR video streams feeds the chip with visual inputs from various cameras around the vehicle. An 8-core ARM CPU performs general-purpose compute. NVIDIA hopes to get the first engineering samples of Xavier out to interested car-makers by Q4 2017.

AMD GPUs See Lesser Performance Drop on "Deus Ex: Mankind Divided" DirectX 12

Deus Ex: Mankind Divided is the latest AAA title to support DirectX 12, with its developer Eidos deploying a DirectX 12 renderer weeks after the game's release, through a patch. Guru3D put the DirectX 12 version of the game through five GPU architectures, AMD "Polaris," GCN 1.1, GCN 1.2, NVIDIA "Pascal," and NVIDIA "Maxwell," using the Radeon RX 480, Radeon R9 Fury X, Radeon R9 390X, GeForce GTX 1080, GeForce GTX 1060, and GeForce GTX 980. The AMD GPUs were driven by Radeon Software Crimson Edition 16.9.1 drivers, and the NVIDIA cards by GeForce 372.70.

Looking at the graphs, when switching from DirectX 11 to DirectX 12 mode, AMD GPUs not only don't lose frame-rates, but in some cases even gain them. NVIDIA GPUs, on the other hand, lose frame-rates significantly. AMD GPUs tend to hold on to their frame-rates at 4K Ultra HD, gain marginally at 2560 x 1440, and gain further at 1080p. NVIDIA GPUs either barely hold on to their frame-rates or lose them significantly. AMD has claimed on multiple occasions that its Graphics Core Next architecture, combined with its purist approach to asynchronous compute, makes Radeon GPUs a better choice for DirectX 12 and Vulkan. Find more fascinating findings by Guru3D here.
More graphs follow.

NVIDIA Calls Intel's Bluff on Xeon Phi vs. GPU Benchmarks

NVIDIA accused Intel of cheating in its ISC 2016 presentation, particularly in a performance-related slide in which Intel claimed that its "Knights Landing" Xeon Phi HPC processor provides faster training of neural nets than NVIDIA "Maxwell" GPUs. In the slide, Intel claimed that a Xeon Phi HPC processor card is 2.3 times faster at training deep-learning neural nets, with 38 percent better scaling across nodes, than NVIDIA "Maxwell" GPUs. This triggered a swift response from the GPU maker, which has made significant investments in deep-learning technologies over the past three years.

NVIDIA argued that Intel is using the classic technique of running outdated benchmarks to test its neural net training speeds. The company says that if Intel used the latest version of the same benchmark (Caffe AlexNet), the "Maxwell" GPU would be found to be 30 percent faster than the Xeon Phi at training neural nets. NVIDIA also notes that "Maxwell" is only its previous-generation part, and that a "Pascal" based HPC processor would easily be 90 percent faster than the Xeon Phi. More importantly, NVIDIA notes that Intel compared 32 of its new Xeon Phi servers against four-year-old NVIDIA Kepler K20 servers used in ORNL's Titan supercomputer. The latest "Pascal" GPUs leverage NVLink to scale up to 128 GPUs, providing the fastest deep-learning solutions money can buy.

NVIDIA Intros the Quadro M2000 Graphics Card

NVIDIA announced the Quadro M2000 professional graphics card. Based on the 28 nm GM206 "Maxwell" silicon, the M2000 is a mid-range professional graphics offering by the company, and is energy-efficient. The card has the same core-configuration as the GeForce GTX 950, with 768 CUDA cores, 48 TMUs, 16 ROPs, and a 128-bit wide GDDR5 memory interface. 4 GB is the standard memory amount for this card.

The GPU is clocked at up to 1188 MHz, and the memory at 6.60 GHz (GDDR5-effective), yielding a memory bandwidth of 106 GB/s. The card relies on the PCI-Express slot for all its power; its typical board power draw is rated at 75 W. The reference-design board features a neat full-height PCB and a single-slot cooling solution. Display outputs include four DisplayPort 1.2 connectors. The M2000 brings some Quadro-exclusive software features to the table, including nView MultiDisplay and support for 5K and 8K displays (through multiple connectors). Expect this card to be priced around US $550.