News Posts matching "DirectX 12"


AMD Wants You to Choose Radeon RX 470 Over the GTX 1050 Ti, For Now

Hot on the heels of NVIDIA's GeForce GTX 1050 Ti launch, AMD fired off an elaborate press-deck explaining why consumers should choose its $169 Radeon RX 470 graphics card over the $139 GeForce GTX 1050 Ti, which NVIDIA announced last Tuesday (18/10) and which is due for market launch a week later (25/10). The presentation begins by explaining that the RX 470 is better equipped to deliver above 60 fps in all of today's games at 1080p (Full HD) resolution, with anti-aliasing enabled.

Later in the presentation, AMD alleges that NVIDIA's "Pascal" architecture lacks an asynchronous compute feature, which games already take advantage of. AMD also claims that its "Polaris" based GPUs, the RX 480, RX 470, and RX 460, will be faster than the competing GTX 1060, GTX 1050 Ti, and GTX 750 Ti in "Battlefield 1" with its DirectX 12 renderer. The presentation ends with a refresher of the company's current product-stack, and how it measures up to NVIDIA's offerings across the competitive landscape. It turns out there is indeed a big price/performance gap between the RX 460 and RX 470, just waiting to be filled.

AMD Actively Promoting Vulkan Beyond GPUOpen

Vulkan, the new-generation cross-platform 3D graphics API governed by the Khronos Group, the people behind OpenGL, is gaining in relevance, with Google making it the primary 3D graphics API for Android. AMD says it is actively promoting the API. Responding to a question from TechPowerUp at the company's recent Radeon Technologies Group (RTG) first-anniversary presser, RTG chief Raja Koduri confirmed that AMD is actively working with developers to add Vulkan to their productions, and to optimize them for Radeon GPUs. This, we believe, could be driven by several strategic reasons.

First, Vulkan works inherently better on AMD's Graphics Core Next GPU architecture because it was largely derived from Mantle, AMD's now-defunct 3D graphics API, which brought many of the "close-to-metal" API features that make game consoles performance-efficient over to the PC ecosystem. The proof of this pudding is the 2016 AAA reboot of the iconic first-person shooter "Doom," in which Radeon GPUs get significant performance boosts when switching from the default OpenGL renderer to Vulkan. These boosts aren't as pronounced on NVIDIA GPUs.

CryEngine to Support Vulkan This November, DX12 Multi-GPU Next February

Game developer CryTek is planning to implement Vulkan API support in its in-house game engine, CryEngine. With CryEngine 5.3, to be released in November 2016, CryTek will implement Vulkan across platforms, including PC (Windows desktop/notebook) and mobile (smartphones, tablets). The decision to implement Vulkan could have been fueled by Google's decision to make Vulkan the primary 3D graphics API of Android.

CryTek will also expand its support for DirectX 12, which it first implemented in March 2016. With CryEngine 5.4, scheduled for release in February 2017 on the sidelines of the Game Developers Conference (GDC), CryTek is introducing support for DirectX 12 native multi-GPU, deferred shading in sandboxed viewports, and multi-threaded rendering in the Sandbox editor.

Source: OC3D

AMD GPUs See Smaller Performance Drop in "Deus Ex: Mankind Divided" DirectX 12

Deus Ex: Mankind Divided is the latest AAA title to support DirectX 12, with its developer Eidos deploying a DirectX 12 renderer weeks after its release, through a patch. Guru3D put the DirectX 12 version of the game through five GPU architectures (AMD "Polaris," GCN 1.1, GCN 1.2, NVIDIA "Pascal," and NVIDIA "Maxwell"), represented by the Radeon RX 480, Radeon R9 Fury X, Radeon R9 390X, GeForce GTX 1080, GeForce GTX 1060, and GeForce GTX 980. The AMD GPUs were driven by Radeon Software Crimson Edition 16.9.1 drivers, and the NVIDIA GPUs by GeForce 372.70.

Looking at the graphs, AMD GPUs not only hold their frame-rates when switching from DirectX 11 to DirectX 12 mode, but in some cases even gain frame-rates. NVIDIA GPUs, on the other hand, lose frame-rates significantly. AMD GPUs tend to hold on to their frame-rates at 4K Ultra HD, gain marginally at 2560 x 1440, and gain further at 1080p. NVIDIA GPUs either barely hold on to their frame-rates, or lose them significantly. AMD has claimed on multiple occasions that its Graphics Core Next architecture, combined with its purist approach to asynchronous compute, makes Radeon GPUs a better choice for DirectX 12 and Vulkan. Find more fascinating findings by Guru3D here.

AMD Rolls Out the Radeon Software Crimson Edition 16.9.1

AMD today rolled out the Radeon Software Crimson Edition 16.9.1 Beta drivers. The drivers add optimization for the DirectX 12 renderer of "Deus Ex: Mankind Divided," and an AMD CrossFire profile for the DirectX 11 renderer of "DOTA 2." The drivers also address flickering on certain 144 Hz displays, and some rare game-crash bugs in "DOOM," "Total War: Warhammer," "DiRT Rally," and "GTA V." Grab the drivers from the links below.
DOWNLOAD: AMD Radeon Software Crimson Edition 16.9.1 for Windows 10 64-bit | Windows 10 32-bit | Windows 8.1 64-bit | Windows 8.1 32-bit | Windows 7 64-bit | Windows 7 32-bit

No DirectX 12 Support for "Deus Ex: Mankind Divided" at Launch

Eidos announced that its upcoming AAA title "Deus Ex: Mankind Divided," which was touted as one of the posterboys for DirectX 12 by GPU manufacturers, won't ship with DirectX 12 support at launch. The game will release on August 23, 2016, with a DirectX 11 renderer, while the DirectX 12 renderer will be added via a patch in the week of 5th September.

Eidos said that it delayed DirectX 12 support because it needs some "extra work" by the developers, followed by optimizations. "We have some extra work and optimizations to do for DX12, and we need more time to ensure we deliver a compelling experience," the release reads. "Deus Ex: Mankind Divided" releases for PC, Xbox One, and PlayStation 4 on the 23rd of August, and is the fifth entry in the smash-hit cyberpunk RPG franchise.

EVGA Hardware Breaks 3DMark "Time Spy" World Records

EVGA and extreme overclocker Vince "K|NGP|N" Lucido have done it again, this time in the new DirectX 12 benchmark from Futuremark: "3DMark Time Spy". This new benchmark pushes the limits of modern PC gaming technology and can be used as a gauge for future game performance and visuals.

Armed with liquid nitrogen, an EVGA X99 FTW K motherboard, and an EVGA GeForce GTX 1080 Classified overclocked to 2,594 MHz, K|NGP|N hit new performance heights, scoring 9,973 points, a new single-card world record. The new EVGA Precision XOC software was used to overclock the graphics card, and is one of the only tools that can display a DirectX 12 OSD, a must-have feature for overclockers.

Futuremark Releases 3DMark Time Spy DirectX 12 Benchmark

Futuremark released the latest addition to the 3DMark benchmark suite, the new "Time Spy" benchmark and stress-test. All existing 3DMark Basic and Advanced users get limited access to "Time Spy," while existing 3DMark Advanced users have the option of unlocking its full feature-set with an upgrade key priced at US $9.99. The price of 3DMark Advanced for new users has been revised from $24.99 to $29.99, as new 3DMark Advanced purchases include the fully-unlocked "Time Spy." Futuremark also announced limited-period offers that last until 23rd July, in which the "Time Spy" upgrade key for existing 3DMark Advanced users can be had for $4.99, and the 3DMark Advanced Edition (minus "Time Spy") for $9.99.

Futuremark's 3DMark "Time Spy" was developed with inputs from AMD, NVIDIA, Intel, and Microsoft, and takes advantage of the new DirectX 12 API. For this reason, the test requires Windows 10. The test dramatically increases the 3D processing load over "Fire Strike" by leveraging the low-overhead API features of DirectX 12 to present a graphically intense 3D test scene that can make any gaming/enthusiast PC of today break a sweat. It can also make use of several beyond-4K display resolutions.

DOWNLOAD: 3DMark with TimeSpy v2.1.2852

AMD Partners with Firaxis on Civilization VI Development

AMD today announced that it is partnering with Firaxis and 2K Games on the development of "Civilization VI," the next addition to the smash-hit turn-based strategy franchise. AMD announced that it is helping the developer take advantage of DirectX 12, including API-standardized asynchronous compute and explicit multi-adapter. This will let the game use setups that mix discrete and integrated GPUs, as well as mixed multi-GPU configurations. The game is also said to feature a benchmark tool.

Source: AMD

Microsoft Refines DirectX 12 Multi-GPU with Simple Abstraction Layer

Microsoft is sparing no effort in promoting DirectX 12 native multi-GPU as the go-to multi-GPU solution for game developers, obsoleting proprietary technologies like SLI and CrossFire. The company recently announced that it is making it easier for game developers to code their games to take advantage of multiple GPUs, with less coding than they need now. This involves a new abstraction layer that simplifies the process of pooling multiple GPUs in a system, letting developers bypass the complexity of the Explicit Multi-Adapter (EMA) mode of graphics cards.

This is the first major step by Microsoft since its announcement that DirectX 12, in theory, supports true mixed multi-adapter configurations. The company stated that it will release the new abstraction layer as part of a comprehensive framework on its GitHub repository, with two sample projects: one that takes advantage of the new multi-GPU tech, and one without. Exposed to this code, game developers will face a significantly reduced learning curve, and will have a template for implementing multi-GPU in their DirectX 12 projects with minimal effort. With this, Microsoft is supporting game developers in implementing API-native multi-GPU, even as GPU manufacturers have stated that while their GPUs will support EMA, the onus will be on game developers to keep their games optimized.
Source: GitHub

Shuttle Releases Five 39mm Thin Fanless PCs

Shuttle's popular series of robust, fanless, steel-chassis Mini-PCs has been given a subtle makeover. The sixth generation of Intel Core processors ("Skylake") is now used in five different model versions. This update brings 4K support, ac-WLAN, and an M.2 slot.

As with their predecessors, the five new models measure just 20 x 16.5 x 3.95 cm (DWH) and have 2x Intel Gigabit Ethernet, two serial interfaces, HDMI and DisplayPort, remote Power-ON, and an Always-ON jumper. New features include ac-WLAN with external antennae, and support for DirectX 12 and 4K screens in all models. Up to 32 GB of DDR3L memory can now be installed, and the USB 3.0 ports have moved to the front panel. Each model comes supplied with a VESA mount that enables it to be attached to a suitable monitor or surface. At just 10 W, power consumption is as miserly as before.

Futuremark Teases 3DMark "Time Spy" DirectX 12 Benchmark

Futuremark teased its first benchmark for DirectX 12 graphics, the 3DMark "Time Spy." Likely marketed as an add-on to the 3DMark (2013) suite, "Time Spy" tests DirectX 12 features in a silicon-scorching 3D scene that's rich in geometric, textural, and visual detail. The benchmark is also ready for new generation displays including high resolutions beyond 4K Ultra HD. Existing users of 3DMark get "Basic" access to "Time Spy" when it comes out, with the option to purchase its "Advanced" and "Professional" modes.

Under the hood, "Time Spy" takes advantage of Direct3D feature-level 12_0, including Asynchronous Compute, heavily multi-threaded CPUs (which can make use of as many CPU cores as you can throw at it), and DirectX explicit multi-adapter (native multi-GPU, including mixed setups). Futuremark stated that the benchmark was developed with inputs from AMD, Intel, NVIDIA, Microsoft, and other partners of the Futuremark Benchmark Development Program.

Microsoft XBOX Scorpio SoC Powered by "Polaris" and "Zen"

It looks like Microsoft will overpower Sony in the next round of the console wars, with a more powerful SoC on paper. The new XBOX "Scorpio" 4K Ultra HD game console will feature a custom-design SoC by AMD, combining not just a GPU based on the "Polaris" architecture, but also a CPU based on the "Zen" microarchitecture. This is significant because it marks a departure from the eight smaller "Jaguar" CPU cores, upshifting to stronger "Zen" ones. The chip could be built on the 14 nm process.

The SoC powering the XBOX Scorpio could feature a CPU component with eight "Zen" cores, with SMT enabling 16 logical CPUs, and a "Polaris" GPU with 6 TFLOP/s of compute power. The combined compute power is expected to be close to 10 TFLOP/s. The Radeon RX 480, for instance, features 5.84 TFLOP/s at its rated clock speed. The CPU and GPU will likely share a common memory interface, belting out a memory bandwidth of 320 GB/s. The silicon muscle of this console should power 4K Ultra HD, 1080p @ 60 Hz HDR, and "good VR" solutions such as the Oculus Rift and HTC Vive. Games for the console could leverage DirectX 12.
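As a sanity check on these numbers, peak single-precision throughput is conventionally estimated as shader count times two FLOPs (one fused multiply-add) per clock, times clock speed. The sketch below assumes the RX 480's public figures of 2,304 stream processors and a 1,266 MHz boost clock (neither is stated in this article), and reproduces the quoted 5.84 TFLOP/s to within rounding:

```python
def fp32_tflops(shaders: int, clock_mhz: float) -> float:
    """Peak FP32 throughput: each shader retires one fused
    multiply-add (2 FLOPs) per clock cycle."""
    return shaders * 2 * clock_mhz * 1e6 / 1e12

# Radeon RX 480: 2,304 stream processors at a 1,266 MHz boost clock
print(round(fp32_tflops(2304, 1266), 2))  # 5.83
```

The same formula, applied in reverse, suggests the 6 TFLOP/s Scorpio GPU would need roughly 2,560 shaders at ~1.17 GHz, or fewer shaders at a higher clock.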

Source: TweakTown

NVIDIA to Focus on 2-way SLI with GeForce "Pascal"

At its GeForce GTX 1080 launch event, NVIDIA is said to have told the press that with its GeForce "Pascal" series, the company will focus on 2-way SLI "for maximum performance," implying an uncertain future for 3-way and 4-way SLI. The company's new SLI HB bridge introduced with the GTX 1080, enables higher bandwidth between two graphics cards in SLI, letting them more reliably render games at high resolutions. On the downside, this new bridge occupies both SLI contact points on each card, in 2-way SLI.

It might still be possible to do 3-way and 4-way SLI using a classic 3-way or 4-way bridge included with your motherboard. You'd be at the mercy of applications being able to take advantage of 3-4 GPUs, though; NVIDIA, on its part, will likely only optimize its drivers for 2-way SLI. The knight in shining armour here is DirectX 12 native multi-GPU, which doesn't care how many GPUs you're using, or even whether they're the same kind (as long as the GPUs and the app support Direct3D 12).
Source: TechofTomorrow (YouTube)

No Takers for VR: TechPowerUp New GPU Survey

The latest TechPowerUp front-page survey turns up some interesting findings on what our readers are most looking forward to in the upcoming GPUs. Timed ahead of market availability of new GPUs from both NVIDIA and AMD, the poll gains relevance. At the time of writing, we had received 4,650 votes over a week-long period, which amounts to a reasonable sample size. Some of the findings were surprising.

An overwhelming 60 percent of respondents find price/performance most important. Interestingly, only 7 percent find efficiency/noise important. The second most popular choice, at 14 percent, was "4K playability" (the ability of the GPU to play games at 4K Ultra HD resolution at playable frame-rates). Our readers are seven times more likely to invest in a 4K Ultra HD monitor (which starts at $300 if you look in the right places) than to pick up a VR (virtual reality) headset; a negligible 2 percent of our readers find VR most important.

NVIDIA GeForce GTX 1080 Specifications Released

After launching its shockingly fast (claimed) GeForce GTX 1080 and GTX 1070 graphics cards, NVIDIA posted specifications of the former. The two are based on NVIDIA's swanky new 16 nm "GP104" silicon, derived from its "Pascal" GPU architecture. The architecture is detailed in our older article, here. The GeForce GTX 1080 leads the pack, featuring four graphics processing clusters holding 2,560 CUDA cores. The core runs at a scorching 1,607 MHz, with a GPU Boost frequency of 1,733 MHz. In one of its demos, NVIDIA overclocked this chip to over 2,100 MHz on its reference air cooling, and the GPU barely touched 67 °C under stress. The GTX 1080 features a 256-bit wide GDDR5X memory interface, holding 8 GB of memory. The memory is clocked at 2,500 MHz (10 GHz effective), working out to a memory bandwidth of 320 GB/s.
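The 320 GB/s figure follows directly from the bus width and the effective per-pin data rate (GDDR5X transfers four bits per pin per command clock, hence 2,500 MHz appearing as "10 GHz effective"). A minimal sketch of the arithmetic:

```python
def mem_bandwidth_gbs(bus_width_bits: int, data_rate_gbps: float) -> float:
    """Peak memory bandwidth: bus width in bytes times per-pin data rate."""
    return bus_width_bits / 8 * data_rate_gbps

# GTX 1080: 256-bit bus, GDDR5X at 10 Gbps per pin (2,500 MHz quad data rate)
print(mem_bandwidth_gbs(256, 10.0))  # 320.0 GB/s
```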

API support includes DirectX 12 (feature-level 12_1), OpenGL 4.5, and Vulkan. Display outputs include three DisplayPort 1.4 connectors, one HDMI 2.0b, and one dual-link DVI. The reference-design card is 10.5 inches long, and double-slot. It draws power from a single 8-pin PCIe power connector, and its typical board power is rated at 180 W. With the GeForce "Pascal" family, instead of caving in to DirectX 12 native multi-GPU, NVIDIA developed its SLI technology further, with the new SLI HB (high-bandwidth) bridge standard. It's essentially a 2-way bridge in which both SLI fingers of each card are used. This doubles bandwidth between the two cards, allowing higher display resolutions and multi-display setups with high-resolution monitors. The GeForce GTX 1080 will be available from May 27, 2016, starting at US $599. The $379 GTX 1070's specifications will be revealed closer to its June 10, 2016 market availability.

Hitman DirectX 12 Support Restored with Latest Patch

IO Interactive today released a 130 MB patch for the PC version of Hitman (2016), through Steam. The patch fixes DirectX 12 API support, which was broken by the recent April 26th update that installed the game's second episode, "Sapienza, Italy." After the patch, Hitman appears to work in DirectX 12 mode, in both its in-game benchmark and the game itself, on both our GeForce GTX 970 SLI (365.10 drivers) and Radeon R9 290 (16.4.2 drivers) machines.

Hitman Episode Two Update Breaks DirectX 12 Support

The latest update for the episodic "Hitman" game (version 1.1.2) breaks support for DirectX 12 on the PC platform. Released earlier this week, the update introduces players to the second episode, based in the Mediterranean setting of Sapienza. We found that after the update, the game simply refused to start. We took advantage of the external settings tool to do some trial-and-error work.

The topmost setting, which lets you select between DirectX 12 (our original setting) and DirectX 11, fixed the issue when we switched to the latter. We even swapped graphics cards between GeForce GTX 970 SLI (our original setup), a single GTX 970 (NVIDIA driver 364.96), and a Radeon R9 290 (AMD driver 16.4.2). None of the setups worked in DirectX 12 mode. A stroll through the Internet reveals that multiple other players are facing this issue. Has Square Enix done absolutely no play-testing in DirectX 12?

EVGA Announces the GeForce GTX 950 Low Power Graphics Card

The EVGA GeForce GTX 950 features a true gaming GPU designed for every PC gamer. It's built to the exact specifications of the GeForce GTX family and powered by NVIDIA Maxwell - the most advanced GPU architecture ever created - to deliver 3x the performance of previous-generation cards. More powerful than any console, this card delivers a truly interactive, cinematic experience in the latest games with advanced effects driven by NVIDIA GameWorks and DirectX 12.

The EVGA GeForce GTX 950 is now available in special low-power models that retain all of the performance. In fact, several of these models do not even have a 6-pin power connector. Learn more at this page.

AMD Releases Radeon Software Crimson Edition 16.4.1

AMD released its latest version of the Radeon Software Crimson Edition drivers. Version 16.4.1 comes with optimization for "Quantum Break," with up to 35 percent higher performance seen on a machine running the Radeon R9 Fury X, compared to the previous 16.3.2 drivers. It also provides software support for the two hottest VR headsets, the Oculus Rift and HTC Vive. The drivers also address a bug in "Hitman" (2016), which causes the game to flicker when shadow quality is set to "high" in DirectX 11 mode. Most importantly, the drivers fix a frame-rate capping issue noticed in some DirectX 12 applications.
DOWNLOAD: AMD Radeon Software Crimson Edition 16.4.1 for Windows 10/8.1/7 64-bit | Windows 10/8.1/7 32-bit

AMD Announces Exciting DirectX 12 Game Engine Developer Partnerships

AMD today once again took the pole position in the DirectX 12 era with an impressive roster of state-of-the-art DirectX 12 games and engines, each with extensive tuning for the Graphics Core Next (GCN) architecture at the heart of modern Radeon GPUs.

"DirectX 12 is poised to transform the world of PC gaming, and Radeon GPUs are central to the experience of developing and enjoying great content," said Roy Taylor, corporate vice president, Content and Alliances, AMD. "With a definitive range of industry partnerships for exhilarating content, plus an indisputable record of winning framerates, Radeon GPUs are an end-to-end solution for consumers who deserve the latest and greatest in DirectX 12 gaming."

Microsoft Details Shader Model 6.0

Microsoft is giving final touches to Shader Model 6.0, an update to a key component of its Direct3D API. It succeeds Shader Model 5.0, which has remained largely unchanged since the introduction of DirectX 11.0 in 2009. Shader Model 6.0 provides a more optimized pathway for shader code to make its way to the metal (the GPU hardware). The outgoing Shader Model 5.0, featured in both DirectX 11 and DirectX 12, relies on FXC, an offline shader compiler, to both compile and optimize HLSL shader code, supporting HLSL v1.4 to v5.1.

Shader Model 6.0, on the other hand, delegates compilation to a Clang-based HLSL compiler, and optimization to multiple LLVM passes. Since Shader Model 6.0 supports HLSL code from v5.0 upwards, it should also benefit existing DirectX 11 and DirectX 12 apps, while relegating older apps to the then-legacy Shader Model 5.0 pathway. In addition, Shader Model 6.0 claims to provide the performance needed to cope with API-level features such as tiled resources (mega-textures). It remains to be seen how Microsoft deploys Shader Model 6.0.

Source: PCGH

NVIDIA Unveils the Quadro M6000 24GB Graphics Card

NVIDIA announced the Quadro M6000, its new high-end workstation single-GPU graphics card. Based on the GM200 silicon, and leveraging the "Maxwell" GPU architecture, the M6000 maxes out all the hardware features of the chip, featuring 3,072 CUDA cores, 192 TMUs, 96 ROPs, and a 384-bit wide GDDR5 memory interface, holding 24 GB of memory, double that of the GeForce GTX TITAN X. Its peak single-precision floating point performance is rated at 7 TFLOP/s.

Where the M6000 differs from the GTX TITAN X is in its workstation-grade features. It drops the HDMI 2.0 connector for a total of four DisplayPort 1.2 connectors, supporting a total of four 4K Ultra HD displays. The dual-link DVI connector stays on. There's also an optional stereoscopic 3D connector. The nView multi-display tech provides more flexible display-head configurations than the ones you find on NVIDIA's consumer GPUs; you also get NVIDIA GPUDirect support, which gives better memory-sharing access for multi-GPU systems. The M6000 supports most modern 3D APIs, such as DirectX 12, OpenGL 4.5, and Vulkan, with compute capabilities over CUDA, OpenCL, and DirectCompute. NVIDIA didn't reveal pricing.

AMD "Polaris" Based Graphics Card Display Connector Layout Pictured

At its Capsaicin event, AMD demoed one of the first DirectX 12 AAA games, Hitman (2016), on a machine running its next-generation GPU. This GPU is based on its "Polaris" architecture, and while the press got no specs to take home, they did catch a glimpse of the card's rear I/O panel. The display connector layout is unlike anything AMD has at the moment on its reference-design cards, and looks similar to what NVIDIA offers on some of its high-end graphics cards.

The connector layout sees four connectors on the first slot (at the same level as the card's PCB), which includes three DisplayPorts and one HDMI port; and one DVI connector on the second slot, with an air-vent next to it. The "Polaris" architecture supports the DisplayPort 1.3 and HDMI 2.0a standards. DisplayPort 1.3 serves up enough bandwidth for 5K (5120 x 2880 pixels) at 60 Hz, 4K-HDR (3840 x 2160 with HDR) at 60 Hz, or 4K-SDR at 120 Hz. The SFF desktop setup running this card was able to render Hitman at Ultra settings and 2560 x 1440 resolution, at over 60 fps.
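As a rough back-of-the-envelope check, those modes do fit within DisplayPort 1.3's payload bandwidth. The sketch below assumes HBR3 link rates (4 lanes at 8.1 Gbit/s, minus 8b/10b encoding overhead) and 24-bit color, and ignores blanking-interval overhead, so it slightly understates the real requirement:

```python
def pixel_rate_gbps(width: int, height: int, hz: int, bpp: int = 24) -> float:
    """Raw pixel data rate in Gbit/s, ignoring blanking intervals."""
    return width * height * hz * bpp / 1e9

# DisplayPort 1.3 HBR3: 4 lanes x 8.1 Gbit/s, with 8b/10b encoding overhead
dp13_payload = 4 * 8.1 * 8 / 10  # 25.92 Gbit/s of usable bandwidth

for name, w, h, hz in [("5K @ 60 Hz", 5120, 2880, 60),
                       ("4K-SDR @ 120 Hz", 3840, 2160, 120)]:
    print(name, pixel_rate_gbps(w, h, hz) < dp13_payload)  # True for both
```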

Source: VideoCardz

AMD Radeon Fury X2 Reference Air Cooled?

AMD, which has been timing its upcoming dual-GPU "Fiji" graphics card to launch sometime this year, may have demoed a production version of the card in launch partner Falcon Northwest's Tiki high-end gaming desktop, as a "VR developer box." AMD's Roy Taylor, in a recent tweet, captioned a picture of this dev box as "the world's best DirectX 12 VR developer box," leading the press to speculate that it's running the company's dual-GPU "Fiji" card.

A close look at AMD's VR dev box, through its windowed graphics card compartment, reveals an air-cooled AMD reference graphics card, which VideoCardz' trigonometry pins as shorter than a Radeon R9 390X reference board. It could be a reference R9 380X, but a reference dual-GPU "Fiji" PCB is roughly the same length, and an R9 380X wouldn't earn the title of "world's best" from a senior AMD exec while faster AMD cards, such as the R9 Fury, exist. The ability of the full-spec "Fiji" silicon to cope well with the rather simple air-cooler of the R9 Nano fans even more speculation that a dual-GPU "Fiji" board could make do with a powerful air-channel cooler.
Sources: VideoCardz, TweakTown