News Posts matching #Raytracing


Khronos Group Releases Vulkan SDK, Drivers With Official Raytracing Support; Showcases Wolfenstein: Youngblood

Today, The Khronos Group, an open consortium of industry-leading companies creating advanced interoperability standards, announced that LunarG has released the Vulkan Software Development Kit (SDK) version 1.2.162.0, with full support for the new Vulkan raytracing extensions, including Validation Layers and integration of upgraded GLSL, HLSL and SPIR-V shader tool chains. The Khronos open source Vulkan Samples and Vulkan Guide have been upgraded to illustrate raytracing techniques. Finally, with production drivers shipping from both AMD and NVIDIA, developers can now easily integrate Vulkan raytracing into their applications.

Khronos released final Vulkan raytracing extensions in November 2020 to seamlessly integrate raytracing functionality alongside Vulkan's rasterization framework, making Vulkan the industry's first open, cross-vendor, cross-platform standard for raytracing acceleration. Vulkan raytracing can be deployed using existing GPU compute or dedicated raytracing cores. The Vulkan SDK now integrates all the components necessary for developers to easily use the new raytracing extensions, such as new shader tool chains, without needing them to be built from multiple repositories, and supports raytracing validation within the SDK validation layers.

NVIDIA Brings DLSS Support To Four New Games

Artificial intelligence is revolutionizing gaming - from in-game physics and animation simulation, to real-time rendering and AI-assisted broadcasting features. And NVIDIA is at the forefront of this field, bringing gamers, scientists and creators incredible advancements. With Deep Learning Super Sampling (DLSS), NVIDIA set out to redefine real-time rendering through AI-based super resolution - rendering fewer pixels, then using AI to construct sharp, higher resolution images, giving gamers previously unheard-of performance gains.
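The "rendering fewer pixels" claim is easy to put numbers on. The sketch below assumes, purely for illustration, a 1440p internal render upscaled to 4K (NVIDIA does not disclose DLSS's internal resolutions here), and computes how much per-frame shading work such an approach saves:

```python
# Back-of-envelope arithmetic behind AI super resolution: render
# internally at a lower resolution, then upscale to the target.
# The 1440p -> 4K pairing is an illustrative assumption, not
# NVIDIA's documented DLSS internals.

def pixel_savings(render_res, target_res):
    """Fraction of per-frame shading work saved by rendering at
    render_res and upscaling to target_res; both are (w, h) tuples."""
    rendered = render_res[0] * render_res[1]
    target = target_res[0] * target_res[1]
    return 1 - rendered / target

saving = pixel_savings((2560, 1440), (3840, 2160))
print(f"Shaded pixels saved: {saving:.0%}")  # -> Shaded pixels saved: 56%
```

The upscaler then has to reconstruct the missing detail, which is where the Tensor Core inference cost comes in; the net gain is the difference between the two.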

Powered by dedicated AI processors on GeForce RTX GPUs called Tensor Cores, DLSS has accelerated performance in more than 25 games to date, boosting frame rates significantly, ensuring GeForce RTX gamers receive high-performance gameplay at the highest resolutions and detail settings, and when using immersive ray-traced effects. And now, NVIDIA has delivered four new DLSS titles for gamers to enjoy.

AMD Radeon RX 6800 XT Raytracing Performance Leaked

It's only tomorrow that reviewers will take the lids off AMD's latest and greatest Navi-powered graphics cards, but it's hard to keep a secret such as this... well... secret. Case in point: Videocardz has accessed some leaked slides from the presentation AMD has given to its partners, and these shed some light on what raytracing performance users can expect from AMD's RX 6800 XT, the card that's meant to bring the fight to NVIDIA's RTX 3080 graphics card. AMD's RDNA2 features support for hardware-accelerated raytracing from the get-go, with every CU receiving one additional hardware block: a Ray Accelerator. As such, the RX 6800 XT, with its 72 enabled CUs, features 72 Ray Accelerators; the RX 6800, with its 60 CUs, features 60 of these Ray Accelerators.

The RX 6800 XT was tested in five titles: Battlefield V, Call of Duty MW, Crysis Remastered, Metro Exodus and Shadow of the Tomb Raider. At 1440p resolution with Ultra settings and each game's DXR options enabled, AMD claims an RX 6800 XT paired with their Ryzen 9 5900X can deliver an average of 70 FPS in Battlefield V; 95 FPS in Call of Duty MW; 90 FPS in Crysis Remastered; 67 FPS in Metro Exodus; and 82 FPS in Shadow of the Tomb Raider. These results are, obviously, not comparable to our own results in previous NVIDIA RTX reviews; there are just too many variables in the system to make that a worthwhile comparison. You'll just have to wait for our own review on our normalized test bench to see exactly where AMD's latest stands against NVIDIA.

AMD, Blizzard Showcase World of Warcraft: Shadowlands DXR

As part of the road towards the release of its Radeon RX 6000 series, AMD has posted a video showcasing the raytracing effects that are being baked into World of Warcraft: Shadowlands. This comes as a result of a strategic partnership between the two companies. World of Warcraft: Shadowlands will be making use of AMD's FidelityFX Ambient Occlusion, where Blizzard says they were able to achieve "(...)a perfect balance between quality and performance..." which allowed them to achieve "(...)a significant performance advantage over our previous ambient occlusion applications."

World of Warcraft: Shadowlands will also be making use of DXR raytracing technology as well as Variable Rate Shading (VRS). Raytracing is being used to calculate light interactions between light sources, objects and characters on the screen, while VRS will enable the game to reduce shading resolution in areas closer to the corners of the frame, or on fast-moving objects, where detail would be lost either way, to achieve higher frame rates. The higher the resolution, the more impactful the benefits of VRS. So it seems that Blizzard has decided to implement two performance-increasing features and one performance-decreasing feature from the DirectX 12 feature set. Catch the video explaining these features and showcasing their implementation after the break.
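The arithmetic behind VRS is simple to sketch. The toy model below assumes, purely for illustration, that 40% of the frame shades at a coarse 2x2 rate (one pixel-shader invocation per four pixels); neither number comes from Blizzard or the DirectX 12 VRS API. It also shows why higher resolutions benefit more in absolute terms:

```python
# Toy model of variable-rate shading: part of the frame shades at 1x1
# (one invocation per pixel), the rest at 2x2 (one invocation per
# four pixels). The 40% coarse fraction is an illustrative assumption.

def shader_invocations(width, height, coarse_fraction=0.4):
    """Pixel-shader invocations per frame with `coarse_fraction`
    of pixels shaded at 2x2 rate."""
    total = width * height
    fine = total * (1 - coarse_fraction)   # 1 invocation per pixel
    coarse = total * coarse_fraction / 4   # 1 invocation per 2x2 quad
    return int(fine + coarse)

for w, h in [(1920, 1080), (3840, 2160)]:
    saved = w * h - shader_invocations(w, h)
    print(f"{w}x{h}: {saved:,} invocations saved per frame")
```

At these assumed settings the relative saving is the same 30% at any resolution, but the absolute number of skipped shader invocations at 4K is four times that at 1080p, which is why the article notes VRS matters more as resolution climbs.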

UL Benchmarks Updates 3DMark with Ray-Tracing Feature Test

The launch of AMD Radeon RX 6000 Series graphics cards on November 18 will end NVIDIA's monopoly on real-time raytracing. For the first time, gamers will have a choice of GPU vendors when buying a raytracing-capable graphics card. Today, we're releasing a new 3DMark feature test that measures pure raytracing performance. You can use the 3DMark DirectX Raytracing feature test to compare the performance of the dedicated raytracing hardware in the latest graphics cards from AMD and NVIDIA.

Real-time raytracing is incredibly demanding. The latest graphics cards have dedicated hardware that's optimized for raytracing operations. Despite the advances in GPU performance, the demands are still too high for a game to rely on raytracing alone. That's why games use raytracing to complement traditional rendering techniques. The 3DMark DirectX Raytracing feature test is designed to make raytracing performance the limiting factor. Instead of relying on traditional rendering, the whole scene is ray-traced and drawn in one pass.
DOWNLOAD: 3DMark v2.15.7078

Microsoft: Only Consoles Supporting Full RDNA 2 Capabilities Are Xbox Series X and Series S, Excludes PlayStation 5

Microsoft has today published another article on its Xbox Wire blog, dedicated to all the news regarding the Xbox consoles and its ecosystem. In light of yesterday's launch of AMD Radeon RDNA 2 graphics cards, Microsoft has congratulated its partner and provider of the SoCs for its next-generation consoles. Besides the celebrations and congratulations, Microsoft proceeded to show off what the Xbox Series X and Series S consoles are capable of, and how they integrate the RDNA 2 architecture. The company notes that hardware-accelerated DirectX Raytracing, Mesh Shaders, Sampler Feedback, and Variable Rate Shading are all built in, so game developers can take advantage of them.

Another interesting point Microsoft made was that "Xbox Series X|S are the only next-generation consoles with full hardware support for all the RDNA 2 capabilities AMD showcased today." What this translates into is that Microsoft is the only console maker using RDNA 2's full potential. This could leave Sony out in the cold with its PlayStation 5 console, implying that it does not support all the features of AMD's new GPU architecture. Microsoft offered no specifics, however, so we will have to wait and see what Sony has left out, if anything.

Microsoft Rolls Out DirectX 12 Feature-level 12_2: Turing and RDNA2 Support it

Microsoft on Thursday rolled out the DirectX 12 feature-level 12_2 specification. This adds a set of new API-level features to DirectX 12 feature-level 12_1. It's important to understand that 12_2 is not DirectX 12 Ultimate, even though Microsoft explains in its developer blog that the four key features that make up DirectX 12 Ultimate logo requirements were important enough to be bundled into a new feature-level. At the same time, Ultimate isn't feature-level 12_1, either. The DirectX 12 Ultimate logo requirement consists of DirectX Raytracing, Mesh Shaders, Sampler Feedback, and Variable Rate Shading. These four, combined with an assortment of new features make up feature-level 12_2.

Among the updates introduced with feature-level 12_2 are DXR 1.1, Shader Model 6.5, Variable Rate Shading tier-2, Resource Binding tier-3, Tiled Resources tier-3, Conservative Rasterization tier-3, Root Signature tier-1.1, WriteBufferImmediateSupportFlags, GPU Virtual Address Bits resource expansion, among several other Direct3D raster rendering features. Feature-level 12_2 requires a WDDM 2.0 driver, and a compatible GPU. Currently, NVIDIA's "Turing" based GeForce RTX 20-series are the only GPUs capable of feature-level 12_2. Microsoft announced that AMD's upcoming RDNA2 architecture supports 12_2, too. NVIDIA's upcoming "Ampere" (RTX 20-series successors) may support it, too.

AMD RDNA 2 GPUs to Support the DirectX 12 Ultimate API

AMD today announced in the form of a blog post that its upcoming graphics cards based on RDNA 2 architecture will feature support for Microsoft's latest DirectX 12 Ultimate API. "With this architecture powering both the next generation of AMD Radeon graphics cards and the forthcoming Xbox Series X gaming console, we've been working very closely with Microsoft to help move gaming graphics to a new level of photorealism and smoothness thanks to the four key DirectX 12 Ultimate graphics features -- DirectX Raytracing (DXR), Variable Rate Shading (VRS), Mesh Shaders, and Sampler Feedback." - said AMD in the blog.

Reportedly, Microsoft and AMD have worked closely to enable this feature set and provide the best possible support for RDNA 2 based hardware, meaning that future GPUs and consoles are getting the best possible integration of the new API standard.
AMD RDNA 2 supports DirectX12 Ultimate

Sony Reveals PS5 Hardware: RDNA2 Raytracing, 16 GB GDDR6, 6 GB/s SSD, 2304 GPU Cores

Sony in a YouTube stream keynote by PlayStation 5 lead system architect Mark Cerny, detailed the upcoming entertainment system's hardware. There are three key areas where the company has invested heavily in driving forward the platform by "balancing revolutionary and evolutionary" technologies. A key design focus with PlayStation 5 is storage. Cerny elaborated on how past generations of the PlayStation guided game developers' art direction: the low bandwidths and high latencies of optical discs and HDDs, crippled by mechanical seeks, delivered real-world transfer rates far below what the media is capable of in the best case (reading a contiguous block of data from the outermost sectors). An SSD was the #1 most requested hardware feature by game developers during the development of PS5, and Sony responded with something special.

Each PlayStation 5 ships with a PCI-Express 4.0 x4 SSD with a flash controller that has been designed in-house by Sony. The controller features 12 flash channels, and is capable of at least 5.5 GB/s transfer speeds. When you factor in the exponential gains in access time, Sony expects the SSD to provide a 100x boost in effective storage sub-system performance, resulting in practically no load times.
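Sony's two figures allow a quick back-of-envelope split, assuming (purely for illustration) that the quoted bandwidth divides evenly across the flash channels:

```python
# Illustrative split of the PS5 SSD's quoted bandwidth across its
# 12 flash channels; the even per-channel share is an assumption,
# not a figure from Sony.
channels = 12
total_gb_s = 5.5
per_channel_mb_s = total_gb_s * 1000 / channels
print(f"~{per_channel_mb_s:.0f} MB/s per channel")  # -> ~458 MB/s per channel
```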

Sony's Mark Cerny to Detail PS5 Architecture March 18th

Sony has announced via Twitter that their lead system architect Mark Cerny will "provide a deep dive into PS5's system architecture, and how it will shape the future of games" tomorrow. This is likely the start of Sony's marketing campaign for the release of the PS5, which is due out in Holiday 2020.

The Japanese company has remained puzzlingly tight-lipped regarding their next-gen games console, which is a far cry from Microsoft's position, who have been releasing details and teasing their next-gen Xbox Series X system for a while now. It remains to be seen how Sony's system will differ from Microsoft's Xbox Series X, since most specs are rumored to be close on both consoles. The underlying Zen 2 architecture for the CPUs is confirmed in both consoles, and so should the fabrication process and RDNA2-based graphics with dedicated ray tracing hardware. It remains to be seen how the companies will aim to differentiate their offerings.

AMD Financial Analyst Day 2020 Live Blog

AMD Financial Analyst Day presents an opportunity for AMD to talk straight with the finance industry about the company's current financial health, and a taste of what's to come. Guidance and product teasers made during this time are usually very accurate due to the nature of the audience. In this live blog, we will post information from the Financial Analyst Day 2020 as it unfolds.
20:59 UTC: The event has started as of 1 PM PST. CEO Dr Lisa Su takes stage.

Ray Tracing and Variable-Rate Shading Design Goals for AMD RDNA2

Hardware-accelerated ray tracing and variable-rate shading will be the design focal points for AMD's next-generation RDNA2 graphics architecture. Microsoft's reveal of its Xbox Series X console attributed both features to AMD's "next generation RDNA" architecture (which logically happens to be RDNA2). The Xbox Series X uses a semi-custom SoC that features CPU cores based on the "Zen 2" microarchitecture and a GPU based on RDNA2. It's highly likely that the SoC will be fabricated on TSMC's 7 nm EUV node, as the RDNA2 graphics architecture is optimized for it. This would mean an optical shrink of "Zen 2" to 7 nm EUV. Besides the SoC that powers Xbox Series X, AMD is expected to leverage 7 nm EUV for its RDNA2 discrete GPUs and CPU chiplets based on its "Zen 3" microarchitecture in 2020.

Variable-rate shading (VRS) is an API-level feature that lets GPUs conserve resources by shading certain areas of a scene at a lower rate than others, without perceptible difference to the viewer. Microsoft developed two tiers of VRS for its DirectX 12 API: tier-1 is currently supported by the NVIDIA "Turing" and Intel Gen11 architectures, while tier-2 is supported only by "Turing." The current RDNA architecture supports neither tier. Hardware-accelerated ray-tracing is the cornerstone of NVIDIA's "Turing" RTX 20-series graphics cards, and AMD is catching up to it. Microsoft already standardized it on the software side with the DXR (DirectX Raytracing) API. A combination of VRS and dynamic render-resolution will be crucial for next-gen consoles to achieve playability at 4K, and to even boast of being 8K-capable.

Crytek Releases Hardware-Agnostic Raytracing Benchmark "Neon Noir"

Crytek today released the final build for their hardware-agnostic raytracing benchmark. Dubbed Neon Noir, the benchmark had already been showcased in video form back in March 2019, but now it's finally available for download for all interested parties from the Crytek Marketplace. The benchmark currently doesn't support any low-level API such as Vulkan or DX 12, but support for those - and the expected performance improvements - will be implemented in the future.

Neon Noir gets its raytracing chops via an extension of CRYENGINE's SVOGI rendering tool, which Crytek's current games (including Hunt: Showdown) already use; this will make it easier for developers to explore raytracing implementations that don't require a particular hardware implementation (such as RTX). However, the developer has added that they will add hardware-acceleration support in the future, which should only improve performance, without adding any rendering features beyond those already achievable. What are you waiting for? Just follow the link below.

Microsoft Details DirectX Raytracing Tier 1.1, New DirectX 12 Features

Microsoft detailed feature additions to the DirectX 12 3D graphics API, and an expansion of its DirectX Raytracing (DXR) API to Tier 1.1. The updated APIs will be included with the Windows 10 major update that's scheduled for the first half of 2020 — the features are accessible already for developers in Windows Insider preview builds. DXR 1.1 is the first major update to the API since its Q4-2018 launch, and adds three major features. To begin with, it brings support for adding extra shaders to an existing ray-tracing PSO (pipeline-state object), increasing the efficiency of dynamic PSO additions. Next up is ExecuteIndirect for Raytracing support, described by Microsoft as "enabling adaptive algorithms where the number of rays is decided on the GPU execution timeline." This could be a hint of what to expect from NVIDIA's next-generation GPUs that are expected for next year. Lastly, the API introduces support for Inline Raytracing, which gives developers more control over ray traversal and scheduling.

Over in the main DirectX 12 API, Microsoft is introducing support for Mesh Shaders, which brings about systemic changes to the graphics pipeline. "Mesh shaders and amplification shaders are the next generation of GPU geometry processing capability, replacing the current input assembler, vertex shader, hull shader, tessellator, domain shader, and geometry shader stages," writes Microsoft in its blog post. DirectX Sampler Feedback contributes toward memory management by allowing games to better understand which texture assets are more frequently accessed and need to remain resident.

NVIDIA's Lightspeed Studios Aim to Remaster Games With RTX Effects

NVIDIA's efforts with Quake 2 RTX haven't gone unnoticed in the community, providing a distinct overhaul in image quality to the now decades-old game. Raytracing and its global illumination capabilities have already been well explored in this publication, and its effect in older games - the ones that already run remarkably well on modern hardware - has been well documented, with some third-party solutions (such as ReShade) bringing the effect, to great effect, to a number of games that weren't built from the ground up for raytracing.

NVIDIA's new Lightspeed Studios aims to bring Quake 2 RTX-like improvements to other games. A job posting from the company dating from the end of September makes it clear that NVIDIA is looking to increase the number of games with RTX support in this way. Read on after the break for the description in the job listing.

DOOM Eternal to Also Support Raytracing

Another id Software game will be supporting ray tracing: following Wolfenstein: Youngblood, id Software's Marty Stratton has confirmed that DOOM Eternal will also support the graphics technology. In what capacity is unclear as of yet; whether for a global illumination solution, like Metro Exodus, or just for reflections and shadows, like most games seem to be doing, is unknown at this point. Looking back at how the "original" DOOM looked, and considering the changes to graphics technologies in the new id Tech 7 engine, however, DOOM Eternal really is shaping up to be one of the best-looking games - at least on the PC platform.

As Marty Stratton put it, "RTX makes it look, you know, amazing. There are great benefits but it doesn't necessarily expand our audience or that the way that the way that something like Stadia does so, but absolutely people can look forward to DOOM Eternal and id Tech 7 supporting ray tracing. Absolutely. I mean we love that stuff, the team loves it and I think we'll do it better than anybody honestly."

Crytek's Hardware-Agnostic Raytracing Scene Neon Noir Performance Details Revealed

Judging by your reaction, you certainly remember Crytek's Neon Noir raytracing scene, which we shared with you back in March. At the time, the fact that raytracing was running at such mesmerizing levels on AMD hardware was arguably the biggest part of the news piece: AMD's Vega 56 graphics card, with no dedicated raytracing hardware, was pushing the raytraced scene in a confident manner. Now, Crytek has shared some details on how exactly Neon Noir was rendered.

The AMD Radeon Vega 56 pushed the demo at 1080p/30 FPS, with full-resolution rendering of raytraced effects. Crytek further shared that raytracing can be rendered at half resolution compared to the rest of the scene, and that if they did so on AMD's Vega 56, they could push a 1440p resolution at 40+ FPS. The raytraced path wasn't running on any modern, lower-level API, such as DX12 or Vulkan, but rather, on a custom branch of Crytek's CryEngine, version 5.5.

Intel Xe GPUs to Support Raytracing Hardware Acceleration

Intel's upcoming Xe discrete GPUs will feature hardware-acceleration for real-time raytracing, similar to NVIDIA's "Turing" RTX chips, according to a company blog detailing how the company's Rendering Framework will work with the upcoming Xe architecture. The blog only mentions that the company's data-center GPUs support the feature, and not whether its client-segment ones do. The data-center Xe GPUs are targeted at cloud-based gaming service and cloud-computing providers, as well as those building large rendering farms.

"I'm pleased to share today that the Intel Xe architecture roadmap for data center optimized rendering includes ray tracing hardware acceleration support for the Intel Rendering Framework family of API's and libraries," said Jim Jeffers, Sr. Principal Engineer and Sr. Director of Intel's Advanced Rendering and Visualization team. Intel did not go into technical details of the hardware itself. NVIDIA demonstrated that you need two major components on a modern GPU to achieve real-time raytracing: 1. fixed-function hardware that computes intersections of rays with triangles or surfaces (the RT cores, in NVIDIA's case), and 2. an "inexpensive" de-noiser. NVIDIA took the AI route to achieve the latter, by deploying tensor cores (matrix-multiplication units), which accelerate AI DNN building and training. Both these tasks are achievable without fixed-function hardware, using programmable unified shaders, but at great performance cost. Intel developed a CPU-based de-noiser that can leverage AVX-512.
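The first of those two components, ray/surface intersection, is straightforward to express in plain code. The classic Moller-Trumbore ray/triangle test below is a CPU sketch of the operation that dedicated raytracing hardware implements in fixed function; it is an illustration of the math, not any vendor's actual implementation:

```python
# Moller-Trumbore ray/triangle intersection: the core operation that
# raytracing hardware accelerates, sketched on the CPU.

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0])

def dot(a, b):
    return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]

def sub(a, b):
    return (a[0]-b[0], a[1]-b[1], a[2]-b[2])

def ray_triangle(origin, direction, v0, v1, v2, eps=1e-8):
    """Return distance t along the ray to triangle (v0, v1, v2),
    or None on a miss."""
    e1, e2 = sub(v1, v0), sub(v2, v0)
    p = cross(direction, e2)
    det = dot(e1, p)
    if abs(det) < eps:                 # ray parallel to triangle plane
        return None
    inv = 1.0 / det
    s = sub(origin, v0)
    u = dot(s, p) * inv                # first barycentric coordinate
    if u < 0 or u > 1:
        return None
    q = cross(s, e1)
    v = dot(direction, q) * inv        # second barycentric coordinate
    if v < 0 or u + v > 1:
        return None
    t = dot(e2, q) * inv               # distance along the ray
    return t if t > eps else None

# A ray pointing down the -z axis hits the unit triangle in the z=0 plane:
t = ray_triangle((0.25, 0.25, 1.0), (0.0, 0.0, -1.0),
                 (0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0))
print(t)  # 1.0
```

A real-time renderer runs this test millions of times per frame, which is why moving it into fixed-function units (and keeping the de-noiser cheap) is what makes the workload tractable.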

NVIDIA Extends DirectX Raytracing (DXR) Support to Many GeForce GTX GPUs

NVIDIA today announced that it is extending DXR (DirectX Raytracing) support to several GeForce GTX graphics models beyond its GeForce RTX series. These include the GTX 1660 Ti, GTX 1660, GTX 1080 Ti, GTX 1080, GTX 1070 Ti, GTX 1070, and GTX 1060 6 GB. The GTX 1060 3 GB and lower "Pascal" models don't support DXR, nor do older generations of NVIDIA GPUs. NVIDIA has implemented real-time raytracing on GPUs without specialized components such as RT cores or tensor cores, by essentially implementing the rendering path through shaders, in this case, CUDA cores. DXR support will be added through a new GeForce graphics driver later today.

The GPU's CUDA cores now have to calculate BVH traversal, intersection, reflection, and refraction. The GTX 16-series chips have an edge over "Pascal" despite lacking RT cores, as the "Turing" CUDA cores support concurrent INT and FP execution, allowing more work to be done per clock. NVIDIA in a detailed presentation listed out the kinds of real-time ray-tracing effects enabled by the DXR API, namely reflections, shadows, advanced reflections and shadows, ambient occlusion, global illumination (unbaked), and combinations of these. The company put out detailed performance numbers for a selection of GTX 10-series and GTX 16-series GPUs, and compared them to RTX 20-series SKUs that have specialized hardware for DXR.
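Before any triangle is tested, each ray must be walked through a bounding volume hierarchy (BVH), and the work at every node is a ray vs. axis-aligned-box "slab" test. A minimal sketch of that test, the kind of operation GPUs without RT cores must run on their general-purpose shader cores:

```python
# Ray vs. axis-aligned bounding box "slab" test: the per-node work of
# BVH traversal, which RT cores accelerate and CUDA cores must emulate.

def ray_aabb(origin, inv_dir, box_min, box_max):
    """True if a ray (origin plus component-wise reciprocal of its
    direction) intersects the axis-aligned box [box_min, box_max]."""
    tmin, tmax = 0.0, float("inf")
    for o, inv, lo, hi in zip(origin, inv_dir, box_min, box_max):
        t1, t2 = (lo - o) * inv, (hi - o) * inv
        if t1 > t2:
            t1, t2 = t2, t1            # order the slab entry/exit points
        tmin, tmax = max(tmin, t1), min(tmax, t2)
    return tmin <= tmax                # overlap across all three slabs

inv = (1.0, 1.0, 1.0)  # direction (1, 1, 1), stored as reciprocals
print(ray_aabb((-2.0, -2.0, -2.0), inv, (-1.0, -1.0, -1.0), (1.0, 1.0, 1.0)))  # True
```

Each ray may touch dozens of BVH nodes before reaching a triangle, so doing these tests on CUDA cores instead of dedicated units is where much of the "Pascal" performance penalty comes from.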
Update: Article updated with additional test data from NVIDIA.

Unreal Engine Gets a Host of Real-Time Raytracing Features

Epic Games wants a slice of next-generation NVIDIA GameWorks titles that are bound to leverage the RTX feature-set of its hardware. The latest version of Unreal Engine 4, released as a preview-build, comes with a host of real-time ray-tracing features. In its change-log for Unreal Engine 4.22 Preview, Epic describes its real-time ray-tracing feature to be a "low level layer on top of UE DirectX 12 that provides support for DXR and allows creating and using ray tracing shaders (ray generation shaders, hit shaders, etc) to add ray tracing effects."

The hardware being referenced here is the RT cores found in NVIDIA's "Turing RTX" GPUs. At a high level, Unreal Engine 4 will support close to two dozen features that leverage DXR, including a denoiser for shadows, reflections, and ambient occlusion; rectangular area lights, soft shadows, ray-traced reflections and AO, real-time global illumination, translucency, triangular meshes, and path-tracing. We could see Unreal Engine 4.22 get "stable" towards the end of 2019, to enable DXR-ready games in 2020.

UL Corporation Announces 3D Mark Port Royal Raytracing Suite is Now Available - Benchmark Mode On!

Perhaps gliding through the tech-infused CES week, UL Corporation has just announced that the much-expected Port Royal, the world's first dedicated real-time ray tracing benchmark for gamers, is now available. Port Royal uses DirectX Raytracing to enhance reflections, shadows, and other effects that are difficult to achieve with traditional rendering techniques, and enables performance benchmarking for cutthroat competition throughout the internet (and our own TPU forums, of course), while also serving as an example of what to expect from ray tracing in upcoming games - ray tracing effects running in real time at reasonable frame rates at 2560 × 1440 resolution.

3DMark Port Royal Ray-tracing Benchmark Release Date and Pricing Revealed

UL Benchmarks released more information on pricing and availability of its upcoming addition to the 3DMark benchmark suite, named "Port Royal." The company revealed that the benchmark will officially launch on January 8, 2019. The Port Royal upgrade will cost existing 3DMark paid (Advanced and Professional) users USD $2.99. 3DMark Advanced purchased from January 8th onward at $29.99 will include Port Royal. 3DMark Port Royal is an extreme-segment 3D graphics benchmark leveraging DirectX 12 and DirectX Raytracing (DXR). UL Benchmarks stated that Port Royal was developed with inputs from industry giants including NVIDIA, AMD, Intel, and Microsoft.

DICE Prepares "Battlefield V" RTX/DXR Performance Patch: Up to 50% FPS Gains

EA-DICE and NVIDIA earned a lot of bad press last month, when performance numbers for "Battlefield V" with DirectX Raytracing (DXR) were finally out. Gamers were disappointed to see that DXR inflicts heavy performance penalties, with 4K UHD gameplay out of reach even for the $1,200 GeForce RTX 2080 Ti, and acceptable frame-rates only available at 1080p resolution. DICE has since been tirelessly working to rework its real-time raytracing implementation so performance is improved. Tomorrow (4th December), the studio will release a patch to "Battlefield V," a day ahead of its new Tides of War: Overture and new War Story slated for December 5th. This patch could be a game-changer for GeForce RTX users.

NVIDIA has been closely working with EA-DICE on this new patch, which NVIDIA claims improves the game's frame-rates with DXR enabled by "up to 50 percent." The patch enables RTX 2080 Ti users to smoothly play "Battlefield V" with DXR at 1440p resolution, with frame-rates over 60 fps, and DXR Reflections set to "Ultra." RTX 2080 (non-Ti) users should be able to play the game at 1440p with over 60 fps, if the DXR Reflections toggle is set at "Medium." RTX 2070 users can play the game at 1080p, with over 60 fps, and the toggle set to "Medium." NVIDIA states that it is continuing to work with DICE to improve DXR performance even further, which will take the shape of future game patches and driver updates.
A video presentation by NVIDIA follows.

UL Benchmarks Unveils 3DMark "Port Royal" Ray-tracing Benchmark

Port Royal is the name of the latest component of UL Benchmarks 3DMark. Designed to take advantage of the DirectX Raytracing (DXR) API, this benchmark features an extreme poly-count test scene with real-time ray-traced elements. Screengrabs of the benchmark depict spacecraft entering and leaving mirrored spheres suspended within a planet's atmosphere, which appear to be docks. It's also a shout-out to a number of space-sims such as "Star Citizen," which could up their production values in the future by introducing ray-tracing. The benchmark will debut at the GALAX GOC Grand Final on December 8, where the first public run will be powered by a GALAX GeForce RTX 2080 Ti HOF graphics card. It will start selling in January 2019.

Microsoft Resumes Rollout of Windows 10 October 2018 Feature Update (1809)

Originally, Microsoft shelved the Windows 10 October 2018 feature update after a data-destroying bug, among other problems, was detected just days after its initial rollout. Now, with more than a month having passed, the company is finally re-releasing the update after having "thoroughly investigated and resolved" the issues, according to Microsoft's John Cable, director of Program Management for Windows Servicing and Delivery.

The decision to re-release the update was reached after careful study of diagnostic data from millions of Windows Insiders showed no further evidence of data loss. Currently, the update is only available via media and manual updates; automatic updates will be coming later. This is because Microsoft is taking a slower, more methodical approach to its updates, taking more time to study device health data in order to improve the overall user experience. This new approach will take problems like application incompatibility, among other things, into account, to make sure future updates do not automatically install unless known issues have been resolved. This should help reduce the frequency of problems end users encounter.