News Posts matching #GPU


AMD Robotics Starter Kit Kick-Starts the Intelligent Factory of the Future

Today AMD announced the Kria KR260 Robotics Starter Kit, the latest addition to the Kria portfolio of adaptive system-on-modules (SOMs) and developer kits. A scalable and out-of-the-box development platform for robotics, the Kria KR260 offers a seamless path to production deployment with the existing Kria K26 adaptive SOMs. With native ROS 2 support, the standard framework for robotics application development, and pre-built interfaces for robotics and industrial solutions, the new SOM starter kit enables rapid development of hardware-accelerated applications for robotics, machine vision and industrial communication and control.

"The Kria KR260 Robotics Starter Kits builds on the success of our Kria SOMs and KV260 Vision AI Starter Kit for AI and embedded developers, providing roboticists with a complete, out-of-the-box solution for this rapidly growing application space," said Chetan Khona, senior director of Industrial, Vision, Healthcare and Sciences Markets at AMD. "Roboticists will now be able to work in their standard development environment on a platform that has all the interfaces and capabilities needed to be up and running in less than an hour. The KR260 Starter Kit is an ideal platform to accelerate robotics innovation and easily take ideas to production at scale."

AMD Claims Higher FPS/$ Radeon GPU Value Over NVIDIA Offerings

Frank Azor, Chief Architect of Gaming Solutions & Marketing at AMD, has posted an interesting slide on Twitter, claiming that AMD Radeon products offer higher FPS/$ value than NVIDIA's graphics offerings. According to the slide, AMD Radeon graphics cards are the best choice for gamers judging by performance per dollar and performance per watt. In other words, AMD claims that Radeon products are inherently better value than NVIDIA's offerings, while also being more efficient. As the chart below shows, some AMD Radeon cards offer up to 89% better FPS/$ alongside an up to 123% better FPS/Watt metric. The highest rating belongs to the Radeon RX 6400 GPU; however, the comparison includes the entire lineup, up to the latest Radeon RX 6950 XT SKU.

Compared to TechPowerUp's own testing of AMD's Radeon cards and the performance-per-dollar figures calculated across multiple reviews, we could not reproduce numbers as high as AMD's. This suggests that AMD's marketing department used a selection of games that perform better on AMD Radeon cards than on NVIDIA GeForce RTX. As with any company's marketing material, you should take it with a grain of salt, so please check some of our reviews for an unbiased comparison.
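For reference, the metrics AMD's slide cites are simple ratios of a card's average frame rate to its price and board power. Below is a minimal sketch of that arithmetic in Python; all of the card names and figures are hypothetical placeholders for illustration, not measured data.

```python
# Minimal sketch of FPS-per-dollar and FPS-per-watt value metrics.
# All card names and figures below are hypothetical, not measured data.

def fps_per_dollar(avg_fps: float, price_usd: float) -> float:
    """Average frames per second delivered per dollar of card price."""
    return avg_fps / price_usd

def fps_per_watt(avg_fps: float, power_w: float) -> float:
    """Average frames per second delivered per watt of board power."""
    return avg_fps / power_w

cards = {
    "Card A": {"fps": 60.0, "price": 200.0, "power": 55.0},
    "Card B": {"fps": 75.0, "price": 380.0, "power": 170.0},
}
for name, c in cards.items():
    print(f"{name}: {fps_per_dollar(c['fps'], c['price']):.3f} FPS/$, "
          f"{fps_per_watt(c['fps'], c['power']):.3f} FPS/W")
```

Game selection dominates such ratios, which is why vendor slides and independent reviews can arrive at very different numbers from the same hardware.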

NVIDIA Releases Security Update 473.47 WHQL Driver for Kepler GPUs

Ten years ago, in 2012, NVIDIA introduced its Kepler series of graphics cards, based on TSMC's 28 nm node. The architecture has been supported by NVIDIA's drivers for quite a while now, and the last branch to carry that support is the 470 driver series. Today, NVIDIA pushed a security update in the form of the 473.47 WHQL driver, which fixes various CVE-listed vulnerabilities with risks ranging from code execution and denial of service to escalation of privileges, information disclosure, and data tampering. The driver brings no other fixes or additional features beyond these vulnerability patches. With the CVEs rated from 4.1 to 8.5, NVIDIA has addressed major issues affecting Kepler GPU users. The 473.47 WHQL driver is another step in supporting the Kepler architecture until 2024, when NVIDIA plans to drop support for it. Supported cards span the GT 600, GT 700, GTX 600, and GTX 700 series, as well as the Titan, Titan Black, and Titan Z.

The updated driver is available for installation from NVIDIA's website and through TechPowerUp's NVCleanstall software.

NVIDIA GeForce RTX 4090 Twice as Fast as RTX 3090, Features 16128 CUDA Cores and 450W TDP

NVIDIA's next-generation GeForce RTX 40 series of graphics cards, codenamed Ada Lovelace, is shaping up to be a powerful lineup. Allegedly, we can expect a mid-July launch of NVIDIA's newest gaming offerings, where customers can expect some impressive performance. According to the reliable hardware leaker kopite7kimi, the NVIDIA GeForce RTX 4090 graphics card will feature the AD102-300 GPU SKU. This model is equipped with 126 Streaming Multiprocessors (SMs), which brings the total number of FP32 CUDA cores to 16128. Given that the full AD102 GPU carries 144 SMs, this leads us to think that an RTX 4090 Ti model will follow later as well.

Paired with 24 GB of 21 Gbps GDDR6X memory, the RTX 4090 graphics card has a TDP of 450 Watts. While this may appear to be a very power-hungry design, bear in mind that the targeted performance improvement over the previous RTX 3090 model is a two-fold increase. With TSMC's new N4 node and a new architecture design, performance should scale accordingly, at the cost of higher TDPs. These claims have yet to be validated by real-world benchmarks from independent tech media, so please take all of this information with a grain of salt and wait for TechPowerUp reviews once the card arrives.
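If the rumored figures hold, the implied efficiency gain is easy to estimate: doubling performance while raising board power from the RTX 3090's 350 W to 450 W works out to roughly a 1.56x improvement in performance per watt. A quick back-of-the-envelope sketch, treating the leaked numbers as assumptions:

```python
# Back-of-the-envelope efficiency estimate from the rumored specs.
# Assumptions: RTX 3090 at its 350 W TDP, RTX 4090 at a rumored 450 W
# with a rumored 2x performance uplift.

baseline_power_w = 350.0   # RTX 3090 TDP
rumored_power_w = 450.0    # rumored RTX 4090 TDP
perf_scaling = 2.0         # rumored performance vs. RTX 3090

power_ratio = rumored_power_w / baseline_power_w   # ~1.29x more power
perf_per_watt_gain = perf_scaling / power_ratio    # ~1.56x better perf/W

print(f"Power increase: {power_ratio:.2f}x")
print(f"Implied perf/W improvement: {perf_per_watt_gain:.2f}x")
```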

Alleged AMD Instinct MI300 Exascale APU Features Zen4 CPU and CDNA3 GPU

Today we received information that AMD's upcoming Instinct MI300 will allegedly be available as an Accelerated Processing Unit (APU). AMD APUs are processors that combine a CPU and a GPU in a single package. AdoredTV managed to get ahold of a slide indicating that the AMD Instinct MI300 accelerator will also come in an APU option that combines Zen 4 CPU cores and a CDNA3 GPU accelerator in a single, large package. With technologies like 3D stacking, MCM design, and HBM memory, these Instinct APUs are positioned to be a high-density compute product. At least six HBM dies will be placed in the package, with the APU itself being a socketed design.

The leaked slide from AdoredTV indicates that the first tapeout will be complete by the end of the month (presumably this month), with the first silicon hitting AMD's labs in Q3 2022. If the silicon turns out functional, we could see these APUs available sometime in the first half of 2023. Below, you can see an illustration of the AMD Instinct MI300 GPU. The APU version will potentially be the same size, with Zen 4 and CDNA3 cores spread around the package. As the Instinct MI300 accelerator is supposed to use eight compute tiles, we could see different combinations of CPU/GPU tiles offered. As we await the launch of these next-generation accelerators, we have yet to see what SKUs AMD will bring.

AMD's Integrated GPU in Ryzen 7000 Gets Tested in Linux

It appears that one of AMD's partners has a Ryzen 7000 CPU or APU with integrated graphics up and running in Linux. The details leaked courtesy of the partner testing the chip using the Phoronix Test Suite and submitting the results to the OpenBenchmarking database. The numbers are by no means impressive, suggesting that this engineering sample isn't running at its proper clock speeds. For example, it only scores 63.1 FPS in Enemy Territory: Quake Wars, where a Ryzen 9 6900HX manages 182.1 FPS, with both GPUs allocated 512 MB of system memory as the minimum graphics memory allocation.

The integrated GPU goes under the model name GFX1036, with AMD's older integrated RDNA2 GPUs having been part of the GFX103x series. It's reported to have clock speeds of 2000/1000 MHz, although it's presumably running at the lower of the two, if not slower, as it's only about a third of the speed of the GPU in the Ryzen 9 6900HX. That said, as far as anyone's aware, the GPU in the Ryzen 7000 series is not really intended for gaming; it's a heavily stripped-down part meant mainly for desktop and media use, so it's possible that it'll never catch up with the current crop of integrated GPUs from AMD. We'll hopefully find out more in less than two weeks' time, when AMD holds its keynote at Computex.

NVIDIA Releases Open-Source GPU Kernel Modules

NVIDIA is now publishing Linux GPU kernel modules as open source under a dual GPL/MIT license, starting with the R515 driver release. You can find the source code for these kernel modules in the NVIDIA Open GPU Kernel Modules repo on GitHub. This release is a significant step toward improving the experience of using NVIDIA GPUs in Linux, allowing tighter integration with the OS and letting developers debug, integrate, and contribute back. For Linux distribution providers, the open-source modules also increase ease of use.

They also improve the out-of-the-box user experience by making it easier to sign and distribute the NVIDIA GPU driver. Canonical and SUSE are able to immediately package the open kernel modules with Ubuntu and SUSE Linux Enterprise distributions. Developers can trace into code paths and see how kernel event scheduling interacts with their workloads for faster root-cause debugging. In addition, enterprise software developers can now integrate the driver seamlessly into the customized Linux kernels configured for their projects.

Tachyum Delivers the Highest AI and HPC Performance with the Launch of the World's First Universal Processor

Tachyum today launched the world's first universal processor, Prodigy, which unifies the functionality of a CPU, GPU, and TPU in a single processor, creating a homogeneous architecture while delivering massive performance improvements at a cost many times lower than that of competing products.

After the company undertook its mission to conquer the processor performance plateau in nanometer-class chips and the systems they power, Tachyum has succeeded by launching its first commercial product. The Prodigy Cloud/AI/HPC supercomputer processor chip offers 4x the performance of the fastest Xeon, 3x more raw performance than NVIDIA's H100 on HPC workloads, 6x more raw performance on AI training and inference workloads, and up to 10x the performance at the same power. Prodigy is poised to overcome the challenges of increasing data center power consumption, low server utilization, and stalled performance scaling.

Supermicro Accelerates AI Workloads, Cloud Gaming, Media Delivery with New Systems Supporting Intel's Arctic Sound-M and Intel Habana Labs Gaudi 2

Super Micro Computer, Inc. (Nasdaq: SMCI), a global leader in enterprise computing, storage, networking, and green computing technology, now supports two new Intel-based accelerators for demanding cloud gaming, media delivery, AI, and ML workloads, enabling customers to deploy the latest acceleration technology from Intel and Habana Labs. "Supermicro continues to work closely with Intel and Habana Labs to deliver a range of server solutions supporting Arctic Sound-M and Gaudi 2 that address the demanding needs of organizations that require highly efficient media delivery and AI training," said Charles Liang, president and CEO. "We continue to collaborate with leading technology suppliers to deliver application-optimized total system solutions for complex workloads while also increasing system performance."

Supermicro can quickly bring new technologies to market by using a Building Block Solutions approach to designing new systems. This methodology allows new GPUs and acceleration technology to be easily placed into existing designs or, when necessary, an existing design to be quickly adapted for higher-performing components. "Supermicro helps deliver advanced AI and media processing with systems that leverage our latest Gaudi 2 and Arctic Sound-M accelerators," stated Sandra Rivera, executive vice president and general manager of the Datacenter and AI Group at Intel. "Supermicro's Gaudi AI Training Server will accelerate deep learning training in some of the fastest growing workloads in the datacenter."

NVIDIA H100 SXM Hopper GPU Pictured Up Close

ServeTheHome, a tech media outlet focused on everything server/enterprise, posted an exclusive set of photos of NVIDIA's latest H100 "Hopper" accelerator. The fastest GPU NVIDIA has ever created, H100 is made on TSMC's 4 nm manufacturing process and features over 80 billion transistors on an 814 mm² die packaged using TSMC's CoWoS technology. Complementing the massive die are 80 GB of HBM3 memory sitting close to it. Pictured below is an SXM5 H100 module packed with VRMs and power-regulation circuitry. Given that the rated TDP for this GPU is 700 Watts, power regulation is a serious concern, and NVIDIA has managed to keep it in check.

On the back of the card, we see one short and one longer mezzanine connector that act as power delivery connectors, a departure from the previous A100 GPU layout. This board is labeled PG520 and is very close to the official renders that NVIDIA supplied us with on launch day.

NVIDIA GeForce RTX 3090 Ti Gets Custom 890 Watt XOC BIOS

Extreme overclocking is an enthusiast discipline in which overclockers try to push their hardware to its limits. Combining powerful cooling solutions like liquid nitrogen (LN2), which reaches sub-zero temperatures, with modified hardware, the silicon can draw tremendous power. Today, we are witnessing a custom XOC (eXtreme OverClocking) BIOS for the NVIDIA GeForce RTX 3090 Ti graphics card that can push the GA102 SKU to an impressive 890 Watts of power, representing almost a two-fold increase over the stock TDP. Enthusiasts chasing high frequencies with their RTX 3090 Ti are the likely users of this XOC BIOS; however, we will most likely see GALAX HOF or EVGA KINGPIN cards with dual 16-pin power connectors utilize it.

As shown below, MEGAsizeGPU, the creator of this BIOS, managed to push his ASUS GeForce RTX 3090 Ti TUF to 615 Watts with the XOC BIOS, so KINGPIN and HOF designs will be needed to draw all the power possible. The XOC BIOS has been uploaded to our VGA BIOS database; however, caution is advised, as it can break your graphics card.

Intel Buys Finnish Graphics IP Developer Siru Innovations

Intel has announced that it has bought the 11-year-old, veteran Finnish graphics IP developer Siru Innovations. You'd be forgiven if you've never heard of the company, but it has a pedigree harking back to the late 1980s and early 1990s, as at least one of its founders was part of the legendary demogroup Future Crew, which made some of the most impressive graphics and audio demos of the BBS era. All three founders were at Bitboys when it was founded in the 1990s, and if you haven't heard of Bitboys, you might simply not be old enough. That company was hyped for its Glaze3D graphics architecture, which never actually launched because Infineon stopped manufacturing the very specific type of embedded memory the GPUs were based on.

Bitboys was later acquired by ATI, which in turn was of course taken over by AMD. However, the story doesn't end there, as AMD sold the Imageon business unit to Qualcomm in 2009, and the three founders of Siru moved to Qualcomm for a couple of years before starting Siru. Since the Intel announcement, the Siru website has been taken down, but the company was working on developing mobile graphics IP, as well as helping other companies develop their own graphics-related IP, drivers, and so on. What Intel plans to do with the Siru team isn't entirely clear, but Balaji Kanigicherla, Intel VP and General Manager of the AXG Custom Compute Group, posted on LinkedIn saying that Siru will be joining the AXG Group. You can read the full post below.

AMD Radeon RX 6950 XT Beats GeForce RTX 3090 Ti in 3DMark TimeSpy

We are nearing the arrival of AMD's Radeon RX 6x50 XT graphics card refresh, and benchmarks are starting to appear. Today, we received a 3DMark TimeSpy benchmark result for the AMD Radeon RX 6950 XT GPU and compared it to existing solutions, most notably NVIDIA's GeForce RTX 3090 Ti, with a surprising outcome. Looking at the graphics score, the Radeon RX 6950 XT scored 22209 points in the 3DMark TimeSpy test, while the GeForce RTX 3090 Ti scored 20855 points in the same test. Of course, we have to account for the fact that 3DMark TimeSpy is a synthetic benchmark that tends to perform very well on AMD RDNA2 hardware, so we will have to wait for official independent testing, like TechPowerUp's reviews.

The AMD Radeon RX 6950 XT card was tested with a Ryzen 7 5800X3D CPU paired with DDR4-3600 memory and pre-release 22.10-220411n drivers on Windows 10. We could see higher graphics scores with final drivers, along with better performance from the other upcoming refreshed SKUs.
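For context, the gap between the two leaked graphics scores works out to a roughly 6.5% lead for the Radeon card, as the quick calculation below shows (scores taken from the leak discussed above):

```python
# Relative lead implied by the leaked 3DMark TimeSpy graphics scores.
rx_6950_xt_score = 22209    # leaked Radeon RX 6950 XT graphics score
rtx_3090_ti_score = 20855   # leaked GeForce RTX 3090 Ti graphics score

lead = rx_6950_xt_score / rtx_3090_ti_score - 1.0
print(f"RX 6950 XT graphics-score lead: {lead:.1%}")  # ~6.5%
```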

Sapphire Radeon RX 6950 XT TOXIC Reportedly Boosts to 2565 MHz at 346W TGP

AMD is preparing to launch a highly anticipated refresh of the Radeon RX 6000 series, codenamed the RX 6x50 XT series. Alongside AMD, add-in board partners (AIBs) will have their say as well, and today we get to take a look at the alleged specifications of Sapphire's highest-end upcoming products. According to Chiphell member RaulMee, who claims to possess the specifications of the newest Sapphire models, we can expect somewhat higher total graphics power (TGP) with the arrival of this refresh. First and foremost, the Sapphire RX 6950 XT TOXIC is the fastest air-cooled model from Sapphire, with a boost clock of up to 2565 MHz (255 MHz over AMD's 2310 MHz reference) and a TGP of 364 Watts with the OC BIOS. The regular TGP for this model is 332 Watts, with a boost speed of up to 2532 MHz. Please note that this figure includes the power draw of the GPU and memory.

Next up, we have Sapphire's RX 6950 XT NITRO+ SKUs. The non-SE card is a minor improvement over the AMD Radeon RX 6950 XT reference GPU and offers a Silent BIOS option. The RX 6950 XT NITRO+ Special Edition can go up to 325 Watts and 2435 MHz with the OC BIOS applied. A Silent BIOS is also an option, lowering the TGP to 303 Watts and the boost clock to 2368 MHz. The alleged specification chart also covers Sapphire's RX 6750 XT and RX 6650 XT NITRO+ GPUs, whose clock speeds and TGPs you can check below.

VESA Launches AdaptiveSync and MediaSync VRR Standards and Compliance Program

The Video Electronics Standards Association (VESA) today announced the first publicly open standard for front-of-screen performance of variable refresh rate displays. The VESA Adaptive-Sync Display Compliance Test Specification (Adaptive-Sync Display CTS) provides a comprehensive and rigorous set of more than 50 test criteria, an automated testing methodology, and performance mandates for PC monitors and laptops supporting VESA's Adaptive-Sync protocols.

The Adaptive-Sync Display CTS also establishes a product compliance logo program comprising two performance tiers: AdaptiveSync Display, which is focused on gaming with significantly higher refresh rates and low latency, and MediaSync Display, which is designed for jitter-free media playback supporting all international broadcast video formats. By establishing the VESA Certified AdaptiveSync Display and MediaSync Display logo programs, VESA will enable consumers to easily identify and compare the variable refresh rate performance of displays supporting Adaptive-Sync prior to purchase. Only displays that pass all Adaptive-Sync Display CTS and VESA DisplayPort compliance tests qualify for the VESA Certified AdaptiveSync Display or MediaSync Display logos.

NVIDIA Allegedly Testing a 900 Watt TGP Ada Lovelace AD102 GPU

With the release of Hopper, NVIDIA's cycle of new architecture releases is not yet over. Later this year, we expect to see the next-generation gaming architecture, codenamed Ada Lovelace. According to @kopite7kimi on Twitter, a well-known leaker of NVIDIA products, the green team is reportedly testing a potent variant of the upcoming AD102 SKU. As the leak indicates, we could see an Ada Lovelace AD102 SKU with a Total Graphics Power (TGP) of 900 Watts. While we don't know where this SKU is supposed to sit in the Ada Lovelace family, it could be the most powerful, Titan-like design making a comeback; alternatively, it could be a GeForce RTX 4090 Ti SKU. It carries 48 GB of GDDR6X memory running at 24 Gbps alongside the monstrous TGP, and the card is fed by two 16-pin connectors.

Another confirmation from the leaker is that the upcoming RTX 4080 GPU uses the AD103 SKU variant, while the RTX 4090 uses AD102. For further information, we have to wait a few more months and see what NVIDIA decides to launch in the upcoming generation of gaming-oriented graphics cards.

Sapphire Radeon RX 6400 PULSE Low Profile GPU Pictured

Sapphire looks set to launch one of the first low-profile RDNA2 graphics cards with the single-slot Radeon RX 6400 PULSE, which was recently leaked by VideoCardz. The card features a nearly identical design to the company's existing low-profile Radeon PRO W6400, offering a single HDMI 2.1 port and a DisplayPort 1.4 port, along with an optional half-height bracket. The Sapphire Radeon RX 6400 PULSE features 768 Stream Processors and 12 Ray Accelerators, along with 4 GB of GDDR6 memory running at 16 Gbps. The card doesn't require any additional power connectors thanks to its 53 W TDP, which could make it a good option for low-power builds. The Radeon RX 6400 was first announced by AMD in January for the OEM market, with DIY market products set to launch in a few days, on April 20th.

NVIDIA Launches "Restocked & Reloaded" GPU Availability Campaign

NVIDIA has recently launched a global campaign to promote the availability of RTX 30 series graphics cards, with multiple retailers and manufacturers informing customers of increased shipments. The launch of this campaign coincides with the fifth consecutive month of price drops for NVIDIA GPUs, with the average price now at 119% of MSRP according to the latest report from 3DCenter. The stores participating in the campaign appear to have most cards available or restocking, with some cards receiving minor price cuts.
NVIDIA GeForce RTX 30 Series graphics cards are now available! Get the ultimate play with immersive ray tracing, a huge AI performance boost with NVIDIA DLSS, game-winning responsiveness with NVIDIA Reflex, and AI-powered voice & video with NVIDIA Broadcast.

Blackmagic Design Announces DaVinci Resolve 18

Blackmagic Design today announced DaVinci Resolve 18, a major new cloud collaboration update that allows multiple editors, colorists, VFX artists, and audio engineers to work simultaneously on the same project, on the same timeline, anywhere in the world. DaVinci Resolve 18 supports the Blackmagic Cloud for hosting and sharing projects, as well as a new DaVinci proxy workflow. This update also includes new Resolve FX AI tools powered by the DaVinci Neural Engine, as well as time-saving tools for editors, Fairlight legacy fixed-bus to FlexBus conversion, GPU-accelerated paint in Fusion, and more. The DaVinci Resolve 18 public beta is available for download now from the Blackmagic Design website.

DaVinci Resolve 18 is a major release featuring cloud-based workflows for a new way to collaborate remotely. Customers can host project libraries using Blackmagic Cloud and collaborate on the same timeline, in real time, with multiple users globally. The new Blackmagic Proxy Generator automatically creates proxies linked to camera originals for a faster editing workflow. There are new Resolve FX such as Ultra Beauty and 3D Depth Map, improved subtitling for editors, GPU-accelerated Fusion paint, real-time title template playback, Fairlight fixed-to-FlexBus conversion, and more. DaVinci Resolve 18 supports Blackmagic Cloud, so customers can host their project libraries on the DaVinci Resolve Project Server in the cloud and work collaboratively with editors, colorists, VFX artists, and audio engineers on the same project at the same time, anywhere in the world.

Intel Arc A350M GPU Gets Performance Boost with Dynamic Tuning Technology Disabled

Last month, Intel released its Arc Alchemist lineup for mobile/laptop configurations. As expected of the company's first discrete GPU, there are some hiccups here and there along the way. Today, we have an interesting case of the Intel Arc A350M getting a sizeable performance boost with Dynamic Tuning Technology (DTT) disabled. DTT is Intel's solution for automatically and dynamically allocating power between an Intel processor and an Intel discrete graphics card to optimize performance and improve battery life; it is essentially a competitor to AMD SmartShift and NVIDIA Dynamic Boost. Thanks to a South Korean YouTuber, BullsLab, we have information that disabling DTT in the drivers helps the Arc A350M GPU reach higher performance targets.

He found that with DTT disabled, gaming performance improved significantly, with the Arc A350M outputting 30-80 more frames per second. This is no slight improvement, and it shows that the drivers are not yet mature. Creating a discrete graphics card is not an easy task, as noted here; however, we hope to see Intel put out more fixes in the coming weeks and hopefully end this strange behavior. Below, you can see the YouTube video with benchmarks.

Next-gen NVIDIA "Ada" GPUs to Possibly Use 21 Gbps Memory

Everyone's favorite GPU-news leaker Kopite7kimi has updated his tweet from April 1 with more specific board part numbers, and suddenly the information contained there, which could have been misinterpreted as an April Fools' joke, all seems to add up with our own posting from last month about memory bus widths. The update seems to indicate the boards will feature 21 Gbps memory, the same as what we saw on the recently released RTX 3090 Ti cards, and VideoCardz goes further to speculate that the 3090 Ti could have been a dry run for the upcoming cards, with a similar 600 W TDP rating possibly to follow. Note also that the leaker is shying away from referring to these as the RTX 4080/4090 series, leaving room in case NVIDIA decides to change its naming scheme for reasons including marketing and what the competition decides to do.
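As a rule of thumb, peak memory bandwidth follows directly from the per-pin data rate and the bus width. The sketch below runs that arithmetic for 21 Gbps memory at a few bus widths; the widths are illustrative assumptions, not leaked specifications:

```python
# Peak memory bandwidth = per-pin data rate (Gbps) x bus width (bits) / 8.
# The bus widths below are illustrative assumptions, not confirmed specs.

def peak_bandwidth_gb_s(data_rate_gbps: float, bus_width_bits: int) -> float:
    """Peak memory bandwidth in GB/s."""
    return data_rate_gbps * bus_width_bits / 8

for bus_width in (192, 256, 384):
    print(f"21 Gbps x {bus_width}-bit: "
          f"{peak_bandwidth_gb_s(21, bus_width):.0f} GB/s")
```

At 384 bits, 21 Gbps memory yields 1008 GB/s, matching the RTX 3090 Ti's rated bandwidth, which is consistent with the dry-run speculation.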

Announcing Fungible GPU-Connect - Connect Any GPU to Any Server Over Ethernet

Fungible, Inc., the composable infrastructure company, today announced a new product offering, Fungible GPU-Connect (FGC), an innovative solution engineered to revolutionize how data processing power is accessed in enterprise and service provider infrastructures. Fungible GPU-Connect solves the challenges caused by the growing demand for AI/ML, especially in edge data centers with stranded and underutilized GPUs. FGC leverages the Fungible DPU to dynamically compose GPU and CPU resources across an Ethernet network. This provides significant economic benefits and agility to organizations that provision, allocate, and manage expensive GPU resources. For the first time, organizations can create a GPU-powered infrastructure free from the physical limitations of PCIe.

FGC allows data centers to centralize their existing GPU assets into a single resource pool that can be attached to servers on demand. Instead of dedicated GPUs sitting idle most of the time, data centers can provide new users with access to the GPU pool, making greater use of existing assets. This disaggregated solution also removes the constraint of having GPUs and CPUs physically co-located: they can be located anywhere within the same data center, eliminating the stranding of expensive GPU resources. The Fungible DPU creates a secure, virtual PCIe connection between the GPU and the server that is transparent to the server and to applications; no special software or drivers are needed. This connection is managed in hardware by the DPU, ensuring a high-performance, low-latency link. This transparency means FGC can be easily retrofitted into existing environments and can scale with growing demand, ensuring GPUs are always available when they are needed.
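Conceptually, the pooling model FGC describes resembles a simple check-out/check-in resource allocator: GPUs register into a shared pool, servers attach them on demand, and return them when a job finishes. The toy Python sketch below illustrates only that general idea; it is not Fungible's software, API, or protocol.

```python
# Toy model of a disaggregated GPU pool: servers borrow GPUs on demand and
# return them when done. Purely illustrative; not Fungible's software or API.

class GpuPool:
    def __init__(self, gpu_ids: list[str]) -> None:
        self._free = set(gpu_ids)            # GPUs available for attachment
        self._attached: dict[str, str] = {}  # gpu_id -> server name

    def attach(self, server: str) -> str:
        """Attach any free GPU to a server; raise if the pool is exhausted."""
        if not self._free:
            raise RuntimeError("no free GPUs in the pool")
        gpu = self._free.pop()
        self._attached[gpu] = server
        return gpu

    def detach(self, gpu: str) -> None:
        """Return a GPU to the pool so another server can use it."""
        self._attached.pop(gpu)
        self._free.add(gpu)

pool = GpuPool(["gpu-0", "gpu-1", "gpu-2"])
gpu = pool.attach("edge-server-a")  # a server borrows a GPU for a job
print(f"{gpu} attached to edge-server-a")
pool.detach(gpu)                    # job done; the GPU returns to the pool
```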

Report: AMD Radeon Software Could Alter CPU Settings Quietly

According to the latest investigation by the German publication Igor's Lab, AMD's Adrenalin GPU software can exhibit unexpected behavior when Ryzen Master functionality is integrated into it. Supposedly, the combination of the two allows the Adrenalin software to misbehave and accidentally change CPU PBO and Precision Boost settings, disregarding the user's permissions. What Igor's Lab investigated was a case of the Adrenalin software automatically enabling the PBO or "CPU OC" setting when applying GPU profiles. This also happens when the GPU is in Default mode, which is set automatically by the software.

These alterations can happen without the user's knowledge. If a user applies custom voltage and frequency settings in the BIOS, the Adrenalin software can, and sometimes will, override those settings and apply arbitrary ones, potentially impacting the CPU's stability. The software can also alter CPU power limits, as it has the means to do so. This problem only occurs when an AMD CPU is combined with an AMD GPU and the AMD Ryzen Master SDK is installed; in any other configuration, the system is left unchanged. There are ways around this edge case: going back into the BIOS to re-apply CPU settings manually, or disabling PBO. A Reddit user found that creating new GPU tuning profiles without loading older profiles will also prevent Adrenalin from adjusting your CPU settings. AMD hasn't commented on the software, and why this is happening remains a mystery so far.

AMD Claims Radeon RX 6500M is Faster Than Intel Arc A370M Graphics

A few days ago, Intel announced its first official discrete graphics card effort, designed for laptops. With the Arc Alchemist lineup, Intel has designed these SKUs to provide entry-level to high-end options covering a wide range of use cases. Today, AMD responded with a rather interesting Tweet from the company's @Radeon Twitter account. In the post, the company compared Intel's Arc Alchemist A370M GPU with AMD's Radeon RX 6500M mobile SKU. Both GPUs are made on TSMC's N6 node, feature 4 GB of GDDR6 memory on a 64-bit bus, carry 1024 FP32 cores, and have the same configurable TDP range of 35-50 Watts.

Below, you can see AMD's benchmarks of the following select games: Hitman 3, Total War Saga: Troy, F1 2021, Strange Brigade (High), and Final Fantasy XIV. The Radeon RX 6500M GPU manages to win in all of these games, which explains AMD's "FTW" hashtag on Twitter. Remember that these are vendor-supplied benchmark runs, so we have to wait for independent media results to surface.

Intel Seemingly Reveals Specs of Arc A780 Desktop GPU in Arc Control Video

Although it hasn't been verified, it would appear that Intel might have let slip some details of its upcoming Arc A780 desktop GPU in a video showing off its Arc Control graphics card control and monitoring software. For a brief second or two, the Live Performance Monitoring section of Arc Control was shown in the video, displaying the GPU and VRAM clocks of one of its upcoming GPUs, alongside the GPU power of the same card. Exactly which product this is, is now being discussed on the internet, but the current consensus, based on all the specs, is that it could be the Arc A780.

The reasoning behind this is that the Arc A350M, which could in theory boost to 2,250 MHz, doesn't match the listed GPU power of 175 W, nor does the much slower-clocked Arc A770M mobile part. The VRAM clock of 1093 MHz also suggests an effective memory data rate of 17.5 Gbps, which is faster than that of the fastest mobile GPU according to the specs available so far. It also means that Intel is going for high-performance memory on its high-end parts, as this GPU has faster memory than a GeForce RTX 3070, which sits at 14 Gbps. It's also possible that we're looking at a development card here and that these specs won't make it into a final product, so we'll just have to wait until this summer to see what Intel has in store for us.
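For what it's worth, the 17.5 Gbps figure follows directly from the reported VRAM clock: GDDR6 transfers 16 bits per pin per memory-clock cycle (its 16n prefetch), so 1093 MHz works out to roughly 17.5 Gbps per pin. A one-line check:

```python
# Effective GDDR6 data rate from the reported VRAM clock (16n prefetch).
vram_clock_mhz = 1093
effective_rate_gbps = vram_clock_mhz * 16 / 1000  # ~17.5 Gbps per pin
print(f"Effective data rate: {effective_rate_gbps:.1f} Gbps")
```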