News Posts matching #GPU


NVIDIA and Global Computer Makers Launch Industry-Standard Enterprise Server Platforms for AI

NVIDIA today introduced a new class of NVIDIA-Certified Systems, bringing AI within reach for organizations that run their applications on industry-standard enterprise data center infrastructure. These include high-volume enterprise servers from top manufacturers, announced in January and now certified to run the NVIDIA AI Enterprise software suite, which is exclusively certified for VMware vSphere 7, the world's most widely used compute virtualization platform.

Further expanding the NVIDIA-Certified servers ecosystem is a new wave of systems featuring the NVIDIA A30 GPU for mainstream AI and data analytics and the NVIDIA A10 GPU for AI-enabled graphics, virtual workstations and mixed compute and graphics workloads, also announced today.

NVIDIA Announces New Professional Ampere Graphics Cards

NVIDIA today announced a range of eight new NVIDIA Ampere architecture GPUs for next-generation laptops, desktops and servers that make it possible for professionals to work from wherever they choose, without sacrificing quality or time. For desktops, the new NVIDIA RTX A5000 and NVIDIA RTX A4000 GPUs feature new RT Cores, Tensor Cores and CUDA cores to speed AI, graphics and real-time rendering up to 2x faster than previous generations. For professionals on the go needing thin and light devices, the new NVIDIA RTX A2000, NVIDIA RTX A3000, RTX A4000 and RTX A5000 laptop GPUs deliver accelerated performance without compromising mobility.

For the data center, there are the new NVIDIA A10 GPU and A16 GPU. The A10 provides up to 2.5x the virtual workstation performance of the previous generation for designers and engineers, while the A16 GPU provides up to 2x user density with lower total cost of ownership and an enhanced virtual desktop infrastructure experience over the previous generation.

NVIDIA Announces Grace CPU for Giant AI and High Performance Computing Workloads

NVIDIA today announced its first data center CPU, an Arm-based processor that will deliver 10x the performance of today's fastest servers on the most complex AI and high performance computing workloads.

The result of more than 10,000 engineering years of work, the NVIDIA Grace CPU is designed to address the computing requirements for the world's most advanced applications—including natural language processing, recommender systems and AI supercomputing—that analyze enormous datasets requiring both ultra-fast compute performance and massive memory. It combines energy-efficient Arm CPU cores with an innovative low-power memory subsystem to deliver high performance with great efficiency.

NVIDIA Announces New DGX SuperPOD, the First Cloud-Native, Multi-Tenant Supercomputer, Opening World of AI to Enterprise

NVIDIA today unveiled the world's first cloud-native, multi-tenant AI supercomputer—the next-generation NVIDIA DGX SuperPOD featuring NVIDIA BlueField-2 DPUs. Fortifying the DGX SuperPOD with BlueField-2 DPUs—data processing units that offload, accelerate and isolate users' data—provides customers with secure connections to their AI infrastructure.

The company also announced NVIDIA Base Command, which enables multiple users and IT teams to securely access, share and operate their DGX SuperPOD infrastructure. Base Command coordinates AI training and operations on DGX SuperPOD infrastructure to enable the work of teams of data scientists and developers located around the globe.

Global Chip Shortage Takes Another Toll... Now Your Home Router?

The global supply of semiconductor processors has been at risk lately. From GPUs to CPUs, demand has been far greater than the available supply. Manufacturing companies such as TSMC have been expanding capacity, however, they have not yet been able to satisfy the demand. We have seen the results of that demand in the scarcity of the latest generation of graphics cards, covering NVIDIA's GeForce RTX 3000 series Ampere and AMD's Radeon RX 6000 series Big Navi graphics cards. Consumers have had a difficult time sourcing them and have seen artificial price increases well above the original MSRPs.

However, it doesn't seem like the situation will improve. According to the latest reporting from Bloomberg, the next victim of the global chip shortage is... you guessed it, your home internet router. The cited sources note that lead times for router orders have doubled, from the usual 30 weeks to 60 weeks, a waiting list of more than a year. With the global COVID-19 pandemic still going strong, there is an increased need for better home router equipment, and delays can only hurt the broadband providers that supply routers. Taiwan-based router manufacturer Zyxel Communications notes that it has seen massive demand for its equipment. Such demand could lead to insufficient supply, which could push router prices well above MSRP and bring scarcity as well.

AMD Ryzen 7 5700G APU Pictured and Tested

We have received various leaks and benchmarks for AMD's upcoming Ryzen 5000G processors, but these were all from engineering samples; we now have our first look at the retail 5700G. The AMD Ryzen 7 5700G carries the model number 100-000000263 attributed to it in earlier rumors and has been tested in CPU-Z, scoring 631 points in single-threaded performance and 6782 points in multi-threaded, while in Cinebench R20 it scored 6040 points. The integrated Vega graphics lack any official drivers, but GPU-Z reports a Vega 8 processor with 12 Streaming Multiprocessors and a base clock of 2 GHz. AMD has yet to officially announce any Ryzen 5000G processors, so it is unclear how far away their launch is and whether or not they will be made available to the DIY market.

Alphacool Introduces Four New Extremely Large Radiators

Alphacool introduces four new extremely large radiators. The first two radiators are for 180 mm or 200 mm fans and are 86 mm thick. The 200 mm and 400 mm NexXxoS Monsta radiators offer a huge cooling surface. Both radiators have 7x standard G1/4" ports. Six of the ports are for inlet/outlet, and the seventh port is located at the end chamber and can only be used for filling, draining, or bleeding the radiator. The fin density has been reduced from 16 to 12 FPI to allow the use of lower static pressure fans at lower RPMs.

The new Nova 1080 UT60 radiator is a thicker version of the Nova 1080 XT45 radiator. Its enormous cooling surface is comparable to three 360 mm UT60 radiators. This makes it easy to dissipate more than 1000 W of excess heat with extremely quiet to silent-spinning fans. That is equivalent to a 64-core AMD Threadripper of the latest generation plus at least two GeForce RTX 3090 graphics cards.
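As a rough sanity check of that claim (using the commonly cited 280 W TDP for a 64-core Threadripper 3990X and 350 W per GeForce RTX 3090, figures that are assumptions here rather than from Alphacool), the arithmetic works out like this:

```python
# Back-of-the-envelope check of the ~1000 W figure.
# TDP values are assumptions (publicly listed figures), not from the source.
threadripper_3990x_tdp_w = 280   # 64-core Threadripper, assumed TDP
rtx_3090_tdp_w = 350             # GeForce RTX 3090, assumed TDP

total_heat_w = threadripper_3990x_tdp_w + 2 * rtx_3090_tdp_w
print(f"Combined heat load: {total_heat_w} W")  # 980 W, roughly the quoted 1000 W
```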

Intel Xe-HPG DG2 GPU Engineering Sample Pictured

We have recently received pictures of an early engineering sample of Intel's upcoming DG2 GPU from YouTuber Moore's Law is Dead. The card features 512 Execution Units and will be the flagship model for Intel's upcoming Xe-HPG lineup, reportedly targeting performance between the RTX 3070 and RTX 3080. The final product is rumored to feature a base clock of 2.2 GHz along with 16 GB of GDDR6 memory on a 256-bit bus. The sample has a TDP of 275 W with 8-pin + 6-pin power connectors, up from the original target of 225 W - 250 W.

The report also notes that Intel is still deciding between three cooler designs, with the finished card potentially featuring a white shroud. Intel also appears to be working on a competitor to NVIDIA DLSS and AMD FidelityFX Super Resolution codenamed XeSS, which confirms support for hardware-accelerated raytracing and resolution upscaling tech. The card is unlikely to launch until Q4 2021, with wider availability in 2022; lower-end 128 EU and 256 EU cards will follow shortly afterward. The full report can be viewed below.

AMD Radeon Pro Workstation Card with Navi 21 GPU Pictured

When AMD introduced the RDNA 2 architecture and higher-end Navi 21 GPU SKUs, it was only a matter of time before the company launched these GPUs inside professional-grade graphics cards. Today, thanks to the Chiphell forums, we have pictures and some specifications of AMD's upcoming Radeon Pro workstation graphics card. Pictured below is a new RDNA 2 based design that features AMD's Navi 21 GLXL GPU SKU. The new GLXL SKU appears to be rather similar to the Navi 21 XL GPU found inside AMD's Radeon RX 6800 XT graphics card, judging by the number and arrangement of capacitors on the back of the card.

When it comes to memory, the upcoming Radeon Pro workstation card features 16 GB of VRAM, likely a variant of the GDDR6 found on gaming-oriented graphics cards from the RDNA 2 generation. As for cooling, the Radeon Pro graphics card has a blower-type cooler helping tame the Navi 21 GLXL GPU. Given that blower-type coolers are suited to situations with less airflow, the TDP of this card could be around or under 250 Watts. You can take a look at the card below, however, do note that it is an engineering sample and the final product may look a bit different.

AMD Patents Chiplet-based GPU Design With Active Cache Bridge

AMD on April 1st published a new patent application that seems to show the direction its chiplet GPU design is moving in. Before you say it: it's a patent application, so there's no possibility of an April Fool's joke in this sort of move. The new patent builds on AMD's previous one, which only featured a passive bridge connecting the different GPU chiplets and their processing resources. If you want a slightly deeper dive into what chiplets are and why they are important for the future of graphics (and computing in general), look to this article here on TPU.

The new design implements the active bridge connecting the chiplets as a last-level cache - think of it as L3, a unifying highway of data that is readily exposed to all the chiplets (in this patent, a three-chiplet design). It's essentially AMD's RDNA 2 Infinity Cache, though it's not only used as a cache here (and to good effect, if the Infinity Cache design on RDNA 2 and its performance uplift is anything to go by); it also serves as an active interconnect between the GPU chiplets that allows for the exchange and synchronization of information, whenever and however required. This also allows the registers and cache to be exposed as a unified block to developers, abstracting them from having to program for a system with a three-way cache design. There are of course yield benefits to be had here as well, as there are with AMD's Zen chiplet designs, plus the ability to scale up performance without monolithic designs that are heavy in power requirements. The integrated, active cache bridge would also certainly help in reducing latency and maintaining processing coherency across chiplets.
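As a purely conceptual sketch of that idea (the class names below are hypothetical and this is not AMD's implementation), the active bridge can be modeled as a single cache object shared by every chiplet, so data written by one chiplet is visible to the others through the same unified structure:

```python
# Conceptual model only: one last-level cache shared by all GPU chiplets.
# Names and structure are illustrative, not taken from the patent.
class ActiveBridgeCache:
    def __init__(self):
        self.lines = {}           # address -> data, visible to every chiplet

    def write(self, addr, data):
        self.lines[addr] = data

    def read(self, addr):
        return self.lines.get(addr)

class Chiplet:
    def __init__(self, name, bridge):
        self.name = name
        self.bridge = bridge      # every chiplet points at the same bridge/LLC

    def store(self, addr, data):
        self.bridge.write(addr, data)

    def load(self, addr):
        return self.bridge.read(addr)

bridge = ActiveBridgeCache()
chiplets = [Chiplet(f"chiplet{i}", bridge) for i in range(3)]  # three-chiplet design
chiplets[0].store(0x1000, "vertex data")
print(chiplets[2].load(0x1000))  # another chiplet sees the same unified cache
```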

Tianshu Zhixin Big Island GPU is a 37 TeraFLOP FP32 Computing Monster

Tianshu Zhixin, a Chinese startup dedicated to designing advanced processors for accelerating various kinds of tasks, has officially entered production of its latest GPGPU design. Called the "Big Island" GPU, it is the company's entry into the GPU market, currently dominated by AMD, NVIDIA, and soon Intel. So what is so special about Tianshu Zhixin's Big Island GPU? Firstly, it represents China's attempt at independence from outside processor suppliers, ensuring maximum security at all times. Secondly, it is an interesting feat to enter a market controlled by big players and attempt to grab a piece of that cake. To be successful, the GPU needs to be a great design.

And great it is, at least on paper. The specifications list Big Island as currently being manufactured on TSMC's 7 nm node using CoWoS packaging technology, enabling the die to feature over 24 billion transistors. When it comes to performance, the company claims that the GPU is capable of crunching 37 TeraFLOPs of single-precision FP32 data. At FP16/BF16 half-precision, the chip is capable of outputting 147 TeraFLOPs. When it comes to integer performance, it can achieve 317, 147, and 295 TOPS in INT32, INT16, and INT8 respectively. There is no data on double-precision floating-point performance, suggesting the chip is optimized for single-precision workloads. There is also 32 GB of HBM2 memory on board, with 1.2 TB/s of bandwidth. If we compare the chip to competing offerings like the NVIDIA A100 or AMD MI100, the new Big Island GPU outperforms both at the single-precision FP32 compute tasks for which it is designed.
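For a rough sense of that comparison, the sketch below lines the claimed figure up against the publicly listed peak FP32 numbers for the A100 and MI100; those two values are assumptions used for illustration and are not part of this report:

```python
# Quick FP32 throughput comparison. The A100 and MI100 numbers are the
# publicly listed peak FP32 figures and are assumptions, not from this report.
fp32_tflops = {
    "Tianshu Zhixin Big Island": 37.0,
    "NVIDIA A100": 19.5,
    "AMD Instinct MI100": 23.1,
}

for gpu, tflops in sorted(fp32_tflops.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{gpu:28s} {tflops:5.1f} TFLOPS FP32")
```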

Razer Could Introduce Company's First AMD-Powered Laptop

Razer, the maker of various gaming peripherals and gaming PCs/laptops, has been a long-time user of Intel CPUs in its laptop devices. However, that might be changing just about now. According to findings by @_rogame, there was a 3DMark benchmark run that featured an AMD Ryzen 5000 series "Cezanne" mobile processor. What is more interesting is the system it was running in. Called Razer PI411, this system would be Razer's first AMD-powered laptop. While we don't have many details about it, we do have some basic system configuration details. For starters, the laptop carries AMD's top-tier Ryzen 9 5900HX overclockable mobile processor. Carrying a configured TDP of 45 Watts (the maximum is 54 W), the system is likely not equipped with sufficient cooling for overclocking.

When it comes to the rest of the laptop, it features NVIDIA's GeForce RTX 3060 GPU, 16 GB of RAM, and 512 GB of storage. Given that the laptop is codenamed PI411, it could be a 14-inch model. However, we still don't know if it is ever going to hit consumer shelves. Since Razer has never offered an AMD CPU option, this could just be an engineering sample the company was experimenting with, so we have to wait to find out more.

GIGABYTE Launches GeForce RTX 3080 GAMING OC WATERFORCE WB 10G graphics card

GIGABYTE TECHNOLOGY Co. Ltd, a leading manufacturer of premium gaming hardware, today announced the GIGABYTE WATERFORCE graphics card - the GeForce RTX 3080 GAMING OC WATERFORCE WB 10G, powered by the NVIDIA GeForce RTX 3080 GPU. Whether users are looking to fulfill the demands of their high-end water-cooled systems, or to enjoy the benefits of a water-cooled GPU and CPU, the GIGABYTE GAMING OC WATERFORCE graphics card is the best choice. GIGABYTE provides easy-to-install and quality-guaranteed water-cooled graphics cards for desktop PCs.

With the continuous increase in CPU power consumption, high-end water-cooled motherboards in the market are becoming more and more popular. The easy-to-install and quality-guaranteed GIGABYTE GAMING OC WATERFORCE graphics card is an easy choice for maximizing your graphics power. Just invest a little more than the GAMING OC air-cooled version and you can enjoy the water cooling benefits for the GPU and CPU. The GAMING OC WATERFORCE graphics card is equipped with a top-of-the-line overclocked GPU. It provides an all-around cooling solution for all key components of the graphics card, keeping the GPU, VRAM and MOSFET running cool to ensure stable overclock operation and longer durability.

NVIDIA Enables GPU Passthrough for Virtual Machines on Consumer-Grade GeForce GPUs

Editor's note: This is not a part of April Fools.

NVIDIA has long separated professional users and regular gamers within the company's graphics card offerings. The GeForce lineup of GPUs represents the gaming-oriented option, and its main task is simply to play games, display graphics, and run some basic CUDA-accelerated software. However, what happens if you start experimenting with your GPU? For example, if you are running Linux and want to spin up a Windows virtual machine for gaming, the virtual machine would be stuck with your integrated GPU, as the GeForce card did not allow virtual GPU passthrough. For those purposes, NVIDIA has its professional graphics card lineups like Quadro and Tesla.

However, this specific feature is now arriving in the GeForce lineup as well. NVIDIA has announced that it is finally bringing basic virtual machine passthrough functionality to its gaming GPUs. While the feature represents a step in the right direction, it is still limited. For example, GeForce GPU passthrough supports only one virtual machine, and the SR-IOV feature is still not supported on GeForce. "If you want to enable multiple virtual machines to have direct access to a single GPU or want the GPU to be able to assign virtual functions to multiple virtual machines, you will need to use NVIDIA Tesla, Quadro, or RTX enterprise GPUs," says the NVIDIA FAQ. GeForce virtualization, which is still in beta, is supported on R465 and higher drivers.
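For readers who want to experiment with passthrough on Linux, a minimal sketch of a common first step is shown below: listing the IOMMU groups from sysfs to see which group the GeForce card lands in before handing it to a VM with VFIO/KVM. This is a generic Linux check, not part of NVIDIA's procedure, and it assumes the IOMMU is enabled in the firmware and kernel.

```python
# Minimal sketch: list IOMMU groups and the PCI devices in each, a common
# first step before passing a GPU through to a VM with VFIO/KVM.
# Assumes a Linux system booted with the IOMMU enabled (e.g. intel_iommu=on).
from pathlib import Path

IOMMU_ROOT = Path("/sys/kernel/iommu_groups")

def list_iommu_groups():
    if not IOMMU_ROOT.exists():
        print("No IOMMU groups found; is the IOMMU enabled in firmware/kernel?")
        return
    for group in sorted(IOMMU_ROOT.iterdir(), key=lambda p: int(p.name)):
        devices = [d.name for d in (group / "devices").iterdir()]
        print(f"IOMMU group {group.name}: {', '.join(devices)}")

if __name__ == "__main__":
    list_iommu_groups()
```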

MonsterLabo Plays Flight Simulator with The Beast, Achieves Fully-Fanless Gaming Experience

MonsterLabo, the maker of fanless PC cases designed for gaming with zero noise, has today tested the upcoming flagship offering in its case lineup. Called The Beast, the case is designed to handle high-end hardware with large TDPs and dissipate all that heat without any moving parts, using only big heatsinks and heat pipes to transfer the heat to the large heatsink area. In a completely fanless configuration, the case can absorb and dissipate a CPU TDP of 150 Watts and a GPU TDP of 250 Watts. However, when equipped with two 140 mm fans running below 500 RPM, it can accommodate a 250 W CPU and a 320 W GPU. MonsterLabo has tested the fully fanless configuration, equipped with an AMD Ryzen 7 3800XT processor paired with NVIDIA's latest GeForce RTX 3080 Ampere graphics card.

There were no fans present in the system to help move the heat away, and the PC was stress-tested using Microsoft's Flight Simulator. The company has posted a chart of CPU and GPU temperatures over time, where we see that the GPU managed to hit about 75 degrees Celsius at one point. The CPU remained a bit cooler, with the CPU package hitting just above the 70-degree mark. Overall, the case is more than capable of cooling the hardware it was equipped with. Adding two slow-spinning fans would bring the temperatures even lower; however, that would no longer be a fanless system. MonsterLabo's The Beast is expected to ship in Q3 of this year, when reviewers will get their hands on it and test it for themselves. You can watch the videos in MonsterLabo's blog post here.

NVIDIA Repurposing Scrapped RTX 3080 Ti GA102-250 GPUs to GA102-300 for RTX 3090

The NVIDIA RTX 3080 Ti has experienced numerous delays, with the card's launch most recently being pushed to mid-May. The unreleased RTX 3080 Ti has gone through various internal revisions, with the card expected to use the GA102-250 GPU until those plans were scrapped in late January. The RTX 3080 Ti is now expected to feature the GA102-225 GPU instead when it finally releases. NVIDIA, having already produced the required processors, is now repurposing the GA102-250 GPUs slated for the RTX 3080 Ti for use in RTX 3090 Founders Edition cards. This switch makes sense, as the GA102-250 was rumored to feature the same number of cores as the RTX 3090, just with a smaller memory size. NVIDIA appears to have enabled the full 384-bit memory bus and has laser-engraved the chips to show their change to GA102-300s.

Qualcomm Extends the Leadership of its 7-Series with the Snapdragon 780G 5G Mobile Platform

Qualcomm Technologies, Inc. announced the latest addition to its 7-series portfolio, the Qualcomm Snapdragon 780G 5G Mobile Platform. Snapdragon 780G is designed to deliver powerful AI performance and brilliant camera capture backed by the Qualcomm Spectra 570 triple ISP and 6th generation Qualcomm AI Engine, allowing users to capture, enhance, and share their favorite moments seamlessly. This platform enables a selection of premium-tier features for the first time in the 7-series, making next generation experiences more broadly accessible.

"Since introducing the Snapdragon 7-series three years ago, more than 350 devices have launched based on 7-series mobile platforms. Today, we are continuing this momentum by introducing the Snapdragon 780G 5G Mobile Platform," said Kedar Kondap, vice president, product management, Qualcomm Technologies, Inc. "Snapdragon 780G was designed to bring in-demand, premium experiences to more users around the world."

Next-Generation Nintendo Switch SoC to be Powered by NVIDIA's Ada Lovelace GPU Architecture

Nintendo's Switch console is one of the most successful consoles the Japanese company has ever made. It has sold millions of units and has received great feedback from the gaming community. However, as the hardware inside the console becomes outdated, the company is thinking about launching a new revision of the console with the latest hardware and technologies. Today, we got ahold of information about the graphics side of Nintendo's upcoming console. Powered by an NVIDIA Tegra SoC, it will incorporate as-yet-unknown Arm-based CPU cores. The latest rumors suggest that the CPU will be paired with NVIDIA's Ada Lovelace GPU architecture. According to @kopite7kimi, a known hardware leaker who simply replied to VideoCardz's tweet with "Ada", we are going to see the Ada Lovelace GPU architecture appear in the new SoC. Additionally, the new Switch SoC will have hardware-accelerated NVIDIA Deep Learning Super Sampling (DLSS) and 4K output.

Raja Koduri Teases "Petaflops in Your Palm" Intel Xe-HPC Ponte Vecchio GPU

Raja Koduri of Intel has today posted an interesting video on his Twitter account. Showing one of the greatest engineering marvels Intel has ever created, Mr. Koduri teased what is to come when the company launches the Xe-HPC Ponte Vecchio graphics card designed for high-performance computing workloads. Showcased today was the "petaflops in your palm" chip, designed to run AI workloads with a petaflop of computing power. Featuring over 100 billion transistors, the chip uses as many as 47 tiles combined with the most advanced packaging technology Intel has ever created. They call them "magical tiles", and they bring logic, memory, and I/O controllers, all built on different semiconductor nodes.

Mr. Koduri also pointed out that the chip was born only two years after the concept, which is an impressive achievement given that research on new silicon usually takes years. The chip will be the heart of many systems that require massive computational power, especially AI systems. Claimed to be capable of performing a quadrillion floating-point operations per second (one petaflop), the chip will be a true monster. So far we don't know other details, like the floating-point precision at which it reaches one petaflop or the total power consumption of those 47 tiles, so we have to wait for more information.

Capcom Announces Resident Evil Village PC Requirements

Capcom, the Japanese video game maker, has today announced the specification requirements for its upcoming Resident Evil Village PC game, needed to play the game at certain resolutions and graphics presets. Starting with the minimum settings, Capcom targets 1080p 60 FPS gaming. To achieve that you need at least an Intel Core i5-7500 or AMD Ryzen 3 1200 processor paired with 8 GB of RAM. The minimum specification also requires a DirectX 12 capable GPU with 4 GB of VRAM, such as the NVIDIA GeForce GTX 1050 Ti or AMD Radeon RX 560. The company notes that with this configuration, the framerate may drop below 60 FPS during heavy loads. If you want to use raytracing, which is now also present in the game engine, you must step up to at least an NVIDIA GeForce RTX 2060 or AMD Radeon RX 6700 XT.

The recommended specification of course requires much beefier hardware than the minimum. If you want a steady 1080p 60 FPS experience without frame drops, Capcom recommends an Intel Core i7-8700 or AMD Ryzen 5 3600 processor, paired with 16 GB of RAM and a GPU like an NVIDIA GeForce GTX 1070 or AMD Radeon RX 5700. However, if you want the raytracing feature, you need a better GPU. To achieve 4K resolution at 60 FPS with raytracing turned on, the GPU needs a bump to at least an NVIDIA GeForce RTX 3070 or AMD Radeon RX 6900 XT graphics card. You can check out the game requirements in greater detail below.

NVIDIA GeForce RTX 3060 Anti-Mining Feature Bypassed by HDMI Dummy Plug

When NVIDIA introduced its GeForce RTX 3060 graphics card, the company also introduced a new feature to go along with it. As the card is well priced, it positions itself as a very good value option for mining. Given that NVIDIA now has separate products for mining, it naturally would like to limit the number of gaming cards sold to miners. To achieve that, the company introduced an anti-mining algorithm that is essentially a handshake between the driver, the RTX 3060 silicon, and the GPU VBIOS. This handshake checks those three components to detect whether mining is going on, so it can limit the performance of the card.

However, even such a thing can be bypassed. Usually, miners put their GPUs in rigs where most of the GPUs don't use their video outputs, and the GPU can detect whether or not it is connected to a monitor, triggering the anti-mining algorithm. A user on the Quasar Zone forums has managed to bypass the restriction by simply installing an HDMI dummy plug. With the dummy plug, the card thinks it is connected to a monitor and thus runs normally. Using this workaround, the user was able to set up a four-way GeForce RTX 3060 mining rig with 48 MH/s of hashing power per GPU, for a total hash rate of 192 MH/s. You can buy HDMI dummy plugs for as low as $5.99 on Amazon or at any other store.

TrendForce: Consumer DRAM Pricing to Increase 20% in 2Q2021 Due to Increased Demand

According to TrendForce, we technology enthusiasts will have other rising prices to contend with throughout 2021, adding to the already ballooning prices of discrete GPUs and latest-generation CPUs from the leading manufacturers. The increased demand due to the COVID pandemic stretched the usual stocks to their limits, and due to the tremendous, multiple-month lead times between semiconductor orders and their fulfillment by manufacturers, the entire supply infrastructure was spread too thin for the increased worldwide needs. This leads to increased component pricing, which in turn leads to higher ASPs for DRAM. Adding to that equation, of course, is the fact that companies are now more careful and are placing bigger orders so as to be able to weather these sudden demand changes.

TrendForce says that DRAM pricing has already increased 3-8% in 1Q2021, and that market adjustments will lead to an additional increase of somewhere between 13% and 18% for contract pricing. Server DRAM pricing is projected to increase by 20%; graphics DRAM is expected to increase 10-15% in the same time span, giving us that strange stomach churn that comes from having to expect even further increases in graphics card end-user pricing; and overall DRAM pricing for customers is expected to increase by 20% due to the intensifying shortages. What a time to be a system builder.
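To put those stacked percentages in perspective, here is a small sketch compounding a 1Q rise with the projected 2Q contract-price rise; the baseline price and the exact points picked from within each range are assumptions for illustration only:

```python
# Illustrative compounding of the quoted quarterly DRAM price increases.
# The baseline price and the points chosen inside each range are assumptions.
start_price = 100.0          # arbitrary baseline contract price
q1_increase = 0.08           # upper end of the reported 3-8% rise in 1Q2021
q2_increase = 0.18           # upper end of the projected 13-18% rise in 2Q2021

after_q1 = start_price * (1 + q1_increase)
after_q2 = after_q1 * (1 + q2_increase)
print(f"Baseline 100 -> {after_q1:.1f} after 1Q -> {after_q2:.1f} after 2Q")
# Compounded, the two quarters add up to roughly a 27% increase over the baseline.
```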

GIGABYTE Launches Radeon RX 6700 XT AORUS Elite Graphics Card

GIGABYTE TECHNOLOGY Co. Ltd, a leading manufacturer of premium gaming hardware, today announced a new AMD Radeon RX 6700 XT graphics card - the AORUS Radeon RX 6700 XT ELITE 12G, powered by the AMD RDNA 2 gaming architecture. Inheriting the last-gen three-ring RGB design and light effect, the light source guides light internally to create a brighter and more natural RGB light effect, thereby achieving a wonderful balance between cooling and RGB lighting.

AORUS Radeon RX 6700 XT ELITE not only keeps the design spirit of the last-gen, but also has a distinct product recognition in the hardware industry, perfectly expressing the art of gaming, and once again remixes the classic style. Furthermore, gamers have more DOF (degree of freedom) to mix unique RGB lights, customizing with up to 8 kinds of color patterns on the "Dazzling" light effect via RGB Fusion 2.0 software.

AMD's Next-Generation Van Gogh APU Shows Up with Quad-Channel DDR5 Memory Support

AMD is slowly preparing to launch its next-generation client-oriented accelerated processing unit (APU), which is AMD's way of denoting a CPU+GPU combination. The future design is codenamed Van Gogh, continuing AMD's use of historic names for its products. The APU is believed to be a design similar to the one found in the SoCs of the latest PlayStation 5 and Xbox Series X/S consoles. That means Zen 2 cores are present alongside the latest RDNA 2 graphics, side by side in the same processor. Today, one of AMD's engineers posted a boot log of a quad-core Van Gogh APU engineering sample, showing some very interesting information.

The boot log contains information about the memory type used by the APU. In the logs, we see a part that says "[drm] RAM width 256bits DDR5", which means that the APU has a DDR5 memory interface that is 256 bits wide, representing a quad-channel memory configuration. Such a wide memory bus is typically used for applications that need lots of bandwidth. Given that Van Gogh uses RDNA 2 graphics, the company needs sufficient memory bandwidth to keep the GPU from starving for data. While we don't have much more information about it, we can expect to hear greater details soon.
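A small sketch of the reasoning behind the quad-channel conclusion: parse a dmesg-style line like the one above and divide the bus width by the conventional 64 bits per channel (that per-channel width is our assumption; DDR5 can also be counted in 32-bit sub-channels, which would double the count):

```python
import re

# Sketch: derive a channel count from a "[drm] RAM width 256bits DDR5" style line.
# Assumes the conventional 64-bit-per-channel accounting.
BITS_PER_CHANNEL = 64

def channels_from_boot_log(line):
    match = re.search(r"RAM width (\d+)bits", line)
    if not match:
        return None
    return int(match.group(1)) // BITS_PER_CHANNEL

log_line = "[drm] RAM width 256bits DDR5"
print(channels_from_boot_log(log_line))  # -> 4, i.e. a quad-channel configuration
```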

First NVIDIA Palit CMP 30HX Mining GPU Available at a Tentative $723

NVIDIA's recently announced CMP (Cryptocurrency Mining Processor) products seem to already be hitting the market - at least in some parts of the world. Microless, a retailer in Dubai, listed the cryptocurrency-geared graphics card for $723, which buys the equivalent of some 26 MH/s, as per NVIDIA, before any optimizations have been applied at the clock/voltage/BIOS level, as more serious miners will undoubtedly do.

The CMP 30HX is a re-released TU116 chip (Turing, sans RT hardware), which powered the likes of the GeForce GTX 1660 Super in NVIDIA's previous generation of graphics cards. The card features a 1,530 MHz base clock and a 1,785 MHz boost clock, alongside 6 GB of GDDR6 memory that clocks in at 14 Gbps (which could actually soon stop being enough to hold the entire workload completely in memory). Leveraging a 192-bit memory interface, the graphics card supplies a memory bandwidth of up to 336 GB/s. It's also a "headless" GPU, meaning that it has no display outputs, which would only add cost to such a specifically-geared product. It's unclear how representative the Microless pricing actually is of NVIDIA's MSRP for the 30HX products, but considering current graphics card pricing worldwide, it seems to be in line with GeForce offerings capable of achieving the same hash rates. Its ability to draw demand from miners away from mainstream GeForce offerings therefore depends solely on the prices set by NVIDIA and practiced by retailers.
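The 336 GB/s figure follows directly from the bus width and the per-pin data rate; a quick sketch of that arithmetic:

```python
# Memory bandwidth arithmetic for the CMP 30HX figures quoted above.
bus_width_bits = 192        # memory interface width
data_rate_gbps = 14         # GDDR6 effective data rate per pin, in Gbps

bandwidth_gbs = bus_width_bits * data_rate_gbps / 8   # bits -> bytes
print(f"Peak memory bandwidth: {bandwidth_gbs:.0f} GB/s")  # -> 336 GB/s
```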