News Posts matching #graphics cards


NVIDIA GeForce RTX 3070 and RTX 3070 Ti Rumored Specifications Appear

NVIDIA is slowly preparing to launch its next-generation Ampere graphics cards for consumers, after debuting the A100 GPU for data-center applications. The Ampere lineup is attracting more leaks and speculation every day, so we can assume that the launch is near. In the most recent round of rumors, we have some new information about the GPU SKU and memory of the upcoming GeForce RTX 3070 and RTX 3070 Ti. Thanks to Twitter user kopite7kimi, whose past speculation has repeatedly been confirmed, we have information that the GeForce RTX 3070 and RTX 3070 Ti will use the GA104 GPU, paired with GDDR6 memory. The catch is that the Ti version will feature new GDDR6X memory, which is faster and can reportedly reach speeds of up to 21 Gbps.

The regular RTX 3070 is supposed to have 2944 CUDA cores on the GA104-400 die, while its bigger brother, the RTX 3070 Ti, is designed with 3072 CUDA cores on the GA104-300 die. Paired with the new technologies the Ampere architecture brings and the new GDDR6X memory, the GPUs are set to be very good performers. It is estimated that both cards will reach a memory bandwidth of 512 GB/s. So far that is all we have. NVIDIA is reportedly in the Design Validation Test (DVT) phase with these cards and is preparing for mass production in August. The official launch should follow before the end of this year, with some speculation pointing to September.
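The 512 GB/s estimate lines up with simple peak-bandwidth arithmetic: bus width in bytes multiplied by the per-pin data rate. A minimal sketch of that calculation (the 256-bit bus is an assumption carried over from the GA104 rumors; 16 Gbps is a typical GDDR6 speed, and 21 Gbps is the rumored GDDR6X ceiling):

```python
def memory_bandwidth_gbs(bus_width_bits: int, data_rate_gbps: float) -> float:
    """Peak memory bandwidth in GB/s: (bus width in bytes) x (per-pin data rate)."""
    return (bus_width_bits / 8) * data_rate_gbps

print(memory_bandwidth_gbs(256, 16))  # 512.0 GB/s -- matches the rumored figure
print(memory_bandwidth_gbs(256, 21))  # 672.0 GB/s if GDDR6X really hits 21 Gbps
```

Note that the quoted 512 GB/s corresponds to 16 Gbps GDDR6 on a 256-bit bus; a 21 Gbps GDDR6X configuration would land considerably higher.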

Eurocom launches MXM3 to PCIe Adapter for GPU Developers

Embedded, GPU-accelerated computing is the use of a graphics processing unit (GPU) combined with a central processing unit (CPU) to facilitate advanced computing processes such as Artificial Intelligence of Things (AIoT), Deep Learning, Prototyping/Pre-concept design, CUDA development and medical Hi-Res imaging, to name a few.

Embedded, GPU-accelerated computing is gaining popularity as its potential is realized in high-tech labs and R&D facilities around the world. GPU-accelerated computing plays a huge role in accelerating applications within these bleeding-edge industries. Whether it is designing self-driving cars, rendering ultra-HD scenes with millions of polygons (4K and above), or working with advanced CAD models for GIS mapping, a multi-GPU system significantly increases the performance and capabilities available to advanced developers.
Through embedded GPU-accelerated technology, processing and computing times can be cut to a fraction of those achievable on traditional, less sophisticated computing infrastructure. The combination of high-performance computing and lower power consumption enables deeper and more extensive learning in neural networks.

Matrox Shipping QuadHead2Go Q155 Card to Accelerate Artistic and Large-Scale Video Wall Development

Matrox is pleased to announce that the Matrox QuadHead2Go Q155 multi-monitor controller card is now shipping. Powering up to four Full HD displays from a single HDMI video source—including HDCP devices—the QuadHead2Go Q155 card provides OEMs, system integrators, and AV installers with an innovative tool to quickly and easily add more displays to video walls of any type and size. Designed to integrate seamlessly into Matrox-based or third-party platforms, QuadHead2Go is ideal for a wide variety of video wall applications, including digital signage, control room, broadcast, and more.

QuadHead2Go Q155 controllers capture a single video signal—of up to 4Kp60 and 8Kx8K—for display across up to four screens, at resolutions up to 1920x1200 per output. The input content could be from any HDMI source—including digital signage players, media players, laptops, and more—while displays can be arranged in a variety of artistic configurations or classic rectangular 2x2, 2x1, 3x1, 4x1, 1x2, 1x3, or 1x4 setups. Pair multiple QuadHead2Go units to add more displays and/or combine with Matrox D-Series quad-4K graphics cards to power up to an astonishing 64 1920x1080p60 monitors from one system.

Raijintek Unveils MORPHEUS 8057 VGA Air Cooler

Raijintek today unveiled the MORPHEUS 8057, a large graphics card air-cooler. The cooler consists of a gargantuan aluminium fin-stack heatsink that's ventilated by a pair of 120 mm fans (not included). The heatsink features a large mirror-finish copper base, from which six 6 mm thick heat pipes emerge in either direction of the base (Raijintek for some reason counts this as 12 heat pipes), conveying heat to a large fin-stack with 112 aluminium fins that have ceramic coating.

The MORPHEUS 8057 heatsink measures 254 mm x 100 mm x 44 mm (WxDxH) and weighs 515 g. The package includes 12 secondary heatsinks of various sizes for memory and VRM components, along with thermal pads, retention clips, and some thermal paste. Supported graphics cards include AMD "Navi 10" based cards (RX 5700 series and RX 5600 series), as well as NVIDIA's RTX 2080 Ti, RTX 2080/SUPER, RTX 2070/SUPER, and RTX 2060/SUPER. The company didn't reveal pricing.

ELSA Announces GeForce RTX 2070 Super ERAZOR X & Super S.A.C.

ELSA has announced two new GeForce RTX 2070 Super graphics cards: the Super ERAZOR X and the Super S.A.C. Both cards feature a dual-fan design with a backplate. The ELSA GeForce RTX 2070 Super ERAZOR X features a boost clock of 1815 MHz and four 8 mm diameter heat pipes cooled by dual 90 mm "SAC 3" fans. The card comes with an ELSA logo backplate and measures 267 mm (length) x 114 mm (height) x 50 mm (thickness); the 2.5-slot size limits SLI options. The card will be available in Japan from June 26th at a price of 81,200 yen (760 USD).

The ELSA GeForce RTX 2070 Super S.A.C. is a cheaper option with a boost clock of 1,770 MHz and four 6 mm diameter heat pipes cooled by dual 90 mm "SAC 2" fans. The card comes with a plain backplate and measures 271 mm (length) x 115 mm (height) x 42 mm (thickness). The card will be available in Japan from June 26th at a price of 71,400 yen (670 USD).

NVIDIA GeForce RTX 3090 and RTX 3080 Production Timeline Revealed

NVIDIA's next-generation GeForce "Ampere" RTX 3000 series graphics cards are heading for a September reveal, with availability shortly after. Much of the news cycle over the past couple of weeks has revolved around alleged leaks of the cards' cooling solution, which provide insight into what the finished product could look like, with some doubting the veracity of the leaked pictures given the September launch. Igor's Lab did some digging into the production timeline of these cards, and the leaks seem to align perfectly with it.

The chip design, prototyping, tape-out, and testing of the "Ampere" IP completed before the mass-production timeline kicks off. That timeline begins in April/May, with NVIDIA's OEM partners and other suppliers finalizing a bill of materials (BOM). In June, the products go through the EVT (engineering validation test) and DVT (design validation test) stages, where NVIDIA has the opportunity to approve, reject, or change the design of the product and finalize it. By July, there are working samples of the finished products for NVIDIA and its industry partners to validate. This is also when regulators such as the FCC and CE conduct EMI tests. The production validation test (PVT), or proofing of the production line, occurs in late July/early August, which is also around when NVIDIA releases the final BIOS to the OEMs. Mass production finally commences in August, and the march to distributors rolls on. The media event announcing the product and press reviews follow in September, with market availability shortly thereafter.

AMD Declares That The Era of 4GB Graphics Cards is Over

AMD has declared that the era of 4 GB graphics cards is over and that users should "Game Beyond 4 GB". AMD tested its 4 GB and 8 GB RX 5500 XT cards to see how much of a difference VRAM capacity makes on gaming performance. The cards were tested in a variety of games at 1080p high/ultra settings on a system with a Ryzen 5 3600X and 16 GB of 3200 MHz RAM; on average, the 8 GB model performed ~19% better than its 4 GB counterpart. With next-gen consoles featuring 16 GB of combined memory and developers showing no sign of slowing down, it will be interesting to see what happens.

NVIDIA Investors Claw Back at Company, Claiming $1 Billion Mining GPU Revenue Hidden Away in the Gaming Division

NVIDIA investors have recently filed a suit against the company, claiming that NVIDIA wrongfully reported its revenue split between divisions. The main point of contention is that NVIDIA allegedly obfuscated the total impact of the crypto market boom (and subsequent bust), painting a picture of the company's outlook that differed from reality by making demand for the Gaming division look higher than it actually was, and thus exposing investors to a different state of affairs and revenue trajectory than they expected. The investors say that NVIDIA knew that a significant number of its graphics cards sold between 2017 and 2018 were being bought up solely for crypto mining, and that the company even marketed GPUs specifically for that purpose.

The crypto mining boom had miners gobbling up all the NVIDIA and AMD graphics cards they could, with both companies seemingly increasing production to meet the bubble's demand. However, due to the economics of crypto mining, it was clear that any profits derived from this bubble would ultimately open the door to an explosive logistics problem as miners offloaded their graphics cards onto the second-hand market, which could ultimately harm NVIDIA's books. Of course, one can look at NVIDIA's revenue categories at the time to see that crypto would hardly fit neatly into any of the Gaming, Professional Visualization, Datacenter, Auto, or OEM & IP divisions.

Bitspower Intros 1S VGA Water Block for NVIDIA GeForce RTX 20-series

Bitspower today introduced its 1S full-coverage graphics card water block for a wide range of NVIDIA GeForce RTX 20-series custom-design graphics cards, bearing the model number BP-VG2080RD1S. The block uses nickel-plated copper as its primary material, coupled with a clear acrylic top. It supports a wide range of RTX 2080 Ti, RTX 2080/2080S, RTX 2070/2070S, and RTX 2060/2060S graphics cards. With a thickness of 15.8 mm, a length of 244.5 mm, and a height of 125 mm, the block is suitable for single-slot applications. Available now, it is priced around USD 130.

MSI Announces new GeForce RTX 2080 Ti GAMING Z TRIO Featuring Mighty Fast Memory

As the world's most popular GAMING graphics card vendor, MSI is proud to unleash the most powerful Turing-based gaming graphics card onto the market. The famous GAMING TRIO series has reached its ultimate form in custom GeForce graphics cards to push the limits of performance once more. Making sure gamers will bask in glorious amounts of frames, MSI has fitted the GeForce RTX 2080 Ti with extremely fast 16 Gbps memory. Outperforming the MSI GeForce RTX 2080 Ti GAMING X TRIO by over 5% out of the box, the industry-leading MSI GeForce RTX 2080 Ti GAMING Z TRIO sets a new bar for extreme gaming.

Rocking the mighty TRI-FROZR design with supreme cooling power allows the MSI GeForce RTX 2080 Ti GAMING Z TRIO to deliver top notch in-game and thermal performance. Featuring MSI's patented TORX 3.0 fans, the unique fan design combines the advantages of two differently shaped fan blades to generate huge amounts of concentrated airflow while remaining virtually silent. The Wave-Curved 2 fin design helps to dissipate heat from the heatsink quicker than ever to keep you cool in the heat of battle.

Micron to Launch HBM2 Memory This Year

Micron Technology, in its latest earnings report, announced that it will start shipping High-Bandwidth Memory 2 (HBM2) DRAM. Used in high-performance graphics cards, server processors, and all kinds of other processors, HBM2 is an in-demand and relatively expensive solution; with Micron entering manufacturing, prices and the market should adjust to the new player. Previously, only SK Hynix and Samsung manufactured HBM2 DRAM; Micron will now join them, and the three will again form a "big three" that dominates the memory market.

Until now, Micron had pinned its hopes on its proprietary Hybrid Memory Cube (HMC) DRAM type, which didn't gain much traction with customers and never really took off. Only a few rare products used it, such as the Fujitsu SPARC64 XIfx CPU in the Fujitsu PRIMEHPC FX100 supercomputer introduced in 2015. Micron announced the suspension of HMC work in 2018 and decided to devote its efforts to GDDR6 and HBM development. As a result, the company will launch HBM2 DRAM products sometime this year.

AMD RDNA 2 GPUs to Support the DirectX 12 Ultimate API

AMD today announced in the form of a blog post that its upcoming graphics cards based on RDNA 2 architecture will feature support for Microsoft's latest DirectX 12 Ultimate API. "With this architecture powering both the next generation of AMD Radeon graphics cards and the forthcoming Xbox Series X gaming console, we've been working very closely with Microsoft to help move gaming graphics to a new level of photorealism and smoothness thanks to the four key DirectX 12 Ultimate graphics features -- DirectX Raytracing (DXR), Variable Rate Shading (VRS), Mesh Shaders, and Sampler Feedback." - said AMD in the blog.

Reportedly, Microsoft and AMD have worked closely to enable this feature set and provide the best possible support for RDNA 2 based hardware, meaning that future GPUs and consoles are getting the best possible integration of the new API standard.

Microsoft DirectX 12 Ultimate: Why it Helps Gamers Pick Future Proof Graphics Cards

Microsoft on Thursday released the DirectX 12 Ultimate logo. This is not a new API with new features, but rather a differentiator for graphics cards and game consoles that support four key modern features of DirectX 12. It helps consumers recognize the newer and upcoming GPUs and tell them apart from older DirectX 12 capable GPUs released in the mid-2010s. For a GPU to be eligible for the DirectX 12 Ultimate logo, it must feature hardware-accelerated ray tracing via the DXR API, and must support Mesh Shaders, Variable Rate Shading (VRS), and Sampler Feedback (all four). The upcoming Xbox Series X console features this logo by default. Microsoft made it absolutely clear that the DirectX 12 Ultimate logo isn't meant as a compatibility barrier, and that these games will work on older hardware, too.

As it stands, the "Navi"-based Radeon RX 5000 series is "obsolete" in this sense, just like the Turing-based cards of the GeForce GTX 16-series. At this time, the only shipping products that qualify for the logo are NVIDIA's GeForce RTX 20-series and the TITAN RTX, as they support all of the above features.

NVIDIA GeForce RTX GPUs to Support the DirectX 12 Ultimate API

NVIDIA graphics cards, starting from the current-generation GeForce RTX "Turing" lineup, will support the upcoming DirectX 12 Ultimate API. Thanks to a slide obtained by our friends over at VideoCardz, we have some information about the upcoming iteration of Microsoft's DirectX 12 API. The new revision, called "DirectX 12 Ultimate", appears to bring some enhancements to the standard DirectX 12 API. From the leaked slide we can see the improvements coming in the form of a few additions.

The GeForce RTX lineup will support the updated version of the API with features such as ray tracing, variable-rate shading, mesh shaders, and sampler feedback. While we do not know why Microsoft decided to call this the "Ultimate" version, the name is possibly meant to convey clearer information about which features are supported by the hardware. The leaked slide also mentions consoles, so the API is coming to that platform as well.

Jon Peddie Research: AMD's Shipments in Q4'19 Increased 22.6%, Overall Shipments Up QoQ but Down YoY

Jon Peddie Research has released its report on the overall market outlook for GPU shipments in Q4 2019, and the news is great for AMD. Thanks to the launch of more affordable Navi-based 7 nm graphics cards, the company achieved 22.6% growth in shipment volume for the last quarter of 2019 compared to Q3 of the same year. This 22.6% volume increase is significant, and miles ahead of competitors NVIDIA (whose shipments decreased by 1.9%) and Intel (a 0.2% increase), raising AMD's overall market share by 3%. AMD now commands 19% of the overall GPU market, surpassing NVIDIA (at 18%), with both dwarfed by Intel (at a commanding 63% share). It's important to note that these numbers include both integrated and discrete GPUs, and AMD's figures are assisted by its mobile processor and APU sales, just as iGPUs make up all of Intel's numbers.

Those numbers are skewed, of course, when we look solely at the discrete GPU market, where NVIDIA commands a huge 73% chunk against AMD's (by comparison) paltry 27%. All in all, Jon Peddie Research reports that the overall PC market increased by 1.99% quarter-over-quarter and 3.54% year-over-year, a good performance for these "little" chips.

MSI Lists GeForce GTX 1650 with GDDR6 Memory

According to VideoCardz, which spotted the latest EEC listing, MSI has prepared a new variant of NVIDIA's GeForce GTX 1650 graphics card. The listing contains a total of eight new GeForce GTX 1650 graphics cards that differ from the regular GTX 1650: they feature faster GDDR6 memory, adding to the already confusing stacking of these products.

The GTX 1650 SUPER model features GDDR6 memory, while the regular GTX 1650 has shipped with GDDR5 memory until now. With a product that sits between the regular GTX 1650 and the SUPER model, MSI is creating a hybrid card: it has the same specifications as the GTX 1650, with memory type and speed as the only differentiators, so the GDDR6-equipped card will be faster.

Palit Releases GTX 1650 KalmX - a Passively Cooled, 0dB GPU

Palit has today released the latest addition to its KalmX series of passive graphics cards: the GTX 1650 KalmX. This is an ITX-sized, 178 mm long graphics card designed for zero-noise, passively cooled builds where silence is the primary factor. With a heatsink consisting of two heat pipes and many fins, the cooling solution should be capable of dissipating the 75 W TDP of the GTX 1650 GPU. The cold plate of the heatsink covers the GPU and VRMs to ensure safe operation. Being based on the reference design, this card features default speeds of 1485 MHz base and 1665 MHz boost. For I/O, Palit opted to include three ports: one HDMI 2.0b and two DisplayPort 1.4a. All the power needed is provided by the PCIe slot, so there are no external power connectors.

ASRock Revenue Soars Due to the Ryzen Effect

ASRock, a Taiwanese manufacturer of motherboards and graphics cards, had an amazing 2019 in terms of revenue. Surging around 31% year-over-year (YoY) and delivering revenue of 443.16 million US dollars, ASRock expects even better results in 2020. The company attributes this notable increase to the recent success of AMD's Ryzen family of processors and strong demand for the platform surrounding it. Having adopted AMD Ryzen processors in its Mini-PCs, motherboards, and server boards, ASRock sees strong demand for these products that should carry over into 2020.

Another reason for strong profits and even better chapters ahead are the developments in the US and European markets. Having previously focused on the Asian market and marketed its products there, ASRock changed strategy and started advertising its brand more in other regions such as the US and Europe. This new strategy is progressing well and is expected to continue in the coming years. Additionally, it is worth noting that ASRock's graphics card sales turned profitable in 2019, so that part of the business is now contributing to profits as well.

NVIDIA's Next-Generation "Ampere" GPUs Could Have 18 TeraFLOPs of Compute Performance

NVIDIA will soon launch its next-generation lineup of graphics cards based on the new and improved "Ampere" architecture. With the first Tesla server cards of the Ampere lineup going inside Indiana University's Big Red 200 supercomputer, we now have some potential specifications and information about their compute performance. Thanks to Twitter user @dylan522p, who did some math on the potential compute performance of the Ampere GPUs based on NextPlatform's report, we learn that Ampere could feature up to 18 TeraFLOPs of FP64 compute performance.

The Big Red 200 supercomputer, based on Cray's Shasta building block, is being deployed in two phases. The first phase is the deployment of 672 dual-socket nodes powered by AMD EPYC 7742 "Rome" processors, which provide 3.15 PetaFLOPs of combined FP64 performance. With a total of 8 PetaFLOPs planned for Big Red 200, that leaves just under 5 PetaFLOPs to come from the GPU-accelerated phase, where each node contains one next-generation 64-core AMD "Milan" CPU and four NVIDIA "Ampere" GPUs. If we assume that Milan boosts FP64 performance by 25% compared to Rome, the math shows that each of the 256 GPUs delivered in the second phase will feature up to 18 TeraFLOPs of FP64 compute performance. Even if "Milan" doubles the FP64 compute power of "Rome", that still leaves around 17.6 TeraFLOPs of FP64 performance per GPU.

Matrox to Develop Embedded Graphics Cards with NVIDIA

Matrox today announced a collaboration with NVIDIA for the development of a new range of multi-display embedded graphics cards purpose-built for high-density video walls. Leveraging NVIDIA's industry-renowned GPU technology, Matrox will design graphics innovations powered by a custom-built Quadro embedded GPU to accelerate graphics-intensive video wall applications in commercial and 24/7 critical environments.

"Matrox is thrilled to work alongside NVIDIA, the leader in GPU computing, to bring to market a new standard of high-density video walls," said David Chiappini, executive vice president of research and development at Matrox. "This collaboration is yet another example of our commitment to expanding our video wall portfolio while customers continue to benefit from our graphics expertise, world-class engineering, dedicated technical support, and long product life cycles."

Rumor: NVIDIA's Next Generation GeForce RTX 3080 and RTX 3070 "Ampere" Graphics Cards Detailed

NVIDIA's next generation of graphics cards, codenamed Ampere, is set to arrive sometime this year, presumably around GTC 2020, which takes place on March 22nd. Before NVIDIA CEO Jensen Huang officially reveals the specifications of these new GPUs, we have the latest round of rumors coming our way. According to VideoCardz, which cites multiple sources, the die configurations of the upcoming GeForce RTX 3070 and RTX 3080 have been detailed. Built on Samsung's latest 7 nm manufacturing process, this generation of NVIDIA GPUs is said to offer a big improvement over the previous generation.

For starters, the two dies that have appeared carry the codenames GA103 and GA104, standing for the RTX 3080 and RTX 3070 respectively. Perhaps the biggest surprise is the Streaming Multiprocessor (SM) count. The smaller GA104 die has as many as 48 SMs, resulting in 3072 CUDA cores, while the bigger, oddly named GA103 die has as many as 60 SMs, for 3840 CUDA cores in total. These increases in SM count should result in a notable performance uplift across the board. Alongside them comes a new memory bus width: the GA104 die that should end up in the RTX 3070 uses a 256-bit memory bus allowing for 8/16 GB of GDDR6 memory, while its bigger brother, the GA103, has a 320-bit wide bus that allows the card to be configured with either 10 or 20 GB of GDDR6 memory. In the images below you can check out the alleged diagrams for yourself and judge whether they look fake; however, it is recommended to take this rumor with a grain of salt.
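For the curious, the rumored core counts and memory options fall out of two simple relations: Turing's 64 CUDA cores per SM (assumed here to carry over to Ampere, which is not confirmed) and GDDR6's 32-bit-per-chip bus partitioning with 8 Gb (1 GB) or 16 Gb (2 GB) chip densities. A quick sketch:

```python
CUDA_CORES_PER_SM = 64  # Turing's ratio; assumed to carry over to Ampere

def cuda_cores(sm_count: int) -> int:
    """Total CUDA cores for a given Streaming Multiprocessor count."""
    return sm_count * CUDA_CORES_PER_SM

def memory_options_gb(bus_width_bits: int, chip_densities_gb=(1, 2)) -> tuple:
    """GDDR6 chips each use a 32-bit interface, so a bus hosts bus/32 chips;
    1 GB (8 Gb) or 2 GB (16 Gb) chips give the two capacity options."""
    chips = bus_width_bits // 32
    return tuple(chips * d for d in chip_densities_gb)

print(cuda_cores(48), memory_options_gb(256))  # GA104: 3072 cores, (8, 16) GB
print(cuda_cores(60), memory_options_gb(320))  # GA103: 3840 cores, (10, 20) GB
```

Both rumored configurations are thus internally consistent, which lends the leak at least arithmetic plausibility.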

NVIDIA's Next-Generation Ampere GPUs to be 50% Faster than Turing at Half the Power

As we approach the release of NVIDIA's Ampere GPUs, which are rumored to launch in the second half of this year, more rumors and information about the upcoming graphics cards are appearing. According to the latest report from the Taipei Times, NVIDIA's next generation of graphics cards based on the "Ampere" architecture is rumored to deliver as much as a 50% performance uplift over the previous generation of Turing GPUs, while consuming half the power.

Built on Samsung's 7 nm manufacturing node, Ampere is poised to be the new king among GPUs. The rumored 50% performance increase is not impossible given the improvements the new 7 nm node brings: the density gains of the smaller node alone could account for much of it, and performance should increase further because Ampere brings a new architecture as well. Combining a new manufacturing node with a new microarchitecture, Ampere is claimed to cut power consumption in half, making for a very efficient GPU. We still don't know whether the performance increase applies mostly to ray-tracing applications, or whether NVIDIA will focus on general graphics performance.

Intel DG1 Discrete GPU Shows Up with 96 Execution Units

As we approach 2020, the year when Intel is rumored to put its discrete graphics cards into the hands of consumers around the world, the number of leaks about the upcoming products is ramping up. Thanks to Twitter user @KOMACHI_ENSAKA, who found the latest EEC listing, we have new information regarding Intel's upcoming DG1 discrete graphics solution.

In the leaked EEC listing, the DG1 GPU is presented as a GPU with 96 execution units, suggesting that Intel plans to take on entry-level graphics cards with this chip. If the graphics unit follows the same design principle as previous-generation Intel GPUs, there should be around 8 shading units per execution unit, totaling 768 shading units for the whole DG1 GPU. If the Gen12 Xe design inside the DG1 takes a different approach, we could see double the number of shading units, or 1536 in total.
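The two totals come from a single multiplication; the 8-per-EU ratio is borrowed from previous Intel generations, and the doubled 16-per-EU case is pure speculation about Xe:

```python
SHADING_UNITS_PER_EU = 8  # ratio used by previous-generation Intel GPUs

def shading_units(execution_units: int, per_eu: int = SHADING_UNITS_PER_EU) -> int:
    """Total shading units for a GPU with the given execution-unit count."""
    return execution_units * per_eu

print(shading_units(96))      # 768 if the previous-generation ratio holds
print(shading_units(96, 16))  # 1536 if Xe doubles the per-EU count
```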

GIGABYTE Readies EAGLE Graphics Card Brand Extension, includes RX 5600 XT

GIGABYTE is readying a new brand extension for its graphics card lineup, called EAGLE. It will join the company's existing brand extensions, namely the Aorus Gaming and WindForce series. Regulatory filings by GIGABYTE with the Eurasian Economic Commission (EEC) see the company launching as many as 30 individual SKUs, spanning nearly every current-generation GPU. On the AMD front, we have cards based on the RX 5700 XT, RX 5700, RX 5500 XT, and the yet-unannounced RX 5600 XT with its 6 GB of memory.

The NVIDIA lineup includes all of its GTX 16-series and RTX 20-series SKUs, with the exception of the RTX 2080 Ti. The inclusion of the RTX 2070 and RTX 2080, SKUs believed to have been quietly retired with the advent of the RTX 2070 Super and RTX 2080 Super, is interesting; it shows that NVIDIA hasn't given up on the two. The RTX 2070's second coming, in particular, was reportedly triggered by supply shortages of the RX 5700 series, giving NVIDIA room to sell something around the $400-450 mark, right in the middle of the RTX 2060 Super and RTX 2070 Super.

AMD and Industry Partners to Develop New Blockchain-based Gaming Platforms

AMD today announced that it has joined the Blockchain Game Alliance (BGA) and forged partnerships with leading technology providers to help promote the development and proliferation of new blockchain-powered gaming platforms.

The Blockchain Game Alliance is committed to driving awareness and adoption of blockchain technologies within the game industry, providing an open forum for individuals and companies to share knowledge and collaborate, create common standards, establish best practices, and network. As the first major hardware manufacturer to join the BGA, AMD plans to enable alliance members with efficient and high-performance computing technologies for next-generation blockchain-based gaming platforms that could potentially transform the way games are created, published, purchased and played.