News Posts matching #graphics cards


AMD in Talks with Partners About Custom Radeon RX 6900 XT Designs

Just a few days ago, AMD announced its Radeon RX 6000 series of graphics cards based on the new RDNA 2 architecture. While AMD has given its "Big Navi" chips to partners to design custom boards, offering users designs with better cooling and possibly higher overclocking headroom, that doesn't seem to extend to the highest-end parts. So far, we have seen custom designs from companies like ASUS and MSI, and all of them have one thing in common: they cover only the Radeon RX 6800 and RX 6800 XT. That leaves one wondering where the highest-end custom Radeon RX 6900 XT designs are.

The first wave of "custom" cards will arrive on November 18th, when manufacturers release MBA (Made-by-AMD) designs, meaning the PCB is a reference design with a custom cooler installed. As for truly custom RX 6900 XT cards, AMD is now in talks with its partners about whether to make the biggest "Big Navi" design available for custom boards or to keep it an AMD exclusive, with exclusivity being the most likely scenario. AMD partners could still carry the model in their stores and offerings, but the PCB and cooler design would be AMD's. The situation is still unresolved, so we will have to wait and see whether any custom designs of the Radeon RX 6900 XT materialize.

Microsoft: Only Consoles Supporting Full RDNA 2 Capabilities Are Xbox Series X and Series S, Excludes PlayStation 5

Microsoft has today published another article on its Xbox Wire blog, dedicated to news about the Xbox consoles and their ecosystem. In light of yesterday's launch of AMD's Radeon RDNA 2 graphics cards, Microsoft congratulated its partner and the provider of the SoCs powering its next-generation consoles. Besides the celebrations, Microsoft proceeded to show off what the Xbox Series X and Series S consoles are capable of and how they integrate the RDNA 2 architecture. The company notes that hardware-accelerated DirectX Raytracing, Mesh Shaders, Sampler Feedback, and Variable Rate Shading units are built in, so game developers can take advantage of them.

Another interesting point Microsoft made was that "Xbox Series X|S are the only next-generation consoles with full hardware support for all the RDNA 2 capabilities AMD showcased today." This translates into Microsoft being the only console maker to use the full RDNA 2 potential. It could leave Sony out in the dark with its PlayStation 5, implying that the console does not support all the features of AMD's new GPU architecture. Microsoft offered no specifics, however, so we have to wait and see what Sony has left out, if anything.

Bug in HDMI 2.1 Chipset May Cause Black Screen on Your Xbox Series X Console or NVIDIA GPU

German website Heise.de has discovered a bug in an HDMI 2.1 chipset that causes black-screen issues on specific hardware. The chipset in question is sourced from Panasonic and used in Denon, Marantz, and Yamaha HDMI 2.1 AV receivers. The bug manifests once you connect Microsoft's newest console, the Xbox Series X, or one of NVIDIA's Ampere graphics cards: at resolutions like 4K/120 Hz HDR and 8K/60 Hz HDR, the affected receivers output a black screen. This represents a major problem for every manufacturer planning to use the Panasonic HDMI 2.1 chipset in its AV receivers, meaning the issue has to be addressed. The Audioholics website has reached out to Sound United and Yamaha for their responses, which you can check out below.

GIGABYTE Intros GeForce RTX 3090 VISION OC Graphics Card

GIGABYTE backed up its late-September launch of the RTX 3080-based VISION OC graphics card targeted at creators with one based on the GeForce RTX 3090 (model: GV-N3090VISION OC-24GD), a GPU that offers greater dividends to creators thanks to its 24 GB of video memory. GIGABYTE's VISION brand of graphics cards and motherboards is targeted at creators, and the RTX 3090 VISION OC, when paired with NVIDIA's GeForce Studio drivers, provides a formidable solution halfway between the gaming and professional-visualization market segments.

The GIGABYTE RTX 3090 VISION OC comes with the same board design as the RTX 3080 VISION OC, but adds the NVLink interface for explicit multi-GPU. The card ships with a mild factory OC that sees the GPU boost up to 1755 MHz (vs. 1695 MHz reference), while the memory is left untouched at 19.5 Gbps (GDDR6X-effective), for 936 GB/s of memory bandwidth. Display interfaces include three DisplayPort 1.4a and two HDMI 2.1 connectors. The card draws power from two 8-pin PCIe power connectors and uses a triple-slot, triple-fan cooling solution with the VISION design scheme. The company didn't reveal pricing.
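As a quick sanity check on that figure, effective memory bandwidth follows directly from the per-pin data rate and the bus width. A minimal sketch, assuming the RTX 3090's 384-bit memory bus:

```python
def memory_bandwidth_gb_s(data_rate_gbps: float, bus_width_bits: int) -> float:
    """Effective bandwidth in GB/s: per-pin rate (Gbps) x bus width / 8 bits per byte."""
    return data_rate_gbps * bus_width_bits / 8

# GeForce RTX 3090: 19.5 Gbps GDDR6X on a 384-bit bus
print(memory_bandwidth_gb_s(19.5, 384))  # 936.0
```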

EVGA Unleashes XOC BIOS for GeForce RTX 3090 FTW3 Graphics Card

EVGA has today published the "XOC" BIOS for its GeForce RTX 3090 FTW3 graphics cards. The XOC BIOS is designed for extreme overclocking, as it raises the card's power limit by a substantial margin, allowing overclockers to push the card to its full potential without the GPU core being held back by power constraints. To run the XOC BIOS on your GeForce RTX 3090 FTW3, you need an adequate cooling solution and a sufficient power supply: EVGA recommends at least an 850 W Gold-rated PSU, a sign that the XOC BIOS will raise system power consumption by quite a bit. The BIOS enables a GPU power limit of 500 Watts. It is important to note that EVGA does not guarantee any performance increase or overclock from using this BIOS update.

You can download the EVGA XOC BIOS for the GeForce RTX 3090 FTW3 graphics card here. To install it, unzip the file, run Update.exe, and restart your PC after updating. EVGA hosts both the normal BIOS (so you can revert) and the XOC BIOS there, so be careful to pick the right file. You can use the TechPowerUp GPU-Z tool to verify the BIOS installation.
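If you want to confirm which BIOS is active without opening GPU-Z, a minimal sketch is to query the VBIOS version through the driver's nvidia-smi tool (this assumes nvidia-smi is on your PATH; the version string shown is illustrative):

```python
# Query the currently active VBIOS version via nvidia-smi (ships with the
# NVIDIA driver). Compare the reported version before and after flashing.
import subprocess

out = subprocess.run(
    ["nvidia-smi", "--query-gpu=name,vbios_version", "--format=csv,noheader"],
    capture_output=True, text=True, check=True,
).stdout.strip()
print(out)  # e.g. "GeForce RTX 3090, 94.02.xx.xx.xx" (exact string varies)
```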

AMD Radeon RX 6000 Series "Big Navi" GPU Features 320 W TGP, 16 Gbps GDDR6 Memory

AMD is preparing to launch its Radeon RX 6000 series of graphics cards, codenamed "Big Navi", and we are getting more and more leaks about the upcoming cards. Set for an October 28th launch, Big Navi is based on the Navi 21 silicon, which comes in two variants. Igor Wallossek over at Igor's Lab has published a handful of information from his sources about the upcoming release. More specifically, there are details about the Total Graphics Power (TGP) of the cards and how it is distributed across the board (pun intended). To clarify, TDP (Thermal Design Power) is a measurement that applies only to the GPU die and its thermal headroom; it doesn't cover the whole card's power draw, as there are more heat-producing components on the board.

The breakdown of the Navi 21 XT graphics card goes as follows: 235 Watts for the GPU alone, 20 Watts for Samsung's 16 Gbps GDDR6 memory, 35 Watts for voltage regulation (MOSFETs, inductors, capacitors), 15 Watts for fans and other components, and 15 Watts for losses in the PCB. This puts the combined TGP at 320 Watts, showing just how much power the non-GPU elements use. For custom OC AIB cards, the TGP is boosted to 355 Watts, as the GPU alone uses 270 Watts. As for the Navi 21 XL variant, cards based on it use 290 Watts of TGP, with the GPU reduced to 203 Watts and the GDDR6 memory using 17 Watts; the remaining components on the board use the same amount of power.
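To see how those per-component figures add up to the quoted TGP, here is a minimal sketch using the leaked Navi 21 XT numbers from above:

```python
# Leaked Navi 21 XT power budget in Watts (per Igor's Lab)
navi21_xt = {
    "GPU die (TDP)": 235,
    "16 Gbps GDDR6 memory": 20,
    "voltage regulation (MOSFETs, inductors, caps)": 35,
    "fans and other components": 15,
    "PCB losses": 15,
}
print(sum(navi21_xt.values()))  # 320 -> the card's Total Graphics Power
```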

NVIDIA Updates Video Encode and Decode Matrix with Reference to Ampere GPUs

NVIDIA has today updated its video encode and decode matrix with references to the latest Ampere GPU family. The encode/decode matrix is a table of the video encoding and decoding standards supported by different NVIDIA GPUs, going back to the Maxwell generation. It is a useful reference, as customers can check whether their existing or upcoming GPU supports a specific codec they need for video work. The latest update adds the Ampere GPUs to the matrix.

For example, the table shows that, while supporting all of the previous generations' encoding standards, Ampere-based GPUs add support for HEVC B-frames. On the decoding side, the Ampere lineup now includes support for AV1 in 8-bit and 10-bit formats, while also supporting all of the previous generations' formats. For a more detailed look at the table, head to NVIDIA's website here.
NVIDIA Encoding and Decoding Standards
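If you would rather query your own machine than consult the matrix, one hedged approach is to list the NVENC/NVDEC codecs your local ffmpeg build exposes. This sketch assumes ffmpeg is installed and was compiled with NVIDIA hardware-acceleration support:

```python
# List ffmpeg encoders/decoders backed by NVIDIA hardware (NVENC/CUVID).
import subprocess

def nvidia_codecs(kind: str) -> list[str]:
    """kind is 'encoders' or 'decoders'; returns matching ffmpeg entries."""
    out = subprocess.run(
        ["ffmpeg", "-hide_banner", f"-{kind}"],
        capture_output=True, text=True,
    ).stdout
    return [line.strip() for line in out.splitlines()
            if "nvenc" in line or "cuvid" in line]

print(nvidia_codecs("encoders"))  # e.g. h264_nvenc, hevc_nvenc
print(nvidia_codecs("decoders"))  # e.g. h264_cuvid, hevc_cuvid
```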

Basemark Launches GPUScore Relic of Life RayTracing Benchmark

Basemark is a pioneer in GPU benchmarking. Our current product, Basemark GPU, has been serving the 3D graphics industry since 2016. After releasing GPU 1.2 in March, the Basemark development team has been busy developing a brand-new benchmark: GPUScore. GPUScore will introduce hyper-realistic, true gaming-type content in three different workloads: Relic of Life, Sacred Path, and Expedition.

GPUScore: Relic of Life is targeted at benchmarking high-end graphics cards. It is a completely new benchmark with many new features, the key one being real-time ray-traced reflections, including reflections of reflections. The benchmark will support not only Windows and DirectX 12, but also Linux and Vulkan ray tracing.

Ubisoft Updates Watch Dogs: Legion PC System Requirements

Ubisoft has today updated the PC system requirements for its Watch Dogs: Legion game, which is set to release on October 29th, just a few weeks from now. With the arrival of NVIDIA's GeForce RTX 3000 series Ampere graphics cards, Ubisoft has decided to update the official PC system requirements with RTX-on configurations. Enabling ray tracing in the game requires a faster CPU as well as an RTX-capable GPU. At 1080p, you need at least an RTX 2060 to play with high settings, ray tracing set to medium, and DLSS enabled. Moving up to 1440p, Ubisoft recommends at least an RTX 3070 for the very high preset, ray tracing on high, and DLSS set to quality. If you want to max everything out and play at 4K with the highest settings, you will need an RTX 3080.
Watch Dogs: Legion PC System Requirements

NVIDIA GeForce RTX 3070 Launch Postponed to October 29th

When NVIDIA introduced its Ampere consumer graphics cards, it launched three models: the GeForce RTX 3070, RTX 3080, and RTX 3090. The RTX 3080 and RTX 3090 have seen the light of day and are now available for purchase; however, one card remained. The GeForce RTX 3070 was originally planned for an October 15th launch, but NVIDIA has officially postponed it. According to the company, the reason for the delay is the high demand expected: production is ramping up quickly and stock is building. Likely, NVIDIA's AIB partners are taking the extra time to stock up on cards, as the mid-range segment is usually in very high demand.

As a reminder, the GeForce RTX 3070 features 5888 CUDA cores running at a base frequency of 1.5 GHz and a boost frequency of 1.73 GHz. Unlike the higher-end Ampere cards, the RTX 3070 uses older GDDR6 memory on a 256-bit bus, good for 448 GB/s of bandwidth (which works out to 14 Gbps effective memory speed). The GPU has a TDP of 220 W and will be offered in a range of AIB variants. You will be able to purchase the card on October 29th for $499.

NVIDIA Releases Game Ready 456.55 WHQL Driver With Improved Stability of RTX 3000 Series Cards, Support for Star Wars: Squadrons

NVIDIA has today released the latest iteration of its Game Ready driver, version 456.55. Marked as a WHQL release, the driver is meant to improve the stability of the latest GeForce RTX 3000 series Ampere graphics cards. While the release notes don't officially detail the improvements, a few Redditors have already confirmed that the new driver removes crashes experienced with the previous version, 456.38. The latest revision adds support for NVIDIA Reflex in Call of Duty: Warzone and Call of Duty: Modern Warfare, as well as support for the Star Wars: Squadrons game. Below is the link to the driver download page on NVIDIA's site; the TechPowerUp download page will be updated shortly as well.
DOWNLOAD: NVIDIA GeForce 456.55 WHQL Game Ready Drivers

The change-log follows:

MonsterLabo Announces The Beast

MonsterLabo, a maker of fanless PC cases, today announced its latest creation: The Beast. Featuring a design made from glass and 6 mm thick aluminium, the ATX case resembles something we would usually expect only from the likes of InWin. The chassis is actually made up of two 3 kg aluminium heatsinks, each featuring ten 6 mm copper heat pipes. All of this goes toward heat dissipation, and the case can accommodate up to 400 W of TDP in passive mode; with two 140 mm fans running at 500 RPM added, it can cool more than 500 W. The Beast measures 450 mm (L) x 380 mm (W) x 210 mm (H), making it one large and heavy case. It supports graphics cards up to 290 mm in PCB length and is fully capable of housing the latest NVIDIA GeForce RTX 30 series "Ampere" graphics cards. Pre-orders for The Beast start on October 9th, with pricing as yet unknown; expect a sizable premium over the 349 EUR price of The First case. Pre-orders will ship in Q1 2021.

RTX 3080 Users Report Crashes to Desktop While Gaming

A number of RTX 3080 users have been reporting crashes to desktop while gaming on their newly acquired Ampere graphics cards. The reports have surged across numerous hardware discussion venues (ComputerBase, LinusTechTips, NVIDIA, Tom's Hardware, Tweakers, and Reddit) and appear not to be linked to any particular RTX 3080 vendor (ZOTAC, MSI, EVGA, and NVIDIA Founders Edition graphics cards are all mentioned).

Apparently, the crash to desktop happens once the RTX 3080's boost clock exceeds 2.0 GHz. A number of causes could be advanced for these issues: deficient power delivery, GPU temperature failsafes, or even a simple driver-level problem (though that seems the least likely). Neither NVIDIA nor any of its AIB partners has spoken about the issue, and review outlets did not mention it happening, likely because it never did, at least on the samples sent to reviewers. For now, manually downclocking the graphics card by 50-100 MHz seems to be a temporary fix while the issue is being investigated. An unlucky turn of events for users of NVIDIA's latest and greatest, but a very slight performance decrease is surely a fair trade for system stability.
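While most affected users would apply a negative core-clock offset in a tool like MSI Afterburner, a scriptable alternative on recent NVIDIA drivers is locking the GPU clock range with nvidia-smi. A minimal sketch; the 1900 MHz ceiling is an illustrative value chosen to stay below the 2.0 GHz threshold, not an official recommendation, and the command requires administrator/root privileges:

```python
# Cap GPU 0's core clock below 2.0 GHz as a temporary stability workaround.
import subprocess

subprocess.run(["nvidia-smi", "-i", "0", "-lgc", "210,1900"], check=True)

# To remove the cap later, reset the locked clocks:
# subprocess.run(["nvidia-smi", "-i", "0", "-rgc"], check=True)
```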

The Reason Why NVIDIA's GeForce RTX 3080 GPU Uses 19 Gbps GDDR6X Memory and not Faster Variants

When NVIDIA announced its next-generation GeForce RTX 3080 and 3090 Ampere GPUs, it specified that the memory in the new cards would be Micron's GDDR6X running at 19 Gbps. However, with faster 21 Gbps GDDR6X modules already available, everyone was left wondering why NVIDIA didn't simply use the faster memory from Micron. That is exactly what Igor's Lab, a technology website, has been wondering as well, so they conducted testing with an infrared camera to measure the heat produced. To see the full testing setup and methodology, you can go here and read the article, including the embedded video.

Micron chips like GDDR5, GDDR5X, and GDDR6 are rated for a maximum junction temperature (TJ Max) of 100°C, with a recommended operating range of 0°C to 95°C for best results. For the new GDDR6X modules found in the new graphics cards, however, no official specifications are available to the public yet. Igor's Lab estimates that they can reach 120°C before being damaged, which would put TJ Max at 105°C or 110°C. When measuring the temperature of the GDDR6X modules, Igor found the hottest chip running at 104°C, meaning the chips run very close to the TJ Max they are (supposedly) specified for. NVIDIA's PCB design decisions contribute to this, as the hottest chips sit next to the voltage regulators, which can get quite hot on their own.

NVIDIA GeForce RTX Ampere Chips Feature Three Binning Tiers, Mostly Good Dies are Present

Chip binning is the process of sorting manufactured silicon by quality: each chip that comes off the wafer is tested and classified by various characteristics, for example how much voltage it needs to operate, how cool it runs, and of course how well it overclocks. By putting chips through this testing, manufacturers create binning tiers that separate good dies from bad ones, so they know which chips to ship and where to send them. The process is at its most complex with graphics cards: companies like NVIDIA and AMD need to supply their various AIB partners with chips of different quality grades to incorporate into their products. Finding out the bin types and tiers of chips is rather time-consuming and difficult, but today we are getting some information from Igor's Lab.

According to their sources, NVIDIA's latest GeForce RTX Ampere lineup features three binning tiers. "Bin 0" represents an okay chip that performs as intended, "Bin 1" chips are good processors, and "Bin 2" processors are the best-quality chips with the highest performance characteristics; these "Bin 2" dies run cooler than the rest and achieve higher overclocks. In practice, binning is a coordination effort between the chip designer (NVIDIA in this case) and the manufacturer (Samsung, with its 8N 8 nm process). From the complete pilot run of Ampere chips, Samsung reportedly ends up with 30% "Bin 0" dies, 60% "Bin 1" dies, and only 10% "Bin 2" dies. The production period was quite short, and these numbers are good for Samsung considering they probably didn't have much time to refine the process, so we can expect them to improve.

MSI Registers 29 Upcoming Ampere Graphics Cards With The EEC

MSI has recently registered 29 new graphics card codes with the Eurasian Economic Commission, all of which are expected to be upcoming NVIDIA Ampere models. The codes are registered as "Video Cards" and coincide with NVIDIA's upcoming GeForce Special Event on September 1, where the GeForce RTX 30-series GPUs are expected to be announced, including the much-rumored RTX 3090. The codes comprise three main sets, 602-V388, 602-V389, and 602-V390, which align with previous-generation MSI graphics card numbering. Following the same pattern as the GeForce RTX 20-series, the V388 likely refers to the RTX 3090, the V389 to the RTX 3080, and the V390 to the RTX 3070.

NVIDIA GeForce RTX 3090 "Ampere" Alleged PCB Picture Surfaces

As we get close to September 1st, the day NVIDIA launches its upcoming GeForce RTX graphics cards based on the Ampere architecture, even more leaks are appearing. Today, an alleged PCB of NVIDIA's upcoming GeForce RTX 3090 was pictured and posted on social media. The PCB appears to be a third-party design from one of NVIDIA's add-in board (AIB) partners, Colorful. Most of the PCB is blurred out, and an Intel CPU covers the GPU die area to hide information. Eleven GDDR6X memory modules surround the GPU, sitting very close to it. Another notable difference is the NVLink finger, which appears to sport a new design. Check out the screenshot of the Reddit thread and the PCB pictures below:
NVIDIA GeForce RTX 3090 PCB
More pictures follow:

NVIDIA A100 Ampere GPU Benchmarked on MLPerf

When NVIDIA announced its Ampere lineup, the A100 GPU was there to represent the top performance tier. The GPU is optimized for heavy compute workloads as well as machine learning and AI tasks. Today, NVIDIA has submitted results for the A100 GPU to the MLPerf database. What is MLPerf, and why does it matter? MLPerf is a system benchmark designed to test a system's capability for machine-learning tasks and enable comparability between systems. The A100 GPU was benchmarked on the latest 0.7 version of the benchmark.

The baseline for the results was the previous-generation king, the V100 Volta GPU. The new A100 GPU is on average 1.5 to 2.5 times faster than the V100. So far, the A100-based system beats all available offerings. It is worth pointing out that not all competing systems have been submitted; however, so far, the A100 GPU is the fastest.
The performance results follow:

GIGABYTE Formally Launches Z490 AORUS Master WaterForce Motherboard

GIGABYTE TECHNOLOGY Co. Ltd, a leading manufacturer of motherboards, graphics cards, and hardware solutions, today announced that the new Z490 AORUS MASTER WATERFORCE motherboard will join the Z490 water-cooling lineup. It features the world's only AIO 360 liquid-cooler design, built to keep temperatures under control when overclocking the 10-core K-series Intel Core processors. The 14-phase digital power design, with each phase supplying up to 90 amps, unleashes extreme performance and optimizes overclocking ability on the new processors. GIGABYTE Z490 AORUS motherboards use XTREME MEMORY technology with an anti-interference design to improve memory overclocking and stability. The Z490 AORUS MASTER WATERFORCE offers innovations such as top-of-the-line audio quality and feature-rich I/O with an integrated I/O shield, checking all the boxes for system performance, power management, thermals, and audio.

The 10th generation of Intel Core processors is on the market, offering up to 10 cores, 20 threads, and 20 MB of total cache, with the default TDP ramping up to 125 Watts. Heat output climbs further when overclocking all cores. Many users choose liquid cooling to effectively dissipate the excess heat generated by high-speed operation and to unlock the processors' full overclocking potential. However, users often hesitate, since typical AIO liquid coolers cannot cover VRM heat dissipation, while the monoblocks used in open-loop cooling for the processor and VRM area are difficult to purchase separately and may void the warranty. To give users a convenient option without breaking the warranty, GIGABYTE released the Z490 AORUS MASTER WATERFORCE motherboard with a whole new design: an AIO 360 liquid cooler that dissipates heat from the processor and the VRM at the same time.

TSMC Doesn't See Intel as Long-Term Customer, Unlikely to Build Additional Capacity for It

TSMC has long been the backbone of silicon designers. Whenever a company needs the latest process technology and good supply capacity, TSMC has it covered. That now seems to apply to Intel and its struggles. When Intel announced that its 7 nm semiconductor node would be delayed a full year, the company's customers and contractors understandably grew worried about future product releases and deliveries, as is the case with the Aurora exascale supercomputer being built for Argonne National Laboratory, which relies on Intel's 7 nm Ponte Vecchio graphics cards for most of its compute power.

To deliver on time, Intel is reportedly in talks with TSMC to prepare capacity for the GPUs. However, according to industry sources cited by DigiTimes, TSMC is unlikely to build additional capacity for Intel beyond what it can currently deliver. Those sources say TSMC does not see Intel as a long-term customer, and it is unknown what treatment Intel will receive. Still, Intel will likely be able to strike a deal with TSMC and secure enough existing capacity to deliver its next-generation processors.

NVIDIA GeForce RTX 3070 and RTX 3070 Ti Rumored Specifications Appear

NVIDIA is slowly preparing to launch its next-generation Ampere graphics cards for consumers, after the A100 GPU arrived for data-centric applications. The Ampere lineup is gathering more leaks and speculation every day, so we can assume the launch is near. In the most recent round of rumors, we have new information about the GPU SKUs and memory of the upcoming GeForce RTX 3070 and RTX 3070 Ti. Thanks to Twitter user kopite7kimi, who has had multiple speculations confirmed in the past, we learn that the GeForce RTX 3070 and RTX 3070 Ti use the GA104 GPU, paired with GDDR6 memory. The catch is that the Ti version will feature the new GDDR6X memory, which is faster and can reportedly reach speeds of up to 21 Gbps.

The regular RTX 3070 is supposed to have 2944 CUDA cores on the GA104-400 GPU die, while its bigger brother, the RTX 3070 Ti, is designed with 3072 CUDA cores on the GA104-300 die. Paired with the new technologies the Ampere architecture brings and the new GDDR6X memory, the GPUs are set to be very good performers. It is estimated that both cards would reach a memory bandwidth of 512 GB/s (on a 256-bit bus, that works out to 16 Gbps effective memory speed). So far, that is all we have. NVIDIA is reportedly in the Design Validation Test (DVT) phase with these cards and is preparing for mass production in August. The official launch should follow before the end of this year, with some speculation pointing to September.

Eurocom launches MXM3 to PCIe Adapter for GPU Developers

Embedded, GPU-accelerated computing is the use of a graphics processing unit (GPU) combined with a central processing unit (CPU) to facilitate advanced computing processes such as the Artificial Intelligence of Things (AIoT), deep learning, prototyping/pre-concept design, CUDA development, and high-resolution medical imaging, to name a few.

Embedded, GPU-accelerated computing is gaining popularity as its potential is realized in high-tech labs and R&D facilities around the world. GPU acceleration plays a huge role in speeding up applications within these bleeding-edge industries. Whether designing self-driving cars, rendering ultra-HD scenes with millions of polygons (4K and above), or working with advanced CAD models for GIS mapping, a multi-GPU system significantly increases the performance and capabilities available to advanced developers.

Through embedded GPU-accelerated technology, processing and computing times can be cut to a fraction of what traditional, less sophisticated computing infrastructures can achieve. The combination of high-performance computing and lower power consumption enables deeper and more extensive learning in neural networks.

Matrox Shipping QuadHead2Go Q155 Card to Accelerate Artistic and Large-Scale Video Wall Development

Matrox is pleased to announce that the Matrox QuadHead2Go Q155 multi-monitor controller card is now shipping. Powering up to four Full HD displays from a single HDMI video source—including HDCP devices—the QuadHead2Go Q155 card provides OEMs, system integrators, and AV installers with an innovative tool to quickly and easily add more displays to video walls of any type and size. Designed to integrate seamlessly into Matrox-based or third-party platforms, QuadHead2Go is ideal for a wide variety of video wall applications, including digital signage, control room, broadcast, and more.

QuadHead2Go Q155 controllers capture a single video signal—of up to 4Kp60 and 8Kx8K—and display it across up to four screens, at resolutions up to 1920x1200 per output. The input can come from any HDMI source—including digital signage players, media players, laptops, and more—while displays can be arranged in a variety of artistic configurations or classic rectangular 2x2, 2x1, 3x1, 4x1, 1x2, 1x3, or 1x4 setups. Pair multiple QuadHead2Go units to add more displays, and/or combine them with Matrox D-Series quad-4K graphics cards to power up to an astonishing 64 1920x1080p60 monitors from one system.
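As a back-of-the-envelope check on that 64-monitor figure: each quad-4K D-Series card offers four outputs, and each QuadHead2Go splits one input into up to four Full HD screens, so reaching 64 displays would imply four such cards in one system; the card count here is our assumption, not a Matrox specification.

```python
# Rough display-count math for a Matrox D-Series + QuadHead2Go video wall.
outputs_per_card = 4     # quad-4K D-Series graphics card
screens_per_unit = 4     # QuadHead2Go Q155: up to four Full HD displays
cards_per_system = 4     # assumption needed to reach the advertised total

print(outputs_per_card * screens_per_unit * cards_per_system)  # 64
```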

Raijintek Unveils MORPHEUS 8057 VGA Air Cooler

Raijintek today unveiled the MORPHEUS 8057, a large graphics card air cooler. The cooler consists of a gargantuan aluminium fin-stack heatsink ventilated by a pair of 120 mm fans (not included). The heatsink features a large mirror-finish copper base, from which six 6 mm thick heat pipes emerge in either direction (Raijintek for some reason counts this as 12 heat pipes), conveying heat to a large stack of 112 ceramic-coated aluminium fins.

The MORPHEUS 8057 heatsink measures 254 mm x 100 mm x 44 mm (WxDxH) and weighs 515 g. Included in the box are 12 secondary heatsinks of various sizes for memory and VRM components, along with thermal pads, retention clips, and some thermal paste. Supported graphics cards include AMD "Navi 10" models (RX 5700 series and RX 5600 series) and NVIDIA's RTX 2080 Ti, RTX 2080/SUPER, RTX 2070/SUPER, and RTX 2060/SUPER. The company didn't reveal pricing.

ELSA Announces GeForce RTX 2070 Super ERAZOR X & Super S.A.C.

ELSA has announced two new GeForce RTX 2070 Super graphics cards: the Super ERAZOR X and the Super S.A.C. Both cards feature a dual-fan design with a backplate. The ELSA GeForce RTX 2070 Super ERAZOR X features a boost clock of 1815 MHz and four 8 mm diameter heat pipes cooled by dual 90 mm "SAC 3" fans. The card comes with an ELSA-logo backplate and measures 267 mm (length) x 114 mm (height) x 50 mm (thickness), its 2.5-slot size limiting SLI options. The card will be available in Japan from June 26th at a price of 81,200 yen (760 USD).

The ELSA GeForce RTX 2070 Super S.A.C. is a cheaper option with a boost clock of 1770 MHz and four 6 mm diameter heat pipes cooled by dual 90 mm "SAC 2" fans. The card comes with a plain backplate and measures 271 mm (length) x 115 mm (height) x 42 mm (thickness). It will be available in Japan from June 26th at a price of 71,400 yen (670 USD).