News Posts matching #GPU


NVIDIA GeForce GTX 1660 SUPER Launching October 29th, $229 With GDDR6

NVIDIA's GeForce GTX 1660 SUPER, the company's first Turing-based SUPER graphics card without raytracing capability, is set to drop on October 29th. Contrary to other SUPER releases, though, the GTX 1660 SUPER won't feature a new GPU chip brought down from a higher performance tier. This means it will make use of the same TU116-300 as the GTX 1660, with 1,408 CUDA cores rather than the 1,536 of the GTX 1660 Ti. Instead, NVIDIA has increased this SUPER model's performance by endowing it with GDDR6 memory.

The new GDDR6 memory ticks at 14 Gbps, which even gives it a data-rate advantage over the GTX 1660 Ti, a card that will still cost more. When all is said and done, the GTX 1660 SUPER will offer memory bandwidth of 336 GB/s, significantly more than the GTX 1660 Ti's 288 GB/s, and a huge step up from the 192 GB/s of the GTX 1660. Of course, having fewer CUDA cores than the GTX 1660 Ti means it should still deliver lower performance than that graphics card. This justifies its price-tag of $229 - $10 higher than the GTX 1660, but $50 less than the GTX 1660 Ti.
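As a quick sanity check, those bandwidth figures follow directly from data rate and bus width - all three TU116-based cards use a 192-bit memory bus. A minimal sketch:

```python
def memory_bandwidth(data_rate_gbps: float, bus_width_bits: int) -> float:
    """Peak memory bandwidth in GB/s: per-pin data rate times bus width, in bytes."""
    return data_rate_gbps * bus_width_bits / 8

# All three cards share a 192-bit bus; only the memory type and speed differ.
print(memory_bandwidth(14, 192))  # GTX 1660 SUPER, 14 Gbps GDDR6 -> 336.0 GB/s
print(memory_bandwidth(12, 192))  # GTX 1660 Ti,    12 Gbps GDDR6 -> 288.0 GB/s
print(memory_bandwidth(8, 192))   # GTX 1660,        8 Gbps GDDR5 -> 192.0 GB/s
```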

New NVIDIA EGX Edge Supercomputing Platform Accelerates AI, IoT, 5G at the Edge

NVIDIA today announced the NVIDIA EGX Edge Supercomputing Platform - a high-performance, cloud-native platform that lets organizations harness rapidly streaming data from factory floors, manufacturing inspection lines and city streets to securely deliver next-generation AI, IoT and 5G-based services at scale, with low latency.

Early adopters of the platform - which combines NVIDIA CUDA-X software with NVIDIA-certified GPU servers and devices - include Walmart, BMW, Procter & Gamble, Samsung Electronics and NTT East, as well as the cities of San Francisco and Las Vegas.

Intel Could Unveil First Discrete 10 nm GPUs in mid-2020

According to sources close to DigiTimes, Intel will unveil its first discrete 10 nm graphics cards, named "Xe", very soon, with the first wave of Xe GPUs expected to arrive sometime in 2020. Said to launch mid-year, around July or August, the initial Xe models of this long-awaited product will be sold to consumers, in hopes of capturing a share of the massive market that uses GPUs to accelerate all kinds of tasks.

Perhaps one of the most interesting notes DigiTimes reported is that "... Intel's GPUs have already received support from the upstream supply chain and has already been integrated into Intel's CPUs to be used in the datacenter and AI fields.", meaning that AIB partners already have access to the first 10 nm graphics chips, ready for system integration. The first generation of Xe graphics cards will cover almost the whole GPU market, including PC, datacenter, and AI applications, where NVIDIA currently holds the top spot.

Intel and Wargaming Join Forces to Deliver Ray Tracing to World of Tanks

Intel has been very serious about its efforts in computer graphics lately, mainly because of its plans to launch a dedicated GPU lineup and bring new features to the graphics card market. Today, Intel and Wargaming, maker of MMO titles like World of Tanks, World of Warships, and World of Warplanes, partnered to bring ray tracing to Wargaming's "Core" graphics engine, used in perhaps the best-known MMO title of them all - World of Tanks.

The joint efforts of Intel and Wargaming developers have led to an implementation of ray tracing that uses only regular software techniques, with no need for special hardware. Being hardware-agnostic, this implementation works on any graphics card that can run DirectX 11, as the "Core" engine is written against the DirectX 11 API. To achieve this, the developers built a solution that uses the CPU for fast, multi-threaded bounding volume hierarchy (BVH) construction, which then feeds the GPU's compute shaders that perform the actual ray tracing - making the feature entirely dependent on GPU shader/core resources. Many features were reworked, with emphasis put on shadow quality. In the images below you can see exactly what difference the new ray-tracing implementation makes, and you can use almost any graphics card to get it. Wargaming notes that only "some FPS" will be sacrificed with ray tracing turned on, so your GPU shouldn't struggle too much.
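As a rough sketch of that division of labor (conceptual Python, not Wargaming's actual code - every name here is invented for illustration): the CPU side boils down to rebuilding a bounding volume hierarchy over the scene, whose flattened form is what DirectX 11 compute shaders would then traverse per ray.

```python
# A triangle here is a tuple of three (x, y, z) vertex tuples.

def bbox(tris):
    """Axis-aligned bounding box over a list of triangles."""
    pts = [v for t in tris for v in t]
    lo = tuple(min(p[i] for p in pts) for i in range(3))
    hi = tuple(max(p[i] for p in pts) for i in range(3))
    return lo, hi

def centroid(t):
    return tuple(sum(v[i] for v in t) / 3 for i in range(3))

def build_bvh(tris, leaf_size=4):
    """Median-split BVH build on the CPU. In the engine this step is
    multi-threaded and re-run as the scene animates; the flattened node
    list is what the compute shaders would traverse for each ray."""
    node = {"bbox": bbox(tris)}
    if len(tris) <= leaf_size:
        node["tris"] = list(tris)          # leaf: store the triangles
        return node
    lo, hi = node["bbox"]
    axis = max(range(3), key=lambda i: hi[i] - lo[i])  # split widest axis
    ordered = sorted(tris, key=lambda t: centroid(t)[axis])
    mid = len(ordered) // 2
    node["children"] = (build_bvh(ordered[:mid], leaf_size),
                        build_bvh(ordered[mid:], leaf_size))
    return node

# Eight translated copies of one triangle make a tiny test scene.
base = ((0, 0, 0), (1, 0, 0), (0, 1, 0))
scene = [tuple(tuple(c + i for c in v) for v in base) for i in range(8)]
root = build_bvh(scene)
print(root["bbox"])  # ((0, 0, 0), (8, 8, 7))
```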

Intel Mobility Xe GPUs to Feature Up to Twice the Performance of Previous iGPUs

Intel at the Intel Developer Conference 'IDC' 2019 in Tokyo revealed performance projections for its mobility Xe GPUs, which will supersede the company's current consumer-bound iGPUs, such as the UHD 620, as well as the Gen11 architecture. The company is being vocal in claiming an up to 2x performance uplift over the previous generation - but that will likely only materialize in specific scenarios, not as a rule of thumb. A look at Intel's own performance comparison graphics shows that we're mostly looking at 50% to 70% performance improvements in popular eSports titles, which are, really, representative of most of the gaming market nowadays.

The objective is to reach above 60 FPS in the most popular eSports titles, something that Gen11 GPUs didn't manage with their overall IPC and dedicated die area. We've known for some time that Intel's Xe (as in, exponential) architecture will feature hardware-based raytracing, and that the architecture is being developed to scale all the way from iGPUs to HPC platforms.

The End of a Collaboration: Intel Announces Discontinuation of Kaby Lake-G with AMD Radeon Vega Graphics

The marriage of Intel and AMD IP in the form of the Kaby Lake-G processors was met with both surprised grunts and a sense of bewilderment at what could come next. Well, we now know what came next: Intel hiring several high-level AMD employees in the graphics space and putting together its own motley crew of discrete GPU developers, who should be putting out Intel's next-gen high-performance graphics accelerators sometime next year.

The Kaby Lake-G processors, however, showed promise, pairing Intel's (at the time) IPC dominance with AMD's graphics IP performance and expertise by placing the two components on the same package substrate and connecting them via a PCIe link. A new and succinct Intel notice on the Kaby Lake-G page sets January 31, 2020 as the last date for orders and July 31, 2020 as the date of last shipments, and explains that product market shifts have moved demand from Kaby Lake-G products "to other Intel products". Uptake was always slow on this particular collaboration - most of it, we'd guess, because of the chips' strange footprint arrangement for embedding in systems, which required custom solutions designed from scratch. And with Intel investing in its own high-performance graphics, it seems clear there is just no need to flaunt previous collaborations with other companies in this field. Farewell, Intel-AMD Kaby Lake-G. We barely knew you.

NVIDIA Could Launch Next-Generation Ampere GPUs in 1H 2020

According to sources over at Igor's Lab, NVIDIA could launch its next generation of GPUs, codenamed "Ampere", as soon as the first half of 2020. Having just recently launched the GeForce RTX Super lineup, NVIDIA could surprise us again in the coming months with a replacement for its Turing lineup of graphics cards. Expected to directly replace current high-end GPU models like the GeForce RTX 2080 Ti and RTX 2080 Super, Ampere should bring the many performance and technology advancements usually associated with a new graphics card generation.

For starters, we can expect a notable die shrink in the form of a 7 nm node, replacing the aging 12 nm process Turing is currently built on. This alone should bring a more than 50% increase in transistor density, resulting in much higher performance and lower power consumption compared to the previous generation. NVIDIA's foundry of choice is still unknown; however, current speculation predicts that Samsung will manufacture Ampere, possibly due to delivery issues at TSMC. Architectural improvements should take place as well. Ray tracing is expected to persist and be enhanced, possibly with more hardware allocated to it, along with better software to support the ecosystem of ray-traced applications.

Intel Gen12 iGPU With 96 Execution Units Rears Its Head in Compubench

Intel's upcoming Gen12 iGPU solutions are being touted as sporting Intel's greatest architecture shift in its integrated graphics technologies in a decade. For one, each Execution Unit (EU) will be freed of the additional workload of guaranteeing data coherency between register reads and writes - that work is being handed over to a reworked compiler, freeing up cycles that can be better spent processing triangles. But of course, there is an easier way to improve a GPU's performance without extensive reworks of its design (as AMD and NVIDIA have shown us time and again) - simply increasing the number of execution units. And it seems Intel is ready to do just that with Gen12 as well.

An unidentified Intel Gen12 iGPU was benchmarked in CompuBench, and the report includes interesting tidbits, such as the number of Execution Units - 96, a vast increase over Intel's most powerful iGPU to date, the Iris Pro P580 with its 72 EUs, and far, far beyond the consumer market's UHD 630 and its 24 EUs. The benchmarked Gen12 iGPU thus increases the EU count by 33% compared to Intel's top-performing iGPU - add performance gains from the "extensive architecture rework" on top, and we could be looking at an Intel iGPU that achieves some 40% (speculative) better performance than the company's current best performer. The part was clocked at 1.1 GHz, the same maximum clock the Iris Pro P580 reaches under the best Boost conditions. Let's see what next-gen Intel has in store for us, shall we?
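For those curious, the napkin math behind that speculative 40% figure goes something like this (the per-EU architectural uplift below is purely an assumed placeholder - Intel has published no such number):

```python
eu_ratio = 96 / 72      # 1.33x the execution units of the Iris Pro P580
arch_uplift = 1.05      # assumed per-EU gain from the architecture rework (placeholder)
print(f"~{(eu_ratio * arch_uplift - 1):.0%} faster")  # ~40% faster
```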

Power Matters with EVGA PowerLink - Clean up your Power and System!

Everyone knows that the EVGA PowerLink does wonders to improve cable management for your graphics card. But did you know that the EVGA PowerLink also stabilizes the power going into your graphics card? The EVGA PowerLink is designed to provide a more stable power source and to reduce ripple and noise, compared to connecting your power supply directly to the graphics card. The EVGA PowerLink features two solid-state capacitors that help filter and suppress ripple and noise from the power supply.

The practical impact can be seen in power graphs. Under load, the 12V line going into the graphics card without a PowerLink shows a peak-to-peak voltage of 1,008 mV, while the 12V line going into the graphics card with a PowerLink shows only 728 mV. That's nearly a 28% reduction in voltage variation from the external power source!
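For reference, the quoted reduction works out as follows:

```python
without_mv, with_mv = 1008, 728   # peak-to-peak 12 V ripple, per EVGA's figures
reduction = (without_mv - with_mv) / without_mv
print(f"{reduction:.1%}")         # 27.8% -> "nearly 28%"
```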

NVIDIA Could Launch GTX 1650 Ti on October 22nd

According to the latest round of rumors, NVIDIA could extend its budget GPU offering on October 22nd, when it is said to launch the new GeForce GTX 1650 Ti graphics card. Expected to sit between the GTX 1650 and GTX 1660, the new graphics card is supposed to be NVIDIA's answer to AMD's unannounced, low-end Navi GPUs, rumored to be called the RX 5600 series.

As per ITHome, the GTX 1650 Ti will be priced at 1,100 yuan, which translates to about $155, meaning that either the GTX 1650 will get a price cut to sit below the new Ti model, or the upcoming GTX 1650 Ti will be priced slightly above the rumored number. Envisioned to feature 4 GB of VRAM and anywhere between 1,024 and 1,280 CUDA cores, the new GPU could strike a good balance between current offerings and close the gap between the GTX 1650 and GTX 1660 graphics cards.

TSMC Trembles Under 7 nm Product Orders, Increases Delivery Lead Times Threefold - Could Hit AMD Product Availability

TSMC is at the vanguard of chip fabrication technology at this exact point in time - its 7 nm technology is the leading edge among all large-volume processes, and is being tapped by a number of companies for 7 nm silicon. One of its most relevant clients for our purposes, of course, is AMD - the company now enjoys a fabrication process lead over arch-rival Intel, in large part due to its strategy of spinning off its fabs and becoming a fabless designer of chips. AMD's current product stack has made waves in the market by taking advantage of 7 nm's benefits, but it seems this may actually become a slight problem in the not-so-distant future.

TSMC has announced a threefold increase in its delivery lead times for 7 nm orders, from two months to nearly six months. This means that orders placed after the decision will take considerably longer to materialize into actual silicon, which may lead to availability slumps should demand increase or simply hold steady. AMD's entire modern product stack is built on the 7 nm process, so this could potentially affect both CPUs and GPUs from the company - and let's not forget AMD's Zen 3 and next-gen RDNA GPUs, which are all being designed for the 7 nm+ process node. TSMC is expected to set aside further budget to expand capacity on its most advanced nodes, whilst accelerating investment in its N7+, N6, N5, and N3 nodes.

NVIDIA Partners With Activision in Launching Call of Duty: Modern Warfare Bundles

Call of Duty: Modern Warfare's re-release will see the game supporting real-time raytracing, and given NVIDIA's current standing in the market as the only provider of GPUs capable of hardware-accelerated raytracing, this partnership makes total sense. NVIDIA has announced that it will be bundling Call of Duty: Modern Warfare with select RTX series GPUs, which gives gamers on the fence about buying an RTX graphics card one more reason to take the plunge.

The bundle is available for eligible GeForce RTX 2080 Ti, 2080, 2070 Super, 2070, 2060 Super and 2060 products, whether in discrete card or laptop/desktop pre-built form. The game will be taking advantage of ray tracing and adaptive shading, and will bring gamers back to Soap and Price's story.

Intel Says Its Upcoming Gen12 GPUs Will Feature Biggest Architecture Change In A Decade

Intel is slowly realizing its plans to "one up" its GPU game, starting with the first 10 nm Ice Lake CPUs featuring Gen11 graphics, which equip integrated-GPU users with much more performance than they previously got. Fortunately, Intel doesn't plan to stop there. Thanks to a recent merge request found on the Mesa GitLab repository, we can now expect the biggest GPU architecture change in over a decade with the arrival of Gen12-based GPUs, found on next-generation Tiger Lake processors.

In this merge request, Francisco Jerez, a member of Intel's open-source Linux graphics team, stated the following: "Gen12 is planned to include one of the most in-depth reworks of the Intel EU ISA since the original i965. The encoding of almost every instruction field, hardware opcode and register type needs to be updated in this merge request. But probably the most invasive change is the removal of the register scoreboard logic from the hardware, which means that the EU will no longer guarantee data coherency between register reads and writes, and will require the compiler to synchronize dependent instructions anytime there is a potential data hazard..."
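To illustrate what losing the hardware scoreboard means, consider a toy model (invented for illustration - this is not Intel's actual ISA or compiler): with no hardware interlock, the compiler itself must spot read-after-write hazards and insert explicit synchronization before the dependent instruction.

```python
def insert_sync_tokens(instructions):
    """Toy software-scoreboard pass. Each instruction is a (dest, srcs) tuple;
    emit a 'sync' marker whenever an instruction reads a register whose
    write may still be in flight (a read-after-write hazard)."""
    in_flight = set()                      # registers with pending writes
    scheduled = []
    for dest, srcs in instructions:
        if in_flight & set(srcs):          # potential data hazard detected
            scheduled.append("sync")       # wait for pending writes to land
            in_flight.clear()
        scheduled.append(f"{dest} <- {', '.join(srcs)}")
        in_flight.add(dest)
    return scheduled

# r2 consumes r1's result, so a sync lands between the two instructions.
print(insert_sync_tokens([("r1", ("r0",)), ("r2", ("r1",))]))
# ['r1 <- r0', 'sync', 'r2 <- r1']
```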

TechPowerUp GPU-Z v2.25.0 Released

TechPowerUp today released the latest version of TechPowerUp GPU-Z, the definitive graphics subsystem information, diagnostic, and monitoring utility. Version 2.25.0 adds several new features, support for more GPUs, and fixes for various bugs. To begin with, you'll notice that the main screen displays a second row of APIs supported by your graphics card. These include Vulkan, DirectX Raytracing, DirectML, and OpenGL. The last one in particular helps you figure out whether your graphics drivers have been supplied by Microsoft (and hence lack OpenGL or Vulkan ICDs) or by your computer's OEM. Among the new GPUs supported are Quadro P2200, Quadro RTX 4000 Mobile, Quadro T1000 Mobile; AMD Radeon Pro WX 3200, Barco MXRT 7600, 780E Graphics, HD 8330E; and Intel Gen11 "Ice Lake."

With GPU-Z 2.25.0, we've improved AMD Radeon "Navi" support even further by making the clock-speed measurement more accurate and displaying base, gaming, and boost clocks in the "Advanced" tab. A workaround has been added for the AMD bug that causes fan speeds to lock when idle fan-stop is engaged on custom-design "Navi" graphics cards, and for a faulty "65535 RPM" fan-speed reading on "Navi". A BSOD caused in QEMU/KVM machines by MSR register access has also been fixed. Grab it from the link below.

DOWNLOAD: TechPowerUp GPU-Z 2.25.0

CORSAIR Releases Hydro X RX-SERIES GPU Water Block for AMD Radeon RX 5700 XT

If our review of CORSAIR's Hydro X series XG7 GPU water block for the NVIDIA GTX 1080 interested you and made you want to look into the company's offerings for newer cards, you may be just as interested to know that AMD's latest and greatest in the discrete GPU market gets some Hydro X love too. CORSAIR has added to its custom watercooling product portfolio with the new RX-SERIES GPU block, compatible with all reference-design AMD Radeon RX 5700 and RX 5700 XT offerings. The block has the same feature set as the company's other XG7 GPU blocks: full coverage (GPU, VRM, VRAM), integrated dRGB lighting supported by iCUE, pre-applied thermal pads and paste for easy installation, a full-length aluminium backplate included in the package, and a transparent top coupled with a flow-indicator wheel. It costs $149.99 for customers in the USA, and is available immediately as of the time of this post.

Primate Labs Introduces GeekBench 5, Drops 32-bit Support

Primate Labs, developers of the ubiquitous benchmarking application GeekBench, have announced the release of version 5 of the software. The new version brings numerous changes, and one of the most important (since it affects compatibility) is that it will only be distributed in a 64-bit version. Some under-the-hood changes include additions to the CPU benchmark tests (including machine learning, augmented reality, and computational photography) as well as increases in the memory footprint of tests, so as to better gauge the impact of your memory subsystem on overall system performance. Also introduced are different threading models for CPU benchmarking, allowing for changes in workload attribution and the corresponding impact on CPU performance.

On the Compute side of things, GeekBench 5 now supports the Vulkan API, which joins CUDA, Metal, and OpenCL. Also available is GPU-accelerated compute for computer vision tasks such as Stereo Matching, and augmented reality tasks such as Feature Matching. For iOS users, there is now a Dark Mode for the results interface. GeekBench 5 is available now, 50% off, on Primate Labs' store.

AMD CEO Lisa Su: "CrossFire Isn't a Significant Focus"

AMD CEO Lisa Su at the Hot Chips conference answered some questions from the attending press. One of these regarded AMD's stance on CrossFire and whether or not it remains a focus for the company. CrossFire was once the poster child for a scalable consumer graphics future, with AMD even going as far as enabling mixed-GPU support (with debatable merits). Now, Lisa Su came out and said what we have all been watching happen in the background: "To be honest, the software is going faster than the hardware, I would say that CrossFire isn't a significant focus".

There isn't anything really new here; we've all seen the consumer GPU trends as of late, with CrossFire barely deserving a mention (and the NVIDIA camp doing the same for its SLI technology, which has been cut from all but the higher-tier graphics cards). Support seems to be enabled as more of an afterthought than a "focus", and that's just the way things are. It seems that the old, old practice of buying a lower-tier GPU at launch and then adding a second graphics processor further down the line to leapfrog the performance of pricier single-GPU solutions is going the way of the proverbial dodo - at least until an MCM (Multi-Chip-Module) approach sees the light of day, paired with a hardware syncing solution that does away with the software side of things. A true, integrated, software-blind multi-GPU solution, comprised of two or more dies smaller than a single monolithic chip, seems to be the way to go. We'll see.

NVIDIA CEO Says Buying a GPU Without Ray Tracing "Is Crazy"

During NVIDIA's second quarter earnings call, the company's co-founder and CEO, Jensen Huang, talked about earnings and what drives demand. When talking about sales, Huang noted a few things about NVIDIA's RTX lineup of graphics cards and why buying one is the only reasonable thing to do.

Specifically, Huang said that "SUPER is off to a super start for and at this point, it's a foregone conclusion that we're going to buy a new graphics card, and it's going to the last 2, 3, 4 years to not have ray tracing is just crazy. Ray tracing content just keeps coming out. And between the performance of SUPER and the fact that it has ray tracing hardware, it's going to be super well positioned for throughout all of next year."

AMD Patents New System and Method for Protecting GPU Memory Instructions Against Faults

With an ever-increasing number of exploits, processor manufacturers are finding new and improved ways to secure their systems against such dangers. Exploits can be found at both the hardware and software level, but hardware-level ones are harder to patch and protect against. If you remember Spectre and Meltdown, they used the CPU's branch speculation to force an unwanted instruction stream. At the software/firmware level we also got a fair number of exploits, like the recent "Screwed Drivers" incident, where drivers signed and approved by Microsoft were found susceptible to privilege escalation.

Now, AMD has patented a new system and method for protecting GPU memory instructions against faults. The proposed method pairs the system's "master" and "slave" devices, manipulating their instruction streams and checking for errors in the process. First, the proposed system converts "slave" device requests into dummy operations such as NOPs (No OPeration) and modifies the memory arbiter to issue N master and N slave global/shared memory instructions per cycle, sending the master memory requests on to the memory system. The slave requests are used only for error checking: each master request is entered into a memory FIFO (First In, First Out buffer), while the corresponding slave request is stored in a register. Finally, the two values - the one in the register holding the slave request and the one in the FIFO - are compared to see if there are any differences.
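A toy model of that lockstep comparison might look like the following (a conceptual sketch based on the description above, not AMD's actual hardware logic - all names are invented):

```python
from collections import deque

def check_memory_requests(master_stream, slave_stream):
    """Compare duplicated (master, slave) memory requests in lockstep.
    Master requests head to the memory system via a FIFO; each slave
    request is parked in a register and compared against the FIFO head."""
    fifo = deque()
    for master_req, slave_req in zip(master_stream, slave_stream):
        fifo.append(master_req)         # master request queued for the memory system
        register = slave_req            # slave request held in a register
        if fifo.popleft() != register:  # mismatch -> a fault corrupted one copy
            return False
    return True

# Identical streams pass; a single corrupted address is caught.
print(check_memory_requests(["LD 0x100", "ST 0x200"], ["LD 0x100", "ST 0x200"]))  # True
print(check_memory_requests(["LD 0x100", "ST 0x2F0"], ["LD 0x100", "ST 0x200"]))  # False
```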

NVIDIA Issues Warning to Upgrade Drivers Due to Security Patches

NVIDIA has found a total of five security vulnerabilities in its Windows drivers for the GeForce, Quadro, and Tesla lineups of graphics cards. These new security risks are labeled as very dangerous and have the potential to cause local code execution, denial of service, or escalation of privileges unless the system is updated. Users are advised to update their Windows drivers as soon as possible in order to stay secure, so be sure to check that you are on the latest version. The vulnerabilities only affect Windows-based OSes, from Windows 7 through Windows 10.

However, one reassuring fact is that, in order to exploit a system, an attacker must have local access to the machine running the NVIDIA GPU; remote exploitation is not possible. Below are the tables provided by NVIDIA showing each exploit's type, the rating it carries, and which driver versions are affected. There are no mitigations for these exploits; a driver update is the only available way to secure the system.

Intel Plans to Launch Its Discrete GPU Lineup Starting at $200

During an interview with the Russian YouTube channel PRO Hi-Tech, Raja Koduri, Intel's chief architect and senior vice president of architecture, software and graphics, talked about his career, why he left AMD, and where Intel is going with its discrete GPU efforts. One of the most notable things Mr. Koduri said regarded the upcoming GPU lineup code-named Arctic Sound. He noted that Intel plans to release its first GPU as a mid-range model at a price of $200, with enterprise solutions that utilize HBM memory to follow.

Koduri said that he wants to replicate AMD's strategy of capturing high-volume price points, such as the $199 Radeon RX 480. The plan here is to bring an affordable, well-performing GPU to the masses - "GPUs for everyone", as he calls them. Additionally, he stated that Intel's current strategy revolves around price, not performance, providing the best possible value to consumers. Intel's approach for the next two or three years is to launch a complete lineup of GPUs, with a common architecture used for everything from the iGPUs found inside consumer CPUs to data-center GPUs.

Update: PRO Hi-Tech has posted a snippet of the Raja Koduri interview without the Russian overlay commentary. What he actually said was: "...Eventually our architecture, as publicly said, has to get from mainstream, which is starting at around $100, all the way to data-center class graphics with HBM memory...". This means the earlier speculation about a $200 graphics card is false, as he didn't say that. All he said is that Intel wants to enter the "mainstream" GPU market and work its way up to the data center.

Intel Launches First 10th Gen Core Processors: Redefining the Next Era of Laptop Experiences

Today, Intel officially launched 11 new, highly integrated 10th Gen Intel Core processors designed for remarkably sleek 2 in 1s and laptops. The processors bring high-performance artificial intelligence (AI) to the PC at scale, feature new Intel Iris Plus graphics for stunning entertainment and enable the best connectivity with Intel Wi-Fi 6 (Gig+) and Thunderbolt 3. Systems are expected from PC manufacturers for the holiday season.

"These 10th Gen Intel Core processors shift the paradigm for what it means to deliver leadership in mobile PC platforms. With broad-scale AI for the first time on PCs, an all-new graphics architecture, best-in-class Wi-Fi 6 (Gig+) and Thunderbolt 3 - all integrated onto the SoC, thanks to Intel's 10nm process technology and architecture design - we're opening the door to an entirely new range of experiences and innovations for the laptop."
-Chris Walker, Intel corporate vice president and general manager of Mobility Client Platforms in the Client Computing Group

AMD Could Launch New Navi GPUs Soon

AMD president and CEO Dr. Lisa Su fielded a very interesting question during the Q&A portion of AMD's Q2 earnings call. When prompted about high-end Navi GPUs, Dr. Su answered: "I would say they are coming. You should expect that our execution on those are on track and we have a rich 7 nm portfolio beyond the products that we have already announced in the upcoming quarters."

This answer gives us hope of seeing more powerful Navi GPUs, possibly by the end of the year, meaning that AMD's answer to Turing is almost ready. As earlier rumors suggested, we might get additional higher-end GPU models in the form of the alleged RX 5800 and RX 5900, with XT variants available for both. The RX 5800 is supposed to utilize a new GPU core called Navi 12, while the core powering the RX 5900 is still unknown.

Intel Starts Shipping 10 nm Ice Lake CPUs to OEMs

During its second-quarter earnings call, Intel announced that it has started shipping its 10th-generation "Core" CPUs to OEMs. Making use of 10 nm lithography, the 10th generation of "Core" CPUs, codenamed Ice Lake, was qualified by OEMs earlier in 2019 for integration into future products. Ice Lake is on track for the 2019 holiday season, meaning we can expect products on shelves by the end of this year. That is exciting news, as the 10th generation of Core CPUs brings some notable micro-architectural improvements along with Intel's long-awaited and much-delayed 10 nm manufacturing process node.

The new CPUs are said to deliver around an 18% IPC improvement on average in a direct comparison with the previous generation of Intel CPUs clocked at the same frequency. This time, even regular mobile/desktop parts will get AVX-512 support, alongside VNNI and cryptography ISA extensions that are supposed to bring additional security and performance to an ever-increasing number of tasks, especially newer ones like neural network processing. Core configurations will range from dual-core i3 to quad-core i7, with a total of 11 models available.
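As a back-of-the-envelope model (an illustration, not a formula Intel provides): performance scales roughly as IPC times frequency, so the 18% IPC gain only becomes an equal performance gain at identical clocks, and shrinks if shipping parts clock lower.

```python
def relative_performance(ipc_gain: float, freq_ratio: float) -> float:
    """Performance ~ IPC x frequency; returns the gain over the old part."""
    return (1 + ipc_gain) * freq_ratio

print(relative_performance(0.18, 1.0))   # same clocks: 1.18x the old part
print(relative_performance(0.18, 0.9))   # 10% lower clocks: ~1.06x
```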

EK Water Blocks Unveils EK-FC GV100 Pro, A Water Block for Professionals

EK Water Blocks, the premium computer liquid cooling gear manufacturer, is releasing a workstation/server-grade water block for some of the most powerful workstation GPUs on the market today, based on the NVIDIA GV100 graphics chip. That includes both the Quadro GV100 and Tesla V100, as well as the Titan V. The EK-FC GV100 Pro water block spans the entire length of the card, cooling all critical components.

With the launch of this water block, it's clear that EK's plan of expansion into the professional workstation and server-grade market is well under way. In the following months you can expect many more workstation and enterprise cooling solutions from EK.