News Posts matching #GPU


Intel Scores Another AMD Graphics Higher-up: Ali Ibrahim

To support its efforts to build a competitive consumer GPU lineup under the Xe brand, which Intel likes to call its "Odyssey," the company scored another higher-up from AMD, this time Ali Ibrahim. He joined Intel this month as a vice-president within the Architecture, Graphics and Software group, although the company didn't specify his responsibilities. "We are thrilled that Ali has joined Intel as Vice President, Platform Architecture and Engineering - dGPUs to be part of the exciting Intel Xe graphics journey," said an Intel spokesperson in a comment to CRN.

During his 13-year tenure at AMD, Ali Ibrahim was the chief architect of the company's cloud gaming and console SoC businesses, experience that should prove valuable to Intel's breakneck efforts to build high-end discrete GPUs (something it has lacked for the past two decades). As the inventor of x86, Intel is the only other company capable of building semi-custom chips for the likes of Microsoft or Sony, provided it has a GPU that can match AMD's in the console space. Likewise, with gaming taking its first steps into the cloud and big players such as Google betting on it, Intel sees an opportunity for cloud gaming GPUs that aren't too different from its "Ponte Vecchio" scalar processors. The transfer of talent isn't one-way, either: AMD recently bagged Intel's server processor lead, Dan McNamara, to head the EPYC brand.

NVIDIA Announces Quadro Experience

Experience matters. And with NVIDIA Quadro Experience—a new application for Quadro GPUs—professionals across industries can boost their creativity and increase their productivity like never before.

Quadro Experience, available now, helps professionals simplify time-consuming tasks, streamline workflows, and ensure their favorite applications always have the latest updates. NVIDIA Quadro Experience makes sharing content easier by providing screen capture and desktop recording in 4K, so teams can easily upload content and even broadcast their work directly from their desktop or laptop.
NVIDIA Quadro Experience

Arm Announces New IP Portfolio with Cortex-A78 CPU

During this unprecedented global health crisis, we have experienced rapid societal changes in how we interact with and rely on technology to connect, aid, and support us. As a result of this we are increasingly living our lives on our smartphones, which have been essential in helping feed our families through application-based grocery or meal delivery services, as well as virtually seeing our colleagues and loved ones daily. Without question, our Arm-based smartphones are the computing hub of our lives.

However, even before this increased reliance on our smartphones, there was already growing interest among users in exploring the limits of what is possible. The combination of these factors with the convergence of 5G and AI is generating greater demand for more performance and efficiency in the palm of our hands.
Arm Cortex-A78

NVIDIA Investors Claw Back at Company, Claiming $1 Billion Mining GPU Revenue Hidden Away in the Gaming Division

NVIDIA investors have recently filed a suit against the company, claiming that NVIDIA wrongfully reported its revenue across divisions. The main point of contention is that investors claim NVIDIA knowingly obscured how much of its revenue came from the crypto-market boom (and subsequent bust), thus painting a picture of the company's outlook that differed from reality (making demand for the Gaming division look higher than it actually was) and exposing them to a different state of affairs and revenue trajectory than they expected. The investors say that NVIDIA knew a not-insignificant number of the graphics cards it sold between 2017 and 2018 were being bought up solely for crypto mining, and that the company even marketed GPUs specifically for that purpose.

The crypto mining boom had miners gobbling up all the NVIDIA and AMD graphics cards they could, with both companies seemingly increasing production to meet the bubble's demand. However, due to the economics of crypto mining, it was clear that any profits derived from this bubble would ultimately open the door to an explosive logistics problem, as miners offloaded their graphics cards onto the second-hand market, which could ultimately harm NVIDIA's books. Of course, one can look at NVIDIA's revenue categories at the time to see that crypto would hardly fit neatly into any of the Gaming, Professional Visualization, Datacenter, Auto, or OEM & IP divisions.

Asetek Unveils Rad Card, Industry's First Slot-In PCIe Radiator Card

Asetek, the creator of the all-in-one (AIO) liquid cooler and the global leader in liquid cooling solutions for gaming PCs and DIY enthusiasts, today announced its Rad Card GPU Cooler, bringing liquid cooled GPUs to space constrained PC cases. Asetek's Rad Card GPU Cooler, the industry's first slot-in PCIe radiator card, is first available in Dell-Alienware's newly introduced Alienware Aurora R11 PC.

Space concerns are a real issue for PC manufacturers, leaving GPU air cooling as the only option, until now. Asetek took this challenge head-on, innovating a new approach to radiator technology that reimagines the shape and location of the radiator. The Asetek Rad Card GPU Cooler fits into your motherboard's PCIe slot, just like any other add-in card. By utilizing PCIe slots, Asetek has defined a way to overcome PC manufacturers' dilemma of finding additional space inside the case for a liquid cooled GPU heat exchanger (HEx).

Update May 18th: This card may not be limited to OEMs, with Asetek tweeting "Not all of them made it to Alienware. Not what to do with these...". Asetek is very open about seeking feedback and is watching consumer demand for this product, possibly even getting ready for a giveaway, so it will be exciting to see what comes of this.

Hot Chips 2020 Program Announced

Today the Hot Chips program committee officially announced the August conference line-up, posted to hotchips.org. For this first-ever live-streamed Hot Chips Symposium, the program is better than ever!

In a session on deep learning training for data centers, we have a mix of talks from the internet giant Google showcasing their TPUv2 and TPUv3, and a talk from startup Cerebras on their 2nd gen wafer-scale AI solution, as well as ETH Zurich's 4096-core RISC-V based AI chip. And in deep learning inference, we have talks from several of China's biggest AI infrastructure companies: Baidu, Alibaba, and SenseTime. We also have some new startups that will showcase their interesting solutions: Lightmatter talking about its optical computing solution, and Tenstorrent giving a first look at its new architecture for AI.
Hot Chips

NVIDIA Tesla A100 GPU Pictured

Thanks to the sources of VideoCardz, we now have the first picture of the next-generation NVIDIA Tesla A100 graphics card. Designed for compute-oriented applications, the Tesla A100 is a socketed GPU built for NVIDIA's proprietary SXM socket. In a post a few days ago, we suspected that the Tesla A100 might fit in the socket of the previous Volta V100 GPUs, as it uses a similar SXM socket. However, the mounting holes have been rearranged, and this one requires a new socket/motherboard. The Tesla A100 is based on the GA100 die, whose specifications we don't know yet. From the picture, we can only see that there is one very big die attached to six HBM modules, most likely HBM2E. Beyond that, everything else is unknown. More details are expected to be announced today at the GTC 2020 digital keynote.
NVIDIA Tesla A100

NVIDIA CEO Jensen Huang has been Cooking the World's Largest GPU - Is this Ampere?

NVIDIA is rumored to introduce its next-generation Ampere architecture very soon, at its GTC event happening on May 14th. We're expecting to see an announcement of the successor to the company's DGX lineup of pre-built compute systems, using the upcoming Ampere architecture of course. At the heart of these machines will be a new GA100 GPU that's rumored to be very fast. A while ago, we saw NVIDIA register a trademark for "DGX A100", which seems like a credible name for these systems featuring the new Tesla A100 graphics cards.

Today, NVIDIA's CEO was spotted in an unlisted video published on the official NVIDIA YouTube channel. It shows him pulling out of the oven what he calls the "world's largest GPU", which he has apparently been cooking all this time. Featuring eight Tesla A100 GPUs, this DGX A100 system appears to be based on a similar platform design to previous DGX systems, where the GPU is a socketed SXM2 design. This looks like a viable upgrade path for owners of previous DGX systems: just swap out the GPUs and enjoy higher performance. We're used to seeing Mr. Huang in his trademark leather jacket, yet in the video he isn't wearing one. Is this the real Jensen? Jokes aside, you can check out the video below, if it is not taken down soon.
NVIDIA DGX A100 System
Update May 12th, 5 pm UTC: NVIDIA has made the video public; it is no longer unlisted.

TSMC 5 nm Customers Listed, Intel Rumored to be One of Them

TSMC is working hard to bring up its new 5 nm nodes (N5 and N5+) despite all the hiccups the company may have had due to the ongoing COVID-19 pandemic. However, it seems like nothing can stop TSMC, and plenty of companies have already reserved capacity for their chips. With mass production supposed to start in Q3 of this year, the 5 nm node should become one of TSMC's major nodes over time, with predictions that it will account for 10% of the company's total capacity in 2020. Thanks to a report from ChinaTimes, we have a list of new clients for the TSMC 5 nm node, with some very interesting names like Intel appearing on it.

Apple and Huawei/HiSilicon will be the biggest customers for the node this year, with the A14 and Kirin 1000 chips being made on N5, while Apple orders A15 chips and Huawei readies the Kirin 1100 5G chip for the next-generation N5+. From there, AMD will join the 5 nm party with Zen 4 processors and RDNA 3 graphics cards. NVIDIA has also reserved some capacity for its Hopper architecture, which is expected to be a consumer-oriented option, unlike Ampere. And perhaps the most interesting entry on the list is Intel Xe graphics cards. The list suggests that Intel might use the N5 process from TSMC to ensure the best possible performance for its future cards, in case it has issues manufacturing its own nodes, just like it did with 10 nm.
TSMC 5 nm customers

AMD Adds Four New Graphics Technologies to Its FidelityFX Software Stack via GPUOpen

AMD today, via its newly released GPUOpen website, announced that it is adding four new graphics technologies to its FidelityFX software stack. Before you ask: no, there are no ray tracing graphics libraries among these four new technologies. However, considering the use-case for these is to give developers almost plug-in flexibility for various graphics technologies they would otherwise have to find other ways to integrate into their rendering pass, added layers to GPUOpen are always a welcome sight. And rest assured that "classic" shading techniques will still be widely used even with the advent of top-to-bottom raytracing capabilities on graphics hardware - which likely won't happen in the next GPU hardware generation anyway.

Added to the previously released Contrast Adaptive Sharpening are libraries for SSSR (Stochastic Screen Space Reflections), for better reflections without the use of raytracing; CACAO (Combined Adaptive Compute Ambient Occlusion), for added depth to shadows and object quality; LPM (Luminance Preserving Mapper), for easier application of an HDR rendering pipeline with correct values, preventing overblown details; and SPD (Single Pass Downsampler), which will allow developers to seamlessly downsample required assets (think something along the lines of Variable Rate Shading) to achieve FPS targets. GPUOpen is an effort by AMD to create an open graphics library that allows developers to easily integrate AMD-optimized technologies into their graphics workflow.
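To illustrate what a downsampler like SPD produces, here is a conceptual sketch in plain NumPy (not AMD's actual SPD shader or API): each mip level simply averages 2x2 blocks of the previous level into one texel, whereas SPD's contribution is doing the entire chain in a single compute pass.

```python
import numpy as np

def downsample_2x(image: np.ndarray) -> np.ndarray:
    """Average each 2x2 block into one texel: one mip level down.
    Conceptual only; AMD's SPD generates the whole mip chain in one compute pass."""
    h, w = image.shape[:2]
    h2, w2 = h // 2, w // 2
    cropped = image[:h2 * 2, :w2 * 2]                        # drop odd rows/columns
    blocks = cropped.reshape(h2, 2, w2, 2, *cropped.shape[2:])
    return blocks.mean(axis=(1, 3))

# Build a small mip chain from a random 256x256 RGB "texture".
mip = np.random.rand(256, 256, 3).astype(np.float32)
while min(mip.shape[:2]) > 1:
    mip = downsample_2x(mip)
    print(mip.shape)
```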

Intel Gen12 Xe GPU with 96 Execution Units Shows Up on SiSoft Database

An Intel Gen12 Xe GPU, possibly a discrete DG1 prototype, showed up in the SiSoft SANDRA online database. The GPU is detailed by SANDRA as having 768 unified shaders across 96 execution units (EUs), a 1.50 GHz GPU clock speed, 1 MB of on-die L2 cache, and 3 GB of dedicated video memory of an unknown type (likely GDDR6). This is probably a different chip from the DG1-SDV, which caps out at a 900 MHz GPU clock, although its SIMD muscle is identical.

At a clock speed of 1.50 GHz, the chip would offer FP32 throughput of roughly 2,303 GFLOPs (we can extrapolate this from the DG1-SDV offering 1,382 GFLOPs at 900 MHz). If software-side optimization backs this hardware, the resulting product could end up with performance in the league of the 8 CU Radeon "Vega" solution found in the AMD "Renoir" APU, or the Radeon RX 560 discrete GPU, which are just about enough for PUBG at 1080p with medium settings.
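For reference, those FLOPS figures follow from a simple back-of-the-envelope formula; the sketch below assumes the usual 8 FP32 lanes per Gen EU and 2 FLOPs per lane per clock (one FMA), which reproduces both the 1.50 GHz estimate and the 900 MHz DG1-SDV number.

```python
def fp32_gflops(shaders: int, clock_ghz: float, flops_per_clock: int = 2) -> float:
    """Peak FP32 throughput: shaders x FLOPs-per-clock (FMA = 2) x clock in GHz."""
    return shaders * flops_per_clock * clock_ghz

eus = 96
shaders = eus * 8                     # assumes 8 FP32 ALU lanes per execution unit
print(fp32_gflops(shaders, 1.50))     # ~2304 GFLOPS, in line with the figure quoted above
print(fp32_gflops(shaders, 0.90))     # ~1382 GFLOPS, matching the DG1-SDV
```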

Samsung/AMD Radeon GPU for Smartphones is Reportedly Beating the Competition

Samsung and AMD announced their strategic partnership last year to bring AMD RDNA GPUs to Samsung mobile chips and use them as the only GPU going forward. Now, some performance numbers are circulating for the new RDNA smartphone GPU, compared against the Qualcomm Adreno 650. The South Korean technology forum "Clien" has obtained some alleged performance results for the new GPU in the GFXBench benchmark. The baseline in these tests is the Qualcomm Adreno 650, which scored 123 FPS in the Manhattan 3.1 test, 53 FPS in Aztec Normal, and 20 FPS in Aztec High.

The welcome surprise here is the new RDNA GPU Samsung is pursuing. It scored an impressive 181 FPS in the Manhattan 3.1 test (up 47% from the Adreno 650), 138 FPS in Aztec Normal (up about 160% from the Adreno 650), and 58 FPS in Aztec High, which is 190% higher than the Adreno 650. These performance results could well be true, as the Samsung and AMD collaboration should yield its first products in 2021, when the competition will be stronger, and they need to prepare for that. You always start designing a processor for next-generation workloads and performance if you want to be competitive by the time you release a product.
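The quoted uplifts are easy to verify from the leaked frame rates themselves; a quick sketch using only the numbers reported above (not independent measurements):

```python
# Leaked GFXBench figures quoted above (frames per second).
adreno_650 = {"Manhattan 3.1": 123, "Aztec Normal": 53, "Aztec High": 20}
samsung_rdna = {"Manhattan 3.1": 181, "Aztec Normal": 138, "Aztec High": 58}

for test, baseline in adreno_650.items():
    uplift = (samsung_rdna[test] / baseline - 1) * 100
    print(f"{test}: +{uplift:.0f}% over Adreno 650")
# Manhattan 3.1: +47%, Aztec Normal: +160%, Aztec High: +190%
```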
AMD RDNA GPU

Intel Teases "Big Daddy" Xe-HP GPU

The Intel Graphics Twitter account was on fire today: it posted an update on the development of the Xe graphics processor, mentioning that samples are ready and packed up in quite an interesting package. The processor in question was identified as a Xe-HP GPU variant with an estimated die size of 3700 mm², which means we are surely talking about a multi-chip package here. How we concluded that it is the Xe-HP GPU is from the words of Raja Koduri, senior vice president, chief architect, and general manager for Architecture, Graphics, and Software at Intel. He made a tweet, later deleted, saying this processor is the "baap of all", meaning "big daddy of them all" when translated from Hindi.

Mr. Koduri previously tweeted a photo of the Intel Graphics team in India, which has been working on the same "baap of all" GPU, suggesting this is a Xe-HP chip. It seems that this is not the version of the GPU made for HPC workloads (that is reserved for the Xe-HPC GPU); this model could be a direct competitor to offerings like NVIDIA Quadro or AMD Radeon Pro. We can't wait to learn more about Intel's Xe GPUs, so stay tuned. Mr. Koduri has since confirmed that this GPU will be used only for data-centric applications, as it is needed to "keep up with the data we are generating". He also added that the focus for gaming GPUs is to start off with better integrated GPUs, and low-power chips above that, which could reach millions of users. That will be a good beginning, as it will enable software preparation for possible high-performance GPUs in the future.

Update May 2: changed "father" to "big daddy", as that's the better translation for "baap".
Update 2, May 3rd: The GPU is confirmed to be a Data Center component.

AMD Reports First Quarter 2020 Financial Results

AMD today announced revenue for the first quarter of 2020 of $1.79 billion, operating income of $177 million, net income of $162 million and diluted earnings per share of $0.14. On a non-GAAP* basis, operating income was $236 million, net income was $222 million and diluted earnings per share was $0.18.

"We executed well in the first quarter, navigating the challenging environment to deliver 40 percent year-over-year revenue growth and significant gross margin expansion driven by our Ryzen and EPYC processors," said Dr. Lisa Su, AMD president and CEO. "While we expect some uncertainty in the near-term demand environment, our financial foundation is solid and our strong product portfolio positions us well across a diverse set of resilient end markets. We remain focused on strong business execution while ensuring the safety of our employees and supporting our customers, partners and communities. Our strategy and long-term growth plans are unchanged."

Khronos Group Releases OpenCL 3.0

Today, The Khronos Group, an open consortium of industry-leading companies creating advanced interoperability standards, publicly releases the OpenCL 3.0 Provisional Specifications. OpenCL 3.0 realigns the OpenCL roadmap to enable developer-requested functionality to be broadly deployed by hardware vendors, and it significantly increases deployment flexibility by empowering conformant OpenCL implementations to focus on functionality relevant to their target markets. OpenCL 3.0 also integrates subgroup functionality into the core specification, ships with a new OpenCL C 3.0 language specification, uses a new unified specification format, and introduces extensions for asynchronous data copies to enable a new class of embedded processors. The provisional OpenCL 3.0 specifications enable the developer community to provide feedback on GitHub before the specifications and conformance tests are finalized.
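For developers curious which OpenCL version their current stack reports, here is a minimal query sketch, assuming the third-party pyopencl bindings are installed (conformant OpenCL 3.0 drivers were not yet shipping at the time of this provisional release):

```python
import pyopencl as cl

# List every OpenCL platform and device along with the version string it reports,
# e.g. "OpenCL 3.0 ..." once conformant drivers become available.
for platform in cl.get_platforms():
    print(platform.name, "-", platform.version)
    for device in platform.get_devices():
        print("  ", device.name, "-", device.version)
```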
OpenCL

NVIDIA is Secretly Working on a 5 nm Chip

According to a report from DigiTimes discussing TSMC's 5 nm silicon manufacturing node, NVIDIA is also going to be a customer for it and could use it in the near future. That is very interesting information, given that these chips will not go into the next generation of GPUs. Why is that? Because we know that NVIDIA will utilize both TSMC's and Samsung's 7 nm manufacturing nodes for its next-generation Ampere GPUs, which will end up in designs like the GeForce RTX 3070 and RTX 3080 graphics cards. These designs are not what NVIDIA needs 5 nm for.

Given that NVIDIA already has a product in its pipeline to satisfy demand in the high-performance graphics market, maybe it is planning something that will end up being a surprise to everyone. No one knows what it is; however, the speculation (which you should take with a huge grain of salt) is that NVIDIA is updating its Tegra SoC on the latest node. That Tegra SoC could be used in a range of mobile devices, like the Nintendo Switch, so could NVIDIA be preparing a new chip for a Nintendo Switch 2?
NVIDIA Xavier SoC

AMD's Next-Generation Radeon Instinct "Arcturus" Test Board Features 120 CUs

AMD is preparing to launch its next generation of Radeon Instinct GPUs based on the new CDNA architecture designed for enterprise deployments. Thanks to the popular hardware leaker _rogame (@_rogame), we have some information about the configuration of the upcoming Radeon Instinct MI100 "Arcturus" server GPU. Previously, we obtained a BIOS of the Arcturus GPU that showed a configuration of 128 Compute Units (CUs), which works out to 8,192 CDNA cores. That configuration had a specific setup of a 1334 MHz GPU clock, an SoC frequency of 1091 MHz, and a memory speed of 1000 MHz. However, another GPU test board has been spotted which features a slightly different specification.

The reported configuration is an Arcturus GPU with 120 CUs, resulting in a CDNA core count of 7,680. These cores run at 878 MHz for the core clock and 750 MHz for the SoC clock, with a surprising 1200 MHz memory clock. While the SoC and core clocks are lower than in the previous report, along with the CU count, the memory clock is up by 200 MHz. It is important to note that this is just a test board/variation of the MI100, and actual frequencies should be different.
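Both core counts follow directly from CDNA keeping GCN's 64 stream processors per Compute Unit, as this quick sketch shows:

```python
def cdna_cores(compute_units: int, cores_per_cu: int = 64) -> int:
    """CDNA, like GCN, packs 64 stream processors into each Compute Unit."""
    return compute_units * cores_per_cu

print(cdna_cores(128))  # 8192 cores, the configuration seen in the leaked BIOS
print(cdna_cores(120))  # 7680 cores, the test-board configuration reported here
```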
AMD Radeon Instinct MI60

Microsoft Flight Simulator Requirements Listed, Ideal Specification Requires 150 GB of Drive Space and RTX 2080 GPU

Microsoft Flight Simulator, an upcoming game designed to bring real-life scenarios of flying an airplane, just got a list of system requirements. To play Flight Simulator, you would need at least a quad-core CPU such as the AMD Ryzen 3 1200 or Intel Core i5-4460, along with 8 GB of RAM. For graphics, you would need a GPU with at least 2 GB of VRAM, with the requirement being either an AMD Radeon RX 570 or an NVIDIA GeForce GTX 770. Another interesting observation is the requirement of 150 GB of drive space, meaning that this game will be pretty big. The internet connection needs to be at least 5 Mbps, and as you raise the resolution and graphics settings, you will need a faster connection. You can check out the entire table below.

The need for an incrementally faster connection comes down to one thing: adaptive streaming. The game looks stunning, and if you wish to play at the highest quality, parts of the game will be rendered in the cloud. Microsoft is using its Azure infrastructure to help render parts of the game and stream them down to your PC. This ensures that your PC is capable of playing the game, and Microsoft is showing how it can tap the power of the cloud for uses like this.
Microsoft Flight Simulator System Requirements

GPD Win Max is an 8-inch Gaming Laptop with Intel's Ice Lake CPU

GPD, a company specializing in tiny laptops designed for gaming, has just announced the latest addition to its family of tiny notebooks: the GPD Win Max gaming laptop. This model is an 8-inch gaming laptop packing a lot for its size. On the outside, the laptop is equipped with joysticks on both sides, so there is even an option to play games directly with these joysticks instead of the built-in keyboard. The display is an IPS screen with a 1280×800 resolution, giving it a 16:10 aspect ratio. What's more important, however, is what's under the hood of the small body.

It is powered by Intel's latest Ice Lake CPU, the Core i5-1035G7, a 4-core/8-thread CPU with Gen11 Iris Plus 940 graphics, accompanied by 16 GB of LPDDR4X RAM and a 512 GB SSD. GPD has provided some benchmark results for this configuration, which you can check out below; however, please take these with a grain of salt. As far as I/O goes, this small laptop is rather well equipped with plenty of ports. There is one Thunderbolt 3 port to connect an external GPU if needed, plus one USB Type-C 3.1 Gen2 port and two USB Type-A 3.1 Gen1 ports for external peripherals. If you wish to connect the laptop to an external screen, there are HDMI, USB Type-C, and Thunderbolt 3 options. A welcome addition to the I/O is an RJ45 connector, meaning that if you have access to Ethernet you can easily plug it into this laptop.
GPD Win Max GPD Win Max Benchmarks

Apple's A12Z SoC Features the Same A12X Silicon

With the introduction of new iPad Pro tablets, Apple has brought another new piece of silicon to its offerings in the form of the A12Z SoC. Following the previous king of the tablet space, the A12X SoC, Apple has updated its silicon, and there is now another, more advanced stepping in the form of the A12Z. According to a report from TechInsights, analysis has shown that the new SoC used in Apple's devices is pretty much the same as last year's A12X, except for the GPU. Namely, the configuration of the A12X carries over to the A12Z: there are four Apple Vortex and four Apple Tempest cores for the CPU, and a 128-bit memory bus designed for LPDDR4X memory, the same as the A12X.

What is different, however, is the GPU cluster configuration. In the A12X there was a cluster of seven working and one disabled A12-generation GPU cores. In the A12Z, all eight GPU cores present are enabled and working, and they are also of the same A12 generation. The new SoC is even built using the same N7 7 nm manufacturing process from TSMC. While we don't know the silicon stepping revision of the A12Z, there aren't any new features besides the additional GPU core.
Apple A12Z Bionic

Huawei Rumored To Enter GPU Server Market

Huawei may become the fourth player in the GPU server market, if a new report from Korean news outlet The Elec is to be believed. The Elec has heard from industry sources that Huawei is readying to enter the market in 2020, which would put it in direct competition with industry leader NVIDIA along with AMD and newcomer Intel. Huawei Korea will reportedly assign the project to the new Cloud and AI Business Group division, and talent scouting has already begun, with rumors of current and former NVIDIA staff being poached.

Huawei is no newcomer to the server market, having already launched the Ascend 910, one of the world's most advanced AI accelerators, in August 2019. The Ascend 910 outperforms the Tesla V100 by a factor of two and is built on a more advanced 7 nm+ process compared to the 12 nm Tesla V100. In January 2020, Huawei launched its next server product, the Kunpeng 920, a big-data CPU, along with a new server lineup featuring the chip. Considering Huawei's experience and resources in the server market, along with Intel's entrance, the GPU server landscape is set to become very competitive.

NVIDIA Quietly Refreshes Mobile GeForce RTX Graphics Cards

NVIDIA has recently introduced its GeForce RTX 20-series SUPER mobile graphics cards, with a heap of new features present on the new GPU models. According to a report from ComputerBase, NVIDIA has refreshed a few of the RTX 20-series mobile GPUs in addition to the previously announced SUPER models. There are now a few models/variants of these RTX 20-series GPUs. Firstly, there is the standard RTX mobile GPU variant called the "Max-P" version; this represents the RTX 20-series GPUs launched initially in January 2019 and includes three SKUs: the RTX 2060, 2070, and 2080. Secondly, there is a "Max-Q" revision of these graphics cards, meant to deliver maximum efficiency. These SKUs are power-limited, so they are a bit slower than the regular "Max-P" versions, and they have a Total Graphics Power (TGP) between 65 W and 90 W depending on the model.

Next up, we have the GeForce RTX 20-series SUPER GPUs that launched just a few days ago, on April 2nd. These GPUs are special because they bring a heap of new features like more CUDA cores, lower-voltage (1.25 V compared to the old 1.35 V) GDDR6 memory, Dynamic Boost, and Optimus support. So what are these new, refreshed GPUs? On the ComputerBase forums, a representative of XMG (a laptop maker) revealed that NVIDIA has refreshed some of the older RTX 20-series mobile GPUs that were already in use.

Steam Hardware Survey March 2020: Intel CPUs, NVIDIA Graphics Cards Rising

Steam has just released its hardware survey for March 2020, and it paints an interesting picture of the overall Steam user market. First of all, a disclaimer: Steam's surveys don't cover the entirety of the systems running its digital distribution platform; the data comes from only a portion of connected machines, which are randomly selected - ideally in such a way that the hardware data is representative of the overall Steam ecosystem.

The March 2020 hardware survey shows Intel CPUs gaining traction over AMD's, albeit in a very limited fashion (a 0.24% increase in the total number of machines running GenuineIntel CPUs). Cross-referencing this increase with the average clock speeds of Intel CPUs, though, paints an interesting picture: the higher clock speeds in the 3.3 to 3.8 GHz range (which can relatively cleanly be associated with the company's higher-performance products) saw a decline in usage (contrary to AMD, which saw the reverse happen).

Acer Announces New Predator Triton 500 and Nitro 5 Gaming Notebooks

Acer today announced its new Predator Triton 500 and Acer Nitro 5 gaming notebooks. Alongside significant developments in thermal performance, both devices come with the latest 10th Gen Intel Core processors and newly announced NVIDIA GeForce RTX SUPER and GTX GPUs.

"In addition to the latest processors and GPUs, we've made exciting across-the-board updates to the Predator Triton 500 and Acer Nitro 5 this year," said James Lin, General Manager, Notebooks, IT Products Business, Acer. "Most importantly we've applied new thermal technology which keeps the devices cool so gamers can enjoy the performance improvements from the latest silicon technology."
Acer Predator Triton 500

MSI Announces new Creator and Gaming Laptops with Intel 10th Gen Core-H and GeForce RTX SUPER

MSI has comprehensively evolved! On top of the award-winning GE66 Raider and GS66 Stealth, MSI has grandly revealed its complete line-up of six series of gamer-oriented laptops. Only the strong can survive and evolve! MSI is the only laptop manufacturer providing a stable supply amid the global disruption of the supply chain. Now be the vanguard and evolve to the next generation by choosing the latest MSI gaming laptops, equipped with 10th Gen Intel Core i9 processors (Comet Lake H-series) and the latest GeForce RTX SUPER series graphics, newly designed for gamers in style!

The next-gen performance can entertain and satisfy gamers in all aspects. The revamped MSI laptops reveal the true computing power of the 10th Gen Intel Core i9 processor. Featuring up to an i9-10980HK, the latest Intel chip has a noticeable impact on FPS in demanding games and multitasking. With the latest Intel chip's 50% boost in computing and a Single-Core Turbo Boost reaching 5.3 GHz, gamers can now enjoy an unparalleled gaming experience. Furthermore, see real-time ray tracing in games with up to the new NVIDIA GeForce RTX 2080 SUPER and its AI-enhanced graphics, which also deliver better efficiency than the previous generation.
MSI Creator 17 MSI GS66 Stealth MSI GP75 Leopard