News Posts matching #NVIDIA


SK hynix Unveils Highest-Performing SSD for AI PCs at NVIDIA GTC 2024

At GPU Technology Conference (GTC) 2024, SK hynix unveiled a new consumer product based on its latest solid-state drive (SSD), PCB01, which boasts industry-leading performance. Hosted by NVIDIA in San Jose, California from March 18-21, GTC is one of the world's leading conferences for AI developers. Aimed at on-device AI PCs, PCB01 is a fifth-generation PCIe SSD that recently had its performance and reliability verified by a major global customer. After completing product development in the first half of 2024, SK hynix plans to launch two versions of PCB01 by the end of the year, targeting both major technology companies and general consumers.

Optimized for AI PCs, Capable of Loading LLMs Within One Second
Offering the industry's highest sequential read speed of 14 gigabytes per second (GB/s) and a sequential write speed of 12 GB/s, PCB01 doubles the speed of its previous generation. This enables the loading of LLMs required for AI training and inference in less than one second. To make on-device AI operational, PC manufacturers store an LLM in the PC's internal storage and quickly transfer its data to DRAM for AI tasks. In this process, the PCB01 inside the PC efficiently supports the loading of LLMs. SK hynix expects these characteristics of its latest SSD to greatly increase the speed and quality of on-device AI.
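As a rough plausibility check on the sub-second claim, load time is bounded below by model size divided by sequential read speed. A quick sketch in Python (the model size is an illustrative assumption, not an SK hynix figure):

```python
def load_time_s(model_size_gb: float, read_speed_gbps: float) -> float:
    """Lower-bound load time: model size divided by sequential read speed."""
    return model_size_gb / read_speed_gbps

# Illustrative: a ~7B-parameter model quantized to 4-bit weighs roughly 4 GB.
print(load_time_s(4.0, 14.0))  # ~0.29 s at PCB01's 14 GB/s read speed
print(load_time_s(4.0, 7.0))   # ~0.57 s at half that speed
```

Real-world loading adds file-system and decompression overhead, so this is a floor, not a prediction.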

PNY Technologies Unveils NVIDIA IGX Orin, NVIDIA Holoscan, and Magic Leap 2 Developer Platform

PNY Technologies, a pioneer in high-performance computing, proudly announces the launch of a groundbreaking developer platform, uniting the formidable capabilities of NVIDIA IGX Orin, NVIDIA Holoscan and Magic Leap 2. This visionary kit empowers software and technology vendors to develop cutting-edge solutions in healthcare and other industries, redefining the boundaries of innovation.

Key Features of the NVIDIA IGX + Magic Leap 2 XR Bundle:
  • Zero Physical World Latency for Mission-Critical Applications: Delivers real-time data processing with unparalleled precision for latency-sensitive, mission-critical applications.
  • AI Inference and Local Computation: Leverage NVIDIA IGX Orin for AI inference and local computation of complex models, using NVIDIA Holoscan as its real-time multimodal AI sensor processing platform and NVIDIA Metropolis software to offer XR use cases.
  • Ultra-Precise Augmented Reality Interface: Magic Leap 2 delivers an ultra-precise augmented reality interface for accurate and immersive experiences.

Samsung Shows Off 32 Gbps GDDR7 Memory at GTC

Samsung Electronics showed off its latest graphics memory innovations at GTC, with an exhibit of its new 32 Gbps GDDR7 memory chip. The chip is designed to power the next generation of consumer and professional graphics cards, and some models of NVIDIA's GeForce RTX "Blackwell" generation are expected to implement GDDR7. The chip Samsung showed off at GTC is of the highly relevant 16 Gbit density (2 GB). This is important, as NVIDIA is rumored to keep graphics card memory sizes largely similar to where they currently are, while only focusing on increasing memory speeds.

The Samsung GDDR7 chip shown reaches its 32 Gbps speed at a DRAM voltage of just 1.1 V, below the 1.2 V in JEDEC's GDDR7 specification; along with other Samsung-specific power-management innovations, this translates to a 20% improvement in energy efficiency. Although the chip is capable of 32 Gbps, NVIDIA isn't expected to run its first GeForce RTX "Blackwell" graphics cards at that speed; the first SKUs are expected to ship with 28 Gbps GDDR7, which means NVIDIA could run this Samsung chip at a slightly lower voltage, or with tighter timings. Samsung also innovated on the package substrate, decreasing thermal resistance by 70% compared to its GDDR6 chips. Both NVIDIA and AMD are expected to launch their first discrete GPUs implementing GDDR7 in the second half of 2024.
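Memory bandwidth follows directly from per-pin speed and bus width. A quick sketch of the arithmetic (the 256-bit card configuration is a hypothetical example for illustration, not a confirmed "Blackwell" specification):

```python
def memory_bandwidth_gbs(pin_speed_gbps: float, bus_width_bits: int) -> float:
    """Aggregate bandwidth in GB/s: per-pin speed times bus width, over 8 bits/byte."""
    return pin_speed_gbps * bus_width_bits / 8

# A single 32-bit-wide GDDR7 chip at the demonstrated 32 Gbps:
print(memory_bandwidth_gbs(32, 32))   # 128.0 GB/s per chip
# A hypothetical 256-bit graphics card at the rumored 28 Gbps launch speed:
print(memory_bandwidth_gbs(28, 256))  # 896.0 GB/s
```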

Chinese Research Institute Utilizing "Banned" NVIDIA H100 AI GPUs

NVIDIA's freshly unveiled "Blackwell" B200 and GB200 AI GPUs will be getting plenty of coverage this year, but many organizations will be sticking with current or prior generation hardware. Team Green is in the process of shipping cut-down "Hopper" designs to customers in China, but the region's appetite for powerful AI-crunching hardware is growing. Last year's China-specific H800 design and the older "Ampere" A800 chip were deemed too potent—new regulations prevented further sales. Recently, AMD's Instinct MI309 AI accelerator was considered "too powerful to gain unconditional approval from the US Department of Commerce." Natively-developed solutions are catching up with Western designs, but some institutions are not prepared to queue up for emerging technologies.

NVIDIA's new H20 AI GPU as well as Ada Lovelace-based L20 PCIe and L2 PCIe models are weakened enough to get a thumbs up from trade regulators, but likely not compelling enough for discerning clients. The Telegraph believes that NVIDIA's uncompromised H100 AI GPU is currently in use at several Chinese establishments—the report cites information presented within four academic papers published on ArXiv, an open access science website. The Telegraph's news piece highlights one of the studies—it was: "co-authored by a researcher at 4paradigm, an AI company that was last year placed on an export control list by the US Commerce Department for attempting to acquire US technology to support China's military." Additionally, the Chinese Academy of Sciences appears to have conducted several AI-accelerated experiments, involving the solving of complex mathematical and logical problems. The article suggests that this research organization has acquired a very small batch of NVIDIA H100 GPUs (up to eight units). A "thriving black market" for high-end NVIDIA processors has emerged in the region—last Autumn, the Center for a New American Security (CNAS) published an in-depth article about ongoing smuggling activities.

NVIDIA GeForce 551.86 WHQL Drivers Released

NVIDIA today released the latest version of its GeForce Game Ready drivers. Version 551.86 WHQL comes with optimization for "Horizon: Forbidden West" Complete Edition, which includes support for DLSS 3 Frame Generation and Reflex. The game also supports DLSS 2, and the DLAA preset that provides the best possible image quality. Next up, the drivers add optimization for the latest update of Portal with RTX, which adds support for DLSS 3.5 Ray Reconstruction and RTX IO (also implemented as DirectStorage). The drivers also add GeForce Experience optimal settings for "Helldivers 2" and "Three Kingdoms Zhao Yun." Lastly, NVIDIA expanded the list of G-SYNC compatible displays with 11 new monitor models.

Among the gaming bugs fixed with this release are a problem where notebooks with Advanced Optimus won't wake up in NVIDIA GPU-only display modes, and stutter noticed in "Hitman 3" on some Advanced Optimus notebooks. A bug that caused RTX HDR's saturation and tone curve to match Gamma 2.0 instead of Gamma 2.2 has been fixed, as has incorrect detection of Paint.net as an app supported by RTX Dynamic Vibrance.

DOWNLOAD: NVIDIA GeForce 551.86 WHQL

Thermaltake Launches Twelve Exclusive LCGS Reactor Gaming Desktops at Best Buy

Thermaltake USA, a leading provider of PC case, cooling, power, and memory solutions, proudly announces the launch of twelve innovative models in the LCGS (Liquid Cooled Gaming System) Reactor series, available exclusively at Best Buy. This launch marks a significant milestone for Thermaltake, introducing the next generation of gaming PCs, meticulously designed around the newly unveiled The Tower 300 micro tower chassis. These gaming powerhouses are equipped with the latest high-performance components, including the cutting-edge Intel Core 14th Gen i9/i7 processors, NVIDIA GeForce RTX 40 SUPER Series/4090 graphics cards, 32 GB of DDR5 5600 MT/s memory, and expansive 2 TB NVMe M.2 SSD storage.

Each system in the LCGS Reactor series boasts a 360 mm radiator liquid cooling solution, ensuring exceptional cooling performance and stability during intense gaming sessions. The addition of vibrant color options such as Snow, Black, Hydrangea Blue, and Turquoise adds a personalized touch to these systems, catering to gamers' and content creators' diverse tastes and styles. Tailored for the most demanding gamers and creative professionals, these pre-built gaming desktop PCs are assembled and rigorously tested by Thermaltake's skilled engineers in the USA, ensuring unparalleled build quality and reliability.

NVIDIA "Blackwell" GeForce RTX to Feature Same 5nm-based TSMC 4N Foundry Node as GB100 AI GPU

Following Monday's blockbuster announcements of the "Blackwell" architecture and NVIDIA's B100, B200, and GB200 AI GPUs, all eyes are now on its client graphics derivatives, or the GeForce RTX GPUs that implement "Blackwell" as a graphics architecture. Leading the effort will be the new GB202 ASIC, a successor to the AD102 powering the current RTX 4090. This will be NVIDIA's biggest GPU with raster graphics and ray tracing capabilities. The GB202 is rumored to be followed by the GB203 in the premium segment, the GB205 a notch lower, and the GB206 further down the stack. Kopite7kimi, a reliable source with NVIDIA leaks, says that the GB202 silicon will be built on the same TSMC 4N foundry node as the GB100.

TSMC 4N is a nodelet that TSMC designed with optimizations for NVIDIA's chips; the "N" in 4N stands for NVIDIA. TSMC still considers 4N a derivative of its 5 nm EUV node. There is very little public information on the power and transistor-density improvements of TSMC 4N over TSMC N5. For reference, N4P, another node TSMC regards as a 5 nm derivative, offers a 6% transistor-density improvement and a 22% power-efficiency improvement over N5. In related news, Kopite7kimi says that with "Blackwell," NVIDIA is focusing on enlarging the L1 caches of the streaming multiprocessors (SM), which suggests a design focus on increasing performance at the SM level.

NVIDIA Digital Human Technologies Bring AI Characters to Life

NVIDIA announced today that leading AI application developers across a wide range of industries are using NVIDIA digital human technologies to create lifelike avatars for commercial applications and dynamic game characters. The results are on display at GTC, the global AI conference held this week in San Jose, Calif., and can be seen in technology demonstrations from Hippocratic AI, Inworld AI, UneeQ and more.

NVIDIA Avatar Cloud Engine (ACE) for speech and animation, NVIDIA NeMo for language, and NVIDIA RTX for ray-traced rendering are the building blocks that enable developers to create digital humans capable of AI-powered natural language interactions, making conversations more realistic and engaging.

Unwrapping the NVIDIA B200 and GB200 AI GPU Announcements

NVIDIA on Monday, at the 2024 GTC conference, unveiled the "Blackwell" B200 and GB200 AI GPUs. These are designed to offer up to 5X the AI inference performance of the current-gen "Hopper" H100, and come with four times the on-package memory. The B200 "Blackwell" is the largest chip physically possible with existing foundry technology, according to its makers. The chip packs an astonishing 208 billion transistors, and is made up of two chiplets, which are themselves the largest chips that can currently be made.

Each chiplet is built on the TSMC N4P foundry node, which is the most advanced 4 nm-class node by the Taiwanese foundry. Each chiplet has 104 billion transistors. The two chiplets are connected by a 10 TB/s custom interconnect, which offers enough bandwidth, at low enough latency, for the two to maintain cache coherency (i.e. address each other's memory as if it were their own). Each of the two "Blackwell" chiplets has a 4096-bit memory bus, and is wired to 96 GB of HBM3E spread across four 24 GB stacks, for a total of 192 GB on the B200 package. The GPU has a staggering 8 TB/s of memory bandwidth on tap. The B200 package features a 1.8 TB/s NVLink interface for host connectivity, and connectivity to another B200 chip.
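The published memory figures can be cross-checked with simple arithmetic (decimal units assumed; the implied per-pin speed is a derived estimate, not an NVIDIA-stated number):

```python
# Cross-checking the B200 memory figures from the announcement.
stacks, stack_capacity_gb = 8, 24   # four 24 GB HBM3E stacks per chiplet, two chiplets
total_bus_bits = 2 * 4096           # one 4096-bit memory bus per chiplet
bandwidth_gbs = 8000                # 8 TB/s aggregate bandwidth

print(stacks * stack_capacity_gb)           # 192 GB total on-package memory
print(bandwidth_gbs * 8 / total_bus_bits)   # 7.8125 Gbps implied per-pin speed
```

The implied ~7.8 Gbps per pin sits comfortably within what HBM3E stacks are rated for, so the figures are internally consistent.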

ASRock Rack Unveils GPU Servers Supporting NVIDIA Blackwell GB200

ASRock Rack Inc., a leading innovative server company, is announcing its 6U8X-EGS2 series at booth 1617 during the NVIDIA GTC global AI conference in San Jose, USA. The 6U8X-EGS2 NVIDIA H100 and 6U8X-EGS2 NVIDIA H200 are ASRock Rack's most powerful AI training systems, capable of accommodating the NVIDIA HGX H200 8-GPU board. The 6U chassis provides the airflow needed for maximum CPU and GPU performance. In addition to the eight-way configuration, the 6U8X-EGS2 series offers 12 PCIe Gen 5 NVMe drive bays and multiple PCIe 5.0 x16 slots, as well as a 4+4 PSU for full redundancy.

ASRock Rack is also developing servers that support the new NVIDIA HGX B200 8-GPU to handle the most demanding generative AI applications, accelerate large language models, and cater to data analytics and high-performance computing workloads. "At GTC, NVIDIA announced its new NVIDIA Blackwell platform, and we are glad to contribute to the new era of computing by providing a wide range of server hardware products that will support it," said Hunter Chen, Vice President at ASRock Rack. "Our products provide organizations with the foundation to transform their businesses and leverage the advancements of accelerated computing."

ASUS Presents MGX-Powered Data-Center Solutions

ASUS today announced its participation at the NVIDIA GTC global AI conference, where it will showcase its solutions at booth #730. On show will be the apex of ASUS GPU server innovation, ESC NM1-E1 and ESC NM2-E1, powered by the NVIDIA MGX modular reference architecture, accelerating AI supercomputing to new heights. To help meet the increasing demands for generative AI, ASUS uses the latest technologies from NVIDIA, including the B200 Tensor Core GPU, the GB200 Grace Blackwell Superchip, and H200 NVL, to help deliver optimized AI server solutions to boost AI adoption across a wide range of industries.

To better support enterprises in establishing their own generative AI environments, ASUS offers an extensive lineup of servers, from entry-level to high-end GPU server solutions, plus a comprehensive range of liquid-cooled rack solutions, to meet diverse workloads. Additionally, by leveraging its MLPerf expertise, the ASUS team is pursuing excellence by optimizing hardware and software for large-language-model (LLM) training and inferencing and seamlessly integrating total AI solutions to meet the demanding landscape of AI supercomputing.

Supermicro Launches Three NVIDIA-Based, Full-Stack, Ready-to-Deploy Generative AI SuperClusters

Supermicro, Inc., a Total IT Solution Provider for AI, Cloud, Storage, and 5G/Edge, is announcing its latest portfolio to accelerate the deployment of generative AI. The Supermicro SuperCluster solutions provide foundational building blocks for the present and the future of large language model (LLM) infrastructure. The three powerful Supermicro SuperCluster solutions are now available for generative AI workloads. The 4U liquid-cooled systems or 8U air-cooled systems are purpose-built and designed for powerful LLM training performance, as well as large batch size and high-volume LLM inference. A third SuperCluster, with 1U air-cooled Supermicro NVIDIA MGX systems, is optimized for cloud-scale inference.

"In the era of AI, the unit of compute is now measured by clusters, not just the number of servers, and with our expanded global manufacturing capacity of 5,000 racks/month, we can deliver complete generative AI clusters to our customers faster than ever before," said Charles Liang, president and CEO of Supermicro. "A 64-node cluster enables 512 NVIDIA HGX H200 GPUs with 72 TB of HBM3e through a couple of our scalable cluster building blocks with 400 Gb/s NVIDIA Quantum-2 InfiniBand and Spectrum-X Ethernet networking. Supermicro's SuperCluster solutions combined with NVIDIA AI Enterprise software are ideal for enterprise and cloud infrastructures to train today's LLMs with up to trillions of parameters. The interconnected GPUs, CPUs, memory, storage, and networking, when deployed across multiple nodes in racks, construct the foundation of today's AI. Supermicro's SuperCluster solutions provide foundational building blocks for rapidly evolving generative AI and LLMs."
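The cluster figures in the quote hold together under quick arithmetic (decimal units assumed; the per-GPU result matches the H200's advertised 141 GB of HBM3e):

```python
# Sanity-checking the Supermicro 64-node cluster math from the quote above.
nodes, gpus = 64, 512
total_hbm_tb = 72

print(gpus // nodes)                      # 8 GPUs per node (an HGX H200 8-GPU board)
print(round(total_hbm_tb * 1000 / gpus))  # ~141 GB of HBM3e per GPU
```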

NVIDIA Omniverse Expands Worlds Using Apple Vision Pro

NVIDIA is bringing OpenUSD-based Omniverse enterprise digital twins to the Apple Vision Pro. Announced today at NVIDIA GTC, a new software framework built on Omniverse Cloud APIs, or application programming interfaces, lets developers easily send their Universal Scene Description (OpenUSD) industrial scenes from their content creation applications to the NVIDIA Graphics Delivery Network (GDN), a global network of graphics-ready data centers that can stream advanced 3D experiences to Apple Vision Pro.

In a demo unveiled at the global AI conference, NVIDIA presented an interactive, physically accurate digital twin of a car streamed in full fidelity to Apple Vision Pro's high-resolution displays. The demo featured a designer wearing the Vision Pro, using a car configurator application developed by CGI studio Katana on the Omniverse platform. The designer toggles through paint and trim options and even enters the vehicle - leveraging the power of spatial computing by blending 3D photorealistic environments with the physical world.

Microsoft and NVIDIA Announce Major Integrations to Accelerate Generative AI for Enterprises Everywhere

At GTC on Monday, Microsoft Corp. and NVIDIA expanded their longstanding collaboration with powerful new integrations that leverage the latest NVIDIA generative AI and Omniverse technologies across Microsoft Azure, Azure AI services, Microsoft Fabric and Microsoft 365.

"Together with NVIDIA, we are making the promise of AI real, helping to drive new benefits and productivity gains for people and organizations everywhere," said Satya Nadella, Chairman and CEO, Microsoft. "From bringing the GB200 Grace Blackwell processor to Azure, to new integrations between DGX Cloud and Microsoft Fabric, the announcements we are making today will ensure customers have the most comprehensive platforms and tools across every layer of the Copilot stack, from silicon to software, to build their own breakthrough AI capability."

"AI is transforming our daily lives - opening up a world of new opportunities," said Jensen Huang, founder and CEO of NVIDIA. "Through our collaboration with Microsoft, we're building a future that unlocks the promise of AI for customers, helping them deliver innovative solutions to the world."

AWS and NVIDIA Extend Collaboration to Advance Generative AI Innovation

Amazon Web Services (AWS), an Amazon.com company, and NVIDIA today announced that the new NVIDIA Blackwell GPU platform - unveiled by NVIDIA at GTC 2024 - is coming to AWS. AWS will offer the NVIDIA GB200 Grace Blackwell Superchip and B100 Tensor Core GPUs, extending the companies' longstanding strategic collaboration to deliver the most secure and advanced infrastructure, software, and services to help customers unlock new generative artificial intelligence (AI) capabilities.

NVIDIA and AWS continue to bring together the best of their technologies, including NVIDIA's newest multi-node systems featuring the next-generation NVIDIA Blackwell platform and AI software, AWS's Nitro System and AWS Key Management Service (AWS KMS) advanced security, Elastic Fabric Adapter (EFA) petabit scale networking, and Amazon Elastic Compute Cloud (Amazon EC2) UltraCluster hyper-scale clustering. Together, they deliver the infrastructure and tools that enable customers to build and run real-time inference on multi-trillion parameter large language models (LLMs) faster, at massive scale, and at a lower cost than previous-generation NVIDIA GPUs on Amazon EC2.

NVIDIA Announces New Switches Optimized for Trillion-Parameter GPU Computing and AI Infrastructure

NVIDIA today announced a new wave of networking switches, the X800 series, designed for massive-scale AI. The world's first networking platforms capable of end-to-end 800 Gb/s throughput, NVIDIA Quantum-X800 InfiniBand and NVIDIA Spectrum-X800 Ethernet push the boundaries of networking performance for computing and AI workloads. They feature software that further accelerates AI, cloud, data processing and HPC applications in every type of data center, including those that incorporate the newly released NVIDIA Blackwell architecture-based product lineup.

"NVIDIA Networking is central to the scalability of our AI supercomputing infrastructure," said Gilad Shainer, senior vice president of Networking at NVIDIA. "NVIDIA X800 switches are end-to-end networking platforms that enable us to achieve trillion-parameter-scale generative AI essential for new AI infrastructures."

NVIDIA Launches Blackwell-Powered DGX SuperPOD for Generative AI Supercomputing at Trillion-Parameter Scale

NVIDIA today announced its next-generation AI supercomputer—the NVIDIA DGX SuperPOD powered by NVIDIA GB200 Grace Blackwell Superchips—for processing trillion-parameter models with constant uptime for superscale generative AI training and inference workloads.

Featuring a new, highly efficient, liquid-cooled rack-scale architecture, the new DGX SuperPOD is built with NVIDIA DGX GB200 systems and provides 11.5 exaflops of AI supercomputing at FP4 precision and 240 terabytes of fast memory—scaling to more with additional racks.

NVIDIA Blackwell Platform Arrives to Power a New Era of Computing

Powering a new era of computing, NVIDIA today announced that the NVIDIA Blackwell platform has arrived—enabling organizations everywhere to build and run real-time generative AI on trillion-parameter large language models at up to 25x lower cost and energy consumption than its predecessor.

The Blackwell GPU architecture features six transformative technologies for accelerated computing, which will help unlock breakthroughs in data processing, engineering simulation, electronic design automation, computer-aided drug design, quantum computing and generative AI—all emerging industry opportunities for NVIDIA.

TSMC and Synopsys Bring Breakthrough NVIDIA Computational Lithography Platform to Production

NVIDIA today announced that TSMC and Synopsys are going into production with NVIDIA's computational lithography platform to accelerate manufacturing and push the limits of physics for the next generation of advanced semiconductor chips. TSMC, the world's leading foundry, and Synopsys, the leader in silicon to systems design solutions, have integrated NVIDIA cuLitho with their software, manufacturing processes and systems to speed chip fabrication, and in the future support the latest-generation NVIDIA Blackwell architecture GPUs.

"Computational lithography is a cornerstone of chip manufacturing," said Jensen Huang, founder and CEO of NVIDIA. "Our work on cuLitho, in partnership with TSMC and Synopsys, applies accelerated computing and generative AI to open new frontiers for semiconductor scaling." NVIDIA also introduced new generative AI algorithms that enhance cuLitho, a library for GPU-accelerated computational lithography, dramatically improving the semiconductor manufacturing process over current CPU-based methods.

Gigabyte Unveils Comprehensive and Powerful AI Platforms at NVIDIA GTC

GIGABYTE Technology and Giga Computing, a subsidiary of GIGABYTE and an industry leader in enterprise solutions, will showcase their solutions at the GIGABYTE booth #1224 at NVIDIA GTC, a global AI developer conference running through March 21. This event will offer GIGABYTE the chance to connect with its valued partners and customers, and together explore what the future in computing holds.

The GIGABYTE booth will focus on GIGABYTE's enterprise products that demonstrate AI training and inference delivered by versatile computing platforms based on NVIDIA solutions, as well as direct liquid cooling (DLC) for improved compute density and energy efficiency. Also not to be missed at the NVIDIA booth is the MGX Pavilion, which features a rack of GIGABYTE servers for the NVIDIA GH200 Grace Hopper Superchip architecture.

MemVerge and Micron Boost NVIDIA GPU Utilization with CXL Memory

MemVerge, a leader in AI-first Big Memory Software, has joined forces with Micron to unveil a groundbreaking solution that leverages intelligent tiering of CXL memory, boosting the performance of large language models (LLMs) by offloading from GPU HBM to CXL memory. This innovative collaboration is being showcased in Micron booth #1030 at GTC, where attendees can witness firsthand the transformative impact of tiered memory on AI workloads.

Charles Fan, CEO and Co-founder of MemVerge, emphasized the critical importance of overcoming the bottleneck of HBM capacity. "Scaling LLM performance cost-effectively means keeping the GPUs fed with data," stated Fan. "Our demo at GTC demonstrates that pools of tiered memory not only drive performance higher but also maximize the utilization of precious GPU resources."

Phison Announces Strategic Partnerships Deploying aiDAPTIV+ at NVIDIA GTC 2024

Phison Electronics, a leading provider of NAND controllers and storage solutions, today announced aiDAPTIV+ partnerships with ASUS, Gigabyte, MAINGEAR, and MediaTek. At GTC 2024, Phison and partners will demonstrate aiDAPTIV+, a hybrid hardware and software large language model (LLMs) fine-tune training solution that enables small and medium-sized businesses (SMBs) to process and retain local control of their sensitive machine learning (ML) data.

Foundational training gives LLMs a broad understanding of language, but aiDAPTIV+ enables the fine-tuning process that molds these models into specialized tools which understand the topics relevant to a business and deliver precise results. Commodity workstation hardware is enhanced with aiDAPTIV+ software and first-generation aiDAPTIVCache Series ai100 SSDs, enabling larger training models than previously possible in a workstation form factor.

MAINGEAR Introduces PRO AI Workstations Featuring aiDAPTIV+ For Cost-Effective Large Language Model Training

MAINGEAR, a leading provider of high-performance custom PC systems, and Phison, a global leader in NAND controllers and storage solutions, today unveiled groundbreaking MAINGEAR PRO AI workstations with Phison's aiDAPTIV+ technology. Specifically engineered to democratize Large Language Model (LLM) development and training for small and medium-sized businesses (SMBs), these ultra-powerful workstations incorporate aiDAPTIV+ technology to deliver supercomputer LLM training capabilities at a fraction of the cost of traditional AI training servers.

As the demand for large-scale generative AI models continues to surge and their complexity increases, the potential for LLMs also expands. However, this rapid advancement in LLM AI technology has led to a notable boost in hardware requirements, making model training cost-prohibitive and inaccessible for many small to medium businesses.

ASUS Republic of Gamers Announces Strix XG27UCS and XG27ACS Gaming Monitors

ASUS Republic of Gamers (ROG) today announced the new ROG Strix series gaming monitors, XG27UCS and XG27ACS. These gaming monitors boast a new stand design with a smaller footprint, a mobile device groove, USB-C connectivity for clutter-free setup, and AI-powered GamePlus technology. Empowering mainstream gamers with next-level performance and visuals, the XG27UCS offers stunning 4K 160 Hz visuals, while the XG27ACS delivers smooth 1440p 180 Hz gameplay. Both models feature a 1 ms response time, NVIDIA G-SYNC compatibility, and HDR technology for an immersive experience. The XG27ACS features the classic black ROG finish; an all-white model (XG27ACS-W) is also available for those with white-themed setups.

The ROG Strix XG27UCS and XG27ACS bring games to life. Their Fast IPS panels offer 160 Hz and 180 Hz refresh rates respectively, with a 1 ms GTG response time for smooth, action-packed visuals that give gamers an edge in competitive play. G-SYNC compatibility and Extreme Low Motion Blur Sync (ELMB Sync) enable the XG27UCS and XG27ACS to deliver super-smooth, tear-free gameplay. Additionally, DisplayHDR 400 certification and a wide DCI-P3 gamut ensure stunning HDR imagery and color performance. Integrated ASUS Variable Overdrive 2.0 technology dynamically alters overdrive settings as frame rates fluctuate, with 20 overdrive levels available to eliminate ghosting. AI-powered GamePlus technology analyzes scenes in real time and adjusts the GamePlus crosshair to improve accuracy, while the Dynamic Shadow Boost feature brightens dark areas without overexposing bright ones, giving gamers an advantage in low-light scenes. Overall, both monitors pack high performance and value into an accessible package.

NVIDIA GeForce RTX 4070 Ti Drops to $680 as Retailers Move to Clear Inventories

NVIDIA has discontinued production of the GeForce RTX 4070 Ti graphics card, having replaced it with the RTX 4070 Ti SUPER at the $800 price point, equipped with a wider 256-bit memory bus, 16 GB of memory, and more CUDA cores. With a rather wide $200 price gap between the RTX 4070 Ti SUPER and the RTX 4070 SUPER that's priced at $600, retailers found an opportunity to sell their remaining inventories of the RTX 4070 Ti around the $700 mark. The cheapest RTX 4070 Ti is the MSI Ventus 2X OC, listed at $699 on Newegg, with a coupon shaving off a further $20, bringing the price down to $679.

Would you pay the extra $80 to grab an RTX 4070 Ti over the $600 RTX 4070 SUPER? Our testing shows that the RTX 4070 Ti is about 8% faster than the RTX 4070 SUPER at 1440p, averaged across our tests, while at the latest prices it remains 13% pricier. The RTX 4070 Ti maxes out the 5 nm AD104 silicon, enabling all 60 streaming multiprocessors (SM) for 7,680 CUDA cores, 240 Tensor cores, 60 RT cores, 240 TMUs, and 80 ROPs, along with the chip's full 48 MB L2 cache. Unlike the RTX 4070 SUPER, the RTX 4070 Ti also comes with dual NVDEC units.
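One way to frame the question is performance per dollar. A small sketch using the numbers above (treating the RTX 4070 SUPER as the 1.00 performance baseline):

```python
def value_ratio(price_a: float, perf_a: float, price_b: float, perf_b: float) -> float:
    """Performance-per-dollar of card A relative to card B (>1 means A is the better value)."""
    return (perf_a / price_a) / (perf_b / price_b)

# RTX 4070 Ti at $679 vs. RTX 4070 SUPER at $600; the Ti is ~8% faster at 1440p.
print(round(value_ratio(679, 1.08, 600, 1.00), 3))  # 0.954: slightly worse value per dollar
```

By this metric the discounted RTX 4070 Ti still trails the RTX 4070 SUPER slightly, so the extra $80 buys absolute performance rather than value.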