News Posts matching #16 nm


Ruijie RG-CT7800 Mini-PC Among First Zhaoxin KaiXian Designs, Tip of China's 3-5-2 Spear

With a 2.4-liter volume, a conventional black plastic body, and essential connectivity, the Ruijie RG-CT7800 may come across as a run-of-the-mill mini-PC for small businesses or those who do precious little offline, if not for what's under the hood. This humble compact desktop is among the first design wins of China's ambitious effort to have an x86 processor built entirely on Chinese soil, the Zhaoxin KaiXian. This processor is making its way to products, and was recently pictured on an embedded motherboard. The KaiXian, along with the notebooks, motherboards, micro-servers, and mini-PCs that implement it, forms the tip of China's 3-5-2 policy, an ambitious plan to rid all state and state-owned institutions of "foreign hardware."

The numerals in "3-5-2" correspond to foreign hardware replacement targets set by the country's Central Government - 30% by the end of 2020, an additional 50% by the end of 2021, and the remaining 20% by the end of 2022. To support this plan, the Chinese electronics industry, flush with state investment, has indigenized several key components of the modern PC, including DRAM, NAND flash, and now the CPU. The country already dominates the global electronic components market. The RG-CT7800 implements the KaiXian KX-U6780A SoC, which sports eight x86-64 CPU cores running at 2.70 GHz. Interestingly, the chip is manufactured on TSMC's 16 nm FinFET node (a de facto "foreign" source, but one that's de jure China from Beijing's perspective). Ruijie is equipping the RG-CT7800 with 8 GB of DDR4 memory and 256 GB of SSD-based storage. One can make out industry-standard USB, Ethernet, and 3.5 mm audio jacks from the pictures. The box will be compatible with UOS and NeoKylin (Linux distros built under the scrutiny of the Chinese Government). With state institutions on the clock to meet their 3-5-2 targets, it's possible that the first volumes of the RG-CT7800 will be sold exclusively to state customers.
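For the curious, the 3-5-2 schedule reads as a cumulative phase-out. A quick sketch of that interpretation (year labels and variable names are ours, not from any official document):

```python
# "3-5-2" foreign hardware replacement targets, read as cumulative shares.
# These deadlines are our interpretation of the percentages in the article.
targets = {2020: 0.30, 2021: 0.50, 2022: 0.20}

replaced = 0.0
for year, share in sorted(targets.items()):
    replaced += share
    print(f"end of {year}: {replaced:.0%} of foreign hardware replaced")
```

The shares sum to 100%, which is why the plan's final deadline amounts to a complete phase-out.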

VIA CenTaur CHA NCORE AI CPU Pictured, a Socketed LGA Package

VIA's CenTaur division sprang a surprise on the CPU industry with its new CHA x86-64 microarchitecture and an on-die NCORE AI co-processor. This would be the first globally-targeted x86 processor launch by a company other than Intel and AMD in close to 7 years, and VIA's first socketed processor in over 15 years. SemiAccurate scored a look at a mock-up of the CenTaur CHA NCORE 8-core processor, and it turns out that the chip is indeed socketed.

Pictured below, the processor is a flip-chip LGA. We deduce it is socketed from its alignment notches and the traces for ancillaries on the reverse side (something BGAs tend to lack). On the other hand, the "contact points" of the package appear to cast shadows, and resemble the balls of a BGA package. Topside, we see an integrated heatspreader (IHS), and underneath it a single square die. CenTaur built the CHA NCORE on TSMC's 16 nm FinFET process. The package appears to have quite a high pin-count for a die this size, but that's probably because of its HEDT-rivaling I/O, which includes a quad-channel DDR4 memory interface and 44 PCI-Express gen 3.0 lanes.

Zhaoxin KaiXian x86 Processor Now Commercially Available to the DIY Channel

Zhaoxin is a brand that makes multi-core 64-bit x86 processors primarily for use in Chinese state IT infrastructure. It's part of the Chinese Government's ambitious plan to make its IT hardware completely indigenous. Zhaoxin's x86-64 CPU cores are co-developed by licensee VIA, specifically its CenTaur subsidiary that's making NCORE AI-enabled x86 processors. The company's KaiXian KX-6780A processor is now commercially available in China to the DIY market in the form of motherboards with embedded processors.

The KaiXian KX-6780A features an 8-core/8-thread x86-64 CPU clocked at up to 2.70 GHz, 8 MB of last-level cache, a dual-channel DDR4-3200 integrated memory controller, a PCI-Express gen 3.0 root-complex, and an iGPU possibly designed by VIA's S3 Graphics division, which supports basic display output and DirectX 11.1 readiness. The CPU features a modern ISA, with instruction sets that include AVX, AES-NI, SHA-NI, and VT-x-comparable virtualization extensions. The chip is fabricated on TSMC's 16 nm FinFET process.

TSMC Extends 16 nm Lead Time, Possibly Because the Fab is Swamped

TSMC has increased its 7 nm delivery time by as much as three times because of extra demand from customers who want their products made on the 7 nm manufacturing process. While we thought the delivery struggles would be limited to that node, it turns out that TSMC is facing some issues with the delivery of its 16 nm node as well.

There is no clear indication of why TSMC is having trouble meeting demand for its 16 nm node just now. One possible reason is that a large number of manufacturers still design and manufacture their products on 16 nm, as it is considerably cheaper than smaller nodes, so the 16 nm manufacturing facilities may be "overloaded". Another possibility is that wafer output ran into issues that are now extending delivery times for both the 7 nm and 16 nm nodes. That could be anything from a small power cut to something larger, like contamination of the cleanrooms where processors are made.

China Starts Production of Domestic DRAM Chips

China's semiconductor industry is seeking independence in every sector, with an emphasis on homemade products for domestic use, especially in government facilities, where the use of homegrown products is most desirable. According to a report by China Securities Journal, a Chinese firm has started production of DRAM memory.

A company named ChangXin Memory Technology, founded in 2016 to boost domestic silicon production, on Monday started production of DRAM memory, aiming to directly replace the current supply of foreign memory from companies like Micron, SK Hynix, and Samsung. Built on an 18 nm technology, which ChangXin calls a "10-nanometer class" node, this DRAM chip isn't too far behind the offerings of the competitors it seeks to replace. Micron, Samsung, and SK Hynix use 12, 14, and 16 nm nodes for the production of their DRAM chips, so the Chinese effort isn't far off the pace. The company promises to produce around 120,000 wafers per month and plans to deliver its first chips by the end of this year.

Xilinx Announces Virtex UltraScale+, the World's Largest FPGA

Xilinx, Inc., the leader in adaptive and intelligent computing, today announced the expansion of its 16 nanometer (nm) Virtex UltraScale+ family to now include the world's largest FPGA — the Virtex UltraScale+ VU19P. With 35 billion transistors, the VU19P provides the highest logic density and I/O count on a single device ever built, enabling emulation and prototyping of tomorrow's most advanced ASIC and SoC technologies, as well as test, measurement, compute, networking, aerospace and defense-related applications.

The VU19P sets a new standard in FPGAs, featuring 9 million system logic cells, up to 1.5 terabits per-second of DDR4 memory bandwidth and up to 4.5 terabits per-second of transceiver bandwidth, and over 2,000 user I/Os. It enables the prototyping and emulation of today's most complex SoCs as well as the development of emerging, complex algorithms such as those used for artificial intelligence, machine learning, video processing and sensor fusion. The VU19P is 1.6X larger than its predecessor and what was previously the industry's largest FPGA — the 20 nm Virtex UltraScale 440 FPGA.

Cerebras Systems' Wafer Scale Engine is a Trillion Transistor Processor in a 12" Wafer

This news isn't properly today's, but it's relevant and interesting enough that I think it warrants a news piece on our page. My reasoning is this: in an era where Multi-Chip Modules (MCMs) and a chiplet approach to processor fabrication have become the de-facto standard for improving performance and yields, a trillion-transistor processor that eschews those modular design philosophies is interesting enough to give pause.

The Wafer Scale Engine has been developed by Cerebras Systems to meet the ongoing increase in demand for AI-training engines. In workloads where latency has a very real impact on training times and a system's capability, Cerebras wanted to design a processor that avoids the need for external communication lanes between its cores - the system is limited, basically, only by transistor switching times. Its 400,000 cores communicate seamlessly via interconnects, etched on 46,225 square millimeters of silicon (by comparison, NVIDIA's largest GPU is 56.7 times smaller at "just" 815 square millimeters).
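The size comparison is easy to sanity-check. A quick sketch, using the 46,225 mm² figure from Cerebras' announcement and the ~815 mm² of NVIDIA's largest GPU die (variable names are ours):

```python
# Die-area comparison between the Cerebras Wafer Scale Engine and
# NVIDIA's largest GPU die at the time (~815 mm2).
wse_area_mm2 = 46_225    # Cerebras WSE, per the company's announcement
gpu_area_mm2 = 815       # NVIDIA's largest GPU die

ratio = wse_area_mm2 / gpu_area_mm2
print(f"The WSE is {ratio:.1f}x larger")
```

The ratio works out to roughly 56.7x, matching the figure quoted above.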

NVIDIA RTX Logic Increases TPC Area by 22% Compared to Non-RTX Turing

Public perception of NVIDIA's new RTX series of graphics cards was sometimes marred by an impression of wrong resource allocation on NVIDIA's part. The argument went that NVIDIA had greatly increased chip area by adding RTX functionality (in both its Tensor and RT cores) that could have been better used for increased performance gains in shader-based, non-raytracing workloads. While the merits of ray tracing as it stands (in terms of uptake from developers) are certainly worthy of discussion, it seems that NVIDIA didn't dedicate that much more die area to RTX functionality - at least not to the tune of public perception.

After analyzing full, high-res images of NVIDIA's TU106 and TU116 chips, reddit user @Qesa did some analysis on the TPC structure of NVIDIA's Turing chips, and arrived at the conclusion that the difference between NVIDIA's RTX-capable TU106 and their RTX-stripped TU116 amounts to a mere 1.95 mm² of additional logic per TPC - a 22% area increase. Of this, 1.25 mm² is taken up by the Tensor logic (which accelerates both DLSS and de-noising in ray-traced workloads), while only 0.7 mm² goes to the RT cores.
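Putting those numbers together also tells us roughly how big a non-RTX TPC must be. A quick sketch of that arithmetic (the implied base area is our derivation, not a figure from the analysis):

```python
# Extra per-TPC area attributed to RTX logic in the analysis, and the
# base (non-RTX) TPC area it implies.
tensor_mm2 = 1.25      # Tensor core logic per TPC
rt_core_mm2 = 0.70     # RT core logic per TPC
increase = 0.22        # quoted 22% area increase

extra_mm2 = tensor_mm2 + rt_core_mm2        # 1.95 mm2 total extra
base_tpc_mm2 = extra_mm2 / increase         # implied non-RTX TPC area
print(f"{extra_mm2:.2f} mm2 extra -> base TPC ~{base_tpc_mm2:.2f} mm2")
```

In other words, a TU116-style TPC would come in at just under 9 mm², with RTX logic adding the quoted ~1.95 mm² on top.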

TSMC Fab 14 B Hit With Chemical Contamination; NVIDIA, MEDIATEK, Huawei, Hisilicon Lines Affected

TSMC's Fab 14 B has been hit by a chemical contamination that has put a considerable number of wafers in suspend mode. Fab 14 B essentially produces 12 and 16 nm, 300 mm wafers for 14 companies, including NVIDIA, MediaTek, Huawei, and HiSilicon. Reportedly, between 10,000 and 30,000 wafers have been affected (though not scrapped, so there might be salvageable bits and pieces here and there). Of course, every wafer will have to go through a thorough certification process, and the fab will have to go down for the company to purge any remains of the botched chemical compounds.

To put things into perspective, though, Fab 14 B is one of TSMC's Gigafabs, which have a rated monthly output of 100k wafers - so between three and ten days' worth of production could already be affected, with additional downtime accruing lost potential fabrication. This event isn't expected to significantly affect availability of any of these companies' products, but the affected wafers are becoming, at the very least, late inventory - and this could well play into some speculative price increases from some players in the market.
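The "days of production" framing follows directly from the numbers above. A quick sketch, assuming the rated 100k wafers/month spread evenly over a 30-day month (variable names are ours):

```python
# How many days of a Gigafab's output the affected wafer count represents,
# assuming ~100k wafers/month of rated output over a 30-day month.
monthly_output = 100_000
daily_output = monthly_output / 30     # ~3,333 wafers/day

for affected in (10_000, 30_000):
    days = affected / daily_output
    print(f"{affected} wafers ~ {days:.0f} days of output")
```

So 10,000 affected wafers correspond to about 3 days of output, and 30,000 to about 9, in line with the range quoted above.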

Micron Ready With 96-Layer Flash & 1Y nm DRAM in 2H 2018

In their recent earnings call, Micron commented that their 96-layer 3D NAND technology is on track for volume shipments in the second half of 2018. Most of today's SSDs use 32-layer technology, with 64-layer flash chips used in some recent releases like the Crucial MX500. 96-layer is the third generation of 3D NAND and increases storage capacity per chip even further, which allows smaller and more energy-efficient mobile devices to be built. Of course, it will be cheaper too, compared to current-generation 64-layer NAND, which should bring SSD pricing down even more, and generally help the pricing of consumer products that use flash memory.

The second important note from the presentation is that Micron expects 1X nm (18 nm) DRAM production to exceed that of previous generations before the end of this year. Their next-generation 1Y nm (15/16 nm) DRAM is on track to begin production shipments in the second half of 2018, too. As they noted at a previous event, their product and process roadmap for 1Z nm DRAM looks solid, and 1-alpha development programs are already under way.

VIA Making a Comeback to x86 CPU Market with Zhaoxin R&D Monies

The only active x86 architecture licensee besides AMD, VIA Technologies, is planning a comeback to the x86 processor market, bolstered by R&D investment from Shanghai Zhaoxin Semiconductor. VIA and Zhaoxin have been co-developing the ZX family of x86 processors for rollout in 2018, and at least on paper, the chips appear to have the chops to take on Intel's "Gemini Lake" SoCs. The new VIA-Zhaoxin CPU family begins with the KX-5000 "Wudaoku" SoCs launched in late 2017. These are full-fledged SoCs, which completely integrate the chipset (including the southbridge).

The KX-5000 chips feature 4 or 8 CPU cores without SMT, 2.00-2.20 GHz nominal CPU clock, 2.40 GHz boost clock, a dual-channel DDR4 IMC, a PCI-Express gen 3.0 root complex, an integrated graphics core, and platform I/O that includes SATA 6 Gbps, and USB 3.1 gen 2. This chip debuted on only one product from a major OEM, the Lenovo M6200 desktop model launched in China. 2018 could see a broader launch of VIA-Zhaoxin chips, with the KX-6000. While the older chips were built on the 28 nm process, the KX-6000 series will be built on the newer 16 nm process, feature 4 or 8 CPU cores clocked at speeds of up to 3.00 GHz, while retaining the feature-set of the KX-5000 series. These chips could realistically be touted as low-cost alternatives to Intel "Gemini Lake" SoCs, although Zhaoxin is making bold claims about its performance nearing that of AMD Ryzen processors.

KFA2 Intros GeForce GTX 1070 Ti EX-SNPR White Graphics Card

GALAX' EU-regional wing KFA2 rolled out the GeForce GTX 1070 Ti EX-SNPR White graphics card. A step above the GTX 1070 Ti EX graphics card, the SNPR White comes with a white metallic cooler shroud, white fan impellers, white LED lighting over the fans, and a white metallic back-plate with RGB LED diffusers. RGB LEDs line the periphery of the back-plate, and the "GeForce GTX" decal on top of the card. The card ships with NVIDIA-reference clock speeds of 1607 MHz core, and 1683 MHz GPU Boost. Its memory ticks at 8.00 GHz (GDDR5-effective).

Drawing power from a combination of 6-pin and 8-pin PCIe power connectors, the card conditions it using a 7-phase VRM. It uses a custom-design aluminium fin-stack heatsink that's ventilated by a pair of 100 mm fans, which stay off when the card is idling. Display outputs include three DisplayPort 1.4, and one each of HDMI 2.0 and DVI-D. Based on the 16 nm "GP104" silicon, the GTX 1070 Ti features 2,432 CUDA cores, 152 TMUs, 64 ROPs, and a 256-bit wide GDDR5 memory interface, holding 8 GB of memory. The KFA2 GeForce GTX 1070 Ti EX-SNPR White is priced at 479€ (including VAT).

NVIDIA GeForce GTX 1070 Ti by Late October

It looks like NVIDIA's next performance-segment graphics card, the GeForce GTX 1070 Ti, could be launched sooner than expected. A report by NordicHardware pins its launch date at October 26, 2017, ahead of the "early-November" date that was doing the rounds earlier. It's also entirely possible that the card will be launched and reviewed on October 26, with market availability beginning in November.

Based on the 16 nm "GP104" silicon, the GTX 1070 Ti is being designed to be almost as fast as the GTX 1080. It features 2,432 CUDA cores, 152 TMUs, 64 ROPs, and a 256-bit wide GDDR5 memory interface, holding 8 GB of memory. The card is expected to perform (and be priced) within 12 percent of the GTX 1080. Its main competitor from the AMD stable is the Radeon RX Vega 56.

Possible GeForce GTX 1070 Ti Specifications Surface

It turns out that NVIDIA is giving the GeForce GTX 1070 more than a minor refresh. The new performance-segment SKU, which is slated to come out just before Holiday 2017, could perform very close to the GTX 1080, while being sufficiently spaced out from the GTX 1080 refresh (featuring 11 Gbps memory). According to specifications leaked by Chinese tech publication MyDrivers, NVIDIA will give this SKU the coveted "Ti" moniker after all, and carve it out from the "GP104" silicon.

According to the report, the GTX 1070 Ti will be carved out of the "GP104" silicon by disabling just 1 out of 20 streaming multiprocessors, compared to the GTX 1070 desktop, which has 5 out of 20 disabled. This results in a CUDA core count of 2,432, which is just 128 fewer than that of the GTX 1080. The clock speeds of the GTX 1070 Ti are higher than the GTX 1070, too. It comes with a core clock of 1607 MHz, 1683 MHz GPU Boost, and an untouched 8.00 GHz (GDDR5-effective) memory clock. Interestingly, the TDP of this chip is 180W, which is the same as the GTX 1080. NVIDIA will reportedly launch the GeForce GTX 1070 Ti in early-November, 2017, at a price that's 12.5 percent cheaper than the GTX 1080.
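The CUDA core counts follow directly from the SM configuration in the report. A quick sketch, assuming GP104's 20 SMs of 128 CUDA cores each (variable names are ours):

```python
# Deriving Pascal GP104 CUDA core counts from the SM configuration in the
# report: 20 streaming multiprocessors of 128 CUDA cores each.
sms_total = 20
cores_per_sm = 128

gtx_1080    = sms_total * cores_per_sm          # full chip
gtx_1070_ti = (sms_total - 1) * cores_per_sm    # 1 SM disabled
gtx_1070    = (sms_total - 5) * cores_per_sm    # 5 SMs disabled

print(gtx_1070_ti)                  # 2,432 cores
print(gtx_1080 - gtx_1070_ti)       # 128 fewer than the GTX 1080
```

That single disabled SM is exactly the 128-core gap to the GTX 1080 mentioned above.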

Xbox One X Hardware Specs Give Gaming Desktops a Run for their Money

Microsoft Sunday dropped its mic with the most powerful game console on paper, the Xbox One X, formerly codenamed "Project Scorpio." The bottom-line of this console is that it enables 4K Ultra HD gaming at 60 Hz, something that currently requires you to spend at least $1,200 on a gaming desktop. Unlike a Windows 10 PC put together from various pieces of hardware, the Xbox One X is built on a closed ecosystem that's tightly controlled by Microsoft, with heavily optimized software, and a lot of secret sauce the company won't talk about. The console still puts up some mighty impressive hardware specs on paper.

To begin with, at the heart of the Xbox One X is a semi-custom SoC Microsoft co-developed with AMD, built on TSMC's 16 nm FinFET node (the same one NVIDIA builds its "Pascal" GPUs on). This chip features a GPU with almost quadruple the single-precision floating-point compute power of the one that drives the Xbox One. It features 40 Graphics CoreNext (GCN) compute units (2,560 stream processors) based on one of the later versions of GCN (likely "Polaris"). The GPU is clocked at 1172 MHz. The other big component of the SoC is an eight-core CPU based on an unnamed micro-architecture evolved from "Jaguar" rather than "Bulldozer" or even "Zen." The eight cores are arranged in two quad-core clusters, each with 4 MB of L2 cache. The CPU is clocked at 2.30 GHz.
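Those GPU figures let you estimate the console's compute throughput. A quick sketch, using the standard assumption of 2 FLOPs per ALU per clock (one fused multiply-add); variable names are ours:

```python
# Single-precision throughput estimate for the Xbox One X GPU from the
# figures above: 40 GCN CUs x 64 stream processors each, at 1172 MHz.
stream_processors = 40 * 64      # 2,560 SPs
clock_ghz = 1.172
flops_per_clock = 2              # fused multiply-add = 2 FLOPs

tflops = stream_processors * clock_ghz * flops_per_clock / 1000
print(f"~{tflops:.1f} TFLOPS")
```

The result lands at roughly 6 TFLOPS, which squares with "almost quadruple" the original Xbox One's GPU throughput.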

NVIDIA Announces GeForce MX150 Laptops: Supercharged For Work and Play

Remember that MX150 mobile graphics card we covered recently? NVIDIA has just let the cat out of the bag, with an announcement that seemingly confirms the specs we were expecting. NVIDIA is positioning this mobile GPU as the expected IGP upgrade, citing up to 3x superior performance-per-Watt compared to previous-generation Maxwell-based GeForce 940MX laptops. In other words, the GeForce MX150 enables thinner laptops that run applications faster while sipping less power.

Inno3D Intros GeForce GTX 1050 Ti Single-slot Graphics Card

Inno3D rolled out a single-slot GeForce GTX 1050 Ti graphics card (model: N105T2-1SDV-M5CM). The card combines NVIDIA reference PCB for the GTX 1050 Ti, with a single slot cooling solution. This cooler consists of a lateral-blower which pushes air through a dense aluminium channel heatsink, with copper base-plate over the GPU. The card relies on the PCI-Express slot for all its power.

Based on the 16 nm GP107 silicon, the GTX 1050 Ti features 768 CUDA cores, 48 TMUs, 32 ROPs, and a 128-bit wide GDDR5 memory interface. The Inno3D GTX 1050 Ti single-slot card sticks to NVIDIA-reference clock speeds of 1291 MHz core, 1392 MHz GPU Boost, and 7.00 GHz (GDDR5-effective) memory. Display outputs include one each of DisplayPort 1.4, HDMI 2.0b, and dual-link DVI. The company didn't reveal pricing.

Plextor Extends EX1 Portable SSD Warranty

Plextor today announced that it revised the warranty policy of its recently announced EX1 portable solid-state drive to be effective for 5 years, from its existing 3 years. This change is applicable to all products already sold and with retailers as unsold inventory. The EX1 is a series of portable SSDs with 10 Gb/s USB 3.1 host interface, with a single cable handling both power and host connectivity. The drives combine Silicon Motion-made controllers with SK Hynix 16 nm TLC NAND flash memory, and come in capacities of 128 GB, 256 GB, 512 GB, priced at 79€, 119€, and 213€, respectively.

NVIDIA Preparing GeForce GTX 1080 Ti for 2017 CES Launch

NVIDIA is preparing its next high-end graphics card under the GeForce GTX brand, the GTX 1080 Ti, for launch along the sidelines of the 2017 International CES, early next January. The card will be positioned between the $599-$699 GeForce GTX 1080, and the $1199 TITAN X Pascal, and will be based on the 16 nm "GP102" silicon.

Chinese tech publication Zol.com.cn reports a few possible specifications of the SKU, adding to what we know from an older report. NVIDIA is carving the GTX 1080 Ti out from the GP102 silicon by enabling 26 out of 30 streaming multiprocessors, resulting in a CUDA core count of 3,328. This sets the TMU count at 208. The ROP count is unchanged at 96. The card features a 384-bit wide GDDR5X memory interface (and not the previously-thought GDDR5). It will have memory bandwidth identical to the TITAN X Pascal's, at 480 GB/s. The card will feature a standard memory amount of 12 GB. Its GPU clock speeds are expected to be 1503 MHz core, with 1623 MHz GPU Boost.
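The 480 GB/s figure follows from the bus width and the per-pin data rate. A quick sketch of that standard calculation (variable names are ours), assuming the 10 Gbps GDDR5X rate of the TITAN X Pascal:

```python
# Memory bandwidth from bus width and per-pin data rate, matching the
# 480 GB/s quoted for the GTX 1080 Ti / TITAN X Pascal memory setup.
bus_width_bits = 384
data_rate_gbps = 10      # GDDR5X effective per-pin rate

bandwidth_gbs = bus_width_bits * data_rate_gbps / 8   # bits -> bytes
print(f"{bandwidth_gbs:.0f} GB/s")
```

Divide the total bit rate by 8 to convert bits to bytes, and the 384-bit bus at 10 Gbps lands exactly on 480 GB/s.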

NVIDIA Announces the GeForce GTX 1060 3GB

NVIDIA rolled out the GeForce GTX 1060 3GB, a new mid-range graphics card starting at US $199 (MSRP), with custom-design cards ranging between $199-229. Based on the 16 nm "GP106" silicon, this card has not just half the memory of the original GTX 1060, but also fewer CUDA cores. The card features 3 GB of memory across its 192-bit GDDR5 memory interface. It features 1,152 of the 1,280 CUDA cores present on the silicon. Other specs include 72 TMUs, and 48 ROPs. Its core is clocked at 1506 MHz, with 1708 MHz GPU Boost, and 8 GHz (GDDR5-effective) memory. The card is targeted at gamers still on 1080p.

NVIDIA Announces the GeForce GTX 10 Series for Notebooks

NVIDIA today announced the GeForce GTX 10-series for notebooks. The lineup includes three SKUs - the GeForce GTX 1080 Mobile, the GTX 1070 Mobile, and the GTX 1060 Mobile. Thanks to huge energy-efficiency gains with the "Pascal" architecture and the new 16 nm silicon fab process, this round of NVIDIA's mobile GPUs isn't "gimped out" in comparison to its desktop discrete GPU lineup, in that the SKUs don't feature fewer CUDA cores than their corresponding desktop counterparts.

The lineup begins with the GTX 1080 Mobile. Based on the GP104 silicon, this chip features all 2,560 CUDA cores, 160 TMUs, and 64 ROPs physically present on the chip. The chip is endowed with 8 GB of 256-bit GDDR5X memory ticking at 10 Gbps. The core clock speeds remain unknown. The GTX 1070 Mobile is an interesting SKU in that it is better endowed than its desktop counterpart. It features 2,048 CUDA cores (the desktop GTX 1070 features 1,920), 128 TMUs, 64 ROPs, and 8 GB of 256-bit GDDR5 memory running at 8 Gbps. This chip features 1443 MHz core, and 1645 MHz GPU Boost. At the bottom of the pile is the GTX 1060 Mobile. Based on the GP106 silicon, this chip features 1,280 CUDA cores, 80 TMUs, 48 ROPs, and 6 GB of 192-bit GDDR5 memory running at 8 Gbps; with clock speeds of 1405 MHz core, with 1569 MHz GPU Boost.

Samsung to Optical-Shrink NVIDIA "Pascal" to 14 nm

It looks like NVIDIA won't skip the 14 nm process en route to sub-10 nm nodes after all, despite meeting its energy-efficiency targets with the 16 nm FinFET node. The company has reportedly concluded talks with Samsung Electronics to have its current GeForce "Pascal" architecture optically shrunk down to Samsung's newer 14 nanometer FinFET node. It's unclear whether specific upcoming (unreleased) Pascal GPUs will get the 14 nm treatment, or if this is a series-wide die-shrink of the kind NVIDIA did between the 65 nm and 55 nm nodes. The Samsung-made 14 nm "Pascal" GPUs should enter production before year-end.

SoC Powering Xbox One S Leverages 16 nm FinFET from TSMC

Microsoft's new slim Xbox One S console achieves its slimness - inbuilt power-supply included - by significantly reducing the thermal load of its key components. This begins at the heart of the console, its SoC. A semi-custom chip by Microsoft and AMD, the SoC powering the Xbox One S is built on the 16 nm FinFET process at TSMC. The chip powering the original Xbox One was built on the same foundry's 28 nm node.

The new SoC isn't merely an optical shrink of the original 28 nm chip down to 16 nm FinFET; Microsoft added a few components to the chip, including an HEVC hardware decoder, hardware CODECs for Blu-ray UHD with HDR, and a revamped display controller with HDMI 2.0 and HDCP 2.2. The chip also performs 1080p to 4K UHD upscaling, with a native upscaling algorithm. The eSRAM memory bandwidth is increased slightly, from 204 GB/s to 219 GB/s.

NVIDIA TITAN X Pascal Available from Today

NVIDIA's flagship graphics card targeted at gamers and PC enthusiasts, the TITAN X Pascal, will be available from today, exclusively through the GeForce website. NVIDIA will be directly marketing the card, which is priced at US $1,199 (excl. taxes). Based on the 16 nm "GP102" silicon, derived from the "Pascal" architecture, the TITAN X Pascal features 3,584 CUDA cores, 224 TMUs, 96 ROPs, and a 384-bit wide GDDR5X memory interface, holding 12 GB of memory. The chip is clocked at 1417 MHz core, with 1531 MHz GPU Boost, and 10 Gbps memory, working out to 480 GB/s of memory bandwidth. Like the GTX 1080 and GTX 1070, the TITAN X Pascal appears to be limited to 2-way SLI.


NVIDIA Accelerates Volta to May 2017?

Following the surprise TITAN X Pascal launch slated for 2nd August, it looks like NVIDIA's product development cycle is running on steroids, with reports emerging of the company accelerating its next-generation "Volta" architecture debut to May 2017, along the sidelines of next year's GTC. The architecture was originally scheduled to make its debut in 2018.

Much like "Pascal," the "Volta" architecture could first debut with HPC products, before moving on to the consumer graphics segment. NVIDIA could also retain the 16 nm FinFET+ process at TSMC for Volta. Stacked on-package memory such as HBM2 could be more readily available by 2017, and could hit sizable volumes towards the end of the year, making it ripe for implementation in high-volume consumer products.