News Posts matching #ASIC


NVIDIA Readies New GeForce RTX 30-series SKU Positioned Between RTX 3070 and RTX 3080

Possibly unsure of the GeForce RTX 3070's ability to tackle AMD's Radeon RX 6000 series parts, NVIDIA is designing a new RTX 30-series SKU positioned between the RTX 3070 and RTX 3080. This is not a 16 GB variant of the RTX 3070, but a new SKU based on the 8 nm "GA102" silicon, according to kopite7kimi, a reliable source of NVIDIA leaks. The SKU carries the ASIC code "GA102-150-KD-A1." The silicon is configured with 7,424 CUDA cores across 58 streaming multiprocessors (29 TPCs), 232 tensor cores, 232 TMUs, 58 RT cores, and an unknown number of ROPs. According to kopite7kimi, the card gets a 320-bit wide memory interface, although it's not known whether this is conventional GDDR6, like that of the RTX 3070, or faster GDDR6X, like that of the RTX 3080.
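The leaked figures are internally consistent with Ampere's published per-SM resources for GA102 (128 FP32 CUDA cores, 4 tensor cores, 4 TMUs, and 1 RT core per SM). Here is a minimal sketch of that cross-check; the per-SM constants are our own assumption, not part of the leak:

```cpp
#include <cstdio>

int main() {
    // Assumed Ampere GA102 per-SM resources (not part of the leak itself)
    constexpr int sms           = 58;   // 29 TPCs x 2 SMs, as leaked
    constexpr int cuda_per_sm   = 128;
    constexpr int tensor_per_sm = 4;
    constexpr int tmu_per_sm    = 4;
    constexpr int rt_per_sm     = 1;

    std::printf("CUDA cores:   %d\n", sms * cuda_per_sm);    // 7,424
    std::printf("Tensor cores: %d\n", sms * tensor_per_sm);  // 232
    std::printf("TMUs:         %d\n", sms * tmu_per_sm);     // 232
    std::printf("RT cores:     %d\n", sms * rt_per_sm);      // 58
}
```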

NVIDIA recently "cancelled" a future 16 GB variant of the RTX 3070 and a 20 GB variant of the RTX 3080, possibly as the company calibrates its response to the Radeon RX 6000 series. We theorize that doubling the memory amounts may not have hit the desired cost-performance targets, and that the company believes the competitive outlook of the RTX 3080 10 GB is secure. This explains the need for a SKU with performance halfway between that of the RTX 3070 and RTX 3080. As for pricing, with the RTX 3070 positioned at $500 and the RTX 3080 at $700, the new SKU could be priced somewhere in between. AMD's RDNA2-based Radeon RX 6000 series GPUs are expected to feature DirectX 12 Ultimate logo compliance, meaning there is a level playing field between AMD and NVIDIA in the performance segment.

Intel Partners with Heidelberg University Computing Center to Establish oneAPI Academic Center of Excellence

Intel and the Heidelberg University Computing Center (URZ) today announced that they have established a oneAPI Academic Center of Excellence (CoE) at URZ. The newly established CoE aims to further develop Intel's oneAPI standard and enable it to work on AMD GPUs. This is a bit surprising; however, Intel believes the technology should work on a wide range of processors, regardless of vendor. Heterogeneous hardware programming is the main goal here. In a Twitter thread, an Intel employee specified that Intel has also been working with Arm and NVIDIA to bring Data-Parallel C++ (DPC++), the core of oneAPI, to those vendors as well. That should bring this universal programming model to every device and platform, which is the point of heterogeneous programming: whatever you need to program a CPU, GPU, or some other ASIC is covered by a single API, in this case oneAPI.
URZ's work as a oneAPI CoE will add advanced DPC++ capabilities into hipSYCL, which supports systems based on AMD GPUs, NVIDIA GPUs, and CPUs. New DPC++ extensions are part of the SYCL 2020 provisional specification that brings features such as unified shared memory to hipSYCL and the platforms it supports - furthering the promise of oneAPI application support across system architectures and vendors.
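To illustrate the kind of code this programming model targets, here is a minimal SYCL 2020-style sketch using unified shared memory. It assumes a DPC++ or hipSYCL toolchain and whatever default device the runtime selects; it is our own illustration, not code from Intel or URZ:

```cpp
#include <sycl/sycl.hpp>
#include <cstdio>

int main() {
    sycl::queue q;  // default device: could be a CPU, or an Intel/AMD/NVIDIA GPU depending on the backend
    constexpr size_t n = 1024;

    // Unified shared memory (a SYCL 2020 feature): one pointer valid on both host and device
    float *data = sycl::malloc_shared<float>(n, q);
    for (size_t i = 0; i < n; ++i) data[i] = float(i);

    // The same kernel source runs on whichever device the runtime selected
    q.parallel_for(sycl::range<1>{n}, [=](sycl::id<1> i) { data[i] *= 2.0f; }).wait();

    std::printf("device: %s, data[42] = %.1f\n",
                q.get_device().get_info<sycl::info::device::name>().c_str(), data[42]);
    sycl::free(data, q);
    return 0;
}
```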

NVIDIA GeForce RTX 3090 "CEO Edition" Rears its Head, Most Likely a Fake

Social media is abuzz with a screengrab of a regional page of the NVIDIA website purporting to show a "GeForce RTX 3090 CEO Edition" graphics card. Positioned a notch above the "Founders Edition," this $3,499 card, if even remotely real, could be a limited edition product. The screengrab references "48 GB of G6X" memory. We're not sure how this is even possible: the RTX 3090 already uses 8 Gbit GDDR6X chips, piggybacking two chips per 32-bit memory channel, so 48 GB would require Micron to do the unthinkable and launch a 16 Gbit G6X chip within 2020. Frankly, we're very interested to see how the next-gen Quadro RTX even achieves its alleged 48 GB of GDDR6.
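A quick back-of-the-envelope check of why the capacity claim strains credulity, assuming the RTX 3090's 384-bit bus (twelve 32-bit channels) with two chips per channel in clamshell, which is how the existing 24 GB card is built:

```cpp
#include <cstdio>

int main() {
    // Assumed RTX 3090-style layout: 384-bit bus = twelve 32-bit channels, two chips per channel (clamshell)
    constexpr int channels       = 384 / 32;  // 12
    constexpr int chips_per_chan = 2;
    constexpr int gbit_today     = 8;         // current GDDR6X chip density
    constexpr int gbit_needed    = 16;        // hypothetical density the listing would require

    std::printf("With 8 Gbit chips:  %d GB\n", channels * chips_per_chan * gbit_today / 8);   // 24 GB
    std::printf("With 16 Gbit chips: %d GB\n", channels * chips_per_chan * gbit_needed / 8);  // 48 GB
}
```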

That aside, the alloy frame now comes with a gold finish. We wonder if extra memory and a fancy trim are all that NVIDIA is asking the extra two grand for, or whether it has even maxed out the "GA102" ASIC (there are two more TPCs left to unlock). As for the name "CEO Edition," there have been instances of tech CEOs flexing their vanity on limited edition products; limited edition Ryzen and Radeon products, for example, bear the AMD CEO's signature. So the name "CEO Edition" by itself isn't implausible. Just not with these specs, and not at this price.

Rambus Advances HBM2E Performance to 4.0 Gbps for AI/ML Training Applications

Rambus Inc. (NASDAQ: RMBS), a premier silicon IP and chip provider making data faster and safer, today announced it has achieved a record 4 Gbps performance with the Rambus HBM2E memory interface solution consisting of a fully-integrated PHY and controller. Paired with the industry's fastest HBM2E DRAM from SK hynix operating at 3.6 Gbps, the solution can deliver 460 GB/s of bandwidth from a single HBM2E device. This performance meets the terabyte-scale bandwidth needs of accelerators targeting the most demanding AI/ML training and high-performance computing (HPC) applications.
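The 460 GB/s figure follows directly from the HBM2E interface width; a minimal sketch of that arithmetic, assuming the standard 1,024-bit-per-stack HBM2/HBM2E interface:

```cpp
#include <cstdio>

int main() {
    // Assumed standard HBM2E interface width of 1,024 bits per stack
    constexpr double bus_bits       = 1024.0;
    constexpr double data_rate_gbps = 3.6;    // per-pin rate quoted for SK hynix HBM2E

    constexpr double gb_per_s = bus_bits * data_rate_gbps / 8.0;
    std::printf("Per-stack bandwidth: %.1f GB/s\n", gb_per_s);  // ~460.8 GB/s, matching the quoted 460 GB/s
}
```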

"With this achievement by Rambus, designers of AI and HPC systems can now implement systems using the world's fastest HBM2E DRAM running at 3.6 Gbps from SK hynix," said Uksong Kang, vice president of product planning at SK hynix. "In July, we announced full-scale mass-production of HBM2E for state-of-the-art computing applications demanding the highest bandwidth available."

Marvell Unveils the Industry's Most Comprehensive Custom ASIC Offering

Marvell today announced a unique custom ASIC offering that addresses the stringent requirements of next-generation 5G carrier, cloud data center, enterprise, and automotive applications. Marvell's comprehensive custom ASIC solution enables a multitude of customization options and a differentiated approach with best-in-class standard-product IP, including Arm-based processors, embedded memories, high-speed SerDes, networking, security, and a wide range of storage controllers and accelerators, in 5 nm and beyond. By partnering with Marvell, customers gain enhanced performance, power, and area, resulting in accelerated time-to-market and optimal returns on investment.

Traditionally, data infrastructure manufacturers and cloud data center operators have had to choose between securing standard products and designing a full custom silicon solution in-house, while developing or licensing foundational IP as needed. Now, for the first time, Marvell is offering full access to its broad and growing portfolio of industry-leading data infrastructure standard product IP and technologies, which can be integrated and enabled in custom ASIC solutions at the most advanced technology nodes.

AMD RDNA2 "Navi 21" GPU to Double CU Count Over "Navi 10"

AMD's RDNA2 graphics architecture, which brings real-time ray-tracing among other DirectX 12 Ultimate features, could see the company double the number of stream processors generation-over-generation, according to a specs leak by _rogame. The increase in stream processors would fall in line with AMD's effort to increase performance-per-Watt by 50%. It may appear that the resulting SKUs finally measure up to the likes of the RTX 2080 Ti, but AMD has GeForce "Ampere" in its competitive calculus, and should the recent specs reveal hold up, the new "Navi 21" could end up being a performance-segment competitor to GeForce graphics cards based on the "GA104" ("TU104" successor), rather than a flagship-killer.

The RDNA2-based "Navi 21" GPU allegedly features 80 RDNA2 compute units, amounting to 5,120 stream processors. AMD might tap a refined 7 nm-class silicon fabrication node by TSMC to build these chips, either N7P or N7+. The die size could measure up to 505 mm², and AMD could aim for a 50% performance-per-Watt gain over "Navi 10." AMD could carve out as many as 10 SKUs from "Navi 21," but only three are relevant to gamers. The SKU with the PCI device ID "0x731F: D1" succeeds the RX 5700 XT. The one bearing "0x731F: D3" succeeds the RX 5700, under the variant name "Navi 21 XL." The "Navi 21 XE" variant has a PCI ID of "0x731F: DF," and succeeds the RX 5600 XT.
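The leak identifies the gaming-relevant variants by a shared PCI device ID (0x731F) plus a trailing value, which we treat here as a variant byte; that interpretation is our own. A purely hypothetical lookup table built only from the pairs quoted above:

```cpp
#include <cstdint>
#include <cstdio>
#include <map>
#include <string>
#include <utility>

int main() {
    // Hypothetical mapping; keys are (PCI device ID, trailing variant byte) exactly as quoted in the leak
    const std::map<std::pair<uint16_t, uint8_t>, std::string> navi21_variants = {
        {{0x731F, 0xD1}, "Navi 21 (RX 5700 XT successor)"},
        {{0x731F, 0xD3}, "Navi 21 XL (RX 5700 successor)"},
        {{0x731F, 0xDF}, "Navi 21 XE (RX 5600 XT successor)"},
    };

    for (const auto &[key, name] : navi21_variants)
        std::printf("%04X:%02X -> %s\n", key.first, key.second, name.c_str());
}
```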

Vicor 1200A Hydra ChiP-set Enables Higher Performance AI Accelerator Cards

Vicor Corporation today announced a ChiP-set for high-performance GPU, CPU, and ASIC ("XPU") processors powered directly from 48 V. A driver, the MCD4609, and a pair of MCM4609 current multiplier modules supply up to 650 A continuous and 1,200 A peak. Owing to their small footprint and low profile (45.7 x 8.6 x 3.2 mm), the current multipliers are placed close to the processor, enabling reduced power distribution network (PDN) losses and higher power-system efficiency. Powering GPU and OCP Accelerator Module (OAM) artificial intelligence (AI) cards, the 4609 ChiP-set is in mass production and available to new customers on the Vicor Hydra II evaluation board.

The 4609 ChiP-set adds to the Vicor Power-on-Package portfolio of Lateral Power Delivery (LPD) solutions. To raise the bar of current capability above the limits of LPD, Vicor's pioneering Vertical Power Delivery (VPD) will soon enable much higher current density. The VPD system delivers current from power converters vertically stacked under a processor through a capacitor network geared to a processor-specific pin-map. A GCM ("Gearbox Current Multiplier") is a specialized VPD embodiment incorporating a gearbox capacitor network as a layer in the vertical stack. By supplying current directly under the processor and eliminating PDN losses, GCMs will soon facilitate current densities reaching up to 2 A per mm².
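To put the 2 A per mm² target in perspective against the lateral modules described above, here is a rough sketch comparing current per unit of board footprint; treating the two MCM4609 multipliers as the entire current-carrying footprint is our own simplification, not Vicor's framing:

```cpp
#include <cstdio>

int main() {
    // Lateral (LPD) side: two MCM4609 current multipliers, 45.7 mm x 8.6 mm each, 650 A continuous combined
    const double module_area_mm2 = 45.7 * 8.6;          // ~393 mm^2 per module
    const double lpd_density     = 650.0 / (2.0 * module_area_mm2);

    // Vertical (VPD) side: the claimed target of up to 2 A per mm^2 directly under the processor
    const double vpd_density = 2.0;

    std::printf("LPD footprint density: ~%.2f A/mm^2\n", lpd_density);  // ~0.83 A/mm^2
    std::printf("VPD claimed density:    %.2f A/mm^2\n", vpd_density);
}
```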

Intel Reports First-Quarter 2020 Financial Results

Intel Corporation today reported first-quarter 2020 financial results. "Our first-quarter performance is a testament to our team's focus on safeguarding employees, supporting our supply chain partners and delivering for our customers during this unprecedented challenge," said Bob Swan, Intel CEO. "The role technology plays in the world is more essential now than it has ever been, and our opportunity to enrich lives and enable our customers' success has never been more vital. Guided by our cultural values, competitive advantages and financial strength, I am confident we will emerge from this situation an even stronger company."

In the first quarter, Intel achieved 34 percent data-centric revenue growth and 14 percent PC-centric revenue growth year-over-year. The company maintained essential factory operations with greater than 90 percent on-time delivery while supporting employees, customers and communities in response to the COVID-19 pandemic. These efforts include a new Intel Pandemic Response Technology Initiative, which applies Intel technology, expertise, and resources to combat the virus where the company can uniquely make a difference.

TSMC and Broadcom Enhance the CoWoS Platform with World's First 2X Reticle Size Interposer

TSMC today announced it has collaborated with Broadcom on enhancing the Chip-on-Wafer-on-Substrate (CoWoS) platform to support the industry's first and largest 2X reticle size interposer. With an area of approximately 1,700 mm², this next-generation CoWoS interposer technology significantly boosts computing power for advanced HPC systems by supporting more SoCs, and is ready to support TSMC's next-generation five-nanometer (N5) process technology.

This new-generation CoWoS technology can accommodate multiple logic system-on-chip (SoC) dies and up to six cubes of high-bandwidth memory (HBM), offering as much as 96 GB of memory. It also provides bandwidth of up to 2.7 terabytes per second, 2.7 times faster than the CoWoS solution TSMC offered in 2016. With higher memory capacity and bandwidth, this CoWoS solution is well-suited for memory-intensive workloads such as deep learning, as well as workloads for 5G networking, power-efficient datacenters, and more. In addition to offering additional area to increase compute, I/O, and HBM integration, this enhanced CoWoS technology provides greater design flexibility and yield for complex ASIC designs in advanced process nodes.

Rambus Achieves Industry-Leading GDDR6 Performance at 18 Gbps

Rambus Inc., a premier silicon IP and chip provider making data faster and safer, today announced it has achieved industry-leading 18 Gbps performance with the Rambus GDDR6 Memory PHY. Running at the industry's fastest data rate of 18 Gbps, the Rambus GDDR6 PHY IP delivers peak performance four-to-five times faster than current DDR4 solutions and continues the company's longstanding tradition of developing leading-edge products. The Rambus GDDR6 PHY pairs with the companion GDDR6 memory controller from the recent acquisition of Northwest Logic to provide a complete and optimized memory subsystem solution.

Increased data usage in applications such as AI, ML, data center, networking, and automotive systems is driving the need for higher-bandwidth memory, a challenge the coming introduction of high-bandwidth 5G networks will only exacerbate. Developed in close collaboration with memory partners, the Rambus GDDR6 solution gives system designers more options in selecting a memory system that meets both their bandwidth and cost requirements.

Intel Ships First 10nm Agilex FPGAs

Intel today announced that it has begun shipments of the first Intel Agilex field programmable gate arrays (FPGAs) to early access program customers. Participants in the early access program include Colorado Engineering Inc., Mantaro Networks, Microsoft and Silicom. These customers are using Agilex FPGAs to develop advanced solutions for networking, 5G and accelerated data analytics.

"The Intel Agilex FPGA product family leverages the breadth of Intel innovation and technology leadership, including architecture, packaging, process technology, developer tools and a fast path to power reduction with eASIC technology. These unmatched assets enable new levels of heterogeneous computing, system integration and processor connectivity and will be the first 10nm FPGA to provide cache-coherent and low latency connectivity to Intel Xeon processors with the upcoming Compute Express Link," said Dan McNamara, Intel senior vice president and general manager of the Networking and Custom Logic Group.

Intel Sets Up New Network and Custom-logic Group

In recent conversations with Intel customers, two words kept coming up: disruption and opportunity. Disruption because almost every single executive I talk with has seen business disrupted in one way or another or is worried about keeping up with new technology trends and keeping a competitive edge. And opportunity because when these customers discuss their needs -- be it how to better leverage data, how to modernize their infrastructure for 5G or how to accelerate artificial intelligence (AI) and analytics workloads -- they realize the massive prospects in front of them.

To help our customers capitalize on the opportunities ahead, Intel has created a new organization that combines our network infrastructure organization with our programmable solutions organization under my leadership. This new organization is called the Network and Custom Logic Group.
Both original organizations executed on record design wins and revenues in 2018. Their merger allows Intel to bring maximum value to our customers by delivering unprecedented, seamless access to Intel's broad portfolio of products: Intel Xeon processors, SoCs, FPGAs, eASICs, full-custom ASICs, software, IP, and systems and solutions across the cloud, enterprise, network, embedded and IoT markets. To that end, FPGA and custom silicon will continue to be important horizontal technologies. And this is just the beginning of a continuum of custom logic offerings, spanning FPGA, eASIC, and ASIC, to support our customers' unique needs throughout their life cycles. No other company in the world can offer that.

Intel Announces New Chief People Officer Sandra Rivera

Intel has announced that Sandra Rivera will take on a new role as the company's chief people officer and executive vice president, reporting to CEO Bob Swan. She will lead the human resources organization and serve as steward of Intel's culture evolution as it transforms to a data-centric company. Previously, Rivera was responsible for the Network Platforms Group, and served as Intel's 5G executive sponsor.

"Sandra is a role model for an Intel that is customer obsessed, collaborative and fearless while firmly grounded in trust, transparency and inclusivity. I am thrilled that Sandra will lead this critical part of our strategy to power a data-centric world," Swan said. "In a company driven by deep, technical talent, Sandra is an excellent technical leader who builds successful businesses by first building great teams. I am confident Sandra, as chief people officer, will help us accelerate our transformation and position our Intel team to play a bigger role in our customers' success."

AMD "Navi" Graphics Card PCB Pictured, uses GDDR6

Pictures of a bare PCB for an upcoming AMD Radeon "Navi" graphics card made it to the web over the weekend. The picture reveals a fairly long (over 25 cm) board with AMD markings, and a layout that doesn't match any reference-design PCB AMD has launched so far. At the heart of the PCB is a large ASIC pad that appears to be within 5 percent of the size of a "Polaris 10" chip. The ASIC is surrounded by eight GDDR6 memory pads; we guess they're GDDR6 from their more rectangular pin layout compared to GDDR5.

The PCB has provision for up to two 8-pin PCIe power inputs, and an 8+1 phase VRM that uses premium components such as rectangular tantalum capacitors, DrMOS, and a high-end VRM controller chip. There's also provision for dual BIOS. The display I/O completely does away with DVI, and only includes the likes of DisplayPort, HDMI, and even USB-C based outputs such as VirtualLink. The fan header looks complex, probably offering individual fan-speed control for a multi-fan cooling solution that could resemble that of the Radeon VII. Looking purely at the feature set on offer, and the fact that "Navi" will be more advanced than "Vega 2.0," we expect this card to be fairly powerful, going after the likes of NVIDIA's RTX 2070 and RTX 2060. AMD is expected to unveil this card at Computex 2019, this June.

Intel Driving Data-Centric World with New 10nm Intel Agilex FPGA Family

Intel announced today a brand-new product family, the Intel Agilex FPGA. This new family of field programmable gate arrays (FPGA) will provide customized solutions to address the unique data-centric business challenges across embedded, network and data center markets. "The race to solve data-centric problems requires agile and flexible solutions that can move, store and process data efficiently. Intel Agilex FPGAs deliver customized connectivity and acceleration while delivering much needed improvements in performance and power for diverse workloads," said Dan McNamara, Intel senior vice president, Programmable Solutions Group.

Customers need solutions that can aggregate and process increasing amounts of data traffic to enable transformative applications in emerging, data-driven industries like edge computing, networking and cloud. Whether it's through edge analytics for low-latency processing, virtualized network functions to improve performance, or data center acceleration for greater efficiency, Intel Agilex FPGAs are built to deliver customized solutions for applications from the edge to the cloud. Advances in artificial intelligence (AI) analytics at the edge, network and the cloud are compelling hardware systems to cope with evolving standards, support varying AI workloads, and integrate multiple functions. Intel Agilex FPGAs provide the flexibility and agility required to meet these challenges and deliver gains in performance and power.

AMD Brings Back the "XT" Moniker with China-specific Radeon RX 560 XT

Back in the glory days of ATI Radeon, the XT brand extension denoted the better-endowed variant among two or more graphics card models based on the same silicon, such as the Radeon HD 2900 XT. After AMD's takeover, the XT, Pro, XL, and other lesser-used extensions such as XTX and All-in-Wonder were retired in favor of numerical variant numbers, beginning with the HD 3870. The company continued to use "XT" and "Pro" internally to differentiate ASIC variants, although those monikers were seldom if ever used in marketing materials. That's about to change. AMD has launched its first overtly XT brand-extended product in close to 15 years, with the China-specific Radeon RX 560 XT, but alas, it's a lousy re-brand.

The RX 560 XT is positioned between the RX 560 4 GB and RX 570 4 GB, and is based on the "Polaris 20" or "Polaris 30" silicon (we don't know which). AMD enabled 28 out of 36 compute units on this silicon, resulting in 1,792 stream processors, 112 TMUs, and 32 ROPs. The memory is 4 GB across a 256-bit wide memory interface, although the memory speed is dialed down to 6.6 Gbps (211.2 GB/s). What makes the RX 560 XT a re-brand is that AMD already offers an SKU with the exact same specifications, called Radeon Pro 570, and there are several odd-ball RX 570-branded cards in the wild with this core config. There's no reference-design board of the RX 560 XT; the SKU is entirely in the hands of board partners to come up with custom designs of their own.
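Both the shader count and the bandwidth figure can be sanity-checked from the numbers given; a minimal sketch, with the 64 stream processors and 4 TMUs per GCN compute unit being well-known architectural constants rather than something stated here:

```cpp
#include <cstdio>

int main() {
    // GCN architectural constants (assumed, not quoted in the article): 64 stream processors and 4 TMUs per CU
    constexpr int cus_enabled = 28;
    constexpr int sp_per_cu   = 64;
    constexpr int tmu_per_cu  = 4;

    // Memory figures as quoted: 6.6 Gbps effective over a 256-bit bus
    constexpr double mem_gbps = 6.6;
    constexpr double bus_bits = 256.0;

    std::printf("Stream processors: %d\n", cus_enabled * sp_per_cu);              // 1,792
    std::printf("TMUs:              %d\n", cus_enabled * tmu_per_cu);             // 112
    std::printf("Memory bandwidth:  %.1f GB/s\n", mem_gbps * bus_bits / 8.0);     // 211.2 GB/s
}
```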

Update: AMD has informed us that the RX 560 XT is based on the 14 nm "Polaris 10" silicon, and not "Polaris 20" or "Polaris 30." Polaris 10 is the first implementation of the "Polaris" architecture.

NVIDIA Readies GeForce GTX 1660 Ti Based on TU116, Sans RTX

It looks like RTX technology won't make it to sub-$250 market segments, as the GPUs there aren't fast enough to handle real-time ray-tracing, and it makes little economic sense for NVIDIA to add billions of additional transistors for RT cores. The company is hence carving out a sub-class of "Turing" GPUs under the TU11x ASIC series, which will power new GeForce GTX family SKUs, such as the GeForce GTX 1660 Ti and other GTX 1000-series SKUs. These chips offer "Turing Shaders," which are basically CUDA cores with IPC and clock speeds rivaling existing "Turing" GPUs, but no RTX capabilities. To sweeten the deal, NVIDIA will equip these cards with GDDR6 memory. These GPUs could still have tensor cores, which are needed to accelerate DLSS, a feature highly relevant to this market segment.

The GeForce GTX 1660 Ti will no doubt be slower than the RTX 2060, and will be based on a new ASIC codenamed TU116. According to a VideoCardz report, this 12 nm chip packs 1,536 CUDA cores based on the "Turing" architecture, and the same exact memory setup as the RTX 2060: 6 GB of GDDR6 memory across a 192-bit wide memory interface. The lack of RT cores and a lower CUDA core count could make the TU116 a significantly smaller chip than the TU106, and something NVIDIA can afford to sell at sub-$300 price points, such as $250. The GTX 1060 6 GB is holding the fort for NVIDIA in this segment, with other GTX 10-series SKUs such as the GTX 1070 occasionally dropping below the $300 mark at retailers' discretion. AMD recently improved its sub-$300 portfolio with the introduction of the Radeon RX 590, which convincingly outperforms the GTX 1060 6 GB.

Sapphire Outs an RX 570 Graphics Card with 16GB Memory, But Why?

Sapphire has reportedly developed an odd-ball Radeon RX 570 graphics card equipped with 16 GB of GDDR5 memory, double the maximum amount the SKU normally ships with. The card is based on the company's NITRO+ board design common to RX 570 through RX 590 SKUs, and uses sixteen 8 Gbit GDDR5 memory chips in a piggybacked arrangement (i.e., chips on both sides of the PCB). When Chinese tech publication MyDrivers reached out to Sapphire for an explanation behind such a bizarre contraption, the Hong Kong-based AIB partner's response was fascinating.

Sapphire said in its response that it wanted to bolster the card's crypto-currency mining power: giving the "Polaris 20" GPU additional memory would improve its performance relative to ASIC miners on the Cuckoo Cycle algorithm. This algorithm can load up the video memory anywhere between 5.5 GB and 11 GB, and giving the RX 570 16 GB of it was Sapphire's logical next step. Of course, Cuckoo Cycle is being defeated time and again by currency curators. This card will be a stopgap for miners until ASIC mining machines with expanded memory come out, or the proof-of-work systems are significantly changed.

AMD Radeon RX 570 and China-specific RX 580 2048SP Based on Exact Same Chip

It's no news that AMD's Radeon RX 570 graphics card is carved out of the same "Polaris 20" silicon as the RX 580, by disabling 4 of its 36 GCN compute units. AMD kicked up a controversy recently when it launched a China-specific, Radeon RX 580-branded SKU with the core configuration of the cheaper RX 570, confusing Chinese consumers. It turns out that this RX 580 2,048 SP SKU is based on the same exact ASIC variant of the "Polaris 20" silicon as the RX 570, with the only difference being the device ID.

We watch a lot of GamersNexus content. Our GPU Database curator noticed something interesting in their recent teardown of a Dataland Radeon RX 580 (2,048 SP) graphics card directly imported from China: the unique ASIC sub-variant code etched on the GPU's aluminium reinforcement brace matches that of the RX 570. AMD internally refers to the RX 570 as "Polaris 20 XL," and the ASIC code etched on it is supposed to be "215-0910052." For the RX 580, the real one, aka "Polaris 20 XTX," the code etched is "215-0910038." Thanks to GamersNexus' high-resolution filming, our curator was able to spot the ASIC code for "Polaris 20 XL" on the Dataland card's GPU. This confirms that AMD merely took an RX 570 and gave it a different device ID to create the RX 580 2,048 SP, leaving consumers to wade through the confusion.

NVIDIA Segregates Turing GPUs; Factory Overclocking Forbidden on the Cheaper Variant

While working on GPU-Z support for NVIDIA's RTX 20-series graphics cards, we noticed something curious: each GPU model has not one, but two device IDs assigned to it. A device ID is a unique identifier that tells Windows which specific device is installed, so it can select and load the relevant driver software. It also tells the driver which commands to send to the chip, as they vary between generations. Last but not least, the device ID can be used to enable or lock certain features, for example in the professional space. Two device IDs per GPU is very unusual. For example, all GTX 1080 Ti cards, whether reference or custom design, are marked as 1B06. The Titan Xp, on the other hand, which uses the same physical GPU, is marked as 1B02. NVIDIA has always used just one ID per SKU, no matter if custom-design, reference, or Founders Edition.

We reached out to industry sources and confirmed that for Turing, NVIDIA is creating two device IDs per GPU, corresponding to two different ASIC codes per GPU model (for example, TU102-300 and TU102-300-A for the RTX 2080 Ti). The -300 variant is designated for cards targeting the MSRP price point, while the -300-A variant is for use on custom-design, overclocked cards. Both are the same physical chip, separated only by binning and pricing, which means NVIDIA pre-tests all GPUs and sorts them by properties such as overclocking potential, power efficiency, etc.
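For readers who want to check which device ID their own card reports, here is a minimal, Linux-only sketch that walks sysfs and prints the device ID of every NVIDIA PCI device; it is a generic illustration of where these IDs surface, not part of GPU-Z or any NVIDIA tooling:

```cpp
#include <filesystem>
#include <fstream>
#include <iostream>
#include <string>

int main() {
    namespace fs = std::filesystem;
    // Each PCI function appears under /sys/bus/pci/devices with "vendor" and "device" ID files
    for (const auto &entry : fs::directory_iterator("/sys/bus/pci/devices")) {
        std::ifstream vendor_file(entry.path() / "vendor");
        std::ifstream device_file(entry.path() / "device");
        std::string vendor, device;
        vendor_file >> vendor;
        device_file >> device;
        if (vendor == "0x10de")  // NVIDIA's PCI vendor ID
            std::cout << entry.path().filename().string()
                      << " -> device ID " << device << '\n';
    }
}
```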

GlobalFoundries Puts its 7 nm Program on Hold Indefinitely

GLOBALFOUNDRIES today announced an important step in its transformation, continuing the trajectory launched with the appointment of Tom Caulfield as CEO earlier this year. In line with the strategic direction Caulfield has articulated, GF is reshaping its technology portfolio to intensify its focus on delivering truly differentiated offerings for clients in high-growth markets.

GF is realigning its leading-edge FinFET roadmap to serve the next wave of clients that will adopt the technology in the coming years. The company will shift development resources to make its 14/12 nm FinFET platform more relevant to these clients, delivering a range of innovative IP and features including RF, embedded memory, low power and more. To support this transition, GF is putting its 7 nm FinFET program on hold indefinitely and restructuring its research and development teams to support its enhanced portfolio initiatives. This will require a workforce reduction; however, a significant number of top technologists will be redeployed onto 14/12 nm FinFET derivatives and other differentiated offerings.

NVIDIA "TU102" RT Core and Tensor Core Counts Revealed

The GeForce RTX 2080 Ti is indeed based on an ASIC codenamed "TU102." NVIDIA was referring to this 775 mm² chip when talking about the 18.5 billion-transistor count in its keynote. The company also provided a breakdown of its various "cores," and a block-diagram. The GPU is still laid out like its predecessors, but each of the 72 streaming multiprocessors (SMs) packs RT cores and Tensor cores in addition to CUDA cores.

The TU102 features six GPCs (graphics processing clusters), each packing 12 SMs. Each SM packs 64 CUDA cores, 8 tensor cores, and 1 RT core, and each GPC packs six geometry units. The GPU also packs 288 TMUs and 96 ROPs. The TU102 supports a 384-bit wide GDDR6 memory bus with 14 Gbps memory. There are also two NVLink channels, which NVIDIA plans to launch later as its next-generation multi-GPU technology.
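The per-SM breakdown lets you total up the chip's resources and its memory bandwidth directly; a minimal sketch of that arithmetic, using only the figures given above:

```cpp
#include <cstdio>

int main() {
    // Hierarchy as described: 6 GPCs x 12 SMs, with 64 CUDA, 8 tensor, and 1 RT core per SM
    constexpr int gpcs        = 6;
    constexpr int sms_per_gpc = 12;
    constexpr int sms         = gpcs * sms_per_gpc;  // 72

    std::printf("CUDA cores:   %d\n", sms * 64);     // 4,608
    std::printf("Tensor cores: %d\n", sms * 8);      // 576
    std::printf("RT cores:     %d\n", sms * 1);      // 72

    // 384-bit GDDR6 at 14 Gbps per pin
    std::printf("Memory bandwidth: %.0f GB/s\n", 384.0 * 14.0 / 8.0);  // 672 GB/s
}
```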

NVIDIA GeForce RTX 2080 Ti TU102 Die-size Revealed

Here are some of the first pictures of NVIDIA's GeForce RTX 2080 Ti ASIC, probably codenamed "TU102." GamersNexus took a ruler to this chip and discovered that it's one of the biggest non-storage chips in existence: the rectangular die measures 31 mm x 25 mm, or 775 mm². The package has no IHS, but a metal brace along the periphery of the fiberglass substrate distributes mounting pressure from the cooler. NVIDIA is building the "Turing" family of GPUs on TSMC's 12 nm FinFET node.

NVIDIA GTX 1080-successor a Rather Hot Chip, Reference Cooler Has Dual-Fans

The GeForce GTX 1080 set high standards for efficiency. Launched as a high-end product that was faster than any other client-segment graphics card at the time, the GTX 1080 made do with a single 8-pin PCIe power connector and a TDP of just 180 W. The reference-design PCB, accordingly, has a rather simple VRM setup. The alleged GTX 1080-successor, called either GTX 1180 or GTX 2080 depending on who you ask, could deviate from that ideology of extreme efficiency. There were telltale signs of this departure in the first bare PCB shots.

The PCB pictures revealed preparation for an unusually strong VRM design, given that this is an NVIDIA reference board. It draws power from a combination of 6-pin and 8-pin PCIe power connectors, and features a 10+2 phase setup, with up to 10 vGPU and 2 vMem phases. The size of the pads for the ASIC and no more than 8 memory chips confirmed that the board is meant for the GTX 1080-successor. Adding to the theory of this board being unusually hot is an article by Chinese publication Benchlife.info, which mentions that the reference-design (Founders Edition) cooling solution does away with the single lateral blower, and instead features a strong aluminium fin-stack heatsink ventilated by two top-flow fans (like most custom-design cards). Given that NVIDIA avoided such a design even for big-chip cards such as the GTX 1080 Ti FE or the TITAN V, the GTX 1080-successor is proving to be an interesting card to look forward to. But then, what if this is the fabled GTX 1180+ / GTX 2080+, slated for late September?
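The 6-pin plus 8-pin combination itself says something about the power budget. A small sketch of the ceiling this connector layout allows under the PCI Express specification (75 W from the slot, 75 W from a 6-pin, 150 W from an 8-pin); framing it this way is our own comparison rather than anything in the leak:

```cpp
#include <cstdio>

int main() {
    // PCI Express specification limits for each power source
    constexpr int slot_w = 75;   // PCIe x16 slot
    constexpr int pin6_w = 75;   // 6-pin PCIe connector
    constexpr int pin8_w = 150;  // 8-pin PCIe connector

    constexpr int gtx1080_budget   = slot_w + pin8_w;           // 225 W ceiling (single 8-pin, as on the GTX 1080)
    constexpr int successor_budget = slot_w + pin6_w + pin8_w;  // 300 W ceiling (6-pin + 8-pin, as on the leaked PCB)

    std::printf("GTX 1080 connector budget:  %d W\n", gtx1080_budget);
    std::printf("Successor connector budget: %d W\n", successor_budget);
}
```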

NVIDIA Briefs AIC Partners About Next-gen GeForce Series

NVIDIA has reportedly briefed its add-in card (AIC) partners about its upcoming GeForce product family, codenamed "Turing" and bearing a commercial nomenclature of either GeForce 11-series or GeForce 20-series. This sets in motion a 2-3 month process of board partners rolling out new graphics cards, beginning with reference-design "Founders Edition" SKUs, followed by custom-design SKUs. Sources tell Tom's Hardware Germany that AIC partners have begun training product development teams. NVIDIA has also released a BoM (bill of materials) to its partners, so aside from the ASIC itself, they can begin sourcing other components for their custom-design products (such as coolers, memory chips, VRM components, and connectors).

The BoM also specifies a tentative timeline for each of the main stages of product development, leading up to mass-production. It stipulates 11-12 weeks (2-3 months) leading up to mass-production and shipping, which could put the product launch some time in August (assuming the BoM was released some time in May-June). A separate table also provides a fascinating insight into the various stages of development of a custom-design NVIDIA graphics card.