News Posts matching "Turing"


Manli Announces GeForce RTX 2080 Ti and RTX 2080 Gallardo Series Graphics Cards

Manli Technology Group Limited, the major graphics card and components manufacturer, today announced its brand-new RTX 20 series family of graphics solutions - the Manli GeForce RTX 2080 Ti & RTX 2080 Gallardo with RGB lighting. The Manli GeForce RTX 2080 Ti & RTX 2080 Gallardo deliver an extremely cool, fast, and smooth gaming experience. Powered by the latest NVIDIA Turing GPU architecture and the revolutionary RTX platform, they combine real-time ray tracing, artificial intelligence, and programmable shading.

NVIDIA Turing SDKs Now Available

NVIDIA's Turing architecture is one of the biggest leaps in computer graphics in 20 years. Here's a look at the latest developer software releases to take advantage of this cutting-edge GPU. CUDA 10: CUDA 10 includes support for Turing GPUs, performance optimized libraries, a new asynchronous task-graph programming model, enhanced CUDA & graphics API interoperability, and new developer tools. CUDA 10 also provides all the components needed to build applications for NVIDIA's most powerful server platforms for AI and high performance computing (HPC) workloads, both on-prem (DGX-2) and in the cloud (HGX-2).

TensorRT 5 - Release Candidate: TensorRT 5 delivers up to 40x faster inference performance over CPUs through new optimizations, APIs and support for Turing GPUs. It optimizes mixed precision inference dramatically across apps such as recommenders, neural machine translation, speech and natural language processing. TensorRT 5 highlights include INT8 APIs offering new flexible workflows, optimization for depthwise separable convolution, support for Xavier-based NVIDIA Drive platforms and the NVIDIA DLA accelerator. In addition, TensorRT 5 brings support for Windows and CentOS Operating Systems.

NVIDIA Stock Falls 2.1% After Turing GPU Reviews Fail to Impress Morgan Stanley

NVIDIA's embargo on their Turing-based RTX 2080 and RTX 2080 Ti ended Wednesday, September 19, and it appears that enthusiasts were not the only ones left wanting more from these graphics cards. In particular, Morgan Stanley analyst Joseph Moore shared a note today (Thursday, September 20) with company clients saying "As review embargos broke for the new gaming products, performance improvements in older games is not the leap we had initially hoped for. Performance boost on older games that do not incorporate advanced features is somewhat below our initial expectations, and review recommendations are mixed given higher price points." NVIDIA's share value on the NASDAQ exchange had closed at $271.98 (USD) Wednesday, tumbled to a low of $264.10 at today's open, then recovered to close at $266.28, down 2.1% from the previous close.

The Morgan Stanley report further mentioned that "We are surprised that the 2080 is only slightly better than the 1080ti, which has been available for over a year and is slightly less expensive. With higher clock speeds, higher core count, and 40% higher memory bandwidth, we had expected a bigger boost." Accordingly, the market analyst expects slower adoption of these new GPUs, and does not expect "much upside" from NVIDIA's gaming business unit for the next two quarters. Despite all this, Morgan Stanley remains bullish on NVIDIA, with a long-term price target of $273.
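The reported 2.1% decline can be reproduced from the closing prices quoted above with a couple of lines:

```python
# Sanity-check of the share-price figures quoted in the article.
prev_close = 271.98  # Wednesday's close (USD)
new_close = 266.28   # Thursday's close (USD)

change = (new_close - prev_close) / prev_close
print(f"{change:+.1%}")  # prints -2.1%
```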

NVIDIA Releases GeForce 411.63 Game Ready Driver with Support for Turing 2000-series GPUs

NVIDIA today released the GeForce 411.63 WHQL "Game Ready" driver to go along with the embargo lift for its latest and greatest Turing-based RTX 2080 and RTX 2080 Ti graphics cards. These drivers come with optimizations for upcoming AAA game launches, including Assassin's Creed Odyssey, Forza Horizon 4, and FIFA 19. For those of you who pre-ordered the new GPUs, or are going to purchase them imminently, this is the driver you will need to get the most out of the hardware.

Other new features include support for CUDA 10.0, NVIDIA RTX technology, and the new Vulkan 1.1 API. As with any GeForce driver released after April 1, 2018, the driver adds performance enhancements, new features, and bug fixes only for Kepler, Maxwell, Pascal, and Volta series GPUs. NVIDIA reaffirms that critical security updates will continue to be available for Fermi series GPUs through January 2019. Full release notes, as well as download options for the driver, can be found at the link below.
DOWNLOAD: NVIDIA GeForce 411.63 WHQL

NVIDIA RTX 2080 / 2080 Ti Results Appear For Final Fantasy XV

The online results database for the Final Fantasy XV Benchmark has been partially updated to include NVIDIA's RTX 2080 and 2080 Ti. Scores for both the standard and high quality settings at 2560x1440 and 3840x2160 are available, while data for the 1920x1080 and lite quality tests is not.

Taking a look at the RTX 2080 Ti results shows it beating out the GTX 1080 Ti by 26% and 28% in the standard and high quality tests, respectively, at 2560x1440. Increasing the resolution to 3840x2160 again shows the RTX 2080 Ti ahead, this time by 20% and 31%, respectively. The RTX 2080 offers a similar improvement over the GTX 1080 at 2560x1440, where it delivers 28% and 33% better performance in the same standard and high quality tests. Once again, increasing the resolution to 3840x2160 results in performance 33% and 36% better than the GTX 1080. Overall, both graphics cards are shaping up to be around 30% faster than the previous generation without any special features. With Final Fantasy XV getting DLSS support in the near future, the performance of the RTX series will likely improve further compared to the previous generation.
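The percentages above are derived the usual way: relative improvement = new score / old score − 1. A minimal sketch, using hypothetical scores purely for illustration (the actual benchmark database numbers are not reproduced here):

```python
def improvement(new_score: float, old_score: float) -> float:
    """Relative performance improvement of new_score over old_score."""
    return new_score / old_score - 1

# Hypothetical Final Fantasy XV benchmark scores, for illustration only.
rtx_2080_ti = 8700
gtx_1080_ti = 6900
print(f"{improvement(rtx_2080_ti, gtx_1080_ti):.0%}")  # prints 26%
```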

TechPowerUp GPU-Z v2.11.0 Released

TechPowerUp today released the latest version of TechPowerUp GPU-Z, the popular graphics subsystem information and diagnostics utility. Version 2.11.0 introduces support for NVIDIA GeForce RTX 20-series "Turing" graphics cards, including the RTX 2080 Ti, RTX 2080, and RTX 2070. Support is also added for a few exotic OEM variants we discovered over the past months, including the GTX 750 Ti (GM107-A), GTX 1050 Ti Mobile 4 GB, Quadro P1000, Tesla P100 DGXS, and GeForce 9200. From the AMD stable, we add support for "Vega 20," the "Fenghuang" semi-custom SoC for Zhongshan Subor, Ryzen 5 Pro 2500U, 5 Pro 2400G, 3 Pro 2200G, 3 Pro 2300U, 3 2200GE, Athlon 200GE, and Embedded V1807B. Intel UHD 610, UHD P630 (Xeon), and Coffee Lake GT3e (i5-8259U) are now supported.

Among the new features are system RAM usage sensors, temperature monitoring offsets for AMD Ryzen Threadripper 2000 series processors, and the ability to identify USB-C display output, GDDR6 memory standard, and 16 Gbit density memory chips. Several under-the-hood improvements were made, including WDDM-based memory monitoring for AMD GPUs, replacing ADL sensors that tend to be buggy. GPU-Z also cleans up QueryExternal files from your Temp folder. Grab GPU-Z from the link below.
DOWNLOAD: TechPowerUp GPU-Z v2.11.0

The change-log follows.

NVIDIA Segregates Turing GPUs; Factory Overclocking Forbidden on the Cheaper Variant

While working on GPU-Z support for NVIDIA's RTX 20-series graphics cards, we noticed something curious. Each GPU model has not one, but two device IDs assigned to it. A device ID is a unique identifier that tells Windows which specific device is installed, so it can select and load the relevant driver software. It also tells the driver which commands to send to the chip, as these vary between generations. Last but not least, the device ID can be used to enable or lock certain features, for example in the professional space. Two device IDs per GPU is very unusual. For example, all GTX 1080 Ti cards, whether reference or custom design, are marked as 1B06. The Titan Xp, on the other hand, which uses the same physical GPU, is marked as 1B02. NVIDIA has always used just one ID per SKU, no matter whether custom-design, reference, or Founders Edition.

We reached out to industry sources and confirmed that for Turing, NVIDIA is creating two device IDs per GPU, corresponding to two different ASIC codes per GPU model (for example, TU102-300 and TU102-300-A for the RTX 2080 Ti). The -300 variant is designated for cards targeting the MSRP price point, while the -300-A variant is for custom-design, overclocked cards. Both are the same physical chip, separated only by binning and pricing; NVIDIA pre-tests all GPUs and sorts them by properties such as overclocking potential, power efficiency, etc.
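Device IDs like the ones discussed here can be inspected without GPU-Z. A minimal sketch for Linux, which exposes each PCI device's vendor and device ID through sysfs (the path and approach are standard Linux, not something GPU-Z itself uses; NVIDIA's PCI vendor ID is 0x10de):

```python
from pathlib import Path

NVIDIA_VENDOR_ID = "0x10de"  # NVIDIA's PCI vendor ID

def nvidia_device_ids(pci_root: str = "/sys/bus/pci/devices") -> list:
    """Return the PCI device IDs of all NVIDIA devices found under pci_root."""
    ids = []
    for dev in sorted(Path(pci_root).glob("*")):
        vendor_file = dev / "vendor"
        if vendor_file.is_file() and vendor_file.read_text().strip() == NVIDIA_VENDOR_ID:
            ids.append((dev / "device").read_text().strip())
    return ids

if __name__ == "__main__":
    # On a system with a GTX 1080 Ti this would print "0x1b06", for example.
    for dev_id in nvidia_device_ids():
        print(dev_id)
```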

NVIDIA's 20-series Could be Segregated via Lack of RTX Capabilities in Lower-tier Cards

NVIDIA's Turing-based RTX 20-series graphics cards have been announced to begin shipping on the 20th of September. Their most compelling selling point is the leap in ray-tracing performance, enabled by hardware-based acceleration via the RT cores that have been added to NVIDIA's core design. NVIDIA has been pretty bullish as to how this development reinvents graphics as we know it, and is quick to point out the benefits of this approach over other, shader-based approximations of real, physics-based lighting. In a Q&A at the Citi 2018 Global Technology Conference, NVIDIA's Colette Kress expounded on the new architecture's strengths - but also touched upon a possible segmentation of graphics cards by raytracing capabilities.

During that Q&A, Kress put Turing's performance at a cool 2x improvement over the 10-series graphics cards, discounting any raytracing performance uplift - and when raytracing is brought into consideration, she said performance has increased by up to 6x compared to NVIDIA's last generation. There's some interesting wording when it comes to NVIDIA's 20-series lineup, though; as Kress puts it, "We'll start with the ray-tracing cards. We have the 2080 Ti, the 2080 and the 2070 overall coming to market," which, in context, seems to point towards a lack of raytracing hardware in lower-tier graphics cards (apparently, those based on the potential TU106 silicon and lower-level variants).

NVIDIA TU106 Chip Support Added to HWiNFO, Could Power GeForce RTX 2060

We are all still awaiting how NVIDIA's RTX 2000 series of GPUs will fare in independent reviews, but that has not stopped the rumor mill from extrapolating. There have been alleged leaks of the RTX 2080 Ti's performance, and now we see HWiNFO add support for an unannounced NVIDIA Turing microarchitecture chip, the TU106. As a reminder, the currently announced members of the RTX series are based on the TU102 (RTX 2080 Ti) and TU104 (RTX 2080, RTX 2070). Based on NVIDIA's history, it is logical to expect a smaller die for upcoming RTX cards, and we may well see an RTX 2060 using the TU106 chip.

This addition to HWiNFO is to be taken with a grain of salt, however, as they have been wrong before. Even recently, they added support for what, at the time, was speculated to be an NVIDIA Volta microarchitecture chip, which we now know as Turing. This has not stopped others from speculating further, however, as we see 3DCenter.org give their best estimates on how TU106 may fare in terms of die size, shader and TMU count, and more. Given that TSMC's 7 nm node will likely be preoccupied with Apple iPhone production through the end of this year, NVIDIA may well be using the same 12 nm FinFET process on which TU102 and TU104 are manufactured. This mainstream GPU segment is NVIDIA's bread-and-butter for gross revenue, so we may see an announcement, and possibly even retail availability, towards the end of Q4 2018 to target holiday shoppers.

GIGABYTE Unveils GeForce RTX 20-series Graphics Cards

GIGABYTE, the world's leading premium gaming hardware manufacturer, and NVIDIA, the leading GPU company, released the latest GeForce RTX 20 series graphics cards, powered by the NVIDIA Turing architecture. GIGABYTE first launched five graphics cards - the GeForce RTX 2080 Ti GAMING OC 11G, GeForce RTX 2080 Ti WINDFORCE OC 11G, GeForce RTX 2080 GAMING OC 8G, GeForce RTX 2080 WINDFORCE OC 8G, and GeForce RTX 2070 GAMING OC 8G. The five graphics cards feature the GIGABYTE WINDFORCE 3X cooling system with alternate-spinning fans, RGB Fusion, a protective metal backplate, GIGABYTE-certified ultra-durable materials, and one-click overclocking, so that all gamers can enjoy the ultimate gaming experience with extreme performance.

The GIGABYTE WINDFORCE 3X cooling system takes care of every component on the graphics card, and is equipped with three unique blade fans, high-efficiency pure copper composite heat-pipes that directly touch the GPU, and a semi-passive fan function. These cooling technologies keep the graphics card at low temperatures at all times, resulting in higher and more stable performance. The middle fan spins in reverse to optimize airflow for heat dissipation, enabling more efficient performance at a lower temperature.

Introducing the EVGA GeForce RTX 20-Series Graphics Cards

The EVGA GeForce RTX 20-Series Graphics Cards are powered by the all-new NVIDIA Turing architecture to give you incredible new levels of gaming realism, speed, power efficiency, and immersion. With the EVGA GeForce RTX 20-Series gaming cards you get the best gaming experience with next generation graphics performance, ice cold cooling with EVGA iCX2, and advanced overclocking features with the all new EVGA Precision X1 software.

The new NVIDIA GeForce RTX GPUs have reinvented graphics and set a new bar for performance. Powered by the new NVIDIA Turing GPU architecture and the revolutionary NVIDIA RTX platform, the new graphics cards bring together real-time ray tracing, artificial intelligence, and programmable shading. This is not only a whole new way to experience games - this is the ultimate PC gaming experience.

The new GPUs were unveiled at a special NVIDIA two-day event called the "GeForce Gaming Celebration" which kicked off tonight at the Palladium in Cologne, Germany ahead of Gamescom 2018.

ZOTAC Announces its GeForce RTX 20-series

ZOTAC Technology, a global manufacturer of innovation, is pleased to change the playing field of graphics cards once more with ZOTAC GAMING GeForce RTX 20-series graphics cards. The new ZOTAC GAMING GeForce RTX 20-series will be available in twin fan and triple fan AMP models with all-new designs.


MSI Unveils its GeForce RTX Series

As the leading brand in True Gaming hardware, MSI has sold over 8 million graphics cards in the last year alone. Today we are extremely proud to share with you our take on NVIDIA's exciting new GeForce RTX 20 series GPUs.


NVIDIA Turing has 18.9 Billion Transistors

NVIDIA revealed that "Turing," the chip powering its RTX 2080 series, has up to 18.9 billion transistors, making it the second-biggest chip ever made (after the NVIDIA V100). The Turing chip combines three key components: SM (CUDA) cores, RT cores, and Tensor cores. The CUDA cores offer 14 TFLOPS of compute power; the tensor cores (4x4x4 matrix multiplication) reach 110 TFLOPS FP16; and the RT cores process 10 giga-rays per second (10x over the predecessor).
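The 14 TFLOPS figure is consistent with the usual back-of-the-envelope calculation: shader count × clock × 2 FLOPs per clock (one fused multiply-add). The core count and boost clock below are our assumptions for the top Turing part, not figures from NVIDIA's disclosure:

```python
# Back-of-the-envelope check of the quoted 14 TFLOPS FP32 figure.
cuda_cores = 4352             # assumed shader count of the top Turing part
boost_clock_hz = 1.6e9        # assumed boost clock (~1.6 GHz)
flops_per_core_per_clock = 2  # one fused multiply-add = 2 FLOPs

tflops = cuda_cores * boost_clock_hz * flops_per_core_per_clock / 1e12
print(f"{tflops:.1f} TFLOPS FP32")  # prints 13.9 TFLOPS FP32
```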

NVIDIA GeForce RTX 2080 Ti Reference Design Teased

Here's the first teaser picture of NVIDIA's upcoming super high-end GeForce RTX 2080 Ti graphics card, with its most prominent feature being a dual-fan reference-design cooler. Given that blank PCB pictures of the RTX 2080 reference board show two fan headers, it's possible that NVIDIA could make this dual-fan cooler common to both the RTX 2080 Ti and the RTX 2080+ (a premium 3,072-SP version of the RTX 2080, which could launch in September).

NVIDIA GeForce RTX 2000 Series Specifications Pieced Together

Later today (20th August), NVIDIA will formally unveil its GeForce RTX 2000 series consumer graphics cards. This marks a major change in the brand name, triggered by the introduction of the new RT cores, specialized components that accelerate real-time ray-tracing, a task too taxing for conventional CUDA cores. Ray-tracing and DNN acceleration require SIMD components that crunch 4x4x4 matrix multiplication, which is what RT cores (and tensor cores) specialize in. The chips still have CUDA cores for everything else. This generation also debuts the new GDDR6 memory standard, although unlike GeForce "Pascal," the new GeForce "Turing" won't see a doubling in memory sizes.
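A "4x4x4" operation of the kind mentioned above is a fused D = A × B + C on 4×4 matrices: each of the 16 output elements is a 4-term dot product plus an accumulator, i.e. 64 multiply-adds per operation. A plain-Python illustration of the arithmetic (not of how the hardware actually schedules it):

```python
def mma_4x4x4(A, B, C):
    """Compute D = A @ B + C for 4x4 matrices: 64 multiply-adds total."""
    return [
        [sum(A[i][k] * B[k][j] for k in range(4)) + C[i][j] for j in range(4)]
        for i in range(4)
    ]

I = [[1 if i == j else 0 for j in range(4)] for i in range(4)]  # identity
Z = [[0] * 4 for _ in range(4)]                                 # zeros
A = [[i + j for j in range(4)] for i in range(4)]

assert mma_4x4x4(I, A, Z) == A  # I @ A + 0 == A
```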

NVIDIA is expected to debut the generation with the new GeForce RTX 2080 later today, with market availability by the end of the month. Going by older rumors, the company could launch the lower RTX 2070 and higher RTX 2080+ by late September, and the mid-range RTX 2060 series in October. Apparently the high-end RTX 2080 Ti could come out sooner than expected, given that VideoCardz already has some of its specifications in hand. Not a lot is known about how "Turing" compares with "Volta" in performance, but given that the TITAN V comes with tensor cores that could [in theory] be re-purposed as RT cores, it could continue on as NVIDIA's halo SKU for the client segment.

NVIDIA Announces Financial Results for Second Quarter Fiscal 2019

NVIDIA today reported revenue for the second quarter ended July 29, 2018, of $3.12 billion, up 40 percent from $2.23 billion a year earlier, and down 3 percent from $3.21 billion in the previous quarter.

GAAP earnings per diluted share for the quarter were $1.76, up 91 percent from $0.92 a year ago and down 11 percent from $1.98 in the previous quarter. Non-GAAP earnings per diluted share were $1.94, up 92 percent from $1.01 a year earlier and down 5 percent from $2.05 in the previous quarter.
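The growth percentages in the two paragraphs above follow directly from the dollar figures:

```python
# Reproducing the year-over-year and quarter-over-quarter percentages
# quoted in the article from the reported dollar figures.
def pct_change(new, old):
    return (new - old) / old * 100

print(round(pct_change(3.12, 2.23)))  # revenue YoY: prints 40
print(round(pct_change(3.12, 3.21)))  # revenue QoQ: prints -3
print(round(pct_change(1.76, 0.92)))  # GAAP EPS YoY: prints 91
print(round(pct_change(1.94, 1.01)))  # non-GAAP EPS YoY: prints 92
```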

"Growth across every platform - AI, Gaming, Professional Visualization, self-driving cars - drove another great quarter," said Jensen Huang, founder and CEO of NVIDIA. "Fueling our growth is the widening gap between demand for computing across every industry and the limits reached by traditional computing. Developers are jumping on the GPU-accelerated computing model that we pioneered for the boost they need.

NVIDIA Does a TrueAudio: RT Cores Also Compute Sound Ray-tracing

Positional audio, like Socialism, follows a cycle of glamorization and investment every few years. Back in 2011-12, when AMD held a relatively stronger position in the discrete GPU market and GPGPU superiority, it gave a lot of money to GenAudio and Tensilica to co-develop the TrueAudio technology, a GPU-accelerated positional audio DSP, which had a whopping four game-title implementations, including and limited to "Thief," "Star Citizen," "Lichdom: Battlemage," and "Murdered: Soul Suspect." The TrueAudio Next DSP, which debuted with "Polaris," introduced GPU-accelerated "audio ray-casting" technology, which assumes that audio waves interact differently with different surfaces, much like light, and hence positional audio could be made more realistic. There were a grand total of zero takers for TrueAudio Next. Riding on the presumed success of its RTX technology, NVIDIA wants to develop audio ray-tracing further.

A very curious sentence caught our eye in NVIDIA's micro-site for Turing. The description of RT cores reads that they are specialized components that "accelerate the computation of how light and sound travel in 3D environments at up to 10 Giga Rays per second." This is a telling sign that NVIDIA is developing a full-blown positional audio programming model as part of RTX, with an implementation through GameWorks. Such a technology, like TrueAudio Next, could improve positional audio realism by treating sound waves like light and tracing their paths from their origin (think speech from an NPC in a game) to the listener as the sound bounces off the various surfaces in the 3D scene. Real-time ray-tracing(-ish) has so captured the imagination of NVIDIA marketing that the company is allegedly willing to replace "GTX" with "RTX" in its GeForce GPU nomenclature. We don't mean to doomsay emerging technology, but 20 years of development in positional audio has shown that it's better left to game developers to create their own technology that sounds somewhat real, and that initiatives from makers of discrete sound cards (a device on the brink of extinction) and GPU makers bore no fruit.

NVIDIA Posts Cryptic #BeForTheGame Video Pointing at 20th August

When NVIDIA debuted its "Turing" GPU architecture through its recent Quadro RTX series, PC enthusiasts felt left high and dry. The occasion was SIGGRAPH, the biggest annual expo for digital content creators, so a Quadro unveiling felt fitting. Come 21st August, Gamescom will be almost upon us, and NVIDIA is planning its own event in host city Cologne a day earlier. The theme of the event is "Be For The Game."

NVIDIA posted the mother of all teasers pointing to the August 20 event. It doesn't mention a new product launch, but there are enough hints, such as the back-plate reminiscent of TITAN V, combined with glossy green and black surfaces that look similar to the Quadro RTX reference boards. The video winks at both gamers and PC enthusiasts, with the first half depicting a sick build being put together. We can't wait!
The video follows.

Samsung 16Gb GDDR6 Memory Powers Latest NVIDIA Quadro Professional Graphics Solution

Samsung Electronics Co., Ltd., a world leader in advanced semiconductor technology, today announced that its 16-gigabit (Gb) Graphics Double Data Rate 6 (GDDR6) memory is being used in NVIDIA's new Turing architecture-based Quadro RTX GPUs.

Thanks to Samsung's industry-leading 16Gb GDDR6 memory, end users can expect improved performance and energy efficiency in the widest array of graphics-intensive applications, including computer-aided design (CAD), digital content creation (DCC) and scientific visualization applications. Samsung's 16Gb GDDR6 can also be used in rapidly growing fields such as 8K Ultra HD video processing, virtual reality (VR), augmented reality (AR) and artificial intelligence (AI).

NVIDIA Announces Turing-based Quadro RTX 8000, Quadro RTX 6000 and Quadro RTX 5000

NVIDIA today reinvented computer graphics with the launch of the NVIDIA Turing GPU architecture. The greatest leap since the invention of the CUDA GPU in 2006, Turing features new RT Cores to accelerate ray tracing and new Tensor Cores for AI inferencing which, together for the first time, make real-time ray tracing possible.

These two engines - along with more powerful compute for simulation and enhanced rasterization - usher in a new generation of hybrid rendering to address the $250 billion visual effects industry. Hybrid rendering enables cinematic-quality interactive experiences, amazing new effects powered by neural networks and fluid interactivity on highly complex models.

NVIDIA's Next Gen GPU Launch Held Back to Drain Excess, Costly Built-up Inventory?

We've previously touched upon whether or not NVIDIA should launch their 1100 or 2000 series of graphics cards ahead of any new product from AMD. At the time, I wrote that I only saw benefits to that approach: earlier time to market -> satisfaction of upgrade itches and entrenchment as the only latest-gen manufacturer -> ability to raise prices absent competition -> ability to respond by lowering prices after amassing a war-chest of profits. However, reports of a costly NVIDIA mistake in overestimating demand for its Pascal GPUs do lend some other shades to the whole equation.

Write-offs in inventory are costly (just ask Microsoft), and apparently, NVIDIA has found itself having miscalculated: overestimating gamers' and miners' demand for its graphics cards. When it comes to gamers, NVIDIA's Pascal graphics cards have been available in the market for two years now - it's relatively safe to say that the majority of gamers who needed higher-performance graphics cards have already taken the plunge. As for miners, the cryptocurrency market contraction (and other factors) has led to a taper-off in graphics card demand for this particular workload. The result? According to Seeking Alpha, NVIDIA's demand overestimation has led a "top three" Taiwan OEM to return 300,000 GPUs to NVIDIA, while the company has "aggressively" increased its GDDR5 buying orders, suggesting an excess stock of GPUs that need to be made into boards.

NVIDIA's Next-Gen Graphics Cards to Launch in Q3 2018, Breadcrumb Trail Indicates

We the media and you enthusiasts get a jolt every time a high-profile launch is announced - or even hinted at. And few product launches are as enthusing as those of new, refined graphics card architectures - the possibilities for extra performance, bang-for-buck improvements, mid-tier performance that belonged in last generation's halo products - it's all a mix of merriment and expectation, even if it sometimes tastes a little sour.

Adding to the previous breadcrumbs neatly laid out regarding NVIDIA's Hot Chips presentation on a new "Next Generation mainstream GPU," yet another piece of bread that would make Gretel proud comes from Power Logic, a fan supplier for numerous AIB partners (company representative holding an EVGA graphics card below), who recently said they expected "Q3 orders to be through the roof." Such an increase in demand usually means AIB partners are stocking up on materials to build substantial stock for new product launches, and it falls in line with NVIDIA's Hot Chips presentation in August. Q3 starts in July, though, and while the supply-chain timings are unknown, it seems somewhat tight for a July product launch coinciding with the increased fan orders.

NVIDIA Briefs AIC Partners About Next-gen GeForce Series

NVIDIA has reportedly briefed its add-in card (AIC) partners about its upcoming GeForce product family, codenamed "Turing," and bearing a commercial nomenclature of either GeForce 11-series or GeForce 20-series. This sets in motion a 2-3 month long process of rolling out new graphics cards by board partners, beginning with reference-design "Founders Edition" SKUs, followed by custom-design SKUs. Sources tell Tom's Hardware Germany that AIC partners have begun training their product development teams. NVIDIA has also released a BoM (bill of materials) to its partners so that, aside from the ASIC itself, they can begin sourcing other components for their custom-design products (such as coolers, memory chips, VRM components, connectors, etc.).

The BoM also specifies a tentative timeline for each of the main stages of product development, leading up to mass-production. It stipulates 11-12 weeks (2-3 months) from start to mass-production and shipping, which could put the product launch some time in August (assuming the BoM was released some time in May-June). A separate table also provides a fascinating insight into the various stages of development of a custom-design NVIDIA graphics card.

NVIDIA GTX 1080-successor By Late-July

NVIDIA is reportedly putting the finishing touches on its first serious GeForce-branded GPU based on a next-generation NVIDIA architecture (nobody knows which), for a late-July product announcement. This involves a limited reference-design "Founders Edition" product launch in July, followed by custom-design graphics card launches in August and September. This chip could be the second-largest client-segment implementation of said architecture, succeeding the GP104 that powers the GTX 1080 and GTX 1070.

It's growing increasingly clear that the first product could be codenamed "Turing" after all, and that "Turing" may not be the codename of an architecture or a silicon, but rather an SKU (likely either named GTX 1180 or GTX 2080). As with all previous NVIDIA product-stack roll-outs since the GTX 680, NVIDIA will position the GTX 1080-successor as a high-end product initially, as it will be faster than the GTX 1080 Ti, but the product will later play second-fiddle to a GTX 1080 Ti-successor based on a bigger chip.