News Posts matching "Volta"


Microsoft Releases DirectX Raytracing - NVIDIA Volta-based RTX Adds Real-Time Capability

Microsoft today announced DirectX Raytracing (DXR), an extension to its DirectX 12 API that provides components designed to make real-time ray-tracing easier to implement, and that uses compute shaders under the hood for broad graphics card compatibility. NVIDIA believes its "Volta" graphics architecture has enough computational power on tap to make real-time ray-tracing available to the masses. The company has therefore collaborated with Microsoft to develop NVIDIA RTX technology as an interoperable part of the DXR API, along with a few turnkey effects that will ship through the company's next-generation GameWorks SDK under GameWorks Ray Tracing, including a ray-tracing denoiser module for the API.

Real-time ray-tracing has long been regarded as a silver bullet for getting lifelike lighting, reflections, and shadows right. Ray-tracing is already big in the real-estate industry for showcasing photorealistic, interactive renderings of property under development, but it has stayed out of gaming, which tends to be far more demanding, with larger scenes, more objects, and rapid camera movements. Movies with big production budgets have used ray-traced visual effects for years, rendering each frame offline on render farms; since the content isn't interactive, studios can afford to spend vast amounts of time and money painstakingly rendering each frame with hundreds of rays per pixel.
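As a rough illustration of why denoising and hardware acceleration matter here, consider a back-of-envelope ray budget; the resolution, frame rate, and per-pixel ray counts below are illustrative assumptions, not figures from Microsoft or NVIDIA:

```python
# Back-of-envelope estimate of the gap between offline film rendering and a
# real-time ray budget. Resolution, frame rate, and per-pixel ray counts are
# illustrative assumptions, not figures from Microsoft or NVIDIA.

PIXELS_1080P = 1920 * 1080            # pixels per frame
FPS_REALTIME = 60                     # assumed real-time target
RAYS_PER_PIXEL_FILM = 400             # "hundreds of rays per pixel" (assumed midpoint)
RAYS_PER_PIXEL_REALTIME = 2           # what a denoised real-time renderer might afford

film_rays_per_frame = PIXELS_1080P * RAYS_PER_PIXEL_FILM
realtime_rays_per_second = PIXELS_1080P * RAYS_PER_PIXEL_REALTIME * FPS_REALTIME

print(f"Film-style frame:  {film_rays_per_frame / 1e6:.0f} million rays per frame")
print(f"Real-time budget:  {realtime_rays_per_second / 1e6:.0f} million rays per second")
print(f"Per-pixel gap:     {RAYS_PER_PIXEL_FILM // RAYS_PER_PIXEL_REALTIME}x fewer rays, "
      f"left for the denoiser to make up")
```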

Report: NVIDIA Not Unveiling 2018 Graphics Card Lineup at GDC, GTC After All

Tom's Hardware, citing industry sources, reports that NVIDIA isn't looking to expand its graphics card lineup at this year's GDC (Game Developers Conference) or GTC (GPU Technology Conference). Even as reports have pointed towards NVIDIA announcing (if not launching) its two new product architectures as early as next month, it now seems that won't be the case after all. As a reminder, the architectures in question are Turing, reportedly for crypto-mining applications, and Ampere, the expected GeForce architecture leapfrogging the current top of the line - and absent from regular consumer shores - Volta.

There's really not much that can be gleaned from industry sources as of now, though. It's clear no one has received any kind of information from NVIDIA on either of the expected architectures, which means an impending announcement isn't likely. At the same time, NVIDIA has little incentive to pull the trigger on new products - demand is fine, and competition from AMD is low. As such, reports of a June or later announcement/release gain credibility, as do reports that NVIDIA could hold back a consumer version of Ampere, use it to replace Volta in the professional and server segments, and instead launch Volta - finally - in the consumer segment. This would allow the company to cash in on its Volta architecture, this time on consumer products, for a full generation longer, while still appearing to move the market forward. All scenarios are open right now; but one thing that seems clear is that there will be no announcements next month.

NVIDIA to Unveil "Ampere" Based GeForce Product Next Month

NVIDIA is preparing to make its annual tech expo, the 2018 GPU Technology Conference (GTC), action-packed. The company already surprised us with its next-generation "Volta" architecture based TITAN V graphics card priced at three grand, and is working to cash in on the crypto-currency wave and ease pressure on consumer graphics card inventories by designing highly optimized mining accelerators under the new Turing brand. There's now talk that NVIDIA could vault right over a consumer launch of the "Volta" architecture by unveiling a GeForce graphics card based on its succeeding architecture, "Ampere."

The oldest reports of NVIDIA unveiling "Ampere" date back to November 2017. At the time, it was expected that NVIDIA would only share some PR blurbs on a few of the key features it brings to the table, or at best unveil a specialized (non-gaming) chip, such as a Drive or machine-learning processor. An Expreview report points to the possibility of a GeForce product - one that you can buy in your friendly neighborhood PC store and play games with. The "Ampere" based GPU will still be built on TSMC's 12 nanometer silicon fabrication process, and is unlikely to be a big halo chip with exotic HBM stacks. Why NVIDIA chose to leapfrog Volta is uncertain. GTC gets underway in late March.

EK Unveils NVIDIA TITAN V Full-coverage Water-block

EK Water Blocks, the Slovenia-based premium computer liquid cooling gear manufacturer, is releasing a water block for the most powerful PC GPU on the market to date, the NVIDIA Titan V. The EK-FC Titan V full-cover GPU water block will help you enjoy the full computing power of the Volta architecture based NVIDIA Titan V in a silent environment.

This water block directly cools the GPU, HBM2 memory, and VRM (voltage regulation module) as well! Water is channeled directly over these critical areas, allowing the graphics card and its VRM to remain stable under high overclocks and to reach full boost clocks. The EK-FC Titan V water block features a central-inlet split-flow cooling engine design for the best possible cooling performance, which also works flawlessly with reversed water flow without adversely affecting cooling performance. Moreover, the design offers great hydraulic performance, allowing this product to be used in liquid cooling systems with weaker water pumps.

NVIDIA Turing is a Crypto-mining Chip Jen-Hsun Huang Made to Save PC Gaming

When Reuters reported Turing as NVIDIA's next gaming graphics card, we knew something was off: such a product would break many of NVIDIA's naming conventions. It now turns out that Turing, named after British scientist Alan Turing, who is credited with leading the team of mathematicians that broke the Nazi "Enigma" cipher, is a crypto-mining and blockchain compute accelerator. It is being designed to be compact, efficient, and ready for large-scale deployment by amateur miners and crypto-mining firms alike, on a quasi-industrial scale.

NVIDIA Turing could be manufactured at a low enough cost relative to GeForce-branded products, and in high enough volumes, to help bring down graphics card prices and save the PC gaming ecosystem. It could have an ASIC-like disruptive impact on the graphics card market, making mining with graphics cards less viable and, in turn, lowering graphics card prices. With performance-segment and high-end graphics cards seeing 200-400% price inflation in the wake of the crypto-currency mining wave, PC gaming is threatened as gamers are lured to the still-affordable new-generation console ecosystems, led by premium consoles such as the PlayStation 4 Pro and Xbox One X. There's no word on which GPU architecture Turing will be based on ("Pascal" or "Volta"). NVIDIA is expected to launch its entire family of next-generation GeForce GTX 2000-series "Volta" graphics cards in 2018.

Lesson from the Crypto/DRAM Plagues: Build Future-Proof

As someone who does not mine crypto-currency but loves fast computers and gaming on them, I find the current craze of mining crypto-currency with graphics cards nothing short of a plague. It's as if war broke out and your government took away all the things you love from the market. All difficult times teach valuable lessons, and in this case the lesson is: "Save up and build future-proof."

When NVIDIA launched its "Pascal" GPU architecture way back in the summer of 2016 and AMD followed up, I, as a user of two GeForce GTX 970 cards in SLI, did not feel the need to upgrade anything. I planned to skip the Pascal/Polaris/Vega generation and only upgrade when "Volta" or "Navi" offered something interesting. My pair of GTX 970 cards is backed by a Core i7-4770K processor and 16 GB of dual-channel DDR3-1866 memory, both of which were considered high-end when I bought them around 2014-15.

Throughout 2016, my GTX 970 pair ate AAA titles for breakfast. With NVIDIA investing in advancing SLI through the new SLI-HB bridge, and DirectX 12 promising a mixed multi-GPU utopia, I had pictured a rather rosy future for my cards (at least to the point where NVIDIA would keep adding SLI profiles for newer games for my cards to chew through). What I didn't see coming was the inflection point between the decline of multi-GPU and the crypto-plague eating away at the availability of high-end graphics cards at sane prices. That is where we are today.

NVIDIA Quadro GV100 Surfaces in Latest NVFlash Binary

NVIDIA could be putting the final touches on its Quadro GV100 "Volta" professional graphics card, after the surprise late-2017 launch of the NVIDIA TITAN V. The card was found listed in the binary of the latest version of NVFlash (v5.427.0), the most popular NVIDIA graphics card BIOS extraction and flashing utility. Since the feature-set upgrade given to the TITAN Xp through newer drivers, NVIDIA has given the TITAN family of graphics cards a quasi-professional differentiation from its GeForce GTX family.

The Quadro family still has the most professional features and software certifications, and is sought after by big companies in graphics design, media, animation, architecture, resource exploration, and similar fields. The Quadro GV100 could hence be even more feature-rich than the TITAN V. With its GV100 silicon, NVIDIA is using a common ASIC and board design for its Tesla V100 PCIe add-in card variants, the TITAN V, and the Quadro GV100. While the company endowed the TITAN V with 12 GB of HBM2 memory using 3 of the 4 memory stacks the ASIC is capable of holding, there's an opportunity for NVIDIA to differentiate the Quadro GV100 by giving it that 4th memory stack, for 16 GB of total memory. You can download the latest version of NVFlash here.

NVIDIA's Latest Titan V GPU Benchmarked, Shows Impressive Performance

NVIDIA pulled a rabbit out of its proverbial hat late last week with the surprise announcement of the gaming-worthy, Volta-based Titan V graphics card. The Titan V is another in a flurry of Titan cards from NVIDIA as of late, and while the sanity of NVIDIA's nomenclature scheme can be questioned, the Titan V's performance really can't.

In the Unigine Superposition benchmark, the $3,000 Titan V delivered 5,222 points in the 8K Optimized preset and 9,431 points in the 1080p Extreme preset. Compare that to an extremely overclocked GTX 1080 Ti running at 2,581 MHz under liquid nitrogen, which hit 8,642 points in the 1080p Extreme preset, and the raw power of NVIDIA's Volta hardware is plain to see. The Titan V also averaged 126 FPS in the Unigine Heaven benchmark at 1440p. Under gaming workloads, the Titan V is reported to achieve between 26% and 87% improvements in raw performance, which isn't too shabby, now is it?
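For context, a quick calculation on the quoted Superposition scores (using only the numbers reported above) shows how large the Titan V's lead is over even an exotically cooled 1080 Ti:

```python
# Comparison of the Unigine Superposition 1080p Extreme scores quoted above.
titan_v_score = 9431          # Titan V, reportedly at stock settings
gtx_1080_ti_ln2_score = 8642  # GTX 1080 Ti at 2,581 MHz under liquid nitrogen

lead_pct = (titan_v_score / gtx_1080_ti_ln2_score - 1) * 100
print(f"Titan V leads the LN2-overclocked GTX 1080 Ti by ~{lead_pct:.1f}%")  # ~9.1%
```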

NVIDIA TITAN V Lacks SLI or NVLink Support

Earlier today, we brought you a story about the NVIDIA TITAN V setting you back by up to $7,196 for two cards and two $600 NVLink cables. We have since got word from NVIDIA that the card neither features NVLink nor supports SLI, and have edited that story accordingly. The NVLink fingers on the TITAN V card are vestiges of the functional NVLink interface found on the Tesla V100 PCIe, as the TITAN V, Tesla V100, and a future Quadro GV100 being developed by NVIDIA share a common PCB. The NVLink fingers on the TITAN V are concealed by the base-plate of the cooler on one side and the card's back-plate on the other, so the female connectors of NVLink bridge cables can't be plugged in.

With the lack of SLI support on what is possibly its fastest graphics card based on the "Volta" architecture, NVIDIA seems to have accepted the market trend that multi-GPU is dying, or dead. That said, it would be interesting to see whether professional overclockers chasing benchmark leaderboard glory pick up the TITAN V, as opposed to two TITAN Xp cards in SLI or four Radeon RX Vega 64 cards in 4-way CrossFireX.

NVIDIA Announces TITAN V "Volta" Graphics Card

NVIDIA, in a shock move, announced its new flagship graphics card, the TITAN V. The card implements the "Volta" GV100 graphics processor, the same chip that drives the company's Tesla V100 HPC accelerator. The GV100 is a multi-chip module, with the GPU die and three HBM2 memory stacks sharing a package. The card features 12 GB of HBM2 memory across a 3072-bit wide memory interface. The GPU die is built on TSMC's 12 nm FinFET+ process. The TITAN V maxes out the GV100 silicon, if not its memory interface, featuring a whopping 5,120 CUDA cores and 640 Tensor cores (specialized units that accelerate neural-net building/training). The CUDA cores are spread across 80 streaming multiprocessors (64 CUDA cores per SM), organized into 6 graphics processing clusters (GPCs). The TMU count is 320.

The GPU core is clocked at 1200 MHz, with a GPU Boost frequency of 1455 MHz, and an HBM2 memory clock of 850 MHz (1.70 Gbps), translating into 652.8 GB/s of memory bandwidth. The card draws power from a combination of 6-pin and 8-pin PCIe power connectors. Display outputs include three DisplayPort and one HDMI connectors. With a wallet-scorching price of USD $2,999, available exclusively through the NVIDIA store, the TITAN V is evidence that with Intel deciding to sell client-segment processors for $2,000, it was only a matter of time before GPU makers sought out that price band. At $3k, the GV100's margins are probably more than made up for.
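As a sanity check, the quoted bandwidth and shader count follow directly from the listed specifications; a minimal sketch of the arithmetic:

```python
# Reconstructing the TITAN V's quoted figures from its specifications.
bus_width_bits = 3072                        # three active 1024-bit HBM2 stacks
hbm2_clock_mhz = 850                         # quoted memory clock
data_rate_gbps = hbm2_clock_mhz * 2 / 1000   # double data rate -> 1.70 Gbps per pin

bandwidth_gb_s = bus_width_bits * data_rate_gbps / 8
print(f"Memory bandwidth: {bandwidth_gb_s:.1f} GB/s")   # 652.8 GB/s, as quoted

sm_count, cuda_per_sm = 80, 64
print(f"CUDA cores: {sm_count * cuda_per_sm}")          # 5,120, as quoted
```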

"Summit" Supercomputer to Propel US Back to Number 1 in Top 500 by 2018

China has been increasingly - and steadily - gaining relevance in the supercomputing world, with an ever-growing share of the TOP500 entries belonging to that country. In fact, China can boast of having the number one supercomputer in the world, the Sunway TaihuLight, which delivers 93 PetaFLOPS of computing power - roughly 3x the computational power of the second most powerful machine, China's own Tianhe-2. However, supercomputing, and the money earned by selling processing slices of these machines to private or state contractors, is a very attractive pull - especially considering the increasingly expensive computational needs of the modern world.

Summit is to be the United States' claim to fame in that regard, bringing the country back to number one in raw, top-of-the-line single-machine supercomputing power. Summit promises to more than double the performance of China's TaihuLight, to over 200 PetaFLOPS. That amounts to around 11x more processing grunt than its predecessor, Titan, in a much smaller footprint - Titan's 18,688 processing nodes will be condensed to just ~4,600 nodes in Summit, with each node achieving around 40 TeraFLOPS of computing power. The hardware? IBM and NVIDIA, married in water-cooled nodes with the powerful GV100 accelerator that's still eluding us enthusiasts - but that's a question for another day.
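A rough check of those numbers, using only the approximate figures quoted above, shows how the node count and per-node throughput relate to the headline target; the rounded estimate lands a little under 200 PetaFLOPS, consistent with the "around 40 TeraFLOPS" per node being approximate:

```python
# Rough check of the quoted Summit figures.
nodes = 4600                 # "~4,600 nodes"
tflops_per_node = 40         # "around 40 TeraFLOPS" per node

total_pflops = nodes * tflops_per_node / 1000
print(f"Estimated peak: ~{total_pflops:.0f} PetaFLOPS")          # ~184 PFLOPS

titan_nodes = 18688
print(f"Node count vs Titan: ~{titan_nodes / nodes:.1f}x fewer") # ~4.1x fewer nodes
```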

NVIDIA Announces SaturnV AI Supercomputer Powered by "Volta"

NVIDIA at the Supercomputing 2017 conference announced a major upgrade of its SaturnV AI supercomputer, which, when complete, the company claims will not only be one of the world's top-10 AI supercomputers in terms of raw compute power, but also the world's most energy-efficient. SaturnV will be a cluster of 660 NVIDIA DGX-1 nodes. Each node packs eight NVIDIA GV100 GPUs, which takes the machine's total GPU count to a staggering 5,280 (that's GPUs, not CUDA cores). They add up to FP16 performance that scrapes the ExaFLOP (1,000-petaFLOP, or 10^18 FLOP/s) barrier, while its FP64 (double-precision) compute performance nears 40 petaFLOP/s (40,000 TFLOP/s).

SaturnV should beat Summit, a supercomputer being co-developed by NVIDIA and IBM, which in turn should unseat Sunway TaihuLight, currently the world's fastest supercomputer. This feat gains prominence because SaturnV and the NVIDIA+IBM Summit are both machines built by the American private sector, trying to beat a supercomputing leader backed by the mighty Chinese exchequer. The other claim to fame of SaturnV is its energy efficiency. Before its upgrade, SaturnV achieved an energy efficiency of a staggering 15.1 GFLOP/s per Watt, which already made it the fourth "greenest" supercomputer. NVIDIA expects the upgraded SaturnV to take the number-one spot.
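The headline figures can be roughly reconstructed from the node count; the per-GPU throughput numbers in the sketch below are assumptions based on published Tesla V100 ratings (roughly 7.5 TFLOP/s FP64 and 120 TFLOP/s FP16 on the tensor cores), not figures from the announcement:

```python
# Reconstructing the SaturnV totals. Per-GPU figures are assumptions based on
# published Tesla V100 ratings, not numbers from the announcement.
dgx1_nodes = 660
gpus_per_node = 8
total_gpus = dgx1_nodes * gpus_per_node
print(f"Total GPUs: {total_gpus}")                       # 5,280, as quoted

fp64_tflops_per_gpu = 7.5    # assumed V100 FP64 rating
fp16_tflops_per_gpu = 120    # assumed V100 tensor-core FP16 rating
print(f"FP64 total: ~{total_gpus * fp64_tflops_per_gpu / 1000:.0f} PFLOP/s")  # ~40 PFLOP/s
print(f"FP16 total: ~{total_gpus * fp16_tflops_per_gpu / 1000:.0f} PFLOP/s")  # ~634 PFLOP/s
```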

China Pulls Ahead of U.S. in Latest TOP500 List

The fiftieth TOP500 list of the fastest supercomputers in the world has China overtaking the US in the total number of ranked systems by a margin of 202 to 143. It is the largest number of supercomputers China has ever claimed on the TOP500 ranking, with the US presence shrinking to its lowest level since the list's inception 25 years ago.

Just six months ago, the US led with 169 systems, with China coming in at 160. Despite the reversal of fortunes, the 143 systems claimed by the US give it a solid second-place finish, with Japan in third place with 35 systems, followed by Germany with 20, France with 18, and the UK with 15.

NVIDIA "Volta" Architecture Successor Codenamed "Ampere," Expected GTC 2018

NVIDIA has reportedly codenamed the GPU architecture that succeeds its upcoming "Volta" architecture "Ampere," after the 18th-century French physicist André-Marie Ampère, one of the pioneers of electromagnetism, after whom the unit of electric current is named. The new NVIDIA "Ampere" architecture will make its debut at the 2018 GPU Technology Conference (GTC), hosted by NVIDIA. As with the company's recent GPU architecture launches, one can expect an unveiling of the architecture, followed by preliminary technical presentations by NVIDIA engineers, with actual products launching a little later, and consumer-grade GeForce products launching much later.

NVIDIA is yet to launch GeForce products based on its upcoming "Volta" architecture, even as its current "Pascal" architecture turns 18 months old in the consumer graphics space. Should NVIDIA continue with the four-digit model number scheme of its GeForce 10-series "Pascal" family, one can expect products based on "Volta" to make up the GeForce 20 series, and "Ampere" the GeForce 30 series. NVIDIA is yet to disclose the defining features of the "Ampere" architecture. We'll probably have to wait until March 2018 to find out.

25+ Companies Developing Level 5 Robotaxis on NVIDIA CUDA GPUs

NVIDIA today unveiled the world's first artificial intelligence computer designed to drive fully autonomous robotaxis. The new system, codenamed Pegasus, extends the NVIDIA DRIVE PX AI computing platform to handle Level 5 driverless vehicles. NVIDIA DRIVE PX Pegasus delivers over 320 trillion operations per second -- more than 10x the performance of its predecessor, NVIDIA DRIVE PX 2.

NVIDIA DRIVE PX Pegasus will help make possible a new class of vehicles that can operate without a driver -- fully autonomous vehicles without steering wheels, pedals or mirrors, and interiors that feel like a living room or office. They will arrive on demand to safely whisk passengers to their destinations, bringing mobility to everyone, including the elderly and disabled.

Supermicro Releases Supercharged NVIDIA Volta Systems

Super Micro Computer, Inc. (NASDAQ: SMCI), a global leader in enterprise computing, storage, and networking solutions and green computing technology, today announced support for NVIDIA Tesla V100 PCI-E and V100 SXM2 GPUs on its industry-leading portfolio of GPU server platforms.

For maximum acceleration of highly parallel applications like artificial intelligence (AI), deep learning, autonomous vehicle systems, energy, and engineering/science, Supermicro's new 4U system with next-generation NVIDIA NVLink is optimized for overall performance. The SuperServer 4028GR-TXRT supports eight NVIDIA Tesla V100 SXM2 GPU accelerators with maximum GPU-to-GPU bandwidth for demanding HPC clusters and hyper-scale workloads. Incorporating the latest NVIDIA NVLink GPU interconnect technology, with over five times the bandwidth of PCI-E 3.0, this system features an independent GPU and CPU thermal zoning design, which ensures uncompromised performance and stability under the most demanding workloads.
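As a rough check on the bandwidth claim, the sketch below compares a PCI-E 3.0 x16 link against an aggregate of NVLink 2.0 links; the per-link figures are assumptions drawn from the published interface specifications, not from Supermicro's release:

```python
# Rough comparison behind the "over five times the bandwidth of PCI-E 3.0"
# claim. Figures are assumptions from the published interface specifications
# (PCIe 3.0: ~1 GB/s per lane per direction; NVLink 2.0: 25 GB/s per link per
# direction, six links per V100 SXM2), not from the press release.
pcie3_x16_gbs = 16 * 0.985           # ~15.8 GB/s per direction
nvlink_total_gbs = 6 * 25            # 150 GB/s per direction across six links

print(f"PCIe 3.0 x16:         ~{pcie3_x16_gbs:.0f} GB/s per direction")
print(f"NVLink 2.0 (6 links):  {nvlink_total_gbs} GB/s per direction")
print(f"Ratio:                ~{nvlink_total_gbs / pcie3_x16_gbs:.1f}x")  # comfortably over 5x
```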

NVIDIA Announces OptiX 5.0 SDK - AI-Enhanced Ray Tracing

At SIGGRAPH 2017, NVIDIA introduced the latest version of its GPU-accelerated OptiX ray-tracing API. The company has been at the forefront of GPU-powered AI endeavors in a number of areas, including facial animation, anti-aliasing, denoising, and light transport. OptiX 5.0 brings a renewed focus on AI-based denoising.

AI training is still a brute-force exercise with finesse applied at the end: NVIDIA took tens of thousands of image pairs - a rendered image at one sample per pixel, and a companion image of the same render at 4,000 rays per pixel - and used them to train the AI to predict what a denoised image looks like. In theory (and picking up the numbers NVIDIA used for its training), this means users deploying OptiX 5.0 only need to render one sample per pixel of a given image, instead of the 4,000 rays per pixel that would be needed for its final presentation. Based on its learning, the AI then fills in the blanks to finalize the image, saving the need to render all that extra data. NVIDIA quotes a 157x improvement in render time using a DGX Station with OptiX 5.0, compared to the same render on a CPU-based platform (2x E5-2699 v4 @ 2.20 GHz). The OptiX 5.0 release also includes provisions for GPU-accelerated motion blur, which should do away with the need to render a frame multiple times and then apply a blur filter to a collage of the different frames. NVIDIA said OptiX 5.0 will be available in November. Check the press release after the break.
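To isolate just the sampling side of that trade-off (NVIDIA's 157x figure also folds in the GPU-versus-CPU hardware difference, so the two numbers aren't directly comparable), here is an idealized sketch; the denoiser-overhead figure is an assumed placeholder:

```python
# Idealized sampling-work reduction from denoising a 1-spp render instead of
# brute-forcing 4,000 spp. The denoiser overhead is an assumed placeholder.
samples_brute_force = 4000
samples_with_denoiser = 1
denoiser_overhead = 0.05     # assumed: inference cost as a fraction of a 1-spp render

reduction = samples_brute_force / (samples_with_denoiser + denoiser_overhead)
print(f"Idealized sampling-work reduction: ~{reduction:.0f}x")
```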

NVIDIA CEO Gives Away First Tesla V100 Accelerators to Top AI Researchers

NVIDIA's CEO over the weekend held a special event recognizing the efforts of some of the world's foremost AI researchers, and gifted each of them one of the first production Tesla V100 GPU compute accelerators. Based on the company's latest "Volta" GPU architecture, the V100 features special "tensor cores," components that speed up deep-learning neural-net training. This should have a significant impact on AI research, as it cuts down the hours or even days of neural-net training in a typical project.

At the NVAIL (NVIDIA AI Labs) meetup hosted by NVIDIA, CEO Jen-Hsun Huang stressed the importance of supporting AI research. "AI is the most powerful technology force that we have ever known," said Jen-Hsun. "I've seen everything. I've seen the coming and going of the client-server revolution. I've seen the coming and going of the PC revolution. Absolutely nothing compares," he said.

NVIDIA Announces the Tesla V100 PCI-Express HPC Accelerator

NVIDIA formally announced the PCI-Express add-in card version of its flagship Tesla V100 HPC accelerator, based on its next-generation "Volta" GPU architecture. Built around the advanced 12 nm "GV100" silicon, the GPU is a multi-chip module with a silicon substrate and four HBM2 memory stacks. It features a total of 5,120 CUDA cores, 640 Tensor cores (specialized units that accelerate neural-net building), GPU clock speeds of around 1370 MHz, and a 4096-bit wide HBM2 memory interface with 900 GB/s of memory bandwidth. The 815 mm² GPU has a gargantuan transistor count of 21 billion. NVIDIA is taking institutional orders for the V100 PCIe, and the card will be available a little later this year. HPE will offer three HPC rigs with the cards pre-installed.
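The headline numbers hang together arithmetically; a short sketch, where the per-pin HBM2 data rate is an assumption chosen to reproduce the quoted bandwidth:

```python
# Reconstructing the Tesla V100 PCIe figures. The per-pin HBM2 data rate is an
# assumption chosen to land on the quoted 900 GB/s.
cuda_cores = 5120
boost_clock_ghz = 1.37               # "around 1370 MHz"
fp32_tflops = cuda_cores * 2 * boost_clock_ghz / 1000   # 2 FLOPs per core per clock (FMA)
print(f"FP32 throughput: ~{fp32_tflops:.1f} TFLOPS")     # ~14 TFLOPS

bus_width_bits = 4096                # four 1024-bit HBM2 stacks
data_rate_gbps = 1.76                # assumed per-pin data rate
print(f"Memory bandwidth: ~{bus_width_bits * data_rate_gbps / 8:.0f} GB/s")   # ~900 GB/s
```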

NVIDIA's Market Cap to Reach $100 billion Soon; Grew ~$25 billion Since May 2017

NVIDIA has been on a roll lately with its market capitalization and share valuation, which could very well send the company soaring past the $100 billion mark today. Whether or not that happens (and if it does, it will be a historic milestone for the company), NVIDIA's growth of almost $25 billion since May 13th is nothing short of jaw-dropping.

The "sudden" market valuation on NVIDIA comes on the heels of the company's strong graphics execution and increasingly entrenched position in the high performance GPU computing market for machine learning. The company's Volta architecture, which was showcased by Jensen Huang at their GTC keynote on May 10th, boosted confidence in the company significantly. Since then, the company's market cap has increased from the $75 billion dollar it was at shortly after GTC, towards its $96.31 billion rated market cap today. More recently, with the recent rise of the crypto wave craze, NVIDIA's GPUs have been talked about as real alternatives to AMD's previously (and perhaps hurtful for the company) grasp on this kind of workloads.

Could This be the NVIDIA TITAN Volta?

NVIDIA, which unveiled its new "Volta" GPU architecture at its 2017 GPU Technology Conference (GTC), beginning with the HPC-focused Tesla V100, appears closer to launching a consumer graphics variant, the TITAN Volta. A curious-looking graphics card image with "TITAN" markings surfaced on Reddit. One could dismiss the picture as a well-made cooler mod, until you take a peek at the PCB. It appears to lack SLI fingers where you'd expect them to be, and instead has NVLink fingers in the positions found on the PCIe add-in card variant of the Tesla P100 HPC accelerator.

You might think, "alright, it's not a fancy TITAN X Pascal cooler mod, but it could be a P100 with a cooler mod," until you notice the power connectors: the card has two power inputs on top (where they're typically found on NVIDIA's consumer graphics cards), and not on the rear portion of the card (where the P100 has them, and where they're typically found on Tesla and Quadro series products). Whoever pulled this off has done an excellent job either way - either of scoring a potential TITAN Volta sample, or of modding some card to look very plausibly like a TITAN Volta.

NVIDIA Announces Its Volta-based Tesla V100

Today at its GTC keynote, NVIDIA CEO Jensen Huang took the wraps off some of the features of the upcoming V100 accelerator, the Volta-based accelerator for the professional market that will likely pave the way to the company's next-generation 2000-series GeForce graphics cards. If NVIDIA continues with its product carvings and naming scheme for the next-generation Volta architecture, we can expect to see this processor in the company's next-generation GTX 2080 Ti. Running through all the nitty-gritty details (like the new Tensor processing approach) in this piece would be impossible, but there are some things we already know from the presentation.

This chip is a beast of a processor: it packs 21 billion transistors (up from the 15.3 billion found on the P100); it's built on TSMC's 12 nm FF process (evolving from Pascal's 16 nm FF); and it measures a staggering 815 mm² (up from the P100's 610 mm²). This is such a considerable leap in die area that we can only speculate on how yields will be for this monstrous chip, especially considering the novelty of the 12 nm process it's going to leverage. The most interesting details from a gaming perspective are the 5,120 CUDA cores powering the V100, out of a total of 5,376 in the full chip design, which NVIDIA will likely reserve for a Titan Xv. These are divided into 84 Volta Streaming Multiprocessors, each carrying 64 CUDA cores (84 x 64 = 5,376, from which NVIDIA is disabling 4 SMs, most likely for yields, which accounts for the announced 5,120). Even in this cut-down configuration, we're looking at a staggering 42% higher CUDA core count than the P100's. The new V100 will offer up to 15 FP32 TFLOPS, and will still leverage a 16 GB HBM2 implementation delivering up to 900 GB/s of bandwidth (up from the P100's 721 GB/s). No details on clock speeds or TDP as of yet, but we already have enough details to enable a lengthy discussion... wouldn't you agree?
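The SM arithmetic can be laid out explicitly; the P100 core count is included for the comparison, and the result lands at roughly 43%, close to the 42% quoted above:

```python
# The SM arithmetic from the keynote figures.
sm_total, cuda_per_sm = 84, 64
sm_enabled = 80                      # four SMs disabled on the shipping V100

full_chip = sm_total * cuda_per_sm   # 5,376 cores in the full GV100
v100 = sm_enabled * cuda_per_sm      # 5,120 cores enabled
p100 = 3584                          # GP100 CUDA core count, for comparison

print(f"Full GV100: {full_chip} cores; V100 ships with {v100}")
print(f"Increase over P100: ~{(v100 / p100 - 1) * 100:.0f}%")   # ~43%, vs the quoted 42%
```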

NVIDIA's Volta Reportedly Poised for Anticipated, Early Q3 2017 Launch

According to a report from Chinese website MyDrivers, NVIDIA is looking to spruce up its line-up with a much earlier-than-expected Q3 Volta launch. Remember that Volta was expected, according to NVIDIA's own roadmaps, to launch around early 2018. The report indicates that NVIDIA's Volta products - apparently to be marketed as the GeForce 20 series - will see an early launch due to market demand, and NVIDIA's intention to further increase pricing of its products through a new-generation launch.

These are, for now, only rumors (and not the first time they've surfaced, at that), but they paint a pretty interesting picture nonetheless. As with Intel's Coffee Lake series, pushing a product launch earlier than expected has consequences: production, logistics, infrastructure, product roadmaps, and stock of existing previous-generation products must all be taken into account. And with NVIDIA having just introduced its performance champions, the GTX 1080 Ti and Titan Xp graphics cards, all of this seems a trigger pull too early - especially when taking into account the competitive landscape in high-performance graphics, which is akin to a single green-colored banner planted atop the Himalayas. NVIDIA must also not forget that AMD could pull a black swan out of its engineering department with Vega, as it did with its Ryzen series of CPUs.

NVIDIA to Build "Volta" Consumer GPUs on TSMC 12 nm Process

NVIDIA's next-generation "Volta" GPU architecture made its commercial debut in the most unlikely class of products, with the Xavier autonomous-car processor. The actual money-spinners based on the architecture, consumer GPUs, will arrive some time in 2018. The company will be banking on its old faithful fab, TSMC, to build those chips on a new 12 nanometer FinFET node that's currently under development. TSMC's current frontline process is 16 nm FFC, which debuted in mid-2015, with mass production following through in 2016. NVIDIA's "GP104" chip is built on this process.

This could also mean that NVIDIA will slug it out against AMD with its current GeForce GTX 10-series "Pascal" GPUs throughout 2017-18, even as AMD threatens to disrupt NVIDIA's sub-$500 lineup with its Radeon Vega series, scheduled for Q2 2017. NVIDIA's "Volta" architecture could see stacked DRAM technologies such as HBM2 gain more mainstream exposure, although competing memory standards such as GDDR6 aren't too far behind.

On NVIDIA's Tile-Based Rendering

Looking back on NVIDIA's GDC presentation, perhaps one of the most interesting aspects covered was the implementation of tile-based rendering in NVIDIA's post-Maxwell architectures. This is an adaptation of an approach typical of mobile graphics, designed with mobile's need for power efficiency in mind - and if you'll "member," "Maxwell" was NVIDIA's first graphics architecture publicly touted for its "mobile first" design.

This approach essentially divides the screen into tiles and then rasterizes the frame on a per-tile basis. Tiles of 16×16 or 32×32 pixels are typical, but both Maxwell and Pascal can dynamically assess the required tile size for each frame, changing it on the fly according to the complexity of the scene. The aim is to ensure that the data being processed has a much smaller footprint than that of the full frame - small enough that NVIDIA can keep it in a much smaller pool of memory (essentially, the L2 cache), dynamically filling and flushing the cache until the full frame has been rendered. This means the GPU doesn't have to access larger, slower memory pools as much, which primarily reduces the load on the VRAM subsystem (freeing bandwidth for other tasks) while simultaneously accelerating rendering. At the same time, a tile-based approach lends itself well to the nature of GPUs: tiles are easily parallelized, and the GPU can tackle many independent tiles simultaneously, depending on the available resources.
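For illustration only, a minimal sketch of the tile-traversal idea follows; the shade() function and the fixed 32-pixel tile size are placeholders, and real hardware picks tile sizes dynamically and schedules tiles in parallel rather than looping like this:

```python
# Conceptual sketch only: split the frame into tiles and finish all work for
# one tile before moving to the next, so the working set stays tile-sized.
# The shade() function and the fixed 32-pixel tile are placeholders.

WIDTH, HEIGHT, TILE = 1920, 1080, 32

def shade(x, y):
    """Placeholder per-pixel work, standing in for rasterization + shading."""
    return (x ^ y) & 0xFF

def render_tiled(width, height, tile):
    framebuffer = [[0] * width for _ in range(height)]
    for ty in range(0, height, tile):            # walk the frame tile by tile
        for tx in range(0, width, tile):
            # finish every pixel of this tile before touching the next one,
            # so the data in flight fits in a small on-chip cache
            for y in range(ty, min(ty + tile, height)):
                for x in range(tx, min(tx + tile, width)):
                    framebuffer[y][x] = shade(x, y)
    return framebuffer

fb = render_tiled(WIDTH, HEIGHT, TILE)
tiles = ((WIDTH + TILE - 1) // TILE) * ((HEIGHT + TILE - 1) // TILE)
print(f"Rendered a {WIDTH}x{HEIGHT} frame as {tiles} tiles of up to {TILE}x{TILE} pixels")
```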