News Posts matching "GTC"


NVIDIA Teases "Ultimate Gaming Experience" At GTC Taiwan

NVIDIA has posted a short (literally short) teaser, treating users to the promise of the "Ultimate Gaming Experience". This might mean something big, such as the expected new NVIDIA 11-series of graphics cards... Or it may mean something much less exciting, and have something to do with the 4K, HDR gaming experience that is supposed to reach gamers in a couple of weeks, at an expected cost of more kidneys than the average human has.

Officially, though, GTC 2018 Taiwan will revolve around artificial intelligence tech (what doesn't these days, really?). Translated, the teaser image reads something along the lines of "Utilizing GPU computing to explore the world's infinite possibilities - witness the power of artificial intelligence and the ultimate gaming experience at GTC Taiwan and Computex 2018." Remember, however, that marketing almost always has a way of blowing things out of proportion - don't hold your breath for a new graphics card series announcement.

NVIDIA Adapting RTX Ray-tracing to Vulkan API

NVIDIA has made big moves to bring a semblance of real-time ray-tracing to the masses with its new RTX technology, as part of its efforts to replace rasterized rendering, which has dominated 3D graphics for the past three decades. Microsoft has come out with its own extension to DirectX 12 in the new DXR API. NVIDIA is now reportedly working with the Khronos Group to bring RTX to Vulkan.

A new Vulkan extension titled "VK_NV_raytracing" surfaced in technical documents accessed by Phoronix; it is NVIDIA's contribution to a multi-vendor standard for ray-tracing being developed by the Khronos Group. The extension could expose several NVIDIA RTX features and presets to Vulkan. It also uses code structures similar to DXR's, to minimize duplication of effort and skill-building. NVIDIA will detail its adaptation of RTX to Vulkan further at GTC.

NVIDIA Turing GPU to Start Mass Production in Q3 2018

Despite not being backed by an official statement, NVIDIA's next-generation crypto-mining Turing graphics cards are expected to be revealed at GTC 2018, which runs from March 26 to 29. According to DigiTimes's latest report, NVIDIA expects a drop in demand for graphics cards later this year. In an effort to prolong the lifecycle of its current graphics cards, mass production of Turing won't start until the third quarter of 2018. Industry sources have also revealed that NVIDIA had a sit-down with AIB partners to address the current situation. In short, NVIDIA partners are now forbidden from promoting activities related to cryptocurrency mining and from selling consumer graphics cards in bulk to cryptominers.

NVIDIA to Host World's Top AI Experts at 2018 GTC

NVIDIA will host thousands of the world's leading AI experts at its ninth annual GPU Technology Conference (GTC) on March 26-29 at the San Jose McEnery Convention Center. NVIDIA founder and CEO Jensen Huang will deliver a keynote address on Tuesday, March 27, at 9 a.m. Pacific time to an expected 8,000 attendees representing the diverse, rapidly expanding AI and GPU computing community.

"GTC is where the world's leading researchers and business leaders learn how to harness the power of AI," said Greg Estes, vice president of Developer Programs at NVIDIA. "As GPU computing continues to drive the AI revolution, GTC is where you'll see the future take shape."

Report: NVIDIA Not Unveiling 2018 Graphics Card Lineup at GDC, GTC After All

It's being reported by Tom's Hardware, citing industry sources, that NVIDIA isn't looking to expand its graphics card lineup at this year's GDC (Game Developers Conference) or GTC (GPU Technology Conference). Even as reports have been hitting the streets pointing towards NVIDIA announcing (if not launching) its two new product architectures as early as next month, it now seems that won't be the case after all. As a reminder, the architectures in question are Turing, reportedly for crypto-mining applications, and Ampere, the expected GeForce architecture leapfrogging the current top of the line - and absent from regular consumer shores - Volta.

There's really not much that can be gleaned from industry sources as of now, though. It's clear no one has received any kind of information from NVIDIA regarding either of its expected architectures, which means an impending announcement isn't likely. At the same time, NVIDIA really has no interest in pulling the trigger on new products - demand is fine, and competition from AMD is low. As such, reports of a June or later announcement/release are entirely credible, as are reports that NVIDIA could put the brakes on a consumer version of Ampere, use it to replace Volta in the professional and server segments, and instead launch Volta - finally - in the consumer segment. This would allow the company to cash in on its Volta architecture, this time on consumer products, for a full generation longer, while still innovating in the market - of sorts. All scenarios are open right now, but one thing that seems clear is that there will be no announcements next month.

NVIDIA's Market Cap to Reach $100 billion Soon; Grew ~$25 billion Since May 2017

NVIDIA has been on a roll lately with its market capitalization and share valuation, which could very well send the company soaring past the $100 billion mark today. Whether or not that happens (and if it does, it will be a historic milestone for the company), NVIDIA's growth of almost $25 billion in market value since May 13th is nothing short of jaw-dropping.

The "sudden" surge in NVIDIA's market valuation comes on the heels of the company's strong graphics execution and increasingly entrenched position in the high-performance GPU computing market for machine learning. The company's Volta architecture, which was showcased by Jensen Huang at the GTC keynote on May 10th, boosted confidence in the company significantly. Since then, the company's market cap has increased from the roughly $75 billion it stood at shortly after GTC towards the $96.31 billion it is rated at today. More recently, with the rise of the crypto craze, NVIDIA's GPUs have been talked about as real alternatives to AMD's previous (and perhaps, in hindsight, harmful for that company) grip on these kinds of workloads.

NVIDIA Releases VRWorks Audio and 360 Video SDKs at GTC

Further planting its roots in the VR SDK and development field, NVIDIA has just announced the availability of two more SDK packages, for its VRWorks Audio and 360 Video suites. Now part of NVIDIA's VRWorks suite of VR solutions, the VRWorks Audio SDK provides real-time ray tracing of audio in virtual environments, and is supported in Epic's Unreal Engine 4 (here's hoping this solution, or others like it, address the problems of today's game audio). The VRWorks 360 Video SDK, on the other hand, may be less interesting for graphics enthusiasts, in that it addresses the complex challenge of real-time video stitching.

Traditional VR audio (and gaming audio, for that matter) provides an accurate 3D position of the audio source within a virtual environment. However, as it is handled today, sound is processed with little regard to anything but the location of the source. With VRWorks Audio, NVIDIA brings to the table considerations for the dimensions and material properties of the physical environment, helping to create a truly immersive experience by modeling sound propagation phenomena such as reflection, refraction and diffraction. This is done in real time, on the GPU. The work leverages NVIDIA's OptiX ray-tracing technology, which allows VRWorks Audio to trace the path of sound in real time, delivering physically accurate audio that reflects the size, shape and material properties of the virtual environment.
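To make the idea concrete, here is a toy sketch (not NVIDIA's implementation, and the absorption figure is an invented illustration) of the classic image-source method that geometric acoustic ray tracers build on: a reflection off a flat wall is modeled by mirroring the source across the wall, so both the extra travel delay and the extra attenuation fall out of simple distances.

```python
import math

SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 degrees C

def attenuation_db(distance_m, absorbed_fraction=0.0):
    """Inverse-square spreading loss relative to 1 m, plus the loss from
    energy absorbed at a reflecting surface (simple toy model)."""
    spreading = 20.0 * math.log10(max(distance_m, 1e-6))
    surface = -10.0 * math.log10(1.0 - absorbed_fraction) if absorbed_fraction else 0.0
    return spreading + surface

def trace_paths(source, listener, wall_y=0.0, absorbed_fraction=0.3):
    """Return (delay_s, loss_db) for the direct path and one wall bounce.
    The reflection is handled by mirroring the source across the wall at
    y = wall_y and treating the bounced path as a straight line."""
    sx, sy = source
    direct = math.dist(source, listener)
    mirrored = (sx, 2.0 * wall_y - sy)        # image source across the wall
    reflected = math.dist(mirrored, listener)  # length of the bounced path
    return [
        (direct / SPEED_OF_SOUND, attenuation_db(direct)),
        (reflected / SPEED_OF_SOUND, attenuation_db(reflected, absorbed_fraction)),
    ]

# Source and listener 4 m apart, both 2 m above a reflective floor
for delay, loss in trace_paths(source=(0.0, 2.0), listener=(4.0, 2.0)):
    print(f"delay {delay * 1000:.1f} ms, loss {loss:.1f} dB")
```

The reflected path arrives later and quieter than the direct one; a full solution like VRWorks Audio traces many such paths against arbitrary geometry, which is what makes it a natural fit for a GPU ray tracer.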

NVIDIA Announces Xavier, Volta-based Autonomous Transportation SoC

At the inaugural European edition of its GPU Technology Conference (GTC), NVIDIA announced Xavier, an "AI supercomputer for the future of autonomous transportation." It is an evolution of the Drive PX2 board, which leverages a pair of "Maxwell" GPUs with some custom logic and an ARM CPU to provide cars with the compute power necessary to deep-learn their surroundings and self-drive, or assist driving; Xavier refines Drive PX2 by merging its three chips - two GPUs and one control-logic chip - into a single SoC.

You'd think that NVIDIA refined its deep-learning tech enough to not need a pair of "Maxwell" GPUs, but Xavier is more than that. The 7 billion-transistor chip, built on a 16 nm FinFET process, offers more raw compute performance thanks to NVIDIA's next-generation "Volta" architecture, one more advanced than even its current "Pascal" architecture. The chip features a "Volta" GPU with 512 CUDA cores. The CVA makes up the vehicle I/O, while an image processor capable of handling 8K HDR video streams feeds the chip with visual inputs from various cameras around the vehicle. An 8-core ARM CPU handles general-purpose compute. NVIDIA hopes to get the first engineering samples of Xavier out to interested car-makers by Q4 2017.

GeForce GTX TITAN-Z Market Availability Delayed?

NVIDIA's flagship dual-GPU graphics card, the GeForce GTX TITAN-Z, was expected to go on sale later today. That launch is now delayed, according to a SweClockers report. The three-thousand-dollar question is why. According to some sources, NVIDIA is effecting a last-minute design change that sees a meatier cooler on the card than the one Jen-Hsun Huang rafikied to the press at GTC 2014.

There may have been a last-minute realization at Santa Clara that the card - as presented at GTC - may not cut it in the ring against AMD's Radeon R9 295X2, or at least won't be able to warrant its vulgar $3,000 price tag against the R9 295X2's $1,500, despite AMD's rather messy three-piece approach to its liquid-cooled product (the card itself, a radiator, and coolant tubing). NVIDIA could thus be redesigning the GTX TITAN-Z with an even bigger cooler, to facilitate higher clock speeds.

NVIDIA Updates GPU Roadmap with "Volta"

NVIDIA updated its GPU micro-architecture roadmap at the 2013 GPU Technology Conference (GTC). The lineup is currently spearheaded by the "Kepler" micro-architecture, which drives its GeForce, Quadro, and Tesla product lines, and which will drive Tegra mobile SoCs in 2014; NVIDIA's next-generation "Maxwell" could make its debut some time in 2014. Going by NVIDIA's graph plotting performance-per-Watt against time, "Maxwell" should nearly double performance. Maxwell GPUs feature unified virtual memory, which gives the CPU and GPU a shared virtual address space - the CPU can address graphics card memory as if it were system memory - for faster general-purpose performance.

Although not backed by a tentative launch year, and assuming "Maxwell" has its run for another two years, 2016 could see the launch of NVIDIA's "Volta" GPU micro-architecture. In addition to the advancements of its predecessors, "Volta" could introduce stacked DRAM technology, which would enable GPU memory bandwidths as high as 1 TB/s. Current high-end graphics cards such as the GeForce GTX Titan and Radeon HD 7970 GHz Edition approach the 300 GB/s mark, so NVIDIA's claims don't sound far-fetched.
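The bandwidth figures above follow from simple arithmetic: peak theoretical memory bandwidth is the bus width in bytes multiplied by the effective per-pin data rate. A quick sketch, using the GTX Titan's published 384-bit, 6 Gbps GDDR5 configuration (the stacked-DRAM numbers are an illustration of what it would take to hit 1 TB/s, not a leaked spec):

```python
def memory_bandwidth_gbps(bus_width_bits, data_rate_gbps):
    """Peak theoretical bandwidth in GB/s: bus width in bytes times the
    effective per-pin data rate in Gbps."""
    return bus_width_bits / 8 * data_rate_gbps

# GeForce GTX Titan: 384-bit GDDR5 bus at 6 Gbps effective
print(memory_bandwidth_gbps(384, 6.0))    # 288.0 GB/s

# Hypothetical stacked-DRAM part: a much wider 1024-bit interface at
# 8 Gbps would clear the 1 TB/s figure NVIDIA is claiming for "Volta"
print(memory_bandwidth_gbps(1024, 8.0))   # 1024.0 GB/s
```

Stacked DRAM reaches such figures not by running pins faster but by making the interface massively wider, which is practical once the memory sits on the same package as the GPU.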

TYAN Showcases Cutting-edge GPU Platforms at GTC 2013

TYAN, an industry-leading server platform design manufacturer and subsidiary of MiTAC International Corp, will present its cutting-edge GPU-supporting platforms during NVIDIA GTC 2013. Ranging from 2U to 4U and accommodating 1 to 8 GPUs, TYAN's full product portfolio is compatible with NVIDIA Tesla K-series GPU Accelerators. The TYAN S7055 and S7056 motherboards will be showcased, as well as the TYAN TA77-B7061 and FT48-B7055 barebones systems, at TYAN's booth (#503) during GTC 2013.

Targeting everything from mainstream applications to the most demanding high-performance computing, TYAN has developed a rich portfolio of GPU-based platforms. These products offer flexible, scalable performance as well as maximum efficiency, helping customers deploy ideal GPU cluster solutions. All TYAN GPU platforms are undergoing validation with industry-leading companies' GPU products; the TYAN TA77-B7061, FT48-B7059 and FT77-B7059 GPU platforms have passed validation and are fully compatible with NVIDIA Tesla K-series GPU Accelerators. The bundled portfolios of TYAN GPU platforms and NVIDIA Tesla K-series accelerators will be widely available in Q2.

NVIDIA Pioneers New Standard for HPC With Tesla GPUs Built on Kepler Architecture

NVIDIA today unveiled a new family of Tesla GPUs based on the revolutionary NVIDIA Kepler GPU computing architecture, which makes GPU-accelerated computing easier and more accessible for a broader range of high performance computing (HPC) scientific and technical applications.

The new NVIDIA Tesla K10 and K20 GPUs are computing accelerators built to handle the most complex HPC problems in the world. Designed with an intense focus on high performance and extreme power efficiency, Kepler is three times as efficient as its predecessor, the NVIDIA Fermi architecture, which itself established a new standard for parallel computing when introduced two years ago.

GeForce GTX 680 Features Speed Boost, Arrives This Month, etc., etc.

Here are some key bits of information concerning the upcoming GeForce GTX 680, a performance single-GPU graphics card based on NVIDIA's 28 nm GK104 GPU. The information, at face value, is credible, because we're hearing that a large contingent of the media covering the GPU industry is attending the Game Developers Conference, where it could interact with NVIDIA on the sidelines. The source, however, cites people it spoke to at CeBIT.

First, and most interesting: with some models of the GeForce 600 series, NVIDIA will introduce a load-based clock speed-boost feature (think Intel Turbo Boost), which steps up the graphics card's clock speeds when it is subjected to heavy loads. If there's a particularly stressful 3D scene for the GPU to render, it overclocks itself and sees the scene through. This ensures higher minimum and average frame-rates.

NVIDIA's 2012 GPU Technology Conference Opens for Registration

NVIDIA today announced that registration is now open for its third GPU Technology Conference (GTC) in the U.S., one of the world's fastest-growing events focused on computational science and visualization.

GTC 2012 - to be held in San Jose, Calif., from May 14-17 - is the flagship in NVIDIA's global series of events focused on how the GPU is transforming science. It is expected to draw the greatest minds in the scientific, engineering, research and developer communities from more than 40 nations.

For more information or to register, visit the GTC website.
