News Posts matching "GTC"


NVIDIA's Market Cap to Reach $100 billion Soon; Grew ~$25 billion Since May 2017

NVIDIA has been on a roll lately with its market capitalization and share valuation, which could very well send the company soaring past the $100 billion mark today. Whether or not that happens (and if it does, it will be a historic milestone for the company), NVIDIA's growth of almost $25 billion since May 13th is nothing short of jaw-dropping.

The "sudden" surge in NVIDIA's market valuation comes on the heels of the company's strong graphics execution and increasingly entrenched position in the high-performance GPU computing market for machine learning. The company's Volta architecture, showcased by Jensen Huang at the GTC keynote on May 10th, boosted confidence in the company significantly. Since then, NVIDIA's market cap has climbed from the $75 billion it stood at shortly after GTC to today's $96.31 billion. More recently, with the rise of the cryptocurrency mining craze, NVIDIA's GPUs have been talked about as real alternatives to AMD's previous (and perhaps costly, for that company) grip on these workloads.

NVIDIA Releases VRWorks Audio and 360 Video SDKs at GTC

Further planting its roots in the VR SDK and development field, NVIDIA has just announced the availability of two more SDK packages, for its VRWorks Audio and 360 Video suites. Now part of NVIDIA's VRWorks suite of VR solutions, the VRWorks Audio SDK provides real-time ray tracing of audio in virtual environments, and is supported in Epic's Unreal Engine 4 (here's hoping this solution, or others like it, addresses the problems of today's game audio). The VRWorks 360 Video SDK, on the other hand, may be less interesting for graphics enthusiasts, in that it addresses the complex challenge of real-time video stitching.

Traditional VR audio (and gaming audio, for that matter) provides an accurate 3D position of the audio source within a virtual environment. However, as it is handled today, sound is processed with little regard for anything but the location of the source. With VRWorks Audio, NVIDIA brings to the table the dimensions and material properties of the physical environment, helping to create a truly immersive experience by modeling sound propagation phenomena such as reflection, refraction, and diffraction. This is done in real time, on the GPU. This work leverages NVIDIA's OptiX ray-tracing technology, which allows VRWorks Audio to trace the path of sound in real time, delivering physically accurate audio that reflects the size, shape, and material properties of the virtual environment.
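The core idea behind such geometric-acoustics tracing can be sketched in a few lines. This is a generic illustration, not the VRWorks or OptiX API: a traced reflection path loses energy to distance (spreading loss) and to whatever fraction of energy the surface material absorbs; the function name and absorption figure below are hypothetical.

```python
def reflected_gain(path_length_m, absorption):
    """Gain of a single first-order reflection: 1/r spreading loss
    scaled by the fraction of energy the surface does not absorb."""
    return (1.0 / path_length_m) * (1.0 - absorption)

# A ray that travels 4 m and bounces off a fairly absorptive surface
# (absorption coefficient 0.3) arrives at 17.5% of its source level:
print(reflected_gain(4.0, 0.3))  # 0.175
```

A full solution sums many such traced paths (plus refracted and diffracted ones), which is why NVIDIA offloads the work to the GPU's ray-tracing machinery.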

NVIDIA Announces Xavier, Volta-based Autonomous Transportation SoC

At the inaugural European edition of its GPU Technology Conference (GTC), NVIDIA announced Xavier, an "AI supercomputer for the future of autonomous transportation." Xavier is an evolution of the Drive PX2 board, which leverages a pair of "Maxwell" GPUs, some custom logic, and an ARM CPU to give cars the compute power needed to deep-learn their surroundings and self-drive, or assist the driver. Xavier refines Drive PX2 by merging its three chips - two GPUs and one control-logic chip - into a single SoC.

You'd think NVIDIA had simply refined its deep-learning tech enough to no longer need a pair of "Maxwell" GPUs, but Xavier is more than that. The 7 billion-transistor chip, built on a 16 nm FinFET process, offers more raw compute performance by leveraging NVIDIA's next-generation "Volta" architecture, a step beyond even its current "Pascal" architecture. The chip features a "Volta" GPU with 512 CUDA cores. A computer vision accelerator (CVA) makes up the vehicle I/O, while an image processor capable of handling 8K HDR video streams feeds the chip with visual input from cameras around the vehicle. An 8-core ARM CPU performs general-purpose compute. NVIDIA hopes to get the first engineering samples of Xavier to interested car-makers by Q4 2017.

GeForce GTX TITAN-Z Market Availability Delayed?

NVIDIA's flagship dual-GPU graphics card, the GeForce GTX TITAN-Z, was expected to go on sale later today. That launch is now delayed, according to a SweClockers report. The three-thousand-dollar question is why. According to some sources, NVIDIA is making a last-minute design change that sees a meatier cooler on the card than the one Jen-Hsun Huang rafikied to the press at GTC 2014.

There may have been a last-minute realization at Santa Clara that the card, as presented at GTC, may not cut it in the ring against AMD's Radeon R9 295X2, or at least won't be able to justify its vulgar $3,000 price tag against the R9 295X2's $1,500, despite AMD's rather messy three-piece approach to its liquid-cooled product (the card itself, a radiator, and coolant tubing). NVIDIA could hence be redesigning the GTX TITAN-Z with an even bigger cooler, to facilitate higher clock speeds.

NVIDIA Updates GPU Roadmap with "Volta"

NVIDIA updated its GPU micro-architecture roadmap at the 2013 GPU Technology Conference (GTC). The lineup is currently spearheaded by the "Kepler" micro-architecture, which drives its GeForce, Quadro, and Tesla product lines, and which will drive Tegra mobile SoCs in 2014; NVIDIA's next-generation "Maxwell" could make its debut some time in 2014. Going by NVIDIA's graph plotting performance-per-Watt against time, "Maxwell" should nearly double performance. "Maxwell" GPUs feature unified virtual memory, which lets the CPU treat graphics card memory as system memory, for faster general-purpose performance.

Although not backed by a tentative launch year, and assuming "Maxwell" has a two-year run, 2016 could see the launch of NVIDIA's "Volta" GPU micro-architecture. In addition to the advancements of its predecessors, "Volta" could introduce stacked DRAM technology, enabling GPU memory bandwidths as high as 1 TB/s. Current high-end graphics cards such as the GeForce GTX Titan and Radeon HD 7970 GHz Edition already approach the 300 GB/s mark, so NVIDIA's claim doesn't sound far-fetched.
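For context on those bandwidth figures, peak memory bandwidth is simple arithmetic: bus width in bytes times the per-pin data rate. A quick sketch (the GTX Titan figures below, a 384-bit bus with 6 Gb/s GDDR5, are its published specs; the 1 TB/s number is NVIDIA's own projection for stacked DRAM):

```python
def peak_bandwidth_gbps(bus_width_bits, data_rate_gbps):
    """Peak memory bandwidth in GB/s, given the memory bus width in bits
    and the effective per-pin data rate in Gb/s."""
    return bus_width_bits / 8 * data_rate_gbps

# GeForce GTX Titan: 384-bit bus, 6 Gb/s effective GDDR5 data rate
print(peak_bandwidth_gbps(384, 6.0))  # 288.0 GB/s
```

Reaching 1 TB/s thus requires a much wider bus than a conventional card can route on a PCB, which is exactly what stacking DRAM next to (or on) the GPU makes practical.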

Source: NVIDIA

TYAN Showcases Cutting-edge GPU Platforms at GTC 2013

TYAN, an industry-leading server platform design manufacturer and subsidiary of MiTAC International Corp., will present its cutting-edge GPU-supporting platforms during NVIDIA GTC 2013. Ranging from 2U to 4U and compatible with 1 to 8 GPUs, TYAN's full product portfolio is compatible with NVIDIA Tesla K-series GPU Accelerators. The TYAN S7055 and S7056 motherboards, as well as the TA77-B7061 and FT48-B7055 barebones, will be showcased at TYAN's booth (#503) during GTC 2013.

Targeting everything from mainstream applications to the most complex high-performance computing, TYAN has developed a rich portfolio of GPU-based platforms. These products offer flexible, scalable performance and maximum efficiency, helping customers deploy an ideal GPU cluster solution. All TYAN GPU platforms are under validation with industry-leading GPU vendors; the TA77-B7061, FT48-B7059, and FT77-B7059 platforms have passed validation and are fully compatible with NVIDIA Tesla K-series GPU Accelerators. Bundles of TYAN GPU platforms and NVIDIA Tesla K-series accelerators will be widely available in Q2.

NVIDIA Pioneers New Standard for HPC With Tesla GPUs Built on Kepler Architecture

NVIDIA today unveiled a new family of Tesla GPUs based on the revolutionary NVIDIA Kepler GPU computing architecture, which makes GPU-accelerated computing easier and more accessible for a broader range of high performance computing (HPC) scientific and technical applications.

The new NVIDIA Tesla K10 and K20 GPUs are computing accelerators built to handle the most complex HPC problems in the world. Designed with an intense focus on high performance and extreme power efficiency, Kepler is three times as efficient as its predecessor, the NVIDIA Fermi architecture, which itself established a new standard for parallel computing when introduced two years ago.

GeForce GTX 680 Features Speed Boost, Arrives This Month, etc., etc.

Here are some key bits of information concerning the upcoming GeForce GTX 680, a performance single-GPU graphics card based on NVIDIA's 28 nm GK104 GPU. The information appears credible at face value, because a large contingent of the media covering the GPU industry is attending the Game Developers Conference, where it could interact with NVIDIA on the sidelines. The source, however, cites people it spoke to at CeBIT.

First, and most interesting: with some models of the GeForce 600 series, NVIDIA will introduce a load-based clock speed-boost feature (think Intel Turbo Boost), which steps up the graphics card's clock speeds under heavy load. If a 3D scene is particularly taxing on the GPU, it overclocks itself and sees the scene through. This ensures higher minimum and average frame rates.
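The behavior described above can be sketched as a simple control loop. This is an illustration of the concept only, not NVIDIA's actual algorithm; the clock values, step size, and thresholds below are hypothetical.

```python
# Hypothetical clock bins (MHz) for a load-based boost sketch.
BASE_MHZ, MAX_BOOST_MHZ, STEP_MHZ = 1006, 1110, 13

def next_clock(current_mhz, load_pct, power_pct):
    """One tick of a toy boost loop: step the clock up while the GPU is
    heavily loaded and under its power limit; ease back toward the base
    clock when the load drops; otherwise hold."""
    if load_pct > 90 and power_pct < 100 and current_mhz < MAX_BOOST_MHZ:
        return min(current_mhz + STEP_MHZ, MAX_BOOST_MHZ)
    if load_pct < 50:
        return max(current_mhz - STEP_MHZ, BASE_MHZ)
    return current_mhz

print(next_clock(1006, 95, 80))  # heavy load, power headroom -> 1019
print(next_clock(1110, 95, 80))  # already at the boost ceiling -> 1110
```

The real feature also factors in temperature and voltage, but the gist is the same: clocks rise opportunistically under load instead of sitting at a fixed frequency.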

NVIDIA's 2012 GPU Technology Conference Opens for Registration

NVIDIA today announced that registration is now open for its third GPU Technology Conference (GTC) in the U.S., one of the world's fastest-growing events focused on computational science and visualization.

GTC 2012 - to be held in San Jose, Calif., from May 14-17 - is the flagship in NVIDIA's global series of events focused on how the GPU is transforming science. It is expected to draw the greatest minds in the scientific, engineering, research and developer communities from more than 40 nations.

For more information or to register, visit the GTC website.
