News Posts matching "Maxwell"


NVIDIA Working on a Major DirectX 12 Focused Driver Update

NVIDIA is reportedly working on a major driver update that improves performance across the board on games that take advantage of DirectX 12. The company is working with major game studios to improve DirectX 12 performance by up to 16 percent on recent AAA titles, such as "Gears of War 4," "Hitman" (2016), "Rise of the Tomb Raider" and "Tom Clancy's The Division." Even "Ashes of the Singularity" is getting a performance improvement.

At its GeForce GTX 1080 Ti launch event, NVIDIA revealed that its "Pascal" and "Maxwell" GPU architectures are capable of tile-based rendering, a feature that significantly improves video memory bandwidth utilization. With the GTX 1080 Ti going on sale later this month, one can expect NVIDIA to release new drivers.

Source: OC3D

On NVIDIA's Tile-Based Rendering

Looking back on NVIDIA's GDC presentation, perhaps the most interesting topic was the implementation of tile-based rendering on NVIDIA's post-Maxwell architectures. This adapts an approach typically found in mobile graphics, where power efficiency is paramount - and as you may remember, "Maxwell" was NVIDIA's first graphics architecture publicly touted for its "mobile first" design.

This approach divides the screen into tiles, and then rasterizes the frame on a per-tile basis. 16×16 and 32×32 pixels are the usual tile sizes, but both Maxwell and Pascal can dynamically assess the required tile size for each frame, changing it on the fly according to the complexity of the scene. This ensures that the working set has a much smaller footprint than that of full-frame rendering - small enough that NVIDIA can keep the data in a much smaller pool of memory (essentially, the L2 cache), dynamically filling and flushing it until the full frame has been rendered. The GPU therefore doesn't have to access larger, slower memory pools as often, which reduces the load on the VRAM subsystem (freeing memory bandwidth for other tasks), whilst simultaneously accelerating rendering. At the same time, a tile-based approach lends itself well to the nature of GPUs - tiles are easily parallelized, independent workloads, and the GPU can tackle many of them simultaneously, depending on the available resources.
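As a rough illustration of the flow described above, here is a minimal sketch of per-tile rasterization; the tile size, frame representation, and shading function are purely illustrative and bear no relation to NVIDIA's actual hardware implementation:

```python
# Illustrative sketch of tile-based rasterization: the frame is split into
# fixed-size tiles, and each tile is processed to completion before the next,
# so only one tile's working set needs to be "hot" (cache-resident) at a time.

def make_tiles(width, height, tile_size=32):
    """Yield (x0, y0, x1, y1) bounds covering a width x height frame."""
    for y in range(0, height, tile_size):
        for x in range(0, width, tile_size):
            yield (x, y, min(x + tile_size, width), min(y + tile_size, height))

def rasterize_frame(width, height, shade, tile_size=32):
    """Rasterize per tile; all work for a tile finishes before the next starts."""
    frame = [[None] * width for _ in range(height)]
    for x0, y0, x1, y1 in make_tiles(width, height, tile_size):
        for y in range(y0, y1):
            for x in range(x0, x1):
                frame[y][x] = shade(x, y)
    return frame

# Toy shading function standing in for the real rendering work.
frame = rasterize_frame(64, 48, shade=lambda x, y: (x + y) % 256)
```

The key property is in the loop structure: each tile completes before the next begins, which is what keeps the data footprint small enough to stay in cache.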

NVIDIA Announces the GeForce GTX 1080 Ti Graphics Card at $699

NVIDIA today unveiled the GeForce GTX 1080 Ti graphics card, its fastest consumer graphics card based on the "Pascal" GPU architecture, positioned as a more affordable alternative to the flagship TITAN X Pascal at USD $699, with market availability from the first week of March 2017. Based on the same "GP102" silicon as the TITAN X Pascal, the GTX 1080 Ti is slightly cut down. While it features the same 3,584 CUDA cores as the TITAN X Pascal, the memory amount is lower, at 11 GB, over a slightly narrower 352-bit wide GDDR5X memory interface. This translates to 11 memory chips on the card. On the bright side, NVIDIA is using newer memory chips than the ones it deployed on the TITAN X Pascal, which run at 11 GHz (GDDR5X-effective), so the memory bandwidth is 484 GB/s.

Besides the narrower 352-bit memory bus, the ROP count is lowered to 88 (from 96 on the TITAN X Pascal), while the TMU count is unchanged at 224. The GPU core is clocked at a boost frequency of up to 1.60 GHz, with the ability to overclock beyond the 2.00 GHz mark. It gets better: the GTX 1080 Ti features memory advancements not found on other "Pascal" based graphics cards - a newer memory chip and an optimized memory interface running at 11 Gbps. NVIDIA's Tiled Rendering Technology has also finally been announced publicly; a feature NVIDIA had kept under wraps since the GeForce "Maxwell" architecture, it is one of the secret sauces that enable NVIDIA's lead.
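The 484 GB/s figure follows directly from the bus width and per-pin data rate (bus width in bits × data rate in Gbps ÷ 8 bits per byte); a quick sanity check:

```python
def gddr_bandwidth_gbs(bus_width_bits, data_rate_gbps):
    """Peak memory bandwidth in GB/s from bus width and per-pin data rate."""
    return bus_width_bits * data_rate_gbps / 8

# GTX 1080 Ti: 352-bit bus at 11 Gbps effective
assert gddr_bandwidth_gbs(352, 11) == 484.0
# TITAN X Pascal, for comparison: 384-bit bus at 10 Gbps
assert gddr_bandwidth_gbs(384, 10) == 480.0
```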

NVIDIA Rumored to Launch the GTX 1050 M at CES 2017

New reports have surfaced indicating that NVIDIA is all set to launch a new mid-range mobile graphics solution, the GTX 1050 M, at CES 2017 in Las Vegas. While NVIDIA has already released mobile variants of the GTX 1080, GTX 1070 and GTX 1060, people not willing to spend big on higher-tier products from the green camp are limited to "Maxwell" based GTX 960 M or GTX 950 M offerings. Reports also indicate a surge in demand for GTX 1060 M equipped laptops, and the new GTX 1050 M could be nicely positioned to further whet consumer appetite.

As we reported in November, we can expect the new GTX 1050 M, in line with the existing "Pascal" based mobile solutions, to sport the same (or better) core configuration as its desktop counterpart, but with significantly higher clock speeds. This should make the GTX 1050 M an attractive option, as it would give laptops graphical horsepower exceeding that of the very capable GTX 970 M. All in all, with new Intel "Kaby Lake" processors set to take the stage at CES 2017 too, we could see quite an array of new or reconfigured laptops scattered throughout the show.

NVIDIA Announces Xavier, Volta-based Autonomous Transportation SoC

At the inaugural European edition of its Graphics Technology Conference (GTC), NVIDIA announced Xavier, an "AI supercomputer for the future of autonomous transportation." It is an evolution of the Drive PX2 board, which leverages a pair of "Pascal" GPUs with some custom logic and an ARM CPU to give cars the compute power necessary to deep-learn their surroundings and self-drive, or assist-drive. Xavier refines Drive PX2 by merging three chips - two GPUs and one control-logic chip - into a single SoC.

You'd think that NVIDIA simply refined its deep-learning tech enough to no longer need a pair of discrete GPUs, but Xavier is more than that. The 7-billion-transistor chip, built on the 16 nm FinFET process, offers more raw compute performance by leveraging NVIDIA's next-generation "Volta" architecture, one more advanced than even its current "Pascal" architecture. The chip features a "Volta" GPU with 512 CUDA cores. A computer vision accelerator (CVA) makes up the vehicle I/O, while an image processor capable of 8K HDR video streams feeds the chip with visual inputs from various cameras around the vehicle. An 8-core ARM CPU handles general-purpose compute. NVIDIA hopes to get the first engineering samples of Xavier out to interested car-makers by Q4 2017.

AMD GPUs See Lesser Performance Drop on "Deus Ex: Mankind Divided" DirectX 12

Deus Ex: Mankind Divided is the latest AAA title to support DirectX 12, with developer Eidos deploying a DirectX 12 renderer weeks after the game's release, through a patch. Guru3D put the DirectX 12 version of the game through five GPU architectures - AMD "Polaris," GCN 1.1, GCN 1.2, NVIDIA "Pascal," and NVIDIA "Maxwell" - represented by the Radeon RX 480, Radeon R9 Fury X, Radeon R9 390X, GeForce GTX 1080, GeForce GTX 1060, and GeForce GTX 980. The AMD GPUs were driven by Radeon Software Crimson Edition 16.9.1 drivers, and the NVIDIA GPUs by GeForce 372.70.

Looking at the graphs, when switching from DirectX 11 to DirectX 12 mode, AMD GPUs not only hold their frame-rates, but in some cases even gain. NVIDIA GPUs, on the other hand, significantly lose frame-rates. AMD GPUs tend to hold on to their frame-rates at 4K Ultra HD, gain marginally at 2560 x 1440, and gain further at 1080p. NVIDIA GPUs either barely hold on to their frame-rates, or significantly lose them. AMD has claimed on multiple occasions that its Graphics CoreNext architecture, combined with its purist approach to asynchronous compute, makes Radeon GPUs a better choice for DirectX 12 and Vulkan. Find more of Guru3D's findings here.

NVIDIA Calls Intel's Bluff on Xeon Phi vs. GPU Benchmarks

NVIDIA accused Intel of cheating in its ISC 2016 presentation, particularly in a performance-related slide claiming that the "Knights Landing" Xeon Phi HPC processor provides faster training of neural nets than NVIDIA "Maxwell" GPUs. Intel claimed that a Xeon Phi HPC processor card is 2.3 times faster at training deep-learning neural nets, with 38 percent better scaling across nodes, than NVIDIA "Maxwell" GPUs. This triggered a swift response from the GPU maker, which has made significant investments in deep-learning technologies over the past three years.

NVIDIA argued that Intel is using the classic technique of running outdated benchmarks to test its neural-net training speeds. The company says that if Intel had used the latest version of the same benchmark (Caffe AlexNet), the "Maxwell" GPU would be found to be 30 percent faster than the Xeon Phi at training neural nets. NVIDIA also notes that "Maxwell" is only its previous-generation part, and that a "Pascal" based HPC processor would easily be 90 percent faster than the Xeon Phi. More importantly, NVIDIA notes that Intel compared 32 of its new Xeon Phi servers against four-year-old NVIDIA "Kepler" K20 servers being used in ORNL's Titan supercomputer. The latest "Pascal" GPUs leverage NVLink to scale up to 128 GPUs, providing the fastest deep-learning solutions money can buy.

Source: ArsTechnica

NVIDIA Intros the Quadro M2000 Graphics Card

NVIDIA announced the Quadro M2000 professional graphics card. Based on the 28 nm GM206 "Maxwell" silicon, the M2000 is a mid-range, energy-efficient professional graphics offering by the company. The card has the same core configuration as the GeForce GTX 950, with 768 CUDA cores, 48 TMUs, 16 ROPs, and a 128-bit wide GDDR5 memory interface. 4 GB is the standard memory amount for this card.

The GPU is clocked up to 1188 MHz, and the memory at 6.60 GHz (GDDR5-effective), yielding a memory bandwidth of 106 GB/s. The card relies on the PCI-Express slot for all its power; its typical board power draw is rated at 75 W. The reference-design board features a neat full-height PCB and a single-slot cooling solution. Display outputs include four DisplayPort 1.2 connectors. The M2000 brings some Quadro-exclusive software features to the table, including nView MultiDisplay, and support for 5K and 8K displays (through multiple connectors). Expect this card to be priced around US $550.

NVIDIA Reportedly Stops Production of Certain "Maxwell" GPUs

Paving the way for its next-generation "Pascal" architecture-based performance-thru-enthusiast GPUs, NVIDIA has reportedly halted further production of certain current-generation "Maxwell" GPUs. HWBattle reports that the GM204-based GeForce GTX 980, the popular GTX 970, and the GM200-based GTX 980 Ti have seen production halts, to let the market digest existing inventories of the three cards, and to let NVIDIA's add-in card (AIC) partners prepare to stock up on next-generation graphics cards, beginning with at least three SKUs based on the GP104 silicon, in June 2016.

Source: HWBattle

NVIDIA Drive PX2 Powered by a Pair of GP106 Chips

NVIDIA's Drive PX2 compute system for self-driving cars in development, as shown at the recent GTC event hosted by the company, is driven by a pair of GP106 GPUs. In the company's consumer-graphics lineup, the GP106 could drive mid-thru-performance segment SKUs, succeeding the GM206 chip, which powers the GeForce GTX 960. Keeping with the pattern of the "Maxwell" lineup, the GP106 is expected to feature half the graphics processing clusters (GPCs) of the GP104, with a CUDA core count closer to 1,280.


NVIDIA Launches World's First Deep Learning Supercomputer

NVIDIA today unveiled the NVIDIA DGX-1, the world's first deep learning supercomputer to meet the unlimited computing demands of artificial intelligence. The NVIDIA DGX-1 is the first system designed specifically for deep learning -- it comes fully integrated with hardware, deep learning software and development tools for quick, easy deployment. It is a turnkey system that contains a new generation of GPU accelerators, delivering the equivalent throughput of 250 x86 servers.

The DGX-1 deep learning system enables researchers and data scientists to easily harness the power of GPU-accelerated computing to create a new class of intelligent machines that learn, see and perceive the world as humans do. It delivers unprecedented levels of computing power to drive next-generation AI applications, allowing researchers to dramatically reduce the time to train larger, more sophisticated deep neural networks.

EVGA Announces the GeForce GTX 950 Low Power Graphics Card

The EVGA GeForce GTX 950 features a true gaming GPU designed for every PC gamer. It's built to the exact specifications of the GeForce GTX family and powered by NVIDIA Maxwell - the most advanced GPU architecture ever created - to deliver 3x the performance of previous-generation cards. More powerful than any console, this card delivers a truly interactive, cinematic experience in the latest games with advanced effects driven by NVIDIA GameWorks and DirectX 12.

The EVGA GeForce GTX 950 is now available in special low-power models that retain all the performance intact. In fact, several of these models do not even have a 6-pin power connector. Learn more at this page.

NVIDIA Unveils the Quadro M6000 24GB Graphics Card

NVIDIA announced the Quadro M6000, its new high-end workstation single-GPU graphics card. Based on the GM200 silicon, and leveraging the "Maxwell" GPU architecture, the M6000 maxes out all the hardware features of the chip, featuring 3,072 CUDA cores, 192 TMUs, 96 ROPs, and a 384-bit wide GDDR5 memory interface, holding 24 GB of memory, double that of the GeForce GTX TITAN X. Its peak single-precision floating point performance is rated at 7 TFLOP/s.
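The 7 TFLOP/s rating is consistent with the shader count: each CUDA core can retire one fused multiply-add (two floating-point operations) per clock, which implies a boost clock of around 1.14 GHz (an inferred figure for illustration, not one NVIDIA quoted):

```python
def sp_tflops(cuda_cores, clock_ghz):
    """Peak single-precision TFLOP/s: each core does one FMA (2 flops) per clock."""
    return cuda_cores * 2 * clock_ghz / 1000

# Quadro M6000: 3,072 CUDA cores; ~1.14 GHz boost is implied by the 7 TFLOP/s rating
tflops = sp_tflops(3072, 1.14)  # ~7.0 TFLOP/s
```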

Where the M6000 differs from the GTX TITAN X is its workstation-grade features. It drops the HDMI 2.0 connector for a total of four DisplayPort 1.2 connectors, supporting up to four 4K Ultra HD displays. The dual-link DVI connector stays on. There's also an optional stereoscopic 3D connector. The nView MultiDisplay tech provides more flexible display-head configurations than the ones found on NVIDIA's consumer GPUs; you also get NVIDIA GPUDirect support, which provides better memory-sharing access for multi-GPU systems. The M6000 supports most modern 3D APIs, such as DirectX 12, OpenGL 4.5, and Vulkan, with compute capabilities over CUDA, OpenCL, and DirectCompute. NVIDIA didn't reveal pricing.

Razer Accepts Pre-orders for the Razer Core External Graphics Solution

Razer, a world leader in connected devices and software for gamers, today announced pricing, availability and compatibility of its 2016 "Best of CES" award-winning Razer Core. The device is the world's first plug-and-play Thunderbolt 3 external graphics enclosure that debuted earlier this year in tandem with the Razer Blade Stealth Ultrabook, itself an official "Best PC" and "People's Choice" CES winner.

"Razer continues to innovate in the laptops category where big systems brands either cannot or will not," says Min-Liang Tan, Razer co-founder and CEO. "This time, in collaboration with industry leaders, we've created the world's first external graphics card solution of its kind, bringing the GPU power of a desktop PC to the latest Razer laptops, such as the Razer Blade Stealth Ultrabook, through a single Thunderbolt 3 connection with plug-and-play convenience."

NVIDIA Readies GeForce GTX 950 SE Graphics Card

NVIDIA is planning to shake up its sub-$150 graphics card lineup with a new SKU carved out of the current $140 GeForce GTX 950. The company is planning to retire the GTX 750 Ti, and is looking for a true replacement for the GTX 750 (non-Ti). The GTX 750 duo is based on the slightly older first-generation "Maxwell" architecture. The new SKU - called the GeForce GTX 950 SE, or GTX 950 LE - will be a further cut-down GTX 950, rather than a better-endowed sibling (that role is currently served by the GTX 960).

The GeForce GTX 950 SE will feature one fewer "Maxwell" streaming multiprocessor (SMM) than even the current GTX 950, with 5 of the 8 physically present on the GM206 silicon enabled. This works out to a CUDA core count of 640. The TMU count is proportionately lower at 40, the ROP count stands at 32, and the memory bus is 128-bit wide, holding 2 GB of GDDR5 memory. With a typical board power expected to be around 70 W, cards by various AIC partners will either make do with a single 6-pin PCIe power input, or lack one entirely. The GPU and memory clock speeds are expected to be slightly lower than those of the GTX 950, too. NVIDIA could launch this SKU some time in March.
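These counts are consistent with Maxwell's per-SMM resources of 128 CUDA cores and 8 TMUs each:

```python
# Each Maxwell SMM packs 128 CUDA cores and 8 TMUs, so the shader and
# texture-unit counts follow directly from the number of enabled SMMs.
CORES_PER_SMM = 128
TMUS_PER_SMM = 8

def maxwell_config(active_smms):
    """Derive CUDA core and TMU counts from the enabled SMM count."""
    return {"cuda_cores": active_smms * CORES_PER_SMM,
            "tmus": active_smms * TMUS_PER_SMM}

assert maxwell_config(5) == {"cuda_cores": 640, "tmus": 40}  # GTX 950 SE (rumored)
assert maxwell_config(6) == {"cuda_cores": 768, "tmus": 48}  # GTX 950
```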

Source: Expreview

NVIDIA Talks Vulkan, Supports it on "Kepler" and "Maxwell" GPUs

NVIDIA talked Vulkan in its latest GeForce blog post, announcing that your GeForce GTX graphics card already supports the "industry forged" API. NVIDIA is offering Vulkan hardware-acceleration on its "Kepler" and "Maxwell" GPU architectures at this time, on Windows 7 and above, Linux, and Android. NVIDIA is full of praise for Vulkan's low-latency, high-efficiency pathways, which streamline the process of drawing graphics.

Vulkan makes its big mainstream debut with a major update to "The Talos Principle," by Croteam (the people behind the "Serious Sam" franchise). This update adds a Vulkan renderer to the game, and ships later today. NVIDIA has a driver ready with the Vulkan API, which you can download from here. Maintained by the Khronos Group, Vulkan is a successor to OpenGL, although it's built from the ground up, with a major chunk of its code contributed by AMD, from its Mantle API.

Source: NVIDIA Blog

Rise of the Tomb Raider to Get DirectX 12 Eye Candy Soon?

Rise of the Tomb Raider could be among the first AAA games to take advantage of DirectX 12, with developer Crystal Dynamics planning a massive update that adds a new renderer and new content (VFX, geometry, textures). The latest version of the game features an ominously named "DX12.dll" library in its folder, and while it doesn't support DirectX 12 at the moment, a renderer selection has appeared in the game's launcher. DirectX 12 is currently only offered on Windows 10, with hardware support on NVIDIA "Kepler" and "Maxwell" GPUs, and on AMD Graphics CoreNext 1.1 and 1.2 GPUs.

Source: TweakTown

NVIDIA Announces VR-Ready Certification Program

NVIDIA today unveiled a VR-ready program to help gamers navigate their way to great virtual reality experiences. The company has partnered with PC and notebook makers and add-in card providers to deliver "GeForce GTX VR Ready" systems and graphics cards that deliver an immersive VR gaming experience. The program minimizes confusion regarding which equipment is necessary to play the range of VR games and applications increasingly coming to market.

Delivering a great VR experience demands seven times the graphics processing power of traditional 3D games and applications -- driving framerates above 90 frames per second (fps) for two simultaneous images (one for each eye). "For customers, navigating an emerging technology like VR can be daunting," said Jason Paul, general manager of emerging technologies at NVIDIA. "We're working with trusted partners worldwide to simplify the buying process with a GeForce GTX VR Ready badge that will let customers quickly identify PCs or add-in cards that are capable of handling the demands of VR." NVIDIA GPUs are at the heart of the VR ecosystem and are powered by the award-winning NVIDIA Maxwell graphics architecture, which was created with VR in mind.
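NVIDIA's "seven times" figure roughly checks out if you assume a 1080p/30 fps baseline and a Rift/Vive-class headset (2160×1200 across both eyes) rendered at 90 fps with about 1.4x supersampling per axis; the baseline and supersampling values here are assumptions for illustration, not figures from the announcement:

```python
def pixel_rate(width, height, fps, supersample=1.0):
    """Shaded pixels per second; supersample scales each axis."""
    return width * height * fps * supersample ** 2

traditional = pixel_rate(1920, 1080, 30)          # 1080p game at 30 fps
vr = pixel_rate(2160, 1200, 90, supersample=1.4)  # both eyes at 90 fps, supersampled

ratio = vr / traditional  # ~7.35x, in line with NVIDIA's "seven times" claim
```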

EVGA Announces the GeForce GTX 970 HYBRID Graphics Card

The Award Winning EVGA HYBRID line is now available on GeForce GTX 970. The EVGA GeForce GTX 970 HYBRID GAMING is an "all in one" water cooling solution that significantly lowers the GPU operating temperature. Best of all? The water cooler is completely self-contained, with an included 120mm radiator and fan. No filling, no custom tubing, no maintenance. Just plug in and play!

Of course, the GeForce GTX 970 HYBRID GAMING is also powered by the next-generation NVIDIA Maxwell architecture, giving you incredible performance, unmatched power efficiency, and cutting-edge features. Maxwell is the most advanced GPU architecture ever made, designed to be the engine of next-generation gaming. Inspired by light, it was designed to solve some of the most complex lighting and graphics challenges in visual computing.

BIOSTAR Announces GeForce GTX 950 Gaming Graphics Card

BIOSTAR announces its latest graphics card, designed to tackle the needs of gamers with competitive performance. The BIOSTAR GeForce GTX 950 features 2 GB of GDDR5 memory, 768 CUDA cores, and support for NVIDIA PhysX and DSR technology. Game in silence with the low noise output of its heat-pipe cooling design, and be prepared for the latest titles with its complete DirectX 12 support.

The BIOSTAR GeForce GAMING GTX 950 features NVIDIA's second-generation Maxwell GPU, which boasts highly improved performance-per-watt versus other cards. With performance intended for 1080p gamers, the GTX 950 is the perfect lightweight card for MOBA gamers, featuring various optimizations to improve your gaming experience while delivering the best performance-per-watt that only the latest-generation Maxwell GPU can offer. Armed with 768 CUDA cores and complemented by 2 GB of 128-bit GDDR5 memory, this card brings stunning visuals with support for up to four monitors via dual DVI, DisplayPort and HDMI.

NVIDIA Details "Pascal" Some More at GTC Japan

NVIDIA revealed more details of its upcoming "Pascal" GPU architecture at the Japanese edition of the Graphics Technology Conference. The architecture is designed to nearly double performance-per-watt over the current "Maxwell" architecture by implementing the latest tech. This begins with stacked HBM2 (high-bandwidth memory 2). The top "Pascal" based product will feature four 4-gigabyte HBM2 stacks, totaling 16 GB of memory. The combined memory bandwidth for the chip will be 1 TB/s. Internally, bandwidths can reach as high as 2 TB/s. The chip itself will support up to 32 GB of memory, so enterprise variants (Quadro, Tesla) could max out the capacity. The consumer GeForce variant is expected to serve up 16 GB.
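The capacity and bandwidth figures line up with four HBM2 stacks, assuming the spec's 256 GB/s per stack (a 1024-bit interface at 2 Gbps per pin; this per-stack rate is from the HBM2 standard, not the NVIDIA presentation):

```python
def hbm2_totals(stacks, gb_per_stack=4, bandwidth_per_stack_gbs=256):
    """Total capacity (GB) and peak bandwidth (GB/s) across HBM2 stacks.
    Per-stack bandwidth assumes a 1024-bit interface at 2 Gbps per pin."""
    return stacks * gb_per_stack, stacks * bandwidth_per_stack_gbs

capacity, bandwidth = hbm2_totals(4)  # 16 GB, 1024 GB/s (~1 TB/s)
```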

It's also becoming clear that NVIDIA will build its "Pascal" chips on the 16 nanometer FinFET process (AMD will build its next-gen chips on the more advanced 14 nm process). NVIDIA is introducing a new interconnect called NVLink, which will change the way the company builds dual-GPU graphics cards. Currently, dual-GPU cards are essentially two graphics cards on a common PCB, with PCIe bandwidth from the slot shared by a bridge chip, and an internal SLI bridge connecting the two GPUs. With NVLink, the two GPUs will be interconnected with an 80 GB/s bi-directional data path, letting each GPU directly address memory controlled by the other. This should greatly improve memory management in games that take advantage of newer APIs such as DirectX 12 and Vulkan, and prime the graphics card for higher display resolutions. NVIDIA is expected to launch its first "Pascal" based products in the first half of 2016.

Source: VR World

GIGABYTE Launches the GeForce GTX 980 WaterForce Graphics Card

GIGABYTE, the world's leading premium gaming hardware manufacturer, today announced the GeForce GTX 980 WATERFORCE (GV-N980WAOC-4GD), which comes with an all-in-one, closed-loop liquid cooler. Built on the performance-proven GTX 980 Maxwell GPU, the GTX 980 WATERFORCE combines GIGABYTE exclusive technologies, high-quality components, and innovations to deliver smooth gaming at ultimate 4K resolutions with virtually silent operation compared to conventional air cooling. Through GPU Gauntlet Sorting, the graphics card is forged with only top-notch GPU cores to bring out the maximum processing power for super overclocking.

With the pre-installed water cooling unit and radiator connected by a pair of SFP tubes, the GTX 980 WATERFORCE can be easily fitted into most chassis without the hassle of complicated installation or maintenance. The full-coverage cooling module covers not only the GPU, but also the VRAM and MOSFETs, actively dissipating all the heat in an efficient manner, and unlike competing solutions it requires no additional fan, for much-enhanced acoustic performance.

Lack of Async Compute on Maxwell Makes AMD GCN Better Prepared for DirectX 12

It turns out that NVIDIA's "Maxwell" architecture has an Achilles' heel after all, one which tilts the scales in favor of AMD's competing Graphics CoreNext architecture as better prepared for DirectX 12. "Maxwell" lacks support for async compute, one of the three highlight features of Direct3D 12, even as the GeForce driver "exposes" the feature's presence to apps. This came to light when game developer Oxide Games alleged that it was pressured by NVIDIA's marketing department to remove certain features from its "Ashes of the Singularity" DirectX 12 benchmark.

Async compute is a standardized API-level feature added to Direct3D by Microsoft, which allows an app to better exploit the number-crunching resources of a GPU by breaking down its graphics rendering tasks. Since NVIDIA's driver tells apps that "Maxwell" GPUs support it, Oxide Games simply created its benchmark with async compute support, but when it attempted to use it on Maxwell, it was an "unmitigated disaster." During the course of its developer correspondence with NVIDIA to try and fix the issue, it learned that "Maxwell" doesn't really support async compute at the bare-metal level, and that NVIDIA's driver bluffs its support to apps. NVIDIA instead started pressuring Oxide to remove the parts of its code that use async compute altogether, it alleges.
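The benefit of async compute is easiest to see with a toy scheduling model (purely illustrative, not the Direct3D 12 API or real GPU behavior): compute work fills execution slots that the graphics queue would otherwise leave idle, instead of waiting for the graphics queue to drain.

```python
# Toy model: a GPU with a fixed number of execution slots per cycle runs a
# graphics queue and a compute queue, each measured in abstract work units.

def cycles_serial(graphics, compute, slots):
    """Without async compute, the queues run back-to-back."""
    ceil = lambda work: -(-work // slots)  # ceiling division
    return ceil(graphics) + ceil(compute)

def cycles_async(graphics, compute, slots):
    """With async compute, both queues share the slots each cycle."""
    return -(-(graphics + compute) // slots)

# 100 units of graphics work, 60 of compute, 16 slots per cycle:
serial = cycles_serial(100, 60, 16)     # 7 + 4 = 11 cycles
overlapped = cycles_async(100, 60, 16)  # 10 cycles
```

In this model the overlap saves a cycle because neither queue fills every slot on its own; the real-world gain depends on how much idle capacity the graphics workload leaves behind.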

NVIDIA GRID 2.0 Launches With Broad Industry Support

NVIDIA today launched NVIDIA GRID 2.0 with broad industry support for its ability to virtually deliver even the most graphics-intensive applications to any connected device. Nearly a dozen Fortune 500 companies are completing trials of the NVIDIA GRID 2.0 beta. Major server vendors, including Cisco, Dell, HP and Lenovo, have qualified the GRID solution to run on 125 server models, including new blade servers. NVIDIA has worked closely with Citrix and VMware to bring a rich graphics experience to end-users on the industry's leading virtualization platforms.

NVIDIA GRID 2.0 delivers unprecedented performance, efficiency and flexibility improvements for virtualized graphics in enterprise workflows. Employees can work from almost anywhere without delays in downloading files, increasing their productivity. IT departments can equip workers with instant access to powerful applications, improving resource allocation. And data can be stored more securely by residing in a central server rather than individual systems.

Manli Announces its GeForce GTX 950 Graphics Card

Manli Technology Group Limited, a major manufacturer of graphics cards, mini-PCs and other components, today announced its latest mid-range 900-series graphics solution, the Manli GeForce GTX 950. MOBA games have become increasingly popular recently, and some gamers looking to unleash their true gaming potential want to upgrade their PC on a limited budget. The Manli GeForce GTX 950 will be the best choice.

Powered by the second-generation Maxwell architecture, with 768 CUDA cores enabled and running at 1038-1216 MHz, it provides 3x the performance of the previous-generation GTX 650. Backed by 2 GB of GDDR5 memory running at 6610 MHz, the Manli GeForce GTX 950 enhances your competitive edge in MOBA games, and provides OC headroom for users to overclock via the Manli TurboEngine software.