News Posts matching #GDDR5


AMD Announces New Radeon Embedded E9000 Series GPU Models

The AMD Embedded business provides SoCs and discrete GPUs that enable casino gaming companies to create immersive and beautiful graphics for the latest in casino gaming platforms, which are adopting the same high-quality motion graphics and experiences seen in modern consumer gaming devices. AMD Embedded provides casino and gaming customers a breadth of solutions to drive virtually any gaming system. The AMD Ryzen Embedded V1000 SoC brings CPU and GPU technology together in one package, providing the capability to run up to four 4K displays from one system. The AMD Ryzen Embedded R1000 SoC is a power efficient option while providing up to 4X better CPU and graphics performance per dollar than the competition.

Beyond SoCs, AMD also offers embedded GPUs to enable stunning, immersive visual experiences while supporting efficient thermal design power (TDP) profiles. AMD delivers three discrete GPU classes to customers with the AMD Embedded Radeon ultra-high-performance embedded GPUs, the AMD Embedded Radeon high-performance embedded GPUs and the AMD Embedded Radeon power-efficient embedded GPUs. These three classes enable a wide range of performance and power consumption, but most importantly offer features that the embedded industry demands including planned longevity, enhanced support and support for embedded operating systems.

NVIDIA GeForce GTX 1660 Super Releases on Oct 22nd

Chinese website ITHome has new info on the release of NVIDIA's GeForce GTX 1660 Super graphics cards. According to their website, the release is expected for October 22nd, which seems credible, considering NVIDIA always launches on a Tuesday. As expected, the card will be built around the Turing TU116 graphics processor, which also powers the GTX 1660 and GTX 1660 Ti. The shader count could be 1472, as NVIDIA wants to position the card between the GTX 1660 (1408 cores) and the GTX 1660 Ti (1536 cores). The memory size will be either 4 GB or 6 GB. Memory specifications are somewhat vague: it is rumored that NVIDIA's GTX 1660 Super will use GDDR6 chips, just like the GTX 1660 Ti; the plain GTX 1660 uses GDDR5 memory. Another possibility is that the shader count matches the GTX 1660, and the only difference (other than clock speeds) is that the GTX 1660 Super uses GDDR6 VRAM.

The Chinese pricing is expected around 1100 Yuan, which converts to about $150, surprisingly low, considering the GTX 1660 retails at $210 and the GTX 1660 Ti is priced at $275. Maybe NVIDIA is adjusting its pricing to preempt AMD's upcoming Radeon RX 5500/5600 Series. VideoCardz has separately confirmed this rumor with their sources at ASUS Taiwan, who are expected to launch at least three SKUs based on the new NVIDIA offering, among them the DUAL EVO, Phoenix and TUF3 series.

MSI Releases a Low-profile GeForce GTX 1650 Graphics Card

MSI released one of the first low-profile (half-height) graphics cards based on the GeForce GTX 1650. The card uses a monolithic aluminium heatsink that's ventilated by two 60 mm fans. Although there's just one row of display outputs, the cooler is over 1 slot thick, and so you get dual-slot I/O shields for both full-height and half-height (low-profile) configurations. The card relies on the PCI-Express 3.0 x16 slot for all its power, and sticks to NVIDIA-reference clock speeds of 1665 MHz boost, and 8.00 GHz (GDDR5-effective) memory. Based on the 12 nm "TU117" silicon, the GeForce GTX 1650 features 896 "Turing" CUDA cores, 56 TMUs, 32 ROPs, and a 128-bit wide GDDR5 memory interface, holding 4 GB of memory. Display outputs on this MSI low-profile card surprisingly lack DisplayPort; you only get an HDMI 2.0b and a dual-link DVI-D (lacking analog D-Sub pins).

AMD Memory Tweak Tool Lets You OC and Tweak AMD Radeon Memory Timings On-the-fly

Eliovp, who describes himself on GitHub as a Belgian [crypto] mining enthusiast, created what could go down as the best thing that happened to AMD Radeon users all decade. The AMD Memory Tweak Tool is a Windows- and Linux-based GUI utility that lets you not just overclock AMD Radeon graphics card memory on the fly, but also tweak its memory timings. Most timings apply live while your machine is running within the Windows/Linux GUI; some require memory retraining via a reboot, which means they can't be changed at this time, because rebooting reverts the timings to default. The author is trying to figure out a way to run memory training at runtime, which would let you change those timings, too, in the future. While you're at it, the tool also lets you play with GPU core frequency and fan control.

The AMD Memory Tweak tool supports both Windows and Linux (GUI), and works with all recent AMD Radeon GPUs with GDDR5 and HBM2 memory types. It requires Radeon Software Adrenalin 19.4.1 or later in case of Windows, or amdgpu-pro ROCM to be actively handling the GPU in case of Linux. The Linux version further has some dependencies, such as pciutils-dev, libpci-dev, build-essential, and git. The source-code for the utility is up on GitHub for you to inspect and test.

DOWNLOAD: AMD Memory Tweak Tool by Eliovp

ZOTAC Announces the ZBOX QX Series Mini PC Powered by Xeon and Quadro

ZOTAC Technology, a global manufacturer of innovation, today introduced the more capable ZBOX Q Series Mini Creator PC featuring the advanced NVIDIA Quadro GPU and powerful workstation focused Intel Xeon processor. The new addition to the ZBOX Q Series leverages the ZBOX Mini PC's sleek and minimal design without compromising the powerful hardware components inside. From stunning industrial design and advanced special effects, to complex scientific visualization and sophisticated data modeling, to creating and editing images and videos, the ZBOX Q Series enables limitless creations.

The new ZBOX Q Series features the industry-certified NVIDIA Quadro with up to 16 GB of GDDR5 memory. It is tested and certified as fully compatible hardware with many major professional design applications. The new Q Series models come equipped with an Intel Xeon processor to deliver fast and responsive performance.

AMD Readies Radeon RX 640, an RX 550X Re-brand

One of our readers discovered an interesting entry in the INF file of AMD's Adrenalin 19.4.3 graphics drivers. It includes two instances of "Radeon RX 640," carrying the same device ID as the Radeon RX 550X from the current generation. The branding flies in the face of reports suggesting that with its next-generation "Navi" GPUs, AMD could refresh its client-segment nomenclature to follow the "Radeon RX 3000" series, but it's possible that the RX 600 series was carved out to re-brand the existing "Polaris" based low-end chips one step down (i.e. the RX 550X re-branding as RX 640, the RX 560 possibly as RX 650, etc.).

The move to create the RX 600 series could also be driven by AMD's need to contain all "Navi" based SKUs in the RX 3000 series, and re-branded "Polaris" based ones in the RX 600, so that, at least initially, consumers aren't led to believe they're buying a re-branded "Polaris" SKU when opting for an RX 3000-series graphics card. It's also possible that AMD may not create low-end chips based on "Navi" initially, and will instead focus on the performance segment with the highest sale volumes among serious gamers, the $200-400 price range. Based on the 14 nm "Lexa" silicon, the RX 550X is equipped with 640 stream processors, 32 TMUs, 16 ROPs, and 2 GB of GDDR5 memory across a 128-bit wide memory bus. Given the performance gains expected from Intel's Gen11 "Ice Lake" iGPU and AMD's own refreshed "Picasso" APU, the RX 640 could at best be a cheap iGPU replacement for systems that lack one.
Image Credit: Just Some Noise (TechPowerUp Forums)

Manli Introduces its GeForce GTX 1650 Graphics Card Lineup

Manli Technology Group Limited, the major graphics card and other components manufacturer, today announced the affordable new member of the 16 series family - the Manli GeForce GTX 1650. The Manli GeForce GTX 1650 is powered by the award-winning NVIDIA Turing architecture. It is also equipped with 4 GB of GDDR5 memory on a 128-bit memory controller, and 896 CUDA cores with a core frequency of 1485 MHz that can dynamically boost up to 1665 MHz. Moreover, the Manli GeForce GTX 1650 has low power consumption of only 75 W, with no external power supply required.

NVIDIA GeForce GTX 1650 Released: TU117, 896 Cores, 4 GB GDDR5, $150

NVIDIA today rolled out the GeForce GTX 1650 graphics card at USD $149.99. Like its other GeForce GTX 16-series siblings, the GTX 1650 is derived from the "Turing" architecture, but without RTX real-time raytracing hardware, such as RT cores or tensor cores. The GTX 1650 is based on the 12 nm "TU117" silicon, which is the smallest implementation of "Turing." Measuring 200 mm² (die area), the TU117 crams 4.7 billion transistors. It is equipped with 896 CUDA cores, 56 TMUs, 32 ROPs, and a 128-bit wide GDDR5 memory interface, holding 4 GB of memory clocked at 8 Gbps (128 GB/s bandwidth). The GPU is clocked at 1485 MHz, and the GPU Boost at 1665 MHz.
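The quoted 128 GB/s figure follows directly from the memory data rate and bus width; a minimal sanity-check sketch (the function name is ours, not from NVIDIA's spec sheet):

```python
# bandwidth (GB/s) = per-pin data rate (Gbps) * bus width (bits) / 8 bits-per-byte
def memory_bandwidth_gbs(data_rate_gbps: float, bus_width_bits: int) -> float:
    return data_rate_gbps * bus_width_bits / 8

# GTX 1650: 8 Gbps GDDR5 on a 128-bit bus
print(memory_bandwidth_gbs(8, 128))  # prints 128.0, matching the quoted 128 GB/s
```

The same formula reproduces other figures in these posts, e.g. the GTX 1660's 8 Gbps chips on a 192-bit bus work out to 192 GB/s.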

The GeForce GTX 1650 at its given price is positioned competitively with the Radeon RX 570 4 GB from AMD. NVIDIA has been surprisingly low-key about this launch, leaving it up to the partners not just to drive the launch, but also to sample reviewers. There are no pre-launch reviewer drivers provided by NVIDIA, and hence we don't have a launch-day review for you yet. We do have GTX 1650 graphics cards, namely the Palit GTX 1650 StormX, MSI GTX 1650 Gaming X, and ASUS ROG GTX 1650 Strix OC.

Update: Catch our reviews of the ASUS ROG Strix GTX 1650 OC and MSI GTX 1650 Gaming X

Colorful Announces GeForce GTX 1650 4GB Ultra Graphics Card

Colorful Technology Company Limited, professional manufacturer of graphics cards, motherboards and high-performance storage solutions is proud to announce the launch of its latest graphics card for the entry-level gaming market. The COLORFUL iGame GeForce GTX 1650 Ultra 4G brings NVIDIA's Turing graphics architecture to the masses, and COLORFUL brings the best out of the GPU thanks to its years of experience of working with gamers.

For new gamers and those upgrading from integrated graphics who want a taste of what's to come, the COLORFUL iGame GeForce GTX 1650 Ultra 4G is a prime choice to start with. It features the latest NVIDIA GPU technology: the 12 nm Turing architecture that brings with it the best of GeForce, including GeForce Experience, NVIDIA Ansel, G-Sync and G-Sync Compatible monitor support, Game Ready Drivers and so much more. COLORFUL has given the iGame GTX 1650 Ultra 4G a performance boost via a One-key OC button, so you can get extra performance without tinkering.

NVIDIA to Flesh out Lower Graphics Card Segment with GeForce GTX 1650 Ti

It seems NVIDIA's partners are gearing up for yet another launch, sometime after the GTX 1650 finally becomes available. EEC listings have made it clear that partners are working on another TU117 variant with improved performance, sitting between the GTX 1650 and the GTX 1660, which should bring the fight to AMD's Radeon RX 580. Of course, with the GTX 1660 sitting pretty at a $219 price, this leaves anywhere between the GTX 1650's $149 and the GTX 1660's $219 for the GTX 1650 Ti to fill. With the GTX 1660 being on average 13% faster than the RX 580, it makes sense for NVIDIA to look for another SKU to cover that large pricing gap between the 1650 and the 1660.

It's speculated that the GeForce GTX 1650 Ti could feature 1024 CUDA cores, 32 ROPs and 64 TMUs. These should be paired with the same 4 GB of GDDR5 VRAM running across a 128-bit bus at the same 8000 MHz effective clock speed as the GTX 1650, delivering a bandwidth of 128 GB/s. Should NVIDIA pull off the feat of keeping the same 75 W TDP between its Ti and non-Ti GTX 1650 (as it did with the GTX 1660), that could mean a 75 W graphics card would be contending with AMD's 185 W RX 580 - a mean, green feat in the power efficiency arena. A number of SKUs for the GTX 1650 Ti have been leaked on ASUS' side of the field, which you can find after the break.

GAINWARD, PALIT GeForce GTX 1650 Pictured, Lack DisplayPort Connectors

In the build-up to NVIDIA's GTX 1650 release, more and more cards are being revealed. GAINWARD's and PALIT's designs won't bring much in the way of interesting PCB differences to be perused, since the PCBs are exactly the same. The GAINWARD Pegasus and the PALIT StormX differ only in shroud design, and both cards carry the same TU117 GPU paired with 4 GB of GDDR5 memory.

ZOTAC GeForce GTX 1650 Pictured: No Power Connector

Here are some of the first clear renders of an NVIDIA GeForce GTX 1650 graphics card, this particular one from ZOTAC. The GTX 1650, slated for April 22, will be the most affordable GPU based on the "Turing" architecture, when launched. The box art confirms this card features 4 GB of GDDR5 memory. The ZOTAC card is compact and SFF-friendly, is no longer than the PCIe slot itself, and is 2 slots-thick. Its cooler is a simple fan-heatsink with an 80 mm fan ventilating an aluminium heatsink with radially-projecting fins. The card can make do with the 75W power drawn from the PCIe slot, and has no additional power connectors. Display outputs include one each of DisplayPort 1.4, HDMI 2.0b, and a dual-link DVI-D.

NVIDIA GeForce GTX 1650 Availability Revealed

NVIDIA is expected to launch its sub-$200 GeForce GTX 1650 graphics card on the 22nd of April, 2019. The card was earlier expected to launch towards the end of April. With it, NVIDIA will introduce the 12 nm "TU117," its smallest GPU based on the "Turing" architecture. The GTX 1650 could replace the current GTX 1060 3 GB, and may compete with AMD offerings in this segment, such as the Radeon RX 570 4 GB, by being Full HD-capable, if not letting you max your game settings out at that resolution. The card could ship with 4 GB of GDDR5 memory.

ZOTAC Unveils its GeForce GTX 1660 Series

ZOTAC Technology, a global manufacturer of innovation, is pleased to expand the GeForce GTX 16 series with the ZOTAC GAMING GeForce GTX 1660 series featuring GDDR5 memory and the NVIDIA Turing Architecture.

Founded in 2017, ZOTAC GAMING is the pioneer movement that comes forth from the core of the ZOTAC brand that aims to create the ultimate PC gaming hardware for those who live to game. It is the epitome of our engineering prowess and design expertise representing over a decade of precision performance, making ZOTAC GAMING a born leading force with the goal to deliver the best PC gaming experience. The logo shows the piercing stare of the robotic eyes, where behind it, lies the strength and future technology that fills the ego of the undefeated and battle experienced.

NVIDIA Launches the GeForce GTX 1660 6GB Graphics Card

NVIDIA today launched the GeForce GTX 1660 6 GB graphics card, its successor to the immensely popular GTX 1060 6 GB. With prices starting at $219.99, the GTX 1660 is based on the same 12 nm "TU116" silicon as the GTX 1660 Ti launched last month, with fewer CUDA cores and a slower memory interface. NVIDIA carved the GTX 1660 out by disabling 2 out of 24 "Turing" SMs on the TU116, resulting in 1,408 CUDA cores, 88 TMUs, and 48 ROPs. The company is using 8 Gbps GDDR5 memory instead of 12 Gbps GDDR6, which makes its memory sub-system 33 percent slower. The GPU is clocked at 1530 MHz, with 1785 MHz boost, marginally higher than the GTX 1660 Ti's clocks. The GeForce GTX 1660 is a partner-driven launch, meaning that there won't be any reference-design cards, although NVIDIA made sure every AIC partner has at least one product selling at the baseline price of $219.99.

Read TechPowerUp Reviews: Zotac GeForce GTX 1660 | EVGA GeForce GTX 1660 XC Ultra | Palit GeForce GTX 1660 StormX OC | MSI GTX 1660 Gaming X

Update: We have updated our GPU database with all GTX 1660 models announced today, so you can easily get an overview of what has been released.

EVGA and GIGABYTE GeForce GTX 1660 Graphics Cards Pictured

Here are some of the first pictures of EVGA's and GIGABYTE's upcoming GeForce GTX 1660 graphics cards reportedly slated for launch later this week. It should come as no surprise that these cards resemble the companies' GTX 1660 Ti offerings, since they're based on the same 12 nm "TU116" silicon, with fewer CUDA cores. The underlying PCBs could be slightly different as the GTX 1660 uses older generation 8 Gbps GDDR5 memory instead of 12 Gbps GDDR6. The "TU116" silicon is configured with 1,408 CUDA cores out of the 1,536 physically present; the memory amount is 6 GB, across a 192-bit wide memory bus. The GTX 1660 baseline price is reportedly USD $219, and the card replaces the GTX 1060 6 GB from NVIDIA's product stack.

EVGA is bringing two designs to the market, a short-length triple-slot card with a single fan; and a more conventional longer card with 2-slot, dual-fan design. The baseline "Black" card could be offered in the shorter design; while the top-tier XC Ultra could be exclusive to the longer design. GIGABYTE, on the other hand, has two designs, a shorter-length dual-fan; and a longer-length triple-fan. Both models are dual-slot. The baseline SKU will be restricted to the shorter board design, while premium Gaming OC SKUs could come in the longer board design.

Details on GeForce GTX 1660 Revealed Courtesy of MSI - 1408 CUDA Cores, GDDR5 Memory

Details on NVIDIA's upcoming mainstream GTX 1660 graphics card have been revealed, which will help put its graphics-crunching prowess up to scrutiny. The new graphics card from NVIDIA slots in below the recently released GTX 1660 Ti (which provides roughly 5% better performance than NVIDIA's previous-generation GTX 1070 graphics card) and above the yet-to-be-released GTX 1650.

The 1408 CUDA cores in the design amount to a 9% reduction in computing cores compared to the GTX 1660 Ti, but most of the savings (and performance impact) likely come from the 6 GB of 8 Gbps GDDR5 memory this card is outfitted with, compared to the GTX 1660 Ti's GDDR6 implementation. The amount of cut GPU resources from NVIDIA is so low that we imagine these chips won't come from harvesting defective dies as much as from actually fusing off CUDA cores present in the TU116 chip. Using GDDR5 is still cheaper than the GDDR6 alternative (for now), and this also avoids straining the GDDR6 supply (if that was ever a concern for NVIDIA).

NVIDIA GeForce GTX 1650 Memory Size Revealed

NVIDIA's upcoming entry-mainstream graphics card based on the "Turing" architecture, the GeForce GTX 1650, will feature 4 GB of GDDR5 memory, according to tech industry commentator Andreas Schilling. Schilling also put out box-art by NVIDIA for this SKU. The source does not mention memory bus width. In related news, Schilling also mentions NVIDIA going with 6 GB as the memory amount for the GTX 1660. NVIDIA is expected to launch the GTX 1660 in mid-March, and the GTX 1650 in late April.

Intel Readies Crimson Canyon NUC with 10nm Core i3 and AMD Radeon

Intel is giving final touches to a "Crimson Canyon" fully-assembled NUC desktop model which combines the company's first 10 nm Core processor, and AMD Radeon discrete graphics. The NUC8i3CYSM desktop from Intel packs a Core i3-8121U "Cannon Lake" SoC, 8 GB of dual-channel LPDDR4 memory, and discrete AMD Radeon RX 540 mobile GPU with 2 GB of dedicated GDDR5 memory. A 1 TB 2.5-inch hard drive comes included, although you also get an M.2-2280 slot with both PCIe 3.0 x4 (NVMe) and SATA 6 Gbps wiring. The i3-8121U packs a 2-core/4-thread CPU clocked up to 3.20 GHz and 4 MB of L3 cache; while the RX 540 packs 512 stream processors based on the "Polaris" architecture.

The NUC8i3CYSM offers plenty of modern connectivity, including 802.11ac + Bluetooth 5.0 powered by an Intel Wireless-AC 9560 WLAN card, wired 1 GbE from an Intel i219-V controller, consumer IR receiver, an included beam-forming microphone, an SDXC card reader, and stereo HD audio. USB connectivity includes four USB 3.1 type-A ports including a high-current port. Display outputs are care of two HDMI 2.0b, each with 7.1-channel digital audio passthrough. The company didn't reveal pricing, although you can already read a performance review of this NUC from the source link below.

Sapphire Outs an RX 570 Graphics Card with 16GB Memory, But Why?

Sapphire has reportedly developed an odd-ball Radeon RX 570 graphics card, equipped with 16 GB of GDDR5 memory, double the amount the SKU is normally capable of. The card is based on the company's NITRO+ board design common to the RX 570 through RX 590 SKUs, and uses sixteen 8 Gbit GDDR5 memory chips that are piggybacked (i.e. chips on both sides of the PCB). When Chinese tech publication MyDrivers reached out to Sapphire for an explanation behind such a bizarre contraption, the Hong Kong-based AIB partner's response was fascinating.

Sapphire in its response said that it wanted to bolster the card's crypto-currency mining power, and that giving the "Polaris 20" GPU additional memory would improve its competitiveness against ASIC miners on the Cuckoo Cycle algorithm. The algorithm can load up video memory anywhere between 5.5 GB and 11 GB, so giving the RX 570 16 GB of it was Sapphire's logical next step. Of course, Cuckoo Cycle is being defeated time and again by currency curators. This card will be a stopgap for miners until ASIC mining machines with expanded memory come out, or the proof-of-work systems are significantly changed.

Hands On with a Pack of RTX 2060 Cards

NVIDIA late Sunday announced the GeForce RTX 2060 graphics card at $349. With performance rivaling the GTX 1070 Ti and RX Vega 56 on paper, and in some cases even the GTX 1080 and RX Vega 64, the RTX 2060 in its top-spec trim with 6 GB of GDDR6 memory could go on to be NVIDIA's best-selling product from its "Turing" RTX 20-series. At NVIDIA's CES 2019 booth, we went hands-on with a few of these cards, beginning with NVIDIA's de-facto reference-design Founders Edition. This card indeed feels smaller and lighter than the RTX 2070 Founders Edition.

The Founders Edition still doesn't compromise on looks or build quality, and is bound to look slick in your case, provided you manage to find one in retail. The RTX 2060 launch will be dominated by NVIDIA's add-in card partners, who will dish out dozens of custom-design products. Although NVIDIA didn't announce them, there are still rumors of other variants of the RTX 2060 with lesser memory amounts, and GDDR5 memory. You get the full complement of display connectivity, including VirtualLink.

GDDR6 Memory Costs 70 Percent More than GDDR5

The latest GDDR6 memory standard, currently implemented by NVIDIA in its GeForce RTX 20-series graphics cards, commands a steep premium. According to a 3DCenter.org report citing list-prices sourced from electronics components wholesaler DigiKey, 14 Gbps GDDR6 memory chips from Micron Technology cost over 70 percent more than common 8 Gbps GDDR5 chips of the same density, from the same manufacturer. Besides obsolescence, oversupply could be impacting GDDR5 chip prices.

Although GDDR6 is available in marginally cheaper 13 Gbps and 12 Gbps trims, NVIDIA has only been sourcing 14 Gbps chips. Even the company's upcoming RTX 2060 performance-segment graphics card is rumored to implement 14 Gbps chips in variants that feature GDDR6. The sheer disparity in pricing between GDDR6 and GDDR5 could explain why NVIDIA is developing cheaper GDDR5 variants of the RTX 2060. Graphics card manufacturers can save around $22 per card by using six GDDR5 chips instead of GDDR6.
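The quoted $22-per-card saving is consistent with the 70 percent premium and six chips per card. A rough sketch of the arithmetic; the per-chip GDDR5 price below is our own assumption chosen to match those two figures, not a quoted price:

```python
# Illustrative BOM estimate. GDDR5_CHIP_PRICE is an assumed value, picked so
# that a ~70% GDDR6 premium over six chips lands near the reported $22 saving.
GDDR5_CHIP_PRICE = 5.25   # assumed USD per 8 Gbit GDDR5 chip (hypothetical)
GDDR6_PREMIUM = 0.70      # GDDR6 costs ~70% more, per the 3DCenter.org report
CHIPS_PER_CARD = 6        # six chips populate a 192-bit bus (32 bits each)

savings = CHIPS_PER_CARD * GDDR5_CHIP_PRICE * GDDR6_PREMIUM
print(round(savings, 2))  # prints 22.05
```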

Sapphire Outs Radeon RX 590 Nitro+ OC Sans "Special Edition"

Sapphire debuted its Radeon RX 590 series last month with the RX 590 Nitro+ Special Edition, which at the time was advertised as a limited-edition SKU. The company over the Holiday weekend updated its product stack to introduce a new mass-production SKU, the RX 590 Nitro+ OC, minus the "Special Edition" branding. There are only cosmetic changes between the two SKUs. Sapphire's favorite shade of blue on the Special Edition SKU makes way for matte black on the cooler shroud, as do the accents on the back-plate, which are now black instead of blue. The fan impellers are opaque matte black instead of frosty and translucent.

Thankfully, Sapphire hasn't changed the spec that matters - the factory overclock. The card still ships with a 1560 MHz engine clock (boost), and 8.40 GHz (GDDR5-effective) memory, and a "quiet" second BIOS that dials down the clocks to 1545 MHz boost and 8.00 GHz memory. The underlying PCB is unchanged, too, drawing power from a combination of 8-pin and 6-pin PCIe power connectors, and conditioning it with a 6+1 phase VRM. Display outputs include two each of DisplayPort 1.4 and HDMI 2.0, and a dual-link DVI-D. The company didn't reveal pricing, although we expect it to be marginally lower than that of the Special Edition SKU.

NVIDIA GeForce RTX 2060 to Ship in Six Variants Based on Memory Size and Type

NVIDIA drew consumer ire for differentiating its GeForce GTX 1060 into two variants based on memory, the GTX 1060 3 GB and GTX 1060 6 GB, with the two also featuring different GPU core-configurations. The company plans to double-down - or should we say, triple-down - on its sub-branding shenanigans with the upcoming GeForce RTX 2060. According to VideoCardz, citing a GIGABYTE leak about regulatory filings, NVIDIA could be carving out not two, but six variants of the RTX 2060!

There are at least two parameters that differentiate the six (that we know of, anyway): memory size and memory type. There are three memory sizes, 3 GB, 4 GB, and 6 GB. Each of the three memory sizes comes in two memory types, the latest GDDR6 and the older GDDR5. Based on the six RTX 2060 variants, GIGABYTE could launch up to thirty-nine SKUs. When you add up similar SKU counts from NVIDIA's other AIC partners, there could be upward of 300 RTX 2060 graphics card models to choose from. It won't surprise us if, in addition to memory size and type, GPU core-configurations also vary between the six RTX 2060 variants, compounding consumer confusion. The 12 nm "TU106" silicon already has "A" and "non-A" ASIC classes, so there could be as many as twelve new device IDs in all! The GeForce RTX 2060 is expected to debut in January 2019.
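The twelve-device-ID figure is simple combinatorics over the rumored parameters; a quick sketch enumerating them (the labels are ours):

```python
from itertools import product

# Rumored RTX 2060 parameter space: three memory sizes, two memory types,
# and the TU106's "A"/"non-A" ASIC classes.
sizes = ["3 GB", "4 GB", "6 GB"]
mem_types = ["GDDR5", "GDDR6"]
asic_classes = ["A", "non-A"]

variants = list(product(sizes, mem_types, asic_classes))
print(len(variants))  # prints 12, the possible device-ID count
```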

AMD Radeon RX 590 Launch Price, Other Details Revealed

AMD is very close to launching its new Radeon RX 590 graphics card, targeting a middle-of-market segment that sells in high volumes, particularly with Holiday around the corner. The card is based on the new 12 nm "Polaris 30" silicon, which has the same exact specifications as the "Polaris 20" silicon, and the original "Polaris 10," but comes with significantly higher clock-speed headroom thanks to the new silicon fabrication process, which AMD and its partners will use to dial up engine clock speed by 10-15% over those of the RX 580. While the memory is still 8 Gbps 256-bit GDDR5, some partners will ship overclocked memory.

According to a slide deck seen by VideoCardz, AMD is setting the baseline price of the Radeon RX 590 at USD $279.99, which is about $50 higher than RX 580 8 GB, and $40 higher than the price the RX 480 launched at. AMD will add value to that price by bundling three AAA games, including "Tom Clancy's The Division 2," "Devil May Cry 5," and "Resident Evil 2." The latter two titles are unreleased, and the three games together pose a $120-150 value. AMD will also work with monitor manufacturers to come up with graphics card + AMD FreeSync monitor bundles.