News Posts matching #GPU


Bitspower at COMPUTEX 2019: New CPU, GPU Blocks and Conceptual Products

Bitspower at COMPUTEX 2019 showcased new products as well as conceptual, advanced looks at products that are still in development. The company showcased GPU waterblocks for both NVIDIA's RTX 20-series graphics cards and AMD's current top-of-the-line Radeon VII. On the GPU waterblock side of things, they also showcased an in-development block with a rugged texture and design that carries Bitspower's logo, which would certainly look great on a vertically-mounted GPU.

BITFENIX at COMPUTEX 2019: Dawn TG, Nova Mesh 16, Saber PC Chassis Showcased

At COMPUTEX 2019, BITFENIX took the opportunity to showcase their latest case designs built for the conscious enthusiast. The Dawn TG is the company's take on the RGB market, with a stylized, ARGB front fascia that's eerily reminiscent of Oblivion's Tet (the movie, not the game). The case is available in both black and white, and features smoked tempered glass side panels for that premium feel.

The Nova Mesh TG Black brings slightly more gaudiness to its RGB design, which is clearly visible behind the tempered glass side panel. The case's front fascia, as the name implies, is covered in mesh so as to maximize airflow to interior components.

EA Reveals Next-Generation Hair Rendering for Frostbite

In the gaming industry, everything revolves around game graphics. GPUs are integrating new technologies such as ray tracing, and there is a ton of software dedicated to making in-game visuals look as realistic as possible. Electronic Arts, one of the game publishers dedicated to releasing state-of-the-art AAA games, today revealed an update to DICE's Frostbite engine.

DICE's Frostbite engine powers many of today's AAA titles, such as Battlefield V, Anthem and Star Wars Battlefront, and today it received a big update: EA revealed new capabilities for rendering the hair of in-game characters with near-lifelike realism. This is pretty impressive considering that hair is notoriously difficult to model artificially. As one of the most interesting topics for game developers, good hair animation is extremely important to achieving the lifelike look newer AAA titles are targeting.

AMD "Navi" Features 8 Streaming Engines, Possible ROP Count Doubling?

AMD's 7 nm "Navi 10" silicon may finally address two architectural shortcomings of its performance-segment GPUs, memory bandwidth, and render-backends (deficiency thereof). The GPU almost certainly features a 256-bit GDDR6 memory interface, bringing about a 50-75 percent increase in memory bandwidth over "Polaris 30." According to a sketch of the GPU's SIMD schematic put out by KOMACHI Ensaka, Navi's main number crunching machinery is spread across eight shader engines, each with five compute units (CUs).

Five CUs per shader engine across eight shader engines, assuming each CU continues to pack 64 stream processors, works out to 2,560 stream processors on the silicon. This arrangement is in stark contrast to the "Hawaii" silicon from 2013, which crammed 10 CUs per shader engine across four shader engines to achieve the same 2,560-SP count on the Radeon R9 290. The "Fiji" silicon that followed "Hawaii" stuck to the 4-shader-engine arrangement. Interestingly, both these chips featured four render-backends per shader engine, working out to 64 ROPs. AMD's decision to go with 8 shader engines raises hopes for the company doubling ROP counts over "Polaris," to 64, by packing two render backends per shader engine. AMD unveils Navi in its May 27 Computex keynote, followed by a possible early-July launch.
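For those who want to double-check the math, here's a quick back-of-the-envelope sketch. The per-CU and per-render-backend figures (64 stream processors per CU, 4 ROPs per render backend) are GCN conventions assumed to carry over from previous generations, not confirmed Navi specifications:

```python
# Back-of-the-envelope check of the rumored "Navi 10" configuration.
# Assumes GCN-style building blocks carried over from earlier generations.

shader_engines = 8
cus_per_engine = 5
sps_per_cu = 64                     # GCN convention, assumed to carry over

stream_processors = shader_engines * cus_per_engine * sps_per_cu
print(stream_processors)            # 2560, matching the R9 290's SP count

render_backends_per_engine = 2      # speculated in the article
rops_per_render_backend = 4         # GCN convention, assumed
rops = shader_engines * render_backends_per_engine * rops_per_render_backend
print(rops)                         # 64, i.e. double Polaris' 32 ROPs
```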

After a 4 Year Leave, AMD Rejoins the Fortune 500 List

The Fortune 500 lists the top 500 US companies in terms of revenue. These are the most significant movers in the markets, be they of real estate, mining, hedge fund, or semiconductor nature (among others). AMD was "kicked" out of the Fortune 500 back in 2015, when the company was struggling with its Bulldozer-based processors and had a shrinking market share - and thus revenue - which Zen came along to save. Now, thanks to the efforts of everyone involved in the company, they've been listed again, at the #460 spot.

The company has been winning minds and wallets when it comes to its CPU solutions in both the mainstream and professional segments, and is making very important forays into the HPC world mostly thanks to the strength of its CPU lineup - which, in some cases, such as with the Frontier supercomputer (expected to be the world's fastest), can bring wins in the GPU computing department as well. For comparison's sake, Intel stands at a commanding #43, while NVIDIA enjoys a comfortable #268 spot.

AMD to Detail Zen 2, Navi Architectures Come Hot Chips in August

The Hot Chips conference is one of the leading-edge venues for discussion of new silicon-bound technologies, and AMD will, as usual, take to its stage to detail progress across its technology portfolio. The conference's organizers have already confirmed a number of participants in the schedule, which includes the likes of Intel, Microsoft, Alibaba, NVIDIA, Tesla and, of course, AMD.

AMD will be delivering several sessions. The first, on August 19th, is simply titled "Zen 2", and will deal with the underpinnings of the Zen 2 microarchitecture, which will be common to all of AMD's upcoming CPU product lines. A second talk will be held on the same day by AMD CEO Lisa Su herself, titled "Delivering the Future of High-Performance Computing with System, Software and Silicon Co-Optimization". On the next day, August 20th, another AMD session is simply titled "7 nm Navi GPU", and we expect it to follow in the footsteps of the Zen 2 talk. So, with AMD diving deep into both architectures come August, it's extremely likely the company will have launched both product lines by then. Fingers crossed. You can find the abstract of AMD CEO Lisa Su's talk after the break.

AMD Confirms Launch of Next-gen Ryzen, EPYC and Navi for Q3

During AMD's annual shareholder meeting today, AMD president and CEO Dr. Lisa Su confirmed the launch of next-generation AMD Ryzen and EPYC CPUs and Navi GPUs for the third quarter of this year. The expected products will be manufactured on TSMC's 7 nm process and will use new and improved architectures.

Ryzen 3000-series CPUs are rumored to offer up to 16 cores in Ryzen 9 SKUs, 12 cores in Ryzen 7 SKUs, and 8 cores in Ryzen 5 SKUs. EPYC server CPUs will be available in models with up to 64 cores. All of the new CPUs will use AMD's "Zen 2" architecture, which promises better IPC and, as rumors suggest for consumer models, plenty of overclocking headroom. Navi GPUs are the new 7 nm GPUs expected to be very competitive with NVIDIA's Turing series in both price and performance, hopefully integrating new technologies such as dedicated ray tracing cores for higher frame rates in ray-tracing-enabled games. No next-generation Threadripper launch date was mentioned, so we don't yet know when, or if, that will land.

AMD Readying Commemorative 50th Anniversary Editions of Radeon VII, Ryzen 7 2700X

AMD is going to celebrate its 50th anniversary in a big way, with commemorative editions of both its highest-performance GPU and CPU: the Radeon VII and Ryzen 7 2700X, respectively. This isn't so surprising - after all, if partners are readying their own special editions, it would be remiss of the red team not to do the same. It's a time to lavish the lineup with something that marks the fifty years of the company's existence - alongside its bright (and not so bright) spots.

The Radeon VII will apparently turn the red up to 11, with a red-colored shroud and LED lighting - and apart from that, we simply don't know. It's speculated the Ryzen 7 2700X will be packaged in a prettier box, with increased core clocks to boot - perhaps through core binning and a higher maximum boost threshold. It would make sense for AMD to do the same with the Radeon VII - celebrating a 50th anniversary with increased performance across the board seems an easy conclusion to come to. We'll just have to wait a few more days, though - apparently, the company will be introducing these products next week, come April 29th. If you want a piece of AMD's history, this could be your chance - albeit a limited one when it comes to actual production numbers for this limited-edition GPU and CPU.

Razer Supercharges Windows Laptops and MacBooks with the New Core X Chroma

Razer, the leading global lifestyle brand for gamers, announced today the latest external graphics enclosure (eGPU), the Razer Core X Chroma. The Razer Core X Chroma expands its features to now include a 700 W power supply, USB and Gigabit Ethernet ports, and Razer Chroma lighting.

Designed to work with a wide array of Thunderbolt 3 Windows laptops and MacBooks, the Razer Core eGPU line will give mobile warriors the ability to tap into the massive graphics processing power of a desktop gaming machine with minimal hassle. This allows users to play the most demanding games and create world-class content at blazing speeds. The Razer Core X Chroma now includes a 700W power supply, USB and Gigabit Ethernet ports, and Razer Chroma lighting to meet the needs of the most hardcore gamers and creators.

NVIDIA Also Releases Tech Demos for RTX: Star Wars, Atomic Heart, Justice Available for Download

We've seen NVIDIA's move to provide RTX effects on older, non-RT-capable hardware today being met with what the company was certainly expecting: a cry of dismay from users who now get to see exactly what their non-Turing NVIDIA hardware is capable of. The move from NVIDIA could be framed as a way to democratize access to RTX effects via Windows DXR, enabling users of its GTX 16-series and 10-series GPUs to take a look at the benefits of raytracing; but also as an upgrade incentive for those who now see how their performance lags without the new specialized Turing cores to handle the added burden.

Whichever side of the fence you're on, however, NVIDIA has provided users with one more raytraced treat today - three of them, in fact, in the form of three previously-shown tech demos. The Star Wars tech demo (download) is certainly the best known, with its study of reflections on Captain Phasma's breastplate. Atomic Heart (download) is another one that makes use of RTX for reflections and shadows, while Justice (download) adds caustics to that equation. If you have a Turing graphics card, you can test these demos in their full glory, with added DLSS for improved performance. If you're on Pascal, you won't have that performance-enhancing mode available, and will have to slog through the raytracing workload without dedicated hardware. Follow the embedded links for our direct downloads of these tech demos.

Glued Die on ASUS ROG Strix GeForce RTX 2080 Ti Affects Some Aftermarket Cooling Solutions

Update April 4th: This post has been corrected based on new information provided by ASUS, EKWB, and other parties. The original story mentioned a silent change to the glue used on the PCB, which we now believe is not the case; ASUS is not to blame.

Update April 5th: ASUS has confirmed to us that there has been no PCB change (in terms of components and their heights); it's only a matter of tolerances, due to the glue being liquid during production.

ASUS has glued the GPU die to the PCB for many generations, which helps ensure contact and avoids microfractures in the solder balls from physical force or thermal expansion. The nature of this glue, typically an epoxy resin, means that aftermarket cooling solutions, such as full-cover or die-only water blocks, have to account for it in the area immediately around the die. Previous graphics cards had no issue here, because the mounting holes were far away from the GPU die. With the RTX 2080 Ti and its very large GPU chip this has changed, and there are only a few millimeters of space left. If a waterblock uses wider standoffs than the design merits, or if the glue spreads out farther than intended, the result can be poor or inconsistent contact between waterblock and GPU, which in turn can lead to worse thermal performance than ideal.

This time, EK Water Blocks alerted us that the ROG Strix GeForce RTX 2080 Ti has poor contact and fitting issues with their GPU water block for the card, as seen in the images below provided by their customer T. Hilal: the glue interferes with the four standoffs surrounding the package. EK recommends removing these standoffs to ensure a good fit and thermal paste spread, and says this does not noticeably affect water block performance in their internal testing. On previous such occasions, EK and others have had to come up with a second version of the block for added compatibility; it remains to be seen whether the ROG Strix GeForce RTX 2080 Ti will merit similar treatment this time around. As an external reference, Phanteks has separately confirmed to us that their water block remains compatible.

AMD President and CEO Dr. Lisa Su to Deliver COMPUTEX 2019 CEO Keynote

Taiwan External Trade Development Council (TAITRA) announced today that the 2019 COMPUTEX International Press Conference will be held with a Keynote by AMD President and CEO Dr. Lisa Su. The 2019 COMPUTEX International Press Conference & CEO Keynote is scheduled for Monday, May 27 at 10:00 AM in Room 201 of the Taipei International Convention Center (TICC) in Taipei, Taiwan with the keynote topic "The Next Generation of High-Performance Computing".

"COMPUTEX, as one of the global leading technology tradeshows, has continued to advance with the times for more than 30 years. This year, for the first time, a keynote speech will be held at the pre-show international press conference," said Mr. Walter Yeh, President & CEO, TAITRA, "Dr. Lisa Su received a special invitation to share insights about the next generation of high-performance computing. We look forward to her participation attracting more companies to participate in COMPUTEX, bringing the latest industry insights, and jointly sharing the infinite possibilities of the technology ecosystem on this global stage."

Intel Hires NVIDIA's Tom Petersen in Latest Move to Bolster GPU Division

Anyone remotely familiar with NVIDIA knows of their now-erstwhile distinguished engineer Thomas A. Petersen, better known simply as Tom Petersen or TAP. He was a delight to work with as far as the tech media is concerned, TechPowerUp included, and was a source of technical information on NVIDIA microarchitectures as well as features targeting general consumers and prosumers alike. The last few keynote presentations have had a visible lack of Tom on screen, and even in person, to the point where we were discussing internally whether he had taken on a more "behind the scenes" role at the company. As it turns out, Tom is the latest in line to have attracted the eyes of Intel as the latter gears up to the challenge of gaining market share in the discrete GPU business in the years to come.

Tom confirmed on his Facebook page this past Friday that he was indeed leaving NVIDIA, with March 29 being his last working day there. He was quick to note his unemployment status on his LinkedIn profile in a humorous manner as well, though that surely would not last long: news soon broke from HotHardware, and then Gamers Nexus, who both independently verified through their contacts at Intel that Tom Petersen was headed to the blue team sooner rather than later. Aside from being a media liaison for technical marketing, he has been at the forefront of developing tools to benchmark frame delivery and smoothness (FCAT), contributed directly to NVIDIA's GPU Boost technology, and will no doubt be an important contributor at Intel, complementing the many PR and media personnel who have joined its ranks in recent months. We are excited to see what Tom helps bring to the table, and wish him the best in helping create a more open and competitive dGPU market for us all.

Without Silicon, Intel Scores First Exascale Computer Design Win for Xe Graphics - AURORA Supercomputer

This here is an interesting piece of tech news for sure, in that Intel has already scored a pretty massive design win for not one, but two upcoming products. Intel's "Future Xeon Scalable Processors" and the company's "Xe Compute Architecture" have been tapped by the U.S. Department of Energy for incorporation into the new AURORA supercomputer - one that will deliver exascale performance. AURORA is to be developed in a partnership between Intel and Cray, using the latter's Shasta systems and its "Slingshot" networking fabric. But these are not the only Intel elements in the supercomputer design: Intel's DC Optane persistent memory will also be employed (in an as-yet-unavailable version, no less), making this a clean sweep of design wins for Intel.

Samsung Electronics Introduces New Flashbolt HBM2E High Bandwidth Memory

Samsung Electronics Co., Ltd., the world leader in advanced semiconductor technology, today announced its new High Bandwidth Memory (HBM2E) product at NVIDIA's GPU Technology Conference (GTC) to deliver the highest DRAM performance levels for use in next-generation supercomputers, graphics systems, and artificial intelligence (AI).

The new solution, Flashbolt, is the industry's first HBM2E to deliver a 3.2 gigabits-per-second (Gbps) data transfer speed per pin, which is 33 percent faster than the previous-generation HBM2. Flashbolt has a density of 16 Gb per die, double the capacity of the previous generation. With these improvements, a single Samsung HBM2E package will offer 410 gigabytes per second (GBps) of data bandwidth and 16 GB of memory.
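The quoted figures line up with standard HBM2 stack geometry. Here's a quick sketch of the arithmetic, assuming the usual 1,024-pin stack interface and an 8-die stack (both assumptions on our part, not confirmed in Samsung's announcement):

```python
# Per-stack bandwidth: 1,024 data pins (standard HBM2/HBM2E stack width),
# each transferring 3.2 Gbps.
pins_per_stack = 1024                            # assumed standard stack width
gbps_per_pin = 3.2
bandwidth_gbytes = pins_per_stack * gbps_per_pin / 8   # ~409.6 GB/s, i.e. "410 GBps"

# Per-stack capacity, assuming an 8-high stack of 16 Gb dies.
dies_per_stack = 8
gbit_per_die = 16
capacity_gbytes = dies_per_stack * gbit_per_die / 8    # 16 GB

print(round(bandwidth_gbytes, 1), capacity_gbytes)     # 409.6 16.0
```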

NVIDIA GTC 2019 Kicks Off Later Today, New GPU Architecture Tease Expected

NVIDIA will kick off the 2019 GPU Technology Conference later today, at 2 PM Pacific time. The company is expected to either tease or unveil a new graphics architecture succeeding "Volta" and "Turing." Not much is known about this architecture, but it's highly likely to be NVIDIA's first designed for the 7 nm silicon fabrication process. This unveiling could mark the earliest stage of the architecture's launch cycle, which could see market availability only by late-2019 or mid-2020, if not later, given that the company's RTX 20-series and GTX 16-series have only recently been unveiled. NVIDIA could leverage 7 nm to increase transistor densities, and bring its RTX technology to even more affordable price points.

MSI Reveals New GeForce GTX 1660 Series Graphics Cards

As the world's most popular GAMING graphics card vendor, MSI is proud to announce its new graphics card line-up based on the new GeForce GTX 1660 GPU, the latest addition to the NVIDIA Turing GTX family.

The GeForce GTX 1660 utilizes the "TU116" Turing GPU that's been carefully architected to balance performance, power, and cost. TU116 includes all of the new Turing Shader innovations that improve performance and efficiency, including support for Concurrent Floating Point and Integer Operations, a Unified Cache Architecture with larger L1 cache, and Adaptive Shading.

JPR: GPU Shipments Down 2.65% From Last Quarter, 3.3% YoY

Jon Peddie Research, the market research firm for the computer graphics industry, has released its quarterly Market Watch report on worldwide shipments of GPUs used in PCs for Q4'18. Overall GPU shipments decreased 2.65% from last quarter: AMD shipments decreased 6.8%, NVIDIA's decreased 7.6%, and Intel's decreased 0.7%. Compared to last quarter, AMD's market share decreased 0.6%, Intel's increased 1.4%, and NVIDIA's decreased 0.82%. Year over year, total GPU shipments decreased 3.3%; desktop graphics decreased 20%, while notebooks increased 8%.

Although overall GPU shipments declined, PC sales saw an uptick of 1.61%, which is a positive sign for the market overall. "The channel's demand for add-in boards (AIBs) in early 2018 was out of sync with what was happening in the market," said Dr. Jon Peddie, president and founder of Jon Peddie Research. "As a result, the channel was burdened with too much inventory. That has impacted sales of discrete GPUs in Q4, and will likely be evident in Q1, and Q2'19 as well."

AMD Patents Variable Rate Shading Technique for Console, VR Performance Domination

While developers have become more and more focused on actually taking advantage of the PC platform's performance - and particularly graphics technology - advantages over consoles, the truth remains that games are optimized for the lowest common denominator first. Consoles also offer a much more user-friendly approach to gaming - there's no need for hardware upgrades or software configuration, mostly - it's just a sit-on-the-couch-and-play affair, which can't really be said of gaming PCs. And the console market, with its need for cheap hardware that still offers performance levels that can fill a 4K-resolution screen, is the most important playground for companies to thrive in. Enter AMD, with its almost 100% share of the console market, and Variable Rate Shading.

As we've seen with NVIDIA's Turing implementation of Variable Rate Shading, this performance-enhancing technique works in two ways: motion adaptive shading and content adaptive shading. Motion adaptive shading takes input from previous frames in order to calculate which pixels are moving fast across the screen, such as with a racing perspective - fast-flying detail doesn't stay focused in our vision long enough for us to discern a relative loss in shading detail, whilst stationary objects, such as the in-focus hypercar you're driving, are rendered in all their glory. Valuable compute time can be gained by rendering a coarse approximation of the pixels that should be in that place, and upscaling them as needed according to the relative speed at which they move across the frame. Content adaptive shading, on the other hand, analyzes detail across a scene and, by reducing shading work across colors and detail that haven't changed much over the previous frames, saves frame time.
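As an illustration of the motion-adaptive half of the idea, the toy sketch below picks a coarser shading rate for screen tiles whose contents moved quickly in the previous frame. The tile granularity, thresholds and rate values are hypothetical; actual implementations expose this per-tile rate through the graphics API rather than application code:

```python
# Toy illustration of motion-adaptive shading-rate selection. Thresholds and
# rate values here are hypothetical; real variable rate shading sets the
# per-tile rate through the graphics API, not application code.

def shading_rate_for_tile(motion_px_per_frame: float) -> str:
    """Pick a coarser shading rate the faster a tile's contents moved."""
    if motion_px_per_frame < 2.0:
        return "1x1"   # full rate: shade every pixel (static, in-focus detail)
    if motion_px_per_frame < 8.0:
        return "2x2"   # one shading sample covers a 2x2 pixel block
    return "4x4"       # fast-moving, blur-prone regions get the coarsest rate

# Per-tile motion magnitudes, e.g. estimated from the previous frame's motion vectors.
tile_motion = [0.5, 3.7, 12.0, 1.1]
print([shading_rate_for_tile(m) for m in tile_motion])
# ['1x1', '2x2', '4x4', '1x1']
```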

NVIDIA Partners with OBS for GeForce Optimization and RTX Encoder

We saw a glimpse of this at the NVIDIA suite during CES 2019, with a beta version coming out shortly after. NVIDIA and OBS (Open Broadcaster Software) have since brought out the full release of a new OBS Studio, version 23.0.1, which adds improved support for NVIDIA GeForce cards. In particular, their latest and greatest RTX lineup, including the new desktop RTX 2060 as well as the mobile and Max-Q variants, will see the FPS impact of streaming drop by as much as 66%, according to NVIDIA's internal testing. Some example results are seen below, with games such as Fortnite, PUBG, Call of Duty: Black Ops 4 - Blackout, and Apex Legends seeing a frame rate boost of up to 48% compared to x264 Fast, and 27% compared to x264 Very Fast.

Given this is a result of NVENC, NVIDIA's hardware encoder, older GeForce GPUs (GTX 600-series and newer, which support NVENC) will also see some benefits. GeForce RTX GPUs just get to enjoy a bit more - up to 15% more, in fact, in efficiency as far as bitrate consumption for the same graphical fidelity goes. NVIDIA effectively says that "GeForce RTX GPUs can stream with superior image quality compared to x264 Fast, and on par with x264 Medium", thus putting forward a strong case for single-PC gaming and streaming, as opposed to having a dedicated streaming PC. They have even put out a video going over the enhancements, which will no doubt interest game streamers on the PC platform.

Manli Announces GeForce GTX 1660 Ti Series Graphics Cards

Manli Technology Group Limited, the major graphics card and components manufacturer, today announced its brand-new 16-series graphics solution, the Manli GeForce GTX 1660 Ti, available in two options: Single Fan and Blower Fan.

The Manli GeForce GTX 1660 Ti is equipped with the world's fastest memory - 6 GB of GDDR6 - and a 192-bit memory controller. The base clock is 1500 MHz, which can dynamically boost up to 1770 MHz to deliver a smooth and fast gaming experience. Meanwhile, it is built on the award-winning NVIDIA Turing architecture, featuring adaptive shading technology and NVIDIA Ansel, which delivers super-resolution screenshots.

NVIDIA TU116 GPU Pictured Up Close: Noticeably Smaller than TU106

Here is the first picture of NVIDIA's 12 nm "TU116" silicon, which powers the upcoming GeForce GTX 1660 Ti graphics card. While the size of the package itself is identical to that of the "TU106" on which the RTX 2060 and RTX 2070 are based, the die of the TU116 is visibly smaller. This is because the chip physically lacks RT cores, and only has two-thirds the number of CUDA cores of the TU106, with 1,536 against the latter's 2,304. The die area, too, is about two-thirds that of the TU106. The ASIC version of TU116 powering the GTX 1660 Ti is "TU116-400-A1."

VideoCardz scored not just pictures of the ASIC, but also of the PCB of an MSI GTX 1660 Ti Ventus graphics card, which reveals something very interesting. The PCB has traces for eight memory chips across a 256-bit-wide memory bus, although only six of them are populated with memory chips, making up 6 GB over a 192-bit bus. The GPU's package substrate, too, is of the same size. It's likely that NVIDIA is using a common substrate, with an identical pin-map between the TU106 and TU116, so AIC partners can reduce PCB development costs.
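The bus-width math follows from how GDDR6 is packaged: each chip connects over a 32-bit interface, so the number of traces and populated chips sets the bus width and capacity. A quick sketch, with an 8 Gb (1 GB) chip density assumed from the 6 GB total:

```python
# Each GDDR6 chip connects over a 32-bit interface; capacity per chip is
# assumed to be 8 Gb (1 GB), consistent with the card's 6 GB total.
bits_per_chip = 32
gbytes_per_chip = 1      # 8 Gb chips assumed

traces_on_pcb = 8        # memory pads/traces present on the MSI Ventus PCB
populated = 6            # chips actually mounted for the GTX 1660 Ti

print(traces_on_pcb * bits_per_chip, "bit bus possible")   # 256-bit
print(populated * bits_per_chip, "bit bus populated")      # 192-bit
print(populated * gbytes_per_chip, "GB of VRAM")           # 6 GB
```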

ASUS NVIDIA GeForce GTX 1660 Ti Models With 3 GB VRAM Registered With the EEC

It seems that NVIDIA may be pulling another GTX 1060 when it comes to memory configurations of its upcoming midrange, non-RTX GPU. If ASUS' filings with the EEC (Eurasian Economic Commission) are anything to go by - and they usually are - the green team is looking to tier its GTX 1660 Ti graphics cards via memory culling, offering them in both 6 GB and 3 GB versions. The GTX 1660 Ti and GTX 1660 will supersede NVIDIA's highest-volume GTX 1060.

In all, there are 9 SKUs for the GTX 1660 Ti 3 GB being filed with the EEC, which usually preempts graphics card launches in those domains. These slot in nicely with ASUS' plans for 6 GB versions of the GTX 1660 Ti, almost card for card - though ASUS' STRIX-branded graphics cards seem, for now, to only be available in 6 GB versions. Of course, 3 GB of VRAM allowed the GTX 1060 to achieve a desirable performance-per-dollar ratio, but at the cost of some performance, with the penalty increasing alongside resolution - though these are cards that likely won't ever be used for 4K gaming. While 3 GB graphics cards still fare relatively well, as we've seen, the latest games are pushing past 3 GB of video RAM more often than not, which leaves the 3 GB version of the card a less-than-ideal choice when it comes to AAA gaming. For competitive multiplayer games, though, it will likely be more than enough.

Mobile NVIDIA GeForce RTX GPUs Will Vary Wildly in Performance, Clocks Lowered Substantially

NVIDIA is in the process of rolling out the first implementations of its RTX 20-series GPUs in mobile form, and if things go as reported, it's going to be a little rough for users to extrapolate performance from product to product. This is because manufacturers are apparently getting a whole lot of leeway in how to clock their products, according to their solution's thermal characteristics and design philosophy.

What this means is that NVIDIA's RTX 2080 Max-Q, for example, can be clocked as low as 735 MHz, a more than 50% downclock from its desktop counterpart (1,515 MHz). The non-Max-Q implementation of the RTX 2080, for now, seems to be clocked at around 1,380 MHz, which is still a roughly 135 MHz downclock. Of course, these lowered clocks are absolutely normal - and necessary - for these products, particularly on a huge chip such as the one powering the RTX 2080. The problem arises when manufacturers don't disclose the GPU clockspeeds of their particular implementation - a user might buy, say, an MSI laptop and an ASUS one with the exact same apparent configuration, but with GPUs operating at very different clockspeeds, and very different levels of performance. Users should do their due diligence when choosing which mobile solution sporting one of these NVIDIA GPUs to buy.
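For reference, here's how the quoted base clocks compare against the desktop RTX 2080's 1,515 MHz base clock - a rough, base-clock-only comparison, since actual boost behavior will differ per design:

```python
# Rough comparison of quoted mobile base clocks against the desktop
# RTX 2080's 1,515 MHz base clock (boost behavior not considered).
desktop_base = 1515
max_q_floor = 735
non_max_q = 1380

print(f"Max-Q floor: {100 * (1 - max_q_floor / desktop_base):.0f}% below desktop")  # ~51%
print(f"Non-Max-Q:   {100 * (1 - non_max_q / desktop_base):.0f}% below desktop")    # ~9%
```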

AMD Re-structures Leadership Team; James Prior Leaves AMD

Let me be the first to say that the two may not be directly related, but it is an awfully strong coincidence that both pieces of news come out on the same day. Indeed, earlier in the day AMD put out a press release (full release past the break) announcing "multiple organizational changes focused on strengthening the company's senior leadership team and accelerating growth." Several familiar names have been promoted within the company to take charge of more products and visions across its CPU and GPU business units. Mark Papermaster, for example, is now an executive VP as well as CTO of AMD, and the company has also hired new talent, including industry veteran Sandeep Chennakeshu, as executive VP of "Computing and Graphics responsible for the company's high-performance PC, gaming and semi-custom businesses".

Perhaps all this re-structuring and new hiring comes in handy at a time when we have seen several people leave AMD for Intel or elsewhere. Indeed, shortly after that press release went out, word got to us that James Prior, Senior Product Manager at AMD and an ardent employee for nearly 6 years, is no longer working for the company. We have no word yet on what is next for James, but it was more than a small surprise to learn that the person you just spoke with at CES, and had a long conversation with about AMD's desktop processors, is gone just like that. We have known James for many years now, and can attest to his work ethic as well as to him being a great guy all around. We wish him the best in his future ventures, and look forward to seeing how AMD's re-structuring turns out.