Thursday, September 10th 2015

AMD Clumps Various Graphics Divisions as Radeon Technologies Group Under Koduri

AMD this Wednesday announced a major internal re-organization, merging its various visual computing divisions into a monolithic entity called the Radeon Technologies Group. It will be led by Raja Koduri as Senior Vice President; Koduri previously served as VP of the company's visual computing division. He will now report directly to CEO Lisa Su, and the various other graphics-related divisions (e.g. the professional graphics business, led by Sean Burke) will report to him. AMD's lucrative semi-custom business - the one responsible for SoCs that drive popular game consoles such as the Xbox One and PlayStation 4 - will also come under the unified Radeon Technologies Group.

"We are entering an age of immersive computing where we will be surrounded by billions of brilliant pixels that enhance our daily lives in ways we have yet to fully comprehend," said Lisa Su. "AMD is one of the few companies with the engineering talent and IP to make emerging immersive computing opportunities a reality," said Koduri, adding "Now, with the Radeon Technologies Group, we have a dedicated team focused on growing our business as we create a unique environment for the best and brightest minds in graphics to be a part of the team re-defining the industry."

Many Thanks to Dorsetknob for the tip.

22 Comments on AMD Clumps Various Graphics Divisions as Radeon Technologies Group Under Koduri

#1
btarunr
Editor & Senior Moderator
IMO, this makes brand Radeon easier to spin-off/sell-off (this could be a prequel to that). Some outstanding recent PR practices could serve as Catalyst for that.
Posted on Reply
#2
Steevo
So their custom IP business was the way to go, and it looks like they are spinning ATI/AMD/RTG out on its own again. Cause F the idiots at AMD who thought they were jesus, but were just a drunk ahole.


EDIT** Plus the bad name the ATI brand got from AMD's repeated fall-on-their-face bullshit: Phenomenally shit, Bulldozer of trash, Excavating your septic system at low low prices.


Also may be the beginning of a bankruptcy for AMD as the parent company to spin all the bad debts off to, and run into the ground before they all jump ship.
Posted on Reply
#3
ArdWar
"...in ways we have yet to fully comprehend." said Lisa Su.

Finally realizing it, eh?
Posted on Reply
#4
tabascosauz
I'm not sure I like what this could mean for AMD's CPUs. They still have a promise in 2016 to deliver on. Jumping ship now?

It might be a good thing that someone who is experienced in GPUs (especially mobile ones, where the rapid progress really happens these days) is at the head of Radeon, but it just seems like AMD is throwing their CPUs away. This can't be good for AMD and it can't be good for Intel either.

Who knows? AMD/ATI has a reputation for throwing away things that suddenly become valuable. Look at Adreno. Qualcomm's having a great time with Adreno (although they're having a shitty time with ARMv8).
Posted on Reply
#5
okidna
btarunr: IMO, this makes brand Radeon easier to spin-off/sell-off (this could be a prequel to that). Some outstanding recent PR practices could serve as Catalyst for that.
Posted on Reply
#6
TheGuruStud
They can't spin off the GPU division unless it's on paper only. Without GPUs they have nothing b/c their APU design isn't going away in the industry. The next step is HBM2 on die.

They can't get shit together as it is. As a separate company the collaboration timing will be an even larger failure, no?
Posted on Reply
#7
Frick
Fishfaced Nincompoop
@Steevo I honestly believe years from now we'll look back on Bulldozer and think they were too early.
Posted on Reply
#8
Assimilator
Steevo: Also may be the beginning of a bankruptcy for AMD as the parent company to spin all the bad debts off to, and run into the ground before they all jump ship.
Exactly what I was thinking. The management has finally woken up and figured out the company is too deep in a hole to ever climb out. So they will do some creative accounting to shift all the debt to a holding company, declare that company bankrupt, and start all over again with a clean balance sheet. This might be a brilliant strategic move from Lisa Su, although not so great for investors.

Attention anyone who is still foolish enough to own shares in AMD: sell them. Now. Before they're worth nothing.
Posted on Reply
#9
ne6togadno
Frick: @Steevo I honestly believe years from now we'll look back on Bulldozer and think they were too early.
nop
it wasn't too early. it was right on time. it is the logical next step after the concept of stacking cores.
it is the software industry that failed to catch on and utilize all that Bulldozer can offer. 4 years have passed and still most software struggles to use 2 cores properly, not to mention 4 or more.
Posted on Reply
#10
Frick
Fishfaced Nincompoop
ne6togadno: nop
it wasn't too early. it was right on time. it is the logical next step after the concept of stacking cores.
it is the software industry that failed to catch on and utilize all that Bulldozer can offer. 4 years have passed and still most software struggles to use 2 cores properly, not to mention 4 or more.
Depends on the viewpoint, I guess. :)
Posted on Reply
#11
iO
This regrouping is not about preparations to spin off the GPU division; it's just a nicer way to describe an impending round of more staff layoffs.
Posted on Reply
#12
cadaveca
My name is Dave
Great news.
ne6togadno: nop
it wasn't too early. it was right on time. it is the logical next step after the concept of stacking cores.
it is the software industry that failed to catch on and utilize all that Bulldozer can offer. 4 years have passed and still most software struggles to use 2 cores properly, not to mention 4 or more.
What hurt Bulldozer was the cache design. Having the L2 linked to modules broke current programming paradigms, which caused the "performance deficit".
Posted on Reply
#13
Casecutter
Wow, it sounds like such a straightforward move that I'm surprised it's only now commencing.

This RTG must place its predominant focus on the professional segment; that's where the R&D could give the most returns, although that entails big expenditures in drivers and compliance. But that R&D for the most part does have relevance to consumer/gaming parts. Meanwhile, the SoC and console parts can in many ways set the pace for the direction PC gaming goes as it evolves; while not lucrative short term, they can offer dividends long term.

Professional: the new group would be smart to focus on a given task where a small on-interposer CPU works in coordination with the GPU and HBM to handle that particular task expediently with reduced (overall system) power needs; they might find a good niche in HPC. Any good win in a fairly high-profit area like HPC would be a boon to cash flow.

I don't know how soon HBM2 can be in consoles, but if you can supply an APU with compact shared HBM, ultimately providing improved efficiencies in power, performance, construction, and package size... there's value in that to all parties, and with it improved returns. Also, the group should keep a footing in graphics for advertising/signage systems. I think it's a cursory area they can serve with existing rudimentary technology, and there's an appreciable, constant cash flow in it.

As to gaming… stay true to timelines, and stop sugar-coating or pretending you best the competition across all segments. When you have an upstanding offering in a market position, just provide review samples to keep it at the forefront. I don't see social media as the predominant path to promotion, and never should an executive/engineer "out" any snippets or barbs. Got something that needs to be said? Vet it through PR and release it to the various media at the same time. Want to drive buzz... reviews, and build on them by posting on social media saying "so-and-so" put up a review. Possibly connect winning free game(s) to folks that "like" the post. Then finally (perhaps later, when there's cash flow) set up local gaming contests/conventions that "break out" through the use of social media to land invites.

They'll need to control their message, especially moving forward to Arctic Islands: either it's a released statement (as stated above), or you come out and say you "neither confirm nor deny" anything; rumors won't gain traction if you stick to that and do it early. Don't say anything about being XX% of the competition; heck, don't even say it's better than your XX product by some amount. Just say you're confident it will find plenty of consumers at the intended price point.
Posted on Reply
#14
ne6togadno
cadaveca: Great news.

What hurt Bulldozer was the cache design. Having the L2 linked to modules broke current programming paradigms, which caused the "performance deficit".
i can't go into such details cause i dunno that much about programming or cpu architecture, but i can see the bigger picture. and what i see is that most of today's software scales better with an increase in single-core raw power rather than with an increase in core count. look for photoshop multi-threading. it doesn't exist. 2 or 3 specific plugins benefit from increased core count and that's it. look at directx. it took ms 10+ years after the first multi-core cpus to come out with a version that can utilize them properly (and it came out only cause amd pushed with mantle so the industry could advance and make use of all the tech for multithreading they've put in their hardware)
there are tons of such examples and very few apps that can put all those cores to use.
what i said in my previous post is valid for intel cpus too. @Frick's comment i've quoted was about bulldozer, so i gave the example with it.
Posted on Reply
#15
Steevo
Frick: @Steevo I honestly believe years from now we'll look back on Bulldozer and think they were too early.
Their IPC was too poor for the performance they were supposed to bring.

I built a few machines with them; they were good middle-of-the-road chips, but considering that 90% of today's workloads will run faster on an older C2D with an insane overclock, they are still irrelevant.

AMD/ATI has had a long-standing love of making/using a few new instruction sets and using the benchmarks they provide as "new numbers" when it's a standard that will not see the light of day before the hardware is too old, or before someone comes along and does it better. Their occasional home runs in the last 6 or so years have come from others' failures and not their own successes.
Posted on Reply
#16
Casecutter
Steevo: AMD/ATI has had a long-standing love of making/using a few new instruction sets
I don't want to make this a controversy, but perhaps just offer a change of perspective...

Is it inappropriate to develop new ways of implementing ideas (in this case architecture) because, due to patents, you can't replicate what the competition does? Benchmarks are perhaps the only way to stimulate such new technologies; how else do you intend to promote a new solution to a competitor's lock on anything? Even when the benchmarking does show real merit, software developers who want to invest the time could in various fashions feel pressure from the three-hundred-pound gorilla, who can probably intimidate in various ways.

It's in a way like saying... stay in the back, don't rock the boat, all while everyone continues to pass the tribute forward to the almighty. That's not how you advance new ideas in any market, especially in technology. There can be better ways to do something, but the status quo/dominance can put a powerful stranglehold on emerging ideas when those rulers feel vulnerable.

Why did AMD bring out Mantle to showcase a low-level API and true asynchronous shaders? I almost want to think MS reneged on AMD; MS coveted it, hoping to use such gains for the Xbox only, and it was something they didn't want to fully offer to PC gaming. AMD pressed them: if they don't, others might pick it up. If others did (Ubuntu, SteamBox, Apple?), MS might watch their dominance with DirectX become undermined, and with it the last real vestige that somewhat props up their OS, as competition presenting an alternate (perhaps better) path for PC gaming sees growth.
Posted on Reply
#17
Steevo
Casecutter: I don't want to make this a controversy, but perhaps just offer a change of perspective...

Is it inappropriate to develop new ways of implementing ideas (in this case architecture) because, due to patents, you can't replicate what the competition does? Benchmarks are perhaps the only way to stimulate such new technologies; how else do you intend to promote a new solution to a competitor's lock on anything?
AMD shares a license on CPU tech (what the discussion between Frick and me was about) with Intel. They have been unable to improve their IPC enough, and their attempts to build higher-core-count/higher-frequency CPUs at the cost of higher power consumption, in a brute-force attempt to overcome it, have led to lagging CPU sales and given the whole AMD brand a "cheaper for a reason" market perception. Their poor management, and the failure of fabs to produce better processes on smaller nodes, has played a large part in their failures, up to the point that they have attempted radical new designs that show their obviously weaker market position and misguided attempts to capture server/HEDT/mobile sales with a single architecture. Their current attempts to capture market share with slightly improved graphics added to weak CPU cores do little to inspire confidence in the rest of the product offering, and with mediocre numbers presented by their graphics division and a very muddy future, they have a lot of work to do to show that their prior-generation "vaporware" is worth a future pot to piss in.

If you would like to blindly ignore their track record, that is your prerogative, but save for a few very good cards they have had little success in producing performance winners in their main CPU and GPU business ventures.
Posted on Reply
#18
Casecutter
Steevo: If you would like to blindly ignore their track record
Wow... not ignoring their record. It's coming up on the fifth year (Oct/2011) of Bulldozer doldrums. Sure, Intel (even then a goliath) got out of the gate first with Sandy Bridge (Jan/2011). Even trailing by 11 months, things were set in stone to a large degree, all as R&D money was waning. AMD made a bad bet with that microarchitecture, and yes, the BoD years ago turned the rudder incorrectly many times... they were stuck, agreed. Old news... everyone and anyone has gotten to play Monday-morning QB on such topics.
Steevo: the failure of fabs to produce better processes on smaller nodes
AMD divested their manufacturing (March/2009), so yes, they relinquished that, but the lack of continued node shrinks and improvements isn't totally their "failure". Granted, they no longer had the cash flow to support it, especially the huge R&D and tooling costs, so it wasn't like they could hold onto it completely. Where they goofed is that when they did the deal they divested completely (March/2012); I see that as not a good move, they should have held a 1/3 share for more than a few years. AMD needed money, and GloFo quickly figured out they'd rather not be tied directly to AMD's sales numbers.

With that said, it appears you couldn't comprehend my post at all, ignoring all of what was said (and that wasn't meant to be specific). Now with this retort you appear to have offered your tribute forward with faith, hopeful those on high will continue to sustain your wants.
Posted on Reply
#19
Steevo
Casecutter: AMD divested their manufacturing (March/2009), so yes, they relinquished that, but the lack of continued node shrinks and improvements isn't totally their "failure".
If we compare real-world power consumption/performance values to their primary competitor, they are lagging behind despite being on the same node, so the failure is exactly theirs, not the specific failure of a fab or the lack of specific improvements; they had the same tools to work with at the fab as Nvidia, and yet failed to produce.

They have continued to showcase mediocre-performing products and have failed to materialize the improvements alluded to, and I am certain they were aware of the lack of process shrinks at the same time their competitor was.

At the end of the day owning a fab or not doesn't mean jack to the end product when you have a competitor who is able to produce and perform with the same constraints.


All I want is competition, so prices stay at a sane level, performance continues to improve, and companies remain profitable so they can innovate and move forward.
Posted on Reply
#20
Aquinus
Resident Wat-man
cadaveca: What hurt Bulldozer was the cache design. Having the L2 linked to modules broke current programming paradigms, which caused the "performance deficit".
What?! You mean that huge pipeline had nothing to do with it? I think we all remember why Netburst sucked, and it wasn't due to cache design. :) Don't get me wrong, they screwed up caching on BD, but that wasn't the end of its shortcomings. IPC suffered immensely because they thought they could get away with the same thing Intel thought they could with Netburst. "Longer pipeline? No problem, we'll just crank up the clocks."

Simply put, AMD sacrificed too much by going to BD IMHO when they could have invested that R&D into making the Phenom II more efficient.

I'll give AMD props for looking forward, but we all found out quickly that a CPU is only as good as its weakest link.
Posted on Reply
#21
Athlon2K15
HyperVtX™
AMD just wrapped all of its graphics technologies up in a nice box and put a bow on it for Microsoft to purchase.
Posted on Reply
#22
Casecutter
Steevo: real-world power consumption/performance values to their primary competitor, they are lagging behind despite being on the same node
Now you have flipped to GPUs and TSMC 28 nm? Sorry, it's just hard with your haphazard focus on the topic; let's stay with graphics, as that's what this started from.

AMD had not been all that "out of the ballpark" in performance/watt with GCN (they led in the 6XXX series/Fermi era) until Maxwell (GM204) in Sept 2014; it's been just a year... although a bad year. AMD's response, while late and seemingly lackluster, comes down to the plain fact that they have little money to invest in R&D and production. Nvidia's R&D found they could discard much of the non-gaming functions to provide such efficiency (though little performance increase). Nvidia, being so big now, is probably looking to separate chips for gaming and professional more than they have ever been able to in the past. I really don't keep up with the professional market, so I'm not sure how that's affecting them, but I believe they still rely mainly on Kepler in many professional SKUs.

Beating down AMD's graphics response in 2015 for the woes of not having capital, brought on by decisions of the past BoD... seems to be just "ganging up". They're in a tough situation, to be sure. Would everyone have liked AMD to have had the cash flow to re-spin, say, Pitcairn and Hawaii to use GCN 1.2... perhaps; although, consider what they could have gained for such an expenditure. We saw how Tahiti-to-Tonga gave a mediocre bump, and honestly the percentage gain over Pitcairn couldn't make sense against the competitive/price landscape of "entry" cards. And since Hawaii's GCN 1.1 already had much of GCN 1.2, that would've been even less of a return on investment. Smarter for AMD to conserve their resources and put the bulk of the effort toward 14/16 nm FinFET.
Steevo: All I want is competition, so prices stay at a sane level, performance continues to improve, and companies remain profitable so they can innovate and move forward.
We're seeing, at least on the graphics side, the stagnation of 28 nm, and honestly I still believe it wasn't a great process node for the industry. TSMC upped prices in the beginning, they had production issues at the start, and while it had efficiency gains, those seem to have been forfeited to lower clocks. Nvidia, by stripping out the non-essential bits that don't improve gaming, finally found huge untapped potential for efficiency, and smaller dies (~30%) that helped their costs. We don't know if AMD could've actually done that same culling of stuff, as it might have affected the architecture they're building on true asynchronous shader pipelines.

All we can do is watch and see which path plays out better. Will AMD find new traction with emerging DX12 titles? Can AMD make it to 14 nm FinFET and HBM2 across more SKUs before Nvidia? Will Nvidia, with their coming driver that's intended to emulate concurrent graphics and compute workloads, be as good as AMD, who have kept their ACE units that have been central to GCN?

And this brings the conversation full circle: "AMD/ATI has had a long-standing love of making/using a few new instruction sets". AMD has been waiting on their investment in realizing graphics with concurrent compute workloads since probably before 2010, basically when they started looking at next-gen console parts, and now with DX12 it's coming to fruition. Whether it worked for them, we have to wait and see. As you said, it's "a standard that will not see the light of day before the hardware is too old, or before someone comes along and does it better." In this case, Nvidia calculated they can hold out with Maxwell in regards to async compute; it won't really start to hurt them, and once it does it won't matter, they'll have Pascal. Marketing can spin it as a reason to move on from Maxwell parts. The part I wonder about is how many DX12 games sponsored by Nvidia will be released in the next year where asynchronous workloads are fudged with, so those games appear decent on Maxwell cards?
Posted on Reply