Monday, April 7th 2008

AMD plans to reduce workforce by 10%

In a disappointing but agreed-upon move, AMD has decided to cut 10% of its workforce due to lower-than-expected first-quarter financial results. The majority of business segments (mainly CPU) did not meet financial targets and in most cases lost money (again, mainly CPU). The cuts will hit all groups, including ones that generated a profit like the graphics division, albeit at a smaller scale.

AMD currently employs around 16,000 people; a 10% cut means approximately 1,600 of them will be let go. The cuts are expected to begin mid-April and finish by the end of the third quarter of 2008.
Source: BusinessWire

76 Comments on AMD plans to reduce workforce by 10%

#51
a111087
pepsi71oceanAMD should hire me. I'd walk into that board room and say to them all, "By the time I leave this office, less than half of you will still be in here." That's what my uncle did at Sears. Heck, AMD should hire my uncle, he'll fix them right up.
OK, but for a while they'll pay you in Semprons :roll:
Posted on Reply
#52
Morgoth
Fueled by Sapphire
Someone at AMD should do the same thing Steve did at Apple, or whatever his name was.
Posted on Reply
#53
BumbRush
suraswamiwww.anandtech.com/cpuchipsets/showdoc.aspx?i=3272&p=1

The above article clearly shows how the 6400 is capable of outscoring a few Core2s, and even the Q6600 at default speeds, in a few benchmarks. Only we overclockers know the difference when overclocked, etc. The average or below-average Joe will know nothing about it. Yet Intel managed to spend a fortune marketing their dumb-ass Netburst, and is still spending a lot on marketing.

AMD needs to do that: have confidence in their product line and start spending a bit more on marketing.

According to my books, the PIII and A64 are the best architectures ever invented. Core2s are nothing but two overclocked PIII cores glued together. What good is that in terms of tech advancement? People are fond of jumping over the fence without even knowing where they are going.

Intel did nothing in terms of tech advancement. Who got the first 64-bit? Who got the first energy-efficient procs? Who got the integrated memory controller? Who got the native quad? And who brought the first unified HD decoder? -- All AMD/ATI.

Intel - they reinvented the global heater:D Intel is just following in AMD's footsteps, and people believe Intel did everything. What an idiotic world we live in:shadedshu
Netburst (or as my friends call it, Nutburst) was horrible, and Intel has just been copying stuff from AMD: Intel's first "dual core" was two P4s duct-taped together, their first quad was two Core2 Duos duct-taped together, and the Core2 is just an evolution of the Pentium-M, which was just a P3 with some small updates and higher clocks.........

PureVideo sucks, and I have it!!! It's better than it used to be; at least on the G92 it has an effect that's decent/noticeable......

I can't wait till I get paid and can order my upgradez ;)
Posted on Reply
#54
btarunr
Editor & Senior Moderator
I don't get why the first thing AMDorks gripe about with Intel quads is that they're two dual-core dies 'duct-taped' together. Despite a supposedly 'inferior' architecture, they perform better than the 'true quads'. Yes, Netburst is inferior, but it was designed way back in 2000 (or earlier) when AMD was selling the K6/K7, whose architecture was competitive with Netburst. It sucked next to the K8 microarchitecture just as the K8 sucked next to the Intel Core microarchitecture. So keep your arguments in sync with the timeline.
Posted on Reply
#55
suraswami
btarunrI don't get why the first thing AMDorks gripe about with Intel quads is that they're two dual-core dies 'duct-taped' together. Despite a supposedly 'inferior' architecture, they perform better than the 'true quads'. Yes, Netburst is inferior, but it was designed way back in 2000 (or earlier) when AMD was selling the K6/K7, whose architecture was competitive with Netburst. It sucked next to the K8 microarchitecture just as the K8 sucked next to the Intel Core microarchitecture. So keep your arguments in sync with the timeline.
AMD just needs some tinkering. It's actually riding a dented car; it needs to get paid to repair it:cry:. Then it will be transformed into a flashy sports car:D.

I was talking about technology. Inferior tech (Intel), no matter how well it runs, still sucks.

"K8 sucked in front of Intel Core microarchitecture" - Wrong statement. Even with its '60s car it's still competing with your brand-new flashy 2008 car. Show me how many benchmarks the Core2 beats the K8 in at default speeds, let alone overclocked (I accept that C2 OCs like mad:respect:).

"K6/K7 whose architecture was competitive to Netburst" - Once again wrong. The K6 was created to compete with the PIII. The K7 was created to compete with Netburst.

"Yes, Netburst is inferior but it was designed way back in 2000 (or earlier)" - So you can put up with Intel's crap for 6/7 years, but if AMD screws up with their latest K10 you can't wait even 6 months?
Posted on Reply
#56
btarunr
Editor & Senior Moderator
suraswamiAMD just needs some tinkering. It's actually riding a dented car; it needs to get paid to repair it:cry:. Then it will be transformed into a flashy sports car:D.

I was talking about technology. Inferior tech (Intel), no matter how well it runs, still sucks.
I want faster computing; I don't give a flying **** what technology drives it. The Intel Core 2 series gives me that, regardless of what you allege to be an 'inferior' architecture. What edge does your 'superior' architecture give you at the end of the day?
suraswami"K8 sucked in front of Intel Core microarchitecture" - Wrong statement. Even with its '60s car it's still competing with your brand-new flashy 2008 car. Show me how many benchmarks the Core2 beats the K8 in at default speeds, let alone overclocked (I accept that C2 OCs like mad:respect:).
Oh, it does suck, Suri. The fact that a Core processor crunches 4 instructions per clock cycle against a K8 (and K10) crunching 3 per clock cycle makes that clear. You're comparing a car with a three-cylinder engine to a four-cylinder one.
suraswami"K6/K7 whose architecture was competitive to Netburst" - Once again wrong. The K6 was created to compete with the PIII. The K7 was created to compete with Netburst.
Oh, and the same PIII-evolved Tualatin evolves into Core, according to AMDork philosophy :rolleyes:. Right, statement changed: "Netburst was made to compete with the K7," and what d'ya know? Since the K7 couldn't keep up clock for clock against Netburst, the Willamette/Northwood/Prescott_478 was a success against Palomino/Thoroughbred/Barton. The point remains that you can't say "Netburst sucks" in 2008 when it was a 2000~2005 technology that was eventually beaten by the AMD K8.....which in turn was eventually beaten by Intel Core. You have the results.
suraswami"Yes, Netburst is inferior but it was designed way back in 2000 (or earlier)" - So you can put up with Intel's crap for 6/7 years, but if AMD screws up with their latest K10 you can't wait even 6 months?
It's still pretty much a screwup. Face it, the K10 isn't that "Phenominal evolution" over the K8; it's the same 3 instructions/clock architecture plus SSE4A and an L3 cache. It really isn't the evolution it was touted to be. While the best quad core for under $250 is the Phenom X4 9850 BE, there's no higher offering, so AMD has just settled into a niche market and isn't looking to attempt giant-killing acts the way the Barton core took on the Northwood HT at 2.8/3.0 GHz. Back then, AMD aimed for the top, and it got there. Now all you have is a lackluster effort, and the ~1,600 jobs being cut reiterate that.
Posted on Reply
#57
suraswami
"Oh, it does suck, Suri. The fact that a Core processor crunches 4 instructions /clock cycle against a K8 (and K10) crunching 3 /clock cycle makes that clear. You're comparing a car with a three cylinder engine to a 4 burner.
" - hey still 6400 beats Q6600 in certain tasks. Just needs little tinkering. See it this way - when 6400 can come close to Q6600 with 3 inst if the same thing gets 4 inst? This will happen.

"Oh, and the same PIII evolved Tualatin evolves into Core according to AMDork philosophy . Right, statement changed: "Netburst was made to compete with K7" and well, what d'ya know? Since K7 couldn't evolve on a clock to clock against Netbursts, the Willamette/Northwood/Prescott_478 was a success against Palomino/Thoroughbred/Barton. Point remains that you can't say "Netburst sucks" in 2008 when it was more of a 2000~2005 technology which was eventually beat by AMD K8.....which in-turn was eventually beat by Intel Core. You have the results."

- I wouldn't say netburst was a success, common a 3.4 can't do the same work of AMD running at 2.2

"It's still prettymuch a screwup. Face it, the K10 isn't that "Phenominal evolution" over K8" - yeah accepted. Man = evolution. so wait and see.
Posted on Reply
#58
btarunr
Editor & Senior Moderator
suraswamiHey, the 6400 still beats the Q6600 in certain tasks. It just needs a little tinkering. See it this way: if the 6400 can come close to the Q6600 with 3 instructions per clock, what happens when the same thing gets 4? This will happen.
Apply this logic: the 6400+ is a 3.20 GHz processor compared to the 2.40 GHz Q6600 (digest: 3.2 × 3 = 2.4 × 4). The 'certain tasks' you talk of are clearly single/dual-threaded applications, and even there the 6400+ doesn't lead significantly. My comparison would be the E8400 against the 6400+....they're priced nearly the same ($180~190 for the E8400, what is supposed to be its price, now jacked up due to stock shortage)....or better still, the E8200 (2.66 GHz, 6 MB cache, 45nm Wolfdale, $150~$170). Come on, compare them for me.
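The "digest" arithmetic in that post can be written out as a quick sketch. This is only the theoretical per-core ceiling (clock times issue width); real workloads never reach it, and the point is just that the two chips' ceilings coincide:

```python
# Back-of-the-envelope peak throughput: clock (MHz) x issue width
# (instructions per cycle). Real code never hits these peaks; this only
# shows why a 3.20 GHz 3-wide core and a 2.40 GHz 4-wide core have the
# same theoretical ceiling per core.

def peak_mips(clock_mhz: int, issue_width: int) -> int:
    """Theoretical peak in millions of instructions per second, per core."""
    return clock_mhz * issue_width

x2_6400 = peak_mips(3200, 3)  # Athlon 64 X2 6400+: 3 instructions/cycle
q6600 = peak_mips(2400, 4)    # Core 2 Quad Q6600: 4 instructions/cycle

print(x2_6400, q6600)  # 9600 9600 -- identical per-core ceilings
```

Per core the ceilings match; the Q6600's four cores versus the 6400+'s two is a separate axis entirely, which is why the dual-core only keeps up in lightly threaded tasks.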
suraswami- I wouldn't say Netburst was a success. Come on, a 3.4 GHz can't do the same work as an AMD running at 2.2 GHz.
An AMD what running at 2.2? If it's the K7, then you're wrong; the Barton 3200+ (clocked at 2.2 GHz) couldn't outperform a P4 at 3.0 GHz. If it's a K8 at 2.2, yes, but I already placed the condition that the K8 comes in a time frame after the whole K7-versus-P4/PD Netburst era; it's next-gen. To compete with that very K8, Intel released Intel Core, which overall is a success; to compete with Core, AMD released the K10, which overall is a failure. (You have the TLB-fixed xx50 now, but it still doesn't compete with the fastest Intel has to offer.)
suraswamiMan = evolution, so wait and see.
Yeah right, with a 10% loss in workforce, we'll see :rolleyes:
Posted on Reply
#59
suraswami
You never know, those 10% might be the not-so-bright ones.

Yes, I was talking about the Barton 3200. Except in video applications, it did well at almost everything else.
Posted on Reply
#60
eidairaman1
The Exiled Airman
You guys need to knock it off with the lamebrain insults.
Posted on Reply
#61
btarunr
Editor & Senior Moderator
suraswamiYes, I was talking about the Barton 3200. Except in video applications, it did well at almost everything else.
...against a 3.40 GHz processor? :wtf:
suraswamiYou never know, those 10% might be the not-so-bright ones.
My contention is that with a loss in workforce, there's a reduction in output capacity.
Posted on Reply
#62
erocker
*
eidairaman1You guys need to knock it off with the lamebrain insults.
Yes, please, it's growing quite tiresome. "Stay in line!!" my old phys. ed. teacher used to yell at me; I think that applies here as well.
Posted on Reply
#63
BumbRush
suraswamiAMD just needs some tinkering. It's actually riding a dented car; it needs to get paid to repair it:cry:. Then it will be transformed into a flashy sports car:D.

I was talking about technology. Inferior tech (Intel), no matter how well it runs, still sucks.

"K8 sucked in front of Intel Core microarchitecture" - Wrong statement. Even with its '60s car it's still competing with your brand-new flashy 2008 car. Show me how many benchmarks the Core2 beats the K8 in at default speeds, let alone overclocked (I accept that C2 OCs like mad:respect:).

"K6/K7 whose architecture was competitive to Netburst" - Once again wrong. The K6 was created to compete with the PIII. The K7 was created to compete with Netburst.

"Yes, Netburst is inferior but it was designed way back in 2000 (or earlier)" - So you can put up with Intel's crap for 6/7 years, but if AMD screws up with their latest K10 you can't wait even 6 months?
The K7 was made to compete with the P3 Coppermine and later with Netburst; the K7 covers the Slot A Athlon Classic and the Socket A Thunderbird and Spitfire cores all the way to the Tbred-B and Barton cores used later.

Netburst was made to replace the even-then-superior design of the P3. The Tualatin P3 KILLED P4s running at twice its clock, yet Intel killed off the P3 line to push the P4 chips because they all believed "clocks sell chips, that's all that matters," despite their engineers HATING the design for being so inefficient.......I have talked face to face with the people who designed/updated the Netburst design; none of them liked it, it was too inefficient and too hot.........horrible!!!!


Core is just an evolution of their older design. I'm SURE some of their engineers had been working on that design ever since the P3-T was killed off, either by order or just because they knew Netburst was a dead end. That would explain the Pentium-M and Core/Core2 chips being in the state they are/were when they became widely used; they had a lot of dev time before they hit the market, and a lot of revisions as well :)
Posted on Reply
#64
BumbRush
btarunr: yes, the K7 even competed with the 3.4 GHz P4s in all but video encoding (and some audio encoding), because the P4's design had one advantage: it was good at encoding-type apps. For everything else, hell, I have seen P4 chips that at stock ran hotter than a Thunderbird or Palomino chip at stock despite having a better cooler (if the AMD used the same cooler it would run a lot cooler than the P4 did).

And the K8 was NOT A BIG CHANGE from the K7; it was effectively a K7 with better SSE support and an onboard memory controller. These are what killed Intel: the P4 needed massive memory bandwidth to keep the inefficient core fed with data, meaning RDRAM/RIMM memory was a good move at the time, and if they hadn't killed off every other memory maker's ability to make RIMMs, it would still be around. Once dual-channel DDR came out it took the place of RIMM/RDRAM because it was A LOT cheaper, but its perf was still below that of the same chip using RIMM/RDRAM.

On the other hand, the K7 and K8 can perform just fine using single-channel DDR.

Then look at the P4 using dual-channel DDR2 vs. the K8: it still didn't beat it, despite costing more. When AMD moved to DDR2, it was after DDR2 latencies dropped to where it was competitive with DDR in perf AND around the same price, hence it was a good move no matter what all the pissed-off 939 owners say.

People cry that 939's life was too short, but I hear far more P4 Socket 423 owners PISSED that they can't just slap a new CPU into their system. Hell, this has been true since the P4 came out; the 423 had a 6-month lifespan before you couldn't find retail parts for it anymore.......at least AMD hasn't killed a socket in such a short time....
Posted on Reply
#65
btarunr
Editor & Senior Moderator
Tualatin was an evolution, only it had to be killed because at that point the USP was clock speed, and people lived in the dogma that GHz means everything. That dogma was crushed by the AMD K8, to the point that Intel was forced to give up on Netburst; at one point it was touted that it would take Intel a 5+ GHz Netburst (which would double as a room warmer) to beat the AMD San Diego FX-57, which ran cool and blazing fast at 2.80 GHz. At this point consumers slowly came out of their shells and started to disregard "GHz moar means moar!1!"...so Intel went back to their closets, brought out the dusty Tualatin prints, improved on them a great deal, and released Yonah...which proved it could take on the K8 despite originally being a mobile processor. So.....Conroe, Allendale, eventually Kentsfield, and it's bad days for the K8.

What's my point? Tualatin was advanced, more so than the Netbursts of its time, so Intel made the evolution way back then and forgot about it. But AMD has put in a lackluster effort in competing with Core. They grew complacent over their K8 success, thought K8 + DDR2 would bring them back to the top but fumbled, thought the K10 would do it but did it wrong. They used the same K8 design methodology: four cores on one die, an IMC for PC2-8500, and 512 KB dedicated L2 caches per core (here's the problem: Core processors enjoy shared caches, so single-threaded processes have access to greater amounts of fast L2 cache when needed). AMD put in an L3 cache of 2 MB, but the problem with L3 caches is that they're not as fast as L2 caches, don't have the same bandwidth, and step up latencies instead. When there was panic in the Intel camp and they rolled out the P4 EE (Emergency Edition), all they did was add a 2 MB L3 cache, which was clearly slow, stepped up latencies, etc.
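The latency trade-off being argued here can be put into a toy average-memory-access-time model. Every cycle count and miss rate below is an invented round number for illustration, not a measured figure for any real chip:

```python
# Toy average-memory-access-time (AMAT) model. Each deeper cache level is
# visited only on a miss above it, so an L3 adds a latency step on every
# path that reaches it, even when it helps overall by catching misses
# that would otherwise go all the way to DRAM.

def amat(l1, l2, l3, dram, miss_l1, miss_l2, miss_l3):
    """Average access latency in cycles for a three-level hierarchy."""
    return l1 + miss_l1 * (l2 + miss_l2 * (l3 + miss_l3 * dram))

# No L3: every L2 miss pays the full DRAM latency.
no_l3 = amat(l1=3, l2=15, l3=0, dram=200, miss_l1=0.10, miss_l2=0.30, miss_l3=1.0)
# A 40-cycle L3 that catches 60% of L2 misses.
with_l3 = amat(l1=3, l2=15, l3=40, dram=200, miss_l1=0.10, miss_l2=0.30, miss_l3=0.40)

print(round(no_l3, 2), round(with_l3, 2))
```

With these made-up numbers the L3 lowers the average (about 8.1 cycles versus 10.5 without it), yet every access it serves costs 40 cycles against the L2's 15, which is exactly the "steps up latencies" complaint above.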

BumbRush: the P4 had the advantage of SSE2, which the K7 lacked; that's why the P4 encoded better.
Posted on Reply
#66
BumbRush
btarunr, it wasn't all about SSE2; many encoding apps back then didn't make use, or proper use, of SSE2. It was more about the way the P4/Netburst is built, hence it still competed with the overall far better K8 chips.

Also, the SSE support in the K7 was so-so; it was more of a tacked-on SSE unit. Think of it like the P4's AMD64 (x86-64) support: it was tacked on as an afterthought/upgrade but wasn't native to the CPU.

And I agree the K10 wasn't a great move, but that was a decision by the morons in management. We didn't see a K9 because they cancelled the project, and when they did that they lost one of their better designers over it.

The K9 was a totally new design; from what I have been told it was a mix of RISC and CISC approaches, in many ways based on concepts from the old DEC Alpha CPUs. That rings true, since much of the design team was made up of ex-DEC engineers who designed the Alpha chips.

I honestly think management just didn't get that reducing the instruction set (RISC) could be a GOOD THING: remove rarely-used or unused portions, or portions that could be emulated using new features with no perf loss, and replace them with new instructions that could boost perf across the board. But as we know, management often fears change.

Effectively I see the K10 as a rushed design that will get ironed out given a little time; at least, unlike Intel's execs moving to and sticking with the Netburst cores, the AMD execs had a solid design they forced the design team to stick with/update.

I would expect the 45nm K10s to bring optimizations aplenty. Basically the K10s you're seeing now are like the Pentium-M: an evolution on a theme, and as with the Core2 I would expect AMD to get it worked out in due time.

Now Intel's FINALLY moving to an IMC; this means everybody who owns an Intel system and wants to upgrade gets the fun of buying yet another new board and CPU instead of just a CPU. It's why my next upgrade is just gonna be a new AM2+ board and whatever chip has the best price/perf/overclock ratio at the time. I already got a cooler that will work, and I will take my current AM2 rig and build a 2nd box using RAM I got sitting on my shelf :)

I know Intel's currently faster, but I'm honestly considering a quad or tri core, since I encode a lot and I KNOW the app I use WILL make use of the quads (as will my audio encoding app). Also, the app I use will get support for the new SSE4A and other optimizations for the K10 chips soon (already being worked on) :)

I just wanna be able to swap my chip a year from now and upgrade without a new board (I have swapped my AM2 chip 4 times now). I HATE replacing motherboards.....fucking pain in the ass!!!!
Posted on Reply
#67
btarunr
Editor & Senior Moderator
Not all AM2 boards today take (support) AM2+ processors; hence you can't be sure your current AM2 board will support AM3 or future chips. Isn't AM3 supposed to support DDR3, even though the chips could run on an AM2 board?

K8 + DDR2 = K9, according to most motherboard manufacturers. Many, including the likes of MSI and ASUS, resorted to using the term "K9" in the names of their AM2 boards. I don't know if it was a blooper, but many took AM2 processors to be K9.
Posted on Reply
#68
Morgoth
Fueled by Sapphire
BumbRushnow Intel's FINALLY moving to an IMC; this means everybody who owns an Intel system and wants to upgrade gets the fun of buying yet another new board and CPU instead of just a CPU. It's why my next upgrade is just gonna be a new AM2+ board and whatever chip has the best price/perf/overclock ratio at the time. I already got a cooler that will work, and I will take my current AM2 rig and build a 2nd box using RAM I got sitting on my shelf
Emm, new socket yes, 'cause LGA775 can't handle the features of Nehalem.
And even if AMD gets something better than the Core2, AMD will still fail, 'cause the Core2 gets replaced by Nehalem, which is 30% faster clock for clock! Take that.
Posted on Reply
#69
BumbRush
btarunrNot all AM2 boards today take (support) AM2+ processors; hence you can't be sure your current AM2 board will support AM3 or future chips. Isn't AM3 supposed to support DDR3, even though the chips could run on an AM2 board?

K8 + DDR2 = K9, according to most motherboard manufacturers. Many, including the likes of MSI and ASUS, resorted to using the term "K9" in the names of their AM2 boards. I don't know if it was a blooper, but many took AM2 processors to be K9.
AM2 chips are not K9; some mobo makers may call them that, but they're not. The K9 was scrapped and never made it to market.

The only AM2 boards that can't take an AM2+ CPU are those that don't have BIOS updates for it, and most of those lack updates because the maker used a BIOS chip without the capacity to hold them (a good part of which is the patch for that bug in the B2 cores).

And AM3 will add DDR3 support, but the chips will also support DDR2, just not on the same motherboard, though I'm sure ASRock/ECS/PCChips will come out with boards that have slots for both kinds of RAM.

Also, AM3 chips will be capable of running in AM2/AM2+ boards; AMD planned this all along. In reality they could have done the same thing with 939, but from what I've been told it would have taken longer to get the chips out, pushing AM2's launch out another year or more, hence they just did a clean jump to AM2 and EOL'd 939 and 754.
Emm, new socket yes, 'cause LGA775 can't handle the features of Nehalem.
And even if AMD gets something better than the Core2, AMD will still fail, 'cause the Core2 gets replaced by Nehalem, which is 30% faster clock for clock! Take that.
Proof of that 30% boost???? I haven't seen any benches that prove it yet.......and I know 775 can't deal with the new design, because Intel's FINALLY copying what AMD did with the first Athlon64s and moving to an IMC instead of a chipset-based memory controller. Intel could be counting that extra efficiency when they say it will be more efficient per clock. Also, you know how much that's gonna cost to get?

I'm guessing DDR3 is a must, so that's gonna run you an arm + leg + left nut, or a kidney.....

To me it's more about what I can get for my $ and what will last me the longest. Sure, I could get a Core2 and overclock it like mad, but what about when SSE5 comes out and encoding apps support it? Do I wanna just swap out a CPU, or do I wanna be forced to buy a whole new rig?

I got a different view than most of you, I guess. I don't like swapping my board; if I got one that works well I prefer to stick with it as long as I can. Board swapping to me is just too damn time-consuming.......

Meh, I figure either way, if I get a chip that can do 2.9-3 GHz on air, that will drastically speed up encoding for me, and I'm all for it :)
Posted on Reply
#70
Morgoth
Fueled by Sapphire
There are 2 benches of Nehalem; I'll look tomorrow to see if I can find them. Why are you saying Intel is copying AMD for the integrated memory controller? If so, AMD copied IBM for the integrated memory controller... but I could be wrong.
And DDR3 will drop in price at the end of this year.
BTW, Nehalem will remove the northbridge, and I don't think AMD is this far...
Posted on Reply
#71
BumbRush
AMD's northbridge is actually on the CPU; the northbridge as it's classically known is the memory controller and some other small components. The "northbridge" on current AMD boards is far from the same as it was on older boards; hell, on the K10 you have to change the northbridge clock multiplier to overclock using the FSB (the problem so many clockers had was that they didn't, or couldn't, lower the northbridge multi).

See, people and companies still list "north" and "south" on some chipsets, but in reality it's not the same as it was in the K7 days, and I'm SURE Intel's new boards will still have 2 chips for the board chipset if they had 2 already, because the "south" tends to be SATA/IDE controllers and such, and the "north" tends to manage the communication of PCI/PCI-E devices with the CPU and each other.

Intel still today uses the classical northbridge that handles all system traffic to memory and CPU; this is one reason they have had to up the FSB over and over, because it's saturated by all the memory and system traffic. It's become like a freeway in Cali at rush hour!!!!!
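The saturation argument can be put in rough numbers. The transfer rates below are ordinary peak (datasheet-style) figures used purely for illustration, not measurements:

```python
# Peak bus bandwidth = transfer rate (MT/s) x width (bytes), in GB/s.
# A shared front-side bus carries CPU<->memory traffic *and* I/O traffic,
# so its ceiling caps what the memory subsystem can actually deliver.

def peak_gb_per_s(mega_transfers: int, width_bytes: int) -> float:
    """Theoretical peak bandwidth in GB/s (decimal)."""
    return mega_transfers * width_bytes / 1000

fsb_1066 = peak_gb_per_s(1066, 8)          # 64-bit FSB at 1066 MT/s
ddr2_800_dual = 2 * peak_gb_per_s(800, 8)  # dual-channel DDR2-800

print(round(fsb_1066, 1), round(ddr2_800_dual, 1))
```

With these peaks, dual-channel DDR2-800 can supply roughly 12.8 GB/s while a 1066 MT/s FSB tops out around 8.5 GB/s before any I/O traffic is counted, which is the sense in which the bus, not the DRAM, becomes the bottleneck that an on-die memory controller removes.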
Posted on Reply
#72
[I.R.A]_FBi
No need for mindless fanboi fighting. Intel's current arch is superior in every way except memory handling... which is going to change with the next arch... what is AMD's counter?
Posted on Reply
#73
BumbRush
A pie in the face, maybe?

Personally I don't need to be on the "top dog" platform; as long as I'm not on a P4/P-D I'm happy. The only thing those old heaters are good for is as encoding boxes!!!!
Posted on Reply
#74
[I.R.A]_FBi
BumbRushA pie in the face, maybe?

Personally I don't need to be on the "top dog" platform; as long as I'm not on a P4/P-D I'm happy. The only thing those old heaters are good for is as encoding boxes!!!!
You can have a pie.

Keep on fooling yourself:)
Posted on Reply
#75
btarunr
Editor & Senior Moderator
MorgothThere are 2 benches of Nehalem; I'll look tomorrow to see if I can find them. Why are you saying Intel is copying AMD for the integrated memory controller? If so, AMD copied IBM for the integrated memory controller... but I could be wrong.
And DDR3 will drop in price at the end of this year.
BTW, Nehalem will remove the northbridge, and I don't think AMD is this far...
You're right. It's not about who copied whom, it's about what they made of it. AMD still doesn't have a working prototype processor with a DDR3 controller. If someone calls it "flicked from AMD," remind them that AMD uses x86, SSE 1~4, and MMX, which are all technologies licensed from Intel. Besides, integrating a memory controller isn't a patented AMD technology (else we'd be in the middle of a legal battle).

And no, Nehalem doesn't remove the NB. Remember that thread with the pics of the Intel Smackover board? It had an exposed NB for illustrative purposes. So yeah, the NB was present, though companies like NVIDIA could come up with a chipset consisting of just a single chip (the way it was with the nForce 500 (AMD) series, except the 590 SLI, where the chipset consisted of a single chip).
Posted on Reply