Thursday, February 12th 2009

Samsung Begins Production of High-performance GDDR5 Memory Using 50-nm Technology

Samsung Electronics Co., Ltd., the world leader in advanced memory technology and the leading producer of high-end graphics memory, announced today that it has begun mass producing GDDR5 graphics memory using 50-nanometer class process technology.

"Our early 2009 introduction of GDDR5 chips will help us to meet the growing demand for higher performance graphics memory in PCs, graphic cards and game consoles," said Mueez Deen, director, mobile and graphics memory, Samsung Semiconductor, Inc. "Because GDDR5 is the fastest and highest performing memory in the world, we're able to improve the gaming experience with it across all platforms," he added.
Designed to support a maximum data transfer speed of 7.0 Gbps per pin, Samsung's GDDR5 will render more life-like 3D imaging with a maximum bandwidth of 28 GB/s per device, more than twice the 12.8 GB/s of GDDR4, the previous fastest graphics memory. That processing speed is equivalent to transferring nineteen 1.5 GB DVD-resolution movies in one second. The high image-processing speed of GDDR5 also supports the latest data formats (Blu-ray and full HD).
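
For readers who want to check the arithmetic, here is a minimal sketch of how the headline figures fit together, assuming the 7.0 Gbps rating is the per-pin rate of a x32 device (which is what the 28 GB/s figure implies):

# Back-of-the-envelope check of the headline numbers above.
# Assumption: 7.0 Gbps is the per-pin data rate of a x32 device.
per_pin_gbps = 7.0                 # maximum data rate per pin (Gbit/s)
interface_width = 32               # data pins on a x32 device
device_gbs = per_pin_gbps * interface_width / 8   # Gbit/s -> GB/s
print(device_gbs)                  # 28.0 GB/s, as quoted

gddr4_gbs = 12.8                   # previous fastest figure, per the article
print(device_gbs / gddr4_gbs)      # ~2.19, i.e. "more than twice"

dvd_gb = 1.5                       # size of one DVD-resolution movie, per the article
print(device_gbs / dvd_gb)         # ~18.7, roughly nineteen movies per second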

Unlike GDDR4, which transfers data using a strobe-and-clock technique, GDDR5 achieves much higher processing speeds by operating with a free-running clock that does not require data read/write operations to be synchronized to the clock. By adopting 50nm-class technology, Samsung expects production efficiency to rise 100 percent over 60nm-class technology. In addition, Samsung's GDDR5 operates at 1.35 volts (V), a 20 percent reduction in power consumption compared to the 1.8V at which GDDR4 devices operate.

Now available in a 32 Megabit (Mb) x32 configuration and also configurable as a 64Mb x16 device, GDDR5 is expected by Samsung to account for over 20 percent of the total graphics memory market in 2009. The company also said it plans to extend 50-nm process technology across its graphics memory line-up this year.
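
As an illustration of what those organizations imply for capacity, the sketch below reads "32Mb x32" in the usual DRAM sense of 32M addressable locations, each 32 bits wide (i.e. a 1 Gbit die); the eight-chip board at the end is hypothetical and not from the press release:

# Capacity implied by the stated organizations (illustrative only).
words = 32 * 2**20                   # "32Mb x32": 32M addressable locations...
width_x32 = 32                       # ...each 32 bits wide
die_bits = words * width_x32
print(die_bits / 2**30)              # 1.0 Gbit per die
print(die_bits / 8 / 2**20)          # 128.0 MB per chip

# Hypothetical board: eight x32 chips side by side.
chips = 8
print(chips * width_x32)             # 256-bit memory bus
print(chips * die_bits / 8 / 2**30)  # 1.0 GB of graphics memory
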
Source: Samsung

26 Comments on Samsung Begins Production of High-performance GDDR5 Memory Using 50-nm Technology

#1
jbunch07
So I guess that means current video cards using GDDR5 are using the 60nm technology?
Posted on Reply
#2
Weer
jbunch07: So I guess that means current video cards using GDDR5 are using the 60nm technology?
"Samsung expects production efficiency to rise 100 percent over 60nm class technology."
Posted on Reply
#3
jbunch07
Weer: "Samsung expects production efficiency to rise 100 percent over 60nm class technology."
Ah, must have overlooked that. :o
Looks good to me though.
Posted on Reply
#4
Fhgwghads
Hopefully it gives enough incentive for Nvidia to move to GDDR5, seems like a good move if it's going to deliver the performance boost Samsung is saying it will.
Posted on Reply
#5
eidairaman1
The Exiled Airman
I see AMD breaking its contract with Qimonda and moving back to Samsung.
Posted on Reply
#6
Weer
Fhgwghads: Hopefully it gives enough incentive for Nvidia to move to GDDR5, seems like a good move if it's going to deliver the performance boost Samsung is saying it will.
What's the point of expensive RAM when you can just increase the bus width?
Posted on Reply
#7
LAN_deRf_HA
Expensive RAM is more favorable than an expensive bus; it simplifies the PCB design, and in the case of GDDR5 even more so, because the circuit traces don't need to waste all that space on extra squiggles.
Posted on Reply
#8
Wile E
Power User
Weer: What's the point of expensive RAM when you can just increase the bus width?
Because increasing the bus and using GDDR3 is more expensive than just buying GDDR5 to begin with.
Posted on Reply
#9
Weer
Wile E: Because increasing the bus and using GDDR3 is more expensive than just buying GDDR5 to begin with.
And how on earth does that make sense? (seriously speaking)

Increasing the bus does not cost money.
Posted on Reply
#10
Wile E
Power User
Weer: And how on earth does that make sense? (seriously speaking)

Increasing the bus does not cost money.
Yes it most certainly does. It increases PCB complexity tremendously. It's more expensive from both an R&D standpoint and a manufacturing standpoint.
Posted on Reply
#11
Weer
Wile E: Yes it most certainly does. It increases PCB complexity tremendously. It's more expensive from both an R&D standpoint and a manufacturing standpoint.
That's insane! Just because you have to trace a few more lines on the PCB it costs more than RAM that is clocked twice as high? Stupid laws of physics and reality..
Posted on Reply
#12
AsRock
TPU addict
Weer: That's insane! Just because you have to trace a few more lines on the PCB it costs more than RAM that is clocked twice as high? Stupid laws of physics and reality..
Do not forget the extra layers of PCB required so you can get everything to go from A to B, so to speak.

Would be nice to see how this ram would do on a 4870 x2 lol.
Posted on Reply
#13
jbunch07
AsRock: Do not forget the extra layers of PCB required so you can get everything to go from A to B, so to speak.

Would be nice to see how this ram would do on a 4870 x2 lol.
I was wondering the same thing. :ohwell:
Posted on Reply
#14
BazookaJoe
:\ - Yeah - I'm afraid I have to agree that bus width is far more beneficial to any graphically intensive application than RAM clocks (although there has to be a balance between the two)

If either one leads too far ahead of the other, it's just a total waste overall.

The only real solution would be to integrate the RAM and the GPU. See, what people seem to forget is that electronic impulses take TIME to travel along the tracks on a PCB; the real enemy here is the distance between the RAM and the GPU.

Once you start working at the multi-gigahertz level, even the extra time taken for that pulse to travel an extra half an inch can be a significant problem.

Until we start putting the two closer together, there really are real-world, physics-based limitations that the designers have to deal with and can't avoid.

(See: Intel moving the northbridge into the CPU, and others.)
Posted on Reply
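
To put rough numbers on the trace-delay point above: the sketch below assumes signals travel along a typical FR-4 board trace at roughly 6 inches per nanosecond (an assumed figure, not from the thread) and uses the new part's 7 Gbps per-pin ceiling as the data rate.

# Rough signal-timing math behind the "distance is the enemy" argument.
prop_speed_in_per_ns = 6.0           # assumed propagation speed on an FR-4 trace (inches/ns)
extra_trace_in = 0.5                 # the "extra half an inch" from the post above
extra_delay_ps = extra_trace_in / prop_speed_in_per_ns * 1000
print(extra_delay_ps)                # ~83 ps of added flight time

data_rate_gbps = 7.0                 # the new GDDR5's quoted per-pin maximum
bit_time_ps = 1000 / data_rate_gbps
print(bit_time_ps)                   # ~143 ps per bit at 7 Gbps
print(extra_delay_ps / bit_time_ps)  # ~0.58: half an inch of mismatch costs over half a bit period

That sensitivity is also broadly the "fewer squiggles" point raised earlier in the thread: GDDR5 trains its data lines at start-up, relaxing the trace-length matching a wide GDDR3 bus would demand.
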
#15
Fhgwghads
BazookaJoe: :\ - Yeah - I'm afraid I have to agree that bus width is far more beneficial to any graphically intensive application than RAM clocks (although there has to be a balance between the two)
Is that one of the reasons why Nvidia is still using GDDR3 on most of their video cards?
Posted on Reply
#16
eidairaman1
The Exiled Airman
Weer: What's the point of expensive RAM when you can just increase the bus width?
That in turn makes the board higher priced.
Posted on Reply
#17
Lazzer408
It doesn't cost any more to lay/etch 10 traces or 1000 traces, other than the initial layout.
Posted on Reply
#18
eidairaman1
The Exiled Airman
Actually it does, because why would Intel release several different sockets for the Core i series then? 1366 has the most traces = costly, and then 1156 has like the least = cheapest.
Posted on Reply
#19
Lazzer408
They charge whatever they want. It all depends on what the market analysis team came up with. A product's price is in no way, shape, or form an indication of production cost. Obviously you can't compare a car to a pencil, but two different board layouts/designs have very minimal cost difference. Take CPUs for example. A 2GHz part could have the same core as a 3GHz part with only a multiplier lock changed, yet the price can be threefold.

Technology is getting to the point where manufacturers are limited by the electrical connections to devices. The only cheap and easy solution to that is parallel lanes. They have come up with some ideas like BGA connection to the PCB and LGA sockets. Back in the Socket 478 days, Intel told me the 478 socket had a limit of 4GHz before the pins became "little antennas", as they said.

Video cards have 256-bit memory buses. Why so wide? Why not just make faster RAM? Because you can't get the performance out of it, that's why.

To get back on topic, it is a good thing to see lower power consumption. This leads to higher speeds. You have to remember that they can't jump too far ahead or they'll lose money. If ATI made, say, a 4870x12 single-slot card and charged $200 for it, what would that do to the graphics market? It would be ruined. We'd all own 2 of them and not need an upgrade for 5 years.
Posted on Reply
#20
kurik
Lazzer408: It doesn't cost any more to lay/etch 10 traces or 1000 traces, other than the initial layout.
You are both right and wrong about this. You are right that the cost difference is basically zero when it comes to 10 traces vs 1000 traces. But you must also consider that having 1000 traces involves having multiple layers (I think most graphics boards use 10-12 layers?). Add to that, the increased number of layers/traces also means via holes that need to be drilled (tooling costs) and copper plated. Not even mentioning the R&D costs of making these complex boards.
Posted on Reply
#21
Lazzer408
kurik: You are both right and wrong about this. You are right that the cost difference is basically zero when it comes to 10 traces vs 1000 traces. But you must also consider that having 1000 traces involves having multiple layers (I think most graphics boards use 10-12 layers?). Add to that, the increased number of layers/traces also means via holes that need to be drilled (tooling costs) and copper plated. Not even mentioning the R&D costs of making these complex boards.
A drop in the bucket for a manufacturer. If 12 layers cost them $5, it would literally be less than $1 to add 2 more layers.

They won't make something like 512-bit memory addressing anyway, because they feel we don't need it yet. It's all marketing. We get the most profitable technology available. Not the fastest.
Posted on Reply
#22
eidairaman1
The Exiled Airman
Well, companies have to make a profit, and I don't see you owning a 4870, so why should you complain about this? BTW, profit is what all companies have to make; no profit means nothing better gets released, and thus jobs are lost, etc.
Posted on Reply
#23
Wile E
Power User
Lazzer408: A drop in the bucket for a manufacturer. If 12 layers cost them $5, it would literally be less than $1 to add 2 more layers.

They won't make something like 512-bit memory addressing anyway, because they feel we don't need it yet. It's all marketing. We get the most profitable technology available. Not the fastest.
The HD 2900 had a 512-bit bus, as does the current GTX 280/285.

And $1 across millions of boards adds up pretty quickly. It's still significant, and that likely only covers the material costs, not the tooling costs. Overall, the 512-bit bus with GDDR3 is more expensive to both develop and manufacture than the 256-bit bus with GDDR5.
Posted on Reply
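
A quick sketch of the bandwidth math behind this wide-bus-versus-fast-memory debate. The 2.2 Gbps and 3.6 Gbps effective rates below are assumptions roughly in line with shipping GTX 280-class (512-bit GDDR3) and HD 4870-class (256-bit GDDR5) boards of the day, not figures from the article:

# Memory bandwidth = (bus width in bits / 8) * effective data rate in Gbps.
def bandwidth_gbs(bus_width_bits, data_rate_gbps):
    return bus_width_bits / 8 * data_rate_gbps

print(bandwidth_gbs(512, 2.2))     # ~141 GB/s: wide 512-bit bus with GDDR3 (assumed rate)
print(bandwidth_gbs(256, 3.6))     # ~115 GB/s: 256-bit bus with first-generation GDDR5 (assumed rate)
print(bandwidth_gbs(256, 7.0))     # 224 GB/s: the same 256-bit bus at the new 7 Gbps ceiling

At the 7 Gbps ceiling of the new parts, the narrower bus wins outright while needing half as many memory traces, which is the cost argument being made above.
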
#24
Lazzer408
Wile E: The HD 2900 had a 512-bit bus, as does the current GTX 280/285.

And $1 across millions of boards adds up pretty quickly. It's still significant, and that likely only covers the material costs, not the tooling costs. Overall, the 512-bit bus with GDDR3 is more expensive to both develop and manufacture than the 256-bit bus with GDDR5.
So we get slower cards so someone can get a new McLaren :D
Posted on Reply
#25
kurik
Lazzer408: A drop in the bucket for a manufacturer. If 12 layers cost them $5, it would literally be less than $1 to add 2 more layers.

They won't make something like 512-bit memory addressing anyway, because they feel we don't need it yet. It's all marketing. We get the most profitable technology available. Not the fastest.
Where did you get those numbers? I'd be very shocked if it was even remotely close to a cost of 5 dollars for a 12-layer board. I'm sure that the tooling and machine-time costs for a board with a 512-bit bus are way more expensive.
Posted on Reply