
Samsung Begins Production of High-performance GDDR5 Memory Using 50-nm Technology

btarunr

Editor & Senior Moderator
Samsung Electronics Co., Ltd., the world leader in advanced memory technology and the leading producer of high-end graphics memory, announced today that it has begun mass producing GDDR5 graphics memory using 50-nanometer class process technology.

"Our early 2009 introduction of GDDR5 chips will help us to meet the growing demand for higher performance graphics memory in PCs, graphic cards and game consoles," said Mueez Deen, director, mobile and graphics memory, Samsung Semiconductor, Inc. "Because GDDR5 is the fastest and highest performing memory in the world, we're able to improve the gaming experience with it across all platforms," he added.



Designed to support a maximum data transfer speed of 7.0 Gbps, Samsung's GDDR5 will render more life-like 3D imagery with a maximum 28 GB/s of bandwidth, more than twice the 12.8 GB/s of GDDR4, the previous fastest graphics memory. That ultra-fast processing speed is equivalent to transferring nineteen 1.5 GB DVD-resolution movies in one second. The high image-processing speed of GDDR5 also supports the latest data formats (Blu-ray and full HD).
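As a sketch of the arithmetic behind those figures (assuming a x32 device, i.e. 32 data pins per chip, which matches the configuration described later in the release):

```python
# Per-chip GDDR5 bandwidth from the per-pin data rate quoted above.
# Assumption: a x32 device, i.e. 32 data pins per chip.
data_rate_gbps_per_pin = 7.0
pins = 32

bandwidth_gb_s = data_rate_gbps_per_pin * pins / 8  # bits -> bytes
print(bandwidth_gb_s)         # 28.0 GB/s, the figure quoted above
print(bandwidth_gb_s / 12.8)  # 2.1875, i.e. "more than twice" GDDR4
```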

Unlike GDDR4, which processes data and images using a strobe-and-clock technique, GDDR5 is much faster because it operates with a free-running clock that does not require the data read/write functions to be synchronized to the operations of the clock. By adopting 50 nm-class technology, Samsung expects production efficiency to rise 100 percent over 60 nm-class technology. In addition, Samsung's GDDR5 operates at 1.35 volts (V), which represents a 20 percent reduction in power consumption compared to the 1.8 V at which GDDR4 devices operate.

Now available in a 32Megabit (Mb) x32 configuration and also configurable as a 64Mb x16 device, Samsung expects GDDR5 to account for over 20 percent of the total graphic memory market in 2009. The company also said it plans to expand the 50-nm process technology throughout its graphics memory line-up this year.
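Decoding the density notation, a 32Mb x32 device works out to a 1 Gbit (128 MB) chip; the eight-chip, 256-bit board arrangement below is an illustrative assumption, not something stated in the release:

```python
# "32Mb x32" = 32M addressable words, each 32 bits wide.
words = 32 * 2**20
width_bits = 32

chip_bits = words * width_bits      # 2**30 = 1 Gbit per chip
chip_mb = chip_bits / 8 / 2**20
print(chip_mb)                      # 128.0 MB per chip

# Illustrative assumption: eight x32 chips side by side would give a
# 256-bit bus and 8 * 128 MB = 1 GB of memory on the card.
print(8 * width_bits, 8 * chip_mb)  # 256-bit bus, 1024.0 MB
```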

View at TechPowerUp Main Site
 
So I guess that means current video cards using GDDR5 are using the 60nm technology?
 

"Samsung expects production efficiency to rise 100 percent over 60nm class technology."
 

Ah, must have overlooked that. :o
Looks good to me though.
 
Hopefully it gives enough incentive for Nvidia to move to GDDR5, seems like a good move if it's going to deliver the performance boost Samsung is saying it will.
 
I see AMD breaking its contract with Qimonda and moving back to Samsung.
 

What's the point of expensive RAM when you can just increase the bus width?
 
Expensive RAM is more favorable than an expensive bus: it simplifies the PCB design, and in the case of GDDR5 even more so, because the circuit traces don't need to waste all that space with extra squiggles.
 
What's the point of expensive RAM when you can just increase the bus width?

Because increasing the bus and using GDDR3 is more expensive than just buying GDDR5 to begin with.
 

And how on earth does that make sense? (seriously speaking)

Increasing the bus does not cost money.
 

Yes it most certainly does. It increases pcb complexity tremendously. It's more expensive from both an R&D standpoint, and also a manufacturing standpoint.
 

That's insane! Just because you have to trace a few more lines on the PCB, it costs more than RAM that is clocked twice as high? Stupid laws of physics and reality.
 

Do not forget the extra layers of PCB required so you can get everything to go from A to B, so to speak.

Would be nice to see how this ram would do on a 4870 x2 lol.
 

I was wondering the same thing. :ohwell:
 
:\ - Yeah, I'm afraid I have to agree that bus width is far more beneficial to any graphically intensive application than RAM clocks (although there has to be a balance between the two).

If either one gets too far ahead of the other, it's just a total waste overall.

The only real solution would be to integrate the RAM and the GPU. See, what people seem to forget is that electrical impulses take TIME to travel along the traces on a PCB. The real enemy here is the distance between the RAM and the GPU.

Once you start working at the multi-gigahertz level, even the extra time taken for a pulse to travel an extra half an inch can be a significant problem.

Until we start putting the two closer together, there are real-world, physics-based limitations that the designers have to deal with and can't avoid.

(See: Intel moving the northbridge into the CPU, among others.)
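The half-inch argument can be put in rough numbers; the ~15 cm/ns propagation speed is an assumption typical of FR-4 board material (about half the speed of light), not a figure from the thread:

```python
# How much flight time "an extra half an inch" of trace adds at 7 Gbps.
# Assumption: ~15 cm/ns signal propagation on FR-4 PCB material.
prop_speed_cm_per_ns = 15.0
extra_trace_cm = 2.54 / 2            # half an inch in centimetres

extra_delay_ns = extra_trace_cm / prop_speed_cm_per_ns
bit_time_ns = 1 / 7.0                # one bit period at 7.0 Gbps per pin

print(extra_delay_ns)                # ~0.085 ns of added flight time
print(extra_delay_ns / bit_time_ns)  # ~0.59: more than half a bit period
```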
 


Is that one of the reasons why Nvidia is still using GDDR3 on most of their video cards?
 
What's the point of expensive RAM when you can just increase the bus width?

That in turn makes the board higher-priced.
 
It doesn't cost any more to lay/etch 10 traces than 1,000 traces, other than the initial layout.
 
Actually it does; why else would Intel release several different sockets for the Core i series? LGA 1366 has the most traces and is the most costly, while LGA 1156 has the fewest and is the cheapest.
 
They charge whatever they want. It all depends on what the market analysis team came up with. A product's price is in no way, shape or form an indication of its production cost. Obviously you can't compare a car to a pencil, but two different board layouts/designs have a very minimal cost difference. Take CPUs, for example: a 2 GHz chip could have the same core as a 3 GHz one with only a multiplier lock changed, yet the price can be three-fold.

Technology is getting to the point where manufacturers are limited by the electrical connections to devices. The only cheap and easy solution to that is parallel lanes. They have come up with some ideas, like BGA connections to the PCB and LGA sockets. Back in the 478 days, Intel told me the 478 socket had a limit of 4 GHz before the pins became "little antennas", as they said.

Video cards have a 256-bit memory bus. Why so wide? Why not just make faster RAM? Because you can't get the performance out of it, that's why.

To get back on topic, it is a good thing to see lower power consumption. This leads to higher speeds. You have to remember that they can't jump too far ahead or they'll lose money. If ATI made, say, a 4870x12 single-slot card and charged $200 for it, what would that do to the graphics market? It would be ruined. We'd all own two of them and not need an upgrade for five years.
 
It doesn't cost any more to lay/etch 10 traces than 1,000 traces, other than the initial layout.

You are both right and wrong about this. You are right that the cost difference is basically zero when it comes to 10 traces vs. 1,000 traces. But you must also consider that having 1,000 traces means having multiple layers (I think most graphics boards use 10-12 layers?). Add to that that the increased number of layers/traces also means via holes that need to be drilled (tooling costs) and copper-plated. Not even mentioning the R&D costs of making these complex boards.
 

A drop in the bucket for a manufacturer. If 12 layers cost them $5, it would literally be less than $1 to add 2 more layers.

They won't make something like 512-bit memory addressing anyway, because they feel we don't need it yet. It's all marketing. We get the most profitable technology available. Not the fastest.
 
well companies have to make a profit and i don't see you owning a 4870 so why should you complain about this? Btw Profit is what all companies have to make, no profit means nothing better is released, thus jobs are lost etc.
 

The HD2900 had a 512-bit bus, as does the current GTX 280/285.

And $1 across millions of boards adds up pretty quickly. It's still significant, and that likely only covers the material costs, not the tooling costs. Overall, a 512-bit bus with GDDR3 is more expensive to both develop and manufacture than a 256-bit bus with GDDR5.
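The trade-off can be sketched numerically; the per-pin data rates below are illustrative assumptions (roughly GDDR3 on a GTX 280 vs. early GDDR5 on an HD 4870), not figures from the article:

```python
# Total bandwidth = bus width (bits) * per-pin data rate (Gbps) / 8.
# The per-pin rates below are illustrative assumptions (roughly GDDR3 on
# a GTX 280 vs. early GDDR5 on an HD 4870), not figures from the article.
def bandwidth_gb_s(bus_bits: int, gbps_per_pin: float) -> float:
    """Total memory bandwidth in GB/s."""
    return bus_bits * gbps_per_pin / 8

print(bandwidth_gb_s(512, 2.2))  # 140.8 GB/s: wide bus, slower GDDR3
print(bandwidth_gb_s(256, 3.6))  # 115.2 GB/s: narrow bus, early GDDR5
print(bandwidth_gb_s(256, 7.0))  # 224.0 GB/s: narrow bus at the new 7 Gbps
```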
 

So we get slower cards so someone can get a new McLaren :D
 