
G.SKILL DDR4 Memory Achieves Fastest Air-Cooling Record at 4062 MHz

btarunr

Editor & Senior Moderator
Staff member
G.SKILL International Enterprise Co., Ltd., the world's leading manufacturer of extreme performance memory and solid state storage, is proud to announce that its Ripjaws 4 DDR4 memory has achieved the fastest air-cooling frequency record at DDR4 4062 MHz on the ASRock X99M Killer/3.1 motherboard.

G.SKILL has been dedicated to unleashing the maximum performance of DDR4 memory since its launch in August 2014. Working closely with ASRock, G.SKILL has pushed its DDR4 memory to a new height of frequency at a whopping 4062 MHz! It is the fastest DDR4 frequency ever seen with both CPU and memory under standard air cooling.



This amazing record has been validated by CPU-Z. For more detailed information, please refer to this page. "This outstanding performance is not only a tremendous glory, but also a huge acknowledgement of our overclocking ability," said James Lee, VP of ASRock Sales and Marketing.

To witness the exciting record-breaking moment, please visit the following video.


View at TechPowerUp Main Site
 
ooOOoo 28 MHz faster than the ADATA module a few days ago!
This time we have some screenies for the fans.
 
I'm trying to think of a scenario where higher-speed RAM would make enough of a difference, and I'm not finding one. Don't get me wrong, I fully understand the idea and why real-world validation is needed; it helps push the envelope so that today's high end becomes tomorrow's standard.

But we are reaching the point where branch prediction, good programming, larger on-die caches, and whatnot are making the RAM you use irrelevant for 99% of real-world applications. Essentially, we need a CPU revolution to go along with all this extra RAM speed.
 
On the one hand DDR4 is approaching GDDR5 speeds. On the other hand, it still costs twice as much as GDDR5 lol
 
Which is why I am not quite sure it won't be skipped for more interesting hardware within a year, with HBM becoming available. As it matures, I imagine we could start seeing AMD stick 4 GB into an APU/CPU in a huge socket and alleviate more of their latency issues, and with corrected core designs and better lithography they could even..... outperform Intel.

But then again, having a 4 GB on-die cache shared between a GPU handling the crap work of gaming and an 8-core CPU/APU using it, plus 16 GB of DDR4 paired with a 390.... hmm
 
Which is why I am not quite sure it won't be skipped for more interesting hardware within a year, with HBM becoming available. As it matures, I imagine we could start seeing AMD stick 4 GB into an APU/CPU in a huge socket and alleviate more of their latency issues, and with corrected core designs and better lithography they could even..... outperform Intel.

But then again, having a 4 GB on-die cache shared between a GPU handling the crap work of gaming and an 8-core CPU/APU using it, plus 16 GB of DDR4 paired with a 390.... hmm

Wouldn't it be possible to eventually just use HBM ram sticks instead of DDR?
 
Yes, but having RAM would still help; perhaps the system could use it as a virtual drive and swap space, much like using it now with a virtual disk driver, but implemented and managed by hardware and the OS instead of a shim being inserted to intercept calls.
 
Wouldn't it be possible to eventually just use HBM ram sticks instead of DDR?
Why bother with that when they can make it HBM eDRAM
(remember folks, you heard it here first :p)
 
Why bother with that when they can make it HBM eDRAM
(remember folks, you heard it here first :p)
Why wouldn't they? You might be proposing something novel, but Intel has had HMC2 in the channel, and MCDRAM (HBM) starting at 16 GB on a silicon interposer package has been primed for some time.

[Image: Intel Knights Landing package (Knights_Landing_Car_575px.jpg)]
 
....and I'm still here rocking my DDR2-800 at 900 MHz. :-P
 
Only single channel, though; whack in 4 sticks and it's not happening.
 
I'm trying to think of a scenario where higher-speed RAM would make enough of a difference, and I'm not finding one. Don't get me wrong, I fully understand the idea and why real-world validation is needed; it helps push the envelope so that today's high end becomes tomorrow's standard.

But we are reaching the point where branch prediction, good programming, larger on-die caches, and whatnot are making the RAM you use irrelevant for 99% of real-world applications. Essentially, we need a CPU revolution to go along with all this extra RAM speed.

When the speeds are almost double those of most DDR3 setups, it actually makes a good bit of difference if you have the processor/SSD speed to go along with it. Maybe not so much for gaming or 3D rendering, but for file handling, compressing/decompressing large amounts of data, and even video editing/conversion, RAM speed can make a decent amount of difference.

I also find it funny that these screenshots only show the idle speed/temps of the CPU and RAM. lolol
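To put rough numbers on that bandwidth difference, here is a back-of-the-envelope sketch. It assumes a purely bandwidth-limited streaming workload, which real compressors and video converters only approximate; `ddr_bandwidth_gbs` and the figures below are illustrative arithmetic, not benchmarks:

```python
# Rough sketch: theoretical peak DDR bandwidth, and time to stream a dataset once.
def ddr_bandwidth_gbs(transfer_rate_mts, channels=1, bus_bits=64):
    """Theoretical peak bandwidth in GB/s: MT/s x bus width in bytes x channels."""
    return transfer_rate_mts * 1e6 * (bus_bits // 8) * channels / 1e9

def stream_time_s(data_gb, bandwidth_gbs):
    """Seconds to read a dataset once at a given bandwidth."""
    return data_gb / bandwidth_gbs

ddr3 = ddr_bandwidth_gbs(1600, channels=2)   # common DDR3-1600 dual channel: 25.6 GB/s
ddr4 = ddr_bandwidth_gbs(3200, channels=2)   # DDR4-3200 dual channel: 51.2 GB/s
print(stream_time_s(100, ddr3))  # ~3.9 s to stream 100 GB
print(stream_time_s(100, ddr4))  # ~2.0 s
```

In practice the CPU, SSD, or the compression algorithm itself usually becomes the bottleneck well before these theoretical peaks are reached.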
 
I'm trying to think of a scenario where higher-speed RAM would make enough of a difference, and I'm not finding one.

Speaking with Crucial about this, they tell me they are seeing amazing things with APUs, but as for mainstream users, you are pretty much correct: gaming won't show much love at these or most extreme speeds. As I am told, the main deal with DDR4 is power savings.
 
As I am told, the main deal with DDR4 is power savings.
...and capacity. The DDR4 spec includes support for stacked DRAM. In reality, bandwidth is plentiful, and improving latency means moving memory closer to the compute cores, which isn't exactly feasible with DIMMs themselves. That leaves power consumption and capacity. On the latency front, we're seeing the evolution of another layer of memory somewhere between cache (SRAM) and system memory (DRAM), where HBM and Intel's eDRAM (on Iris Pro CPUs) are shining examples of a move in that direction. Latency will be far lower if buses end up being a fraction of the length (the physical length signals have to travel) that they are now with system memory. A great example is how DDR4 is almost as fast as the L3 cache on my 3820 at stock, but the L3 cache has latency that's anywhere between 1/4 and 1/5 of DRAM in general.

With that said, I think really high-frequency memory is funny, because high clocks don't make electrical signals travel any faster; they just let you cram more data in when bandwidth already isn't a problem. It doesn't change the memory hierarchy of computers in any way. So while it's nifty to see how "fast" (which is a misnomer) they can go (single channel...), it just doesn't tell us anything useful IMHO.

With that said, give me lower latency, not high bandwidth if we're really concerned about performance. :)
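The bandwidth-vs-latency point can be sketched numerically. The latency figures below (~14 ns for L3, ~65 ns for DRAM) are ballpark values chosen for illustration, not measurements from the poster's 3820:

```python
# High clocks raise peak bandwidth, but do nothing for access latency.
def peak_bandwidth_gbs(transfer_rate_mts, bus_bits=64):
    """Single-channel theoretical peak bandwidth in GB/s."""
    return transfer_rate_mts * 1e6 * (bus_bits // 8) / 1e9

for rate in (2133, 3200, 4062):                      # DDR4 transfer rates in MT/s
    print(rate, round(peak_bandwidth_gbs(rate), 1))  # bandwidth scales linearly with clock

l3_latency_ns, dram_latency_ns = 14, 65   # illustrative ballpark figures
print(dram_latency_ns / l3_latency_ns)    # DRAM stays ~4-5x slower to first byte
```

Nearly doubling the transfer rate from 2133 to 4062 MT/s roughly doubles peak bandwidth, while the time to the first byte of a random access barely moves.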
 
With that said, give me lower latency, not high bandwidth if we're really concerned about performance. :)

Agree with the rest of the post as well, but this. Lots of this!
 
Speaking with Crucial about this, they tell me they are seeing amazing things with APUs, but as for mainstream users, you are pretty much correct: gaming won't show much love at these or most extreme speeds. As I am told, the main deal with DDR4 is power savings.


Power savings on the low end for sure; APUs will benefit in the graphics department, on chipsets that support it..... which are... not here yet.

So by the 2016 timeline that the APUs are supposed to hit, HBM and/or DDR5 will be coming and starting validation on CPUs.

Not knocking you, man, but I expect DDR4 to be short-lived.
 
Wasn't trying to make sense of it, just trying to help explain usages. I too don't see this lasting without some sort of evolution in the ICs.
 