voidshatter
New Member
- Joined: Apr 25, 2011
- Messages: 32 (0.01/day)
| Processor | i7 980X [6C6T] @ 4GHz |
|---|---|
| Motherboard | EVGA E760 |
| Cooling | Corsair H70 |
| Memory | 6 x 2GB DDR3 1600MHz C7, Uncore 3200MHz |
| Video Card(s) | 2 x MSI 6950 2GB Twin Frozr II @ 810/1250, 1536SP |
| Storage | Intel X25-M 160G G2, 2 x 1TB WD Black |
| Display(s) | Dell U2410 |
| Case | Lian Li 7FNWX |
| Power Supply | Corsair HX1000 |
I just don't understand it. Is it because the AMD driver team is lazy?
NVIDIA still offers such a function, and I bet it could be handy when optimizing CUDA programs. However, there is no equivalent in the ATI Stream SDK / OpenCL. What is wrong? OpenCL claims to be able to handle the memory hierarchy, but what good can come of hiding video memory usage?
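For anyone wondering what the asymmetry looks like in code: a minimal sketch, assuming the CUDA toolkit and an OpenCL SDK are installed (it won't build without them, and needs actual GPU hardware to run). CUDA's runtime has `cudaMemGetInfo` for free/total device memory at any moment, while core OpenCL's `clGetDeviceInfo` only exposes the static total via `CL_DEVICE_GLOBAL_MEM_SIZE` — there is no standard "free memory" query.

```c
/* Sketch: CUDA can report free memory at runtime, but core OpenCL
 * only reports the total size of global memory.
 * Requires the CUDA toolkit and an OpenCL SDK to build, and a GPU to run. */
#include <stdio.h>
#include <cuda_runtime.h>   /* cudaMemGetInfo */
#include <CL/cl.h>          /* clGetPlatformIDs, clGetDeviceIDs, clGetDeviceInfo */

int main(void) {
    /* CUDA: free vs. total device memory, queryable at any time. */
    size_t free_bytes = 0, total_bytes = 0;
    if (cudaMemGetInfo(&free_bytes, &total_bytes) == cudaSuccess)
        printf("CUDA: %zu bytes free of %zu total\n", free_bytes, total_bytes);

    /* OpenCL: only the static total is exposed by the core spec. */
    cl_platform_id platform;
    cl_device_id device;
    cl_ulong global_mem = 0;
    if (clGetPlatformIDs(1, &platform, NULL) == CL_SUCCESS &&
        clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1, &device, NULL) == CL_SUCCESS) {
        clGetDeviceInfo(device, CL_DEVICE_GLOBAL_MEM_SIZE,
                        sizeof(global_mem), &global_mem, NULL);
        printf("OpenCL: %llu bytes total (no free-memory query in the core spec)\n",
               (unsigned long long)global_mem);
    }
    return 0;
}
```

So on the NVIDIA side you can watch free memory shrink as you allocate buffers; on the OpenCL side the best the core API gives you is the card's total capacity up front.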