Patented 7-11 years ago by ATI and implemented 3 years ago in AMD GPUs, along with something they're pushing for in the VESA standard, but which they never improved on or capitalized on because it conflicted with their marketing budget and open-standard mentality?
I mean, there really aren't a lot of options here: you either match the monitor to the GPU, or you match the GPU to the monitor. Since you can't just magically "improve" performance on the GPU, this is a hardware problem, not a software one.
I get that G-Sync works by adjusting the monitor's refresh rate to match the GPU's frame rate. OK, fairly simple and straightforward; I get this.
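To put that idea in concrete terms, here's a toy sketch of what "the monitor tracks the GPU" means. Everything here (the function, the panel limits of 30-144 Hz) is my own illustration, not from any G-Sync spec:

```python
# Toy model of variable-refresh (G-Sync-style) timing: the monitor's
# refresh interval simply tracks the GPU's frame time, clamped to the
# panel's supported range. Values are illustrative assumptions.

PANEL_MIN_HZ = 30.0    # slowest the panel can go before it must refresh anyway
PANEL_MAX_HZ = 144.0   # fastest the panel can refresh

def refresh_interval_ms(frame_time_ms):
    """Refresh when a new frame is ready, within the panel's limits."""
    min_interval = 1000.0 / PANEL_MAX_HZ
    max_interval = 1000.0 / PANEL_MIN_HZ
    return max(min_interval, min(frame_time_ms, max_interval))

# GPU delivers frames at 45 fps (~22.2 ms each); the panel just matches it,
# so every frame is shown exactly once with no tearing and no duplication.
print(round(refresh_interval_ms(1000.0 / 45), 1))  # -> 22.2
```

The point is that inside the panel's range there's no rate mismatch to paper over at all, which is why it's a hardware (scaler/panel) feature rather than a driver trick.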
But what is AMD's method here? According to my understanding of the article, AMD inserts "blank/fake" frames to artificially boost the frame rate to match the monitor's. Is that what they're doing?
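If that reading is right (and I'm honestly not sure it is — this is just how I understood the article, not anything AMD has confirmed), the frame-padding idea would look something like this toy sketch:

```python
# Toy sketch of padding a rendered frame stream up to a fixed display
# refresh rate by repeating the most recent frame. Purely my own
# illustration of the concept, not AMD's actual implementation.

def pad_to_refresh(frames, rendered_fps, refresh_hz):
    """Repeat frames so the output stream fills every refresh slot."""
    out = []
    for slot in range(int(len(frames) * refresh_hz / rendered_fps)):
        # map each refresh slot back to the most recent rendered frame
        src = int(slot * rendered_fps / refresh_hz)
        out.append(frames[src])
    return out

# 3 frames rendered at 30 fps, shown on a 60 Hz display:
# each frame gets displayed twice to fill the slots.
print(pad_to_refresh(["A", "B", "C"], 30, 60))
# -> ['A', 'A', 'B', 'B', 'C', 'C']
```

Note what this does and doesn't do: the counter says 60 fps, but no new image content exists in the repeated slots, so motion is exactly as juddery as at 30 fps. That's the core of my skepticism.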
If I remember correctly, this is exactly the method LucidLogix tried with their Virtu software, which was nothing but a gimmick: it only looked good in benchmark scores. It didn't fix the problem; if anything, it created more stuttering and decreased overall performance.