Nvidia's business practice (because they can afford it thanks to their large market share and popularity; they are like the Intel of the GPU world) is to deliberately gimp their cards so you need to buy a more expensive card if you don't want to replace it too fast.<snip> In recent times: the 3.5 GB VRAM fiasco with the GTX 970, and graphics cards with the same chip performing differently depending on the amount of VRAM, or something like that.
Not quite true. The GTX 970 turned out to be the greatest buy of the Maxwell generation; it performed very close to the GTX 980 at a much lower price, and the AIB versions especially were a steal. And it did in fact have 4 GB of VRAM; the last 512 MB were just slower, but that didn't really matter, as the card would in most cases run into other bottlenecks first. The GTX 970 beat the GTX 960, GTX 980 and GTX 980 Ti in performance per GFLOP, so in terms of resource balancing it was the most balanced of them all.
Or being a cheapskate with VRAM: while a Radeon would give you 4 GB, Nvidia was putting 2 GB on the counterpart. So a Radeon with 4 GB would be more future-proof than a GeForce with 2 GB if you want higher detail settings.
It all depends on which cards you (selectively) compare.
Future-proofing has been a selling argument for GCN cards since the start, with many buyers waiting for years for their Radeon card to finally beat Nvidia's counterpart, but it never happens, except in edge cases.
You select cards based on a representative selection of games at the time of purchase. Any attempt to guess which card will scale 5% better than the other 2-3 years down the line is bound to fail, and by that time it wouldn't matter anyway, since you will be buying a replacement.
I know that the OpenGL driver on GeForce and Quadro is well known for its stability and performance. Some say it's because of hacks and tricks, while ATI/AMD was/is strictly following the OpenGL standard.
OpenGL has two profiles: core and compatibility. Nvidia is a bit more permissive in compatibility mode, e.g. allowing HLSL syntax in shaders (since the shader core in Nvidia's driver is the same anyway), but this does not give extra performance or stability. Most client software has to use the core profile anyway, since certain drivers don't offer the compatibility profile at all.
AMD is the one struggling with conformance.
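To make the profile split concrete, here is a minimal fragment shader sketch (the variable names are made up) written the way a strict core-profile compiler requires it: a core `#version` directive and an explicit output variable, since the old built-ins were removed from the core profile.

```glsl
// Core-profile GLSL: the version must be declared, and the removed
// built-ins (gl_FragColor, texture2D) are compile errors here.
#version 330 core
in vec2 vUV;                 // interpolated texture coordinate
out vec4 fragColor;          // explicit output replaces gl_FragColor
uniform sampler2D tex;
void main() {
    fragColor = texture(tex, vUV);   // 'texture' replaces 'texture2D'
}
```

The same shader written with `gl_FragColor` and `texture2D` will often still compile under a compatibility-profile context on Nvidia, which is exactly how code that "works on GeForce" ends up failing on stricter drivers.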
The Mac Pro uses AMD's Pro line of graphics cards; they probably fixed those issues, because the product is too expensive to be a joke, or not?
AMD's OpenGL 2.1 support is fairly stable; it's the 3.x and 4.x features they struggle with.
But remember that OS X lacks the recent OpenGL versions and is now in the process of deprecating OpenGL anyway, so the Mac Pro is not using AMD cards for OpenGL…