Those things are not mutually exclusive, though. 4K is going to be the standard for some time, much like 1080p was, and I would wager that we are two graphics generations away from being able to comfortably drive 4K with RT enabled at the high end, and three generations away from mid-range cards doing it. The next generation of consoles will all have RT, which means that, unlike HairWorks, it will see mass adoption fairly quickly.
Which totally flips the premise of this forum. AMD's new generation of GPUs (Navi) has no trouble with ray tracing using shaders. nVidia saves power by rasterizing early, which made adding ray tracing support somewhat complex. AMD's next-gen GPUs, which will be in the next generation of consoles, will add additional hardware for ray tracing.
So will developers use nVidia-specific ray tracing? No. Will game developers use the Microsoft API for ray tracing? Sure, in DirectX games. Vulkan programmers will use the Vulkan extensions. With a game development tool stack, developers don't have to think about how ray tracing works; they just have to provide the calls with appropriate parameters. In a complete tool stack, nVidia command-line options will result in nVidia-specific calls, if that is what you want. But in reality we are talking a few microseconds for the calls, up to hundreds of microseconds to pass them to the graphics card--graphics calls get big. That was a big driver in developing Vulkan, which started out as Mantle, an AMD close-to-the-metal API.
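The layering I'm describing could be sketched roughly like this -- a hypothetical Python sketch, with no real API names, just to show that the game code calls one generic entry point and the tool stack routes it to whichever backend is in play:

```python
# Hypothetical illustration of API layering: the game issues a generic
# ray-trace call; the tool stack maps it to DXR, Vulkan, or a
# vendor-specific path. All names here are made up for illustration.

def make_trace_call(backend):
    backends = {
        "dxr":    lambda rays: f"DXR dispatch of {len(rays)} rays",
        "vulkan": lambda rays: f"Vulkan dispatch of {len(rays)} rays",
        "vendor": lambda rays: f"Vendor-specific dispatch of {len(rays)} rays",
    }
    return backends[backend]

trace = make_trace_call("dxr")
print(trace([1, 2, 3]))  # the game never sees which backend ran
```

The game-side code is identical no matter which GPU is underneath; only the tool stack's routing changes.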
So what does AMD have to do to provide ray tracing support? As anyone who was looking would have noticed, not very much. The Green Team, on the other hand, needed to make ray tracing work with their rasterized graphics. nVidia saves lots of watts with that early rasterization, so it is nice that they can make it work alongside ray tracing. I hope it still saves as many watts...
I need to get into the gory details to explain what is going on. Ignore ray tracing for a moment and have illumination sources without reflections. You take the RGB values (or CMYK or whatever) for the source, calculate the effective distance to the target, multiply the light-source color values (attenuated by that distance) by the target color values, and add the result to the register for that target pixel. GPUs were developed to do just that. If you add ray tracing, some light will bounce multiple times before reaching some targets. You only need to update the distance traveled for each bounce; the light won't change color en route. Then you do the normal shading using the summed ray-traced illumination. Specular (mirror-like) reflections need a different path, and you may find out you have a mirror to deal with halfway through. More work, but not a timing issue. Computing the lighting, then the reflections, works--but yes, you have to do the mirror twice (or more).
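A minimal sketch of that shading loop, with made-up names (this is not any real engine's code, just the arithmetic described above: attenuate the light by total path distance, multiply by the surface color, and accumulate into the pixel):

```python
# Hypothetical sketch of the per-pixel shading described above.
def shade_pixel(surface_rgb, lights):
    """lights: list of (light_rgb, path_segments), where path_segments
    holds the length of each leg the ray traveled to reach the surface."""
    pixel = [0.0, 0.0, 0.0]
    for light_rgb, path_segments in lights:
        # A bounce only adds to the distance traveled; the light's
        # color does not change en route.
        distance = sum(path_segments)
        attenuation = 1.0 / (distance * distance)  # inverse-square falloff
        for c in range(3):
            # light color x attenuation x target color, summed per channel
            pixel[c] += light_rgb[c] * attenuation * surface_rgb[c]
    return pixel

# One direct white light 2 units away, plus a red light that bounced
# once, traveling 1 + 1 units:
print(shade_pixel((0.5, 0.5, 0.5),
                  [((1.0, 1.0, 1.0), [2.0]),
                   ((1.0, 0.0, 0.0), [1.0, 1.0])]))
# -> [0.25, 0.125, 0.125]
```

Note that both rays end up with the same total distance (2.0), so the bounce costs nothing extra at shading time--exactly why the extra bounces are more work for the tracer but not for this accumulation step.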
So what is AMD planning to do in their next-gen GPUs? Special hardware to deal with distances, instead of 'wasting' some of the compute power of the shaders, which tend to work with (three) RGB values, not one distance value. Eventually you do have to convert those numbers and integrate them with the normal image data. Will this save a few watts while ray tracing? Sure. Will it make it any faster than current graphics? Not really, but... The calculations needed for each point are known, and if you have more shaders you can process more points in parallel. So this may speed up ray tracing a bit. What about nVidia? When they came out with their fancy ray tracing cards, they got hit with a wet fish. Yes, it works, and yes, the images are cleaner than with today's (non-ray-tracing) technology. But to get the better images you have to do more calculations. The algorithms have been pounded on for forty or fifty years--initially just to get a single frame complete before you fell asleep. (Yes, I can remember half-hour-plus ray tracing times when I was a young programmer.)
So if you want to turn ray tracing on in some game with an AMD GPU, go ahead. It has been part of the DirectX interface for ages--well, it seems like ages to me. Anyway, if ray tracing does become a big thing in the next few years, it won't affect the relative positions of AMD and nVidia. AMD may make cards that run 4K60 with ray tracing, and at that time nVidia will be selling cards to do 8K144 with ray tracing--that no users buy. But graphics developers will be a big enough market. Well, add seismic data processing. Processing the data is one thing, but increasing resolution by a factor of two means roughly ten times the CPU crunching (doubling resolution in three dimensions already means 2^3 = 8x the data, before any extra iterations). For decades the routine was: run the best that you can overnight, and if it is promising, take a week or more to get better results. Seismic people won't need real-time game graphics, but they do want to be able to turn the (3D) representation of the oil field this way and that. They quickly adopted 4K when it came out, and I had to have it at home. (And a Vega 64 card for under $400. Couldn't turn that down.)