I would say this article is sensationalist, but that's not uncommon on these "hardware" sites.
The way we should look at market share numbers is not quarter by quarter but as a moving average, i.e. averaging the last x quarterly results.
3dcenter.org has this chart in their article: https://www.3dcenter.org/dateien/abbildungen/GPU-Add-in-Board-Market-Share-2002-to-Q3-2022.png
For example, if you want to know the market share of the previous generation of cards (Ampere vs RDNA 2), then average the data since the Ampere launch: the result is 81.33% vs 18.66%. And this is normal, AMD's market share has been around 20% for a long time. Not really healthy from the consumer's perspective because NV is controlling the market, but that's the way it is.
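The averaging idea is simple enough to sketch in a few lines. The quarterly figures below are made-up placeholders just to show the mechanics, not the actual 3dcenter.org survey data:

```python
def moving_average(shares, n):
    """Average the most recent n quarterly market-share values (percent)."""
    window = shares[-n:]
    return sum(window) / len(window)

# Hypothetical NVIDIA quarterly share figures since a launch quarter (percent).
nvidia_quarters = [80.0, 83.0, 81.0, 82.0, 79.0, 83.0]

avg = moving_average(nvidia_quarters, 6)
print(f"{avg:.2f}% vs {100 - avg:.2f}%")
```

The point is that a single quarter can swing wildly (launch timing, channel inventory), while the windowed average over a whole generation is much more stable.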
That generation (Ampere vs RDNA 2) was basically about crypto mining, so IMO this timeframe is not a good indication of AMD's "mindshare" (market share): it didn't matter whether RDNA 2 cards would've been better perf/$ _in gaming_, because everything was sold out all the time.
What this timeframe does show is that AMD has much lower capacity than Nvidia when it comes to supplying GPUs.
Nevertheless, AMD has a lot of work to do if they want a higher share: marketing, drivers, presentation. But I'm not sure they care at all. The GPU business has low margins; they're much more likely to use their available TSMC production capacity for Epyc CPUs. I think they view their GPU lineup as a proving ground for their semi-custom partners (Sony, Microsoft, others).
I have to add that NV is a master at manipulating people. They introduced a feature (ray tracing) that is basically a gimmick: barely anybody turns it on even on NV cards, it drastically lowers performance, and it's barely noticeable. Yet now, two generations after its introduction, it's hard to sell AMD cards because they're slower than slow in RT. And NV hasn't even improved RT performance since the Turing generation: what we see in games is still just improved raster performance, while turning on RT carries the same relative performance penalty on Turing, on Ampere, and on Lovelace. Not sure if this is intentional on NV's part or their architecture has a bottleneck somewhere. So yeah.