
Gainward GeForce GTX 1630 Ghost Review - Challenging the AMD RX 6400


Value and Conclusion

  • Neither Gainward nor NVIDIA were willing to provide any pricing information. We've been hearing that the GTX 1630 will sell for around $150. As soon as we get new information, this review will be updated, of course.

Pros:
  • 3x as fast as the GT 1030
  • Extremely quiet even under full load
  • Idle fan stop
  • Very low power consumption
  • Low temperatures
  • PCI-Express 3.0 x16 (unlike PCIe 4.0 x4 on RX 6400)
  • Good overclocking potential
  • Compact card, will fit nearly all cases
  • Updated video engine compared to GT 1030 / Pascal
  • More modern connectivity options than older cards, which mostly have DVI

Cons:
  • Very low overall performance
  • Very expensive for the performance offered
  • Low board power limit (75 W), no manual adjustments allowed
  • No reason for 6-pin power connector
  • No support for DLSS
  • Memory overclocking limited (bug or intentional)
  • No AV1 decode support
  • No backplate
  • No support for ray tracing
NVIDIA's GeForce GTX 1630 comes as a surprise release to many, including us. Over the past weeks and months, the crypto bubble has burst and GPU prices have dropped drastically. It seems NVIDIA intended the GTX 1630 to offer basic graphics in a complicated market. AMD did the same by launching the Radeon RX 6400 and RX 6500 XT a few months ago, and Intel's new Arc series of discrete graphics cards has launched with the Arc A380, which targets the same segment.

Unlike the offerings from AMD and Intel, the GTX 1630 is based on older technology. It's not exactly "old," though: the GTX 1650 was released in April 2019, and the GTX 1650 GDDR6 refresh launched two years ago in April 2020. Considering the "x30" naming of the card, I also think NVIDIA wants to replace the aging, Pascal-based GeForce GT 1030, which has no hardware video encoding at all and lacks support for H.265 video decode acceleration. Also, most GT 1030 boards only have two outputs, one of which is DVI and pretty much useless these days.

Averaged over our whole test suite at 1080p resolution, we find the GeForce GTX 1630 severely lacking in performance. Even the AMD Radeon RX 6400 is much faster, by a whopping 60%, and the GTX 1650 holds the same lead. I'm sure we all expected lower performance from a card numbered "1630" compared to "1650," but this is quite a lot. The Radeon RX 6500 XT is even twice (!) as fast as the GTX 1630, as is the Pascal-based GTX 1060. Compared to the GeForce GT 1030, on the other hand, the GTX 1630 roughly triples the performance. Overall, I'm still not sure the GTX 1630 has enough muscle for 1080p gaming even at reduced details. Cutting the shader count so heavily, paired with halving the memory interface, which also halves the number of ROPs, clearly takes a big toll on performance. I like that NVIDIA is using GDDR6 on this card, which recovers a bit of the bandwidth lost compared to GDDR5, but it just isn't enough. For a fair comparison, we tested at our usual highest settings, which is of course unrealistic for this class of card, but the only way to get a grasp of the real performance differences between the cards in our test group. I've also done a run at 1080p "Lowest," with each game set to its lowest possible settings (but still native 1080p, without upscaling). Those results aren't that impressive either; on the other hand, around 40 FPS in many titles is "kinda playable."
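
To put numbers on the memory-interface cut, here's a quick back-of-the-envelope sketch using the published specifications of these cards (64-bit bus with 12 Gbps GDDR6 on the GTX 1630, 128-bit on the GTX 1650 GDDR6, 64-bit with 6 Gbps GDDR5 on the GT 1030):

```python
# Peak memory bandwidth = bus width (bits) / 8 * per-pin data rate (Gbps).
def bandwidth_gb_s(bus_width_bits: int, data_rate_gbps: float) -> float:
    return bus_width_bits / 8 * data_rate_gbps

print(f"GTX 1630 (64-bit, 12 Gbps GDDR6):  {bandwidth_gb_s(64, 12.0):.0f} GB/s")   # 96 GB/s
print(f"GTX 1650 GDDR6 (128-bit, 12 Gbps): {bandwidth_gb_s(128, 12.0):.0f} GB/s")  # 192 GB/s
print(f"GT 1030 (64-bit, 6 Gbps GDDR5):    {bandwidth_gb_s(64, 6.0):.0f} GB/s")    # 48 GB/s
```

The GTX 1630 ends up with exactly half the bandwidth of the GTX 1650 GDDR6, which goes a long way toward explaining the performance gap we measured.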

While AMD includes ray tracing support on the Radeon RX 6400 and RX 6500 XT, the GTX 1630 doesn't support the technology because it is based on the 16-series variant of the Turing architecture, which lacks ray tracing hardware. In my opinion this is a non-issue, as ray tracing is for high-end cards that have the performance to spare, so you can eke out a little extra fidelity for your gaming. When you're fighting to even get close to playable framerates, ray tracing isn't useful at all. What would be useful is support for NVIDIA's DLSS upscaling technology, which lets you run games at a lower render resolution and upscale to your monitor's native resolution, with machine learning preserving image quality at minimal performance cost. Unfortunately, there are no Tensor Cores in the TU117 GPU, so you won't have access to that feature. AMD recently released their own upscaling technology called "FSR," which works on GPUs from all manufacturers, including the GTX 1630; ironically, the widespread FSR support will be great for GeForce 16 owners. NVIDIA also has a driver-based upscaler called "NVIDIA Image Scaling," or NIS, which works with the GTX 1630 and does a decent job, but offers less compelling results than DLSS and FSR.
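
To make the upscaling trade-off concrete, here's a minimal sketch of how a spatial upscaler like FSR 1.0 reduces the render resolution. The per-axis scale factors below are FSR 1.0's published quality modes:

```python
# FSR 1.0 quality modes and their per-axis scale factors (per AMD).
FSR_MODES = {
    "Ultra Quality": 1.3,
    "Quality": 1.5,
    "Balanced": 1.7,
    "Performance": 2.0,
}

def render_resolution(out_w: int, out_h: int, mode: str) -> tuple[int, int]:
    """Resolution the GPU actually renders before upscaling to the output size."""
    scale = FSR_MODES[mode]
    return round(out_w / scale), round(out_h / scale)

for mode in FSR_MODES:
    w, h = render_resolution(1920, 1080, mode)
    print(f"{mode:>13}: renders {w}x{h}, upscales to 1920x1080")
```

At 1080p "Quality," the card only has to render 1280x720, which is exactly the kind of relief a GPU this slow needs.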

Gainward's GeForce GTX 1630 Ghost looks like a close-to-reference implementation of the GTX 1630. The PCB design clearly shows that keeping production cost low was the most important objective during the design stage. Nevertheless, the card performed without any issues, and the simple cooling solution handles the heat output of the GPU with ease. Gainward also picked fantastic fan settings for their card. When idle, during desktop work, browsing, and media playback, the card keeps its fans off for a perfectly noise-free experience. Once you start gaming and the card crosses the 60°C threshold, the fans start spinning slowly and remain very quiet at all times. These are among the lowest noise levels I have ever encountered in any graphics card review. To put things into perspective: running on an open bench in a quiet room with no other noise sources, the fans on the GTX 1630 Ghost are so quiet that you have to stand next to the card and concentrate to make out the noise it generates. As soon as you move even a little, the rustle of your clothes will drown out what little sound the card emits. That's exactly what I want from a media PC card. Great job, Gainward!

NVIDIA has engineered the GTX 1630 with a very strict board power limit of 75 W; during typical gaming, it only reaches around 50 W. This is not the achievement of a particularly energy-efficient design or a new process node: the RX 6400 is built on 6 nm, while the GTX 1630 uses 12 nm. Rather, disabling so many units, combined with a narrow 64-bit memory interface that uses only two memory chips instead of four, cuts power usage tremendously, but also lowers performance accordingly. Still, for a card like this it is important to fit into PCs of all shapes and sizes with minimal power supply requirements, and the GTX 1630 achieves that. What puzzles me is why Gainward installed an additional 6-pin power connector on their card; it's not needed and just adds complexity. The board power limit is set to 75 W, and our testing confirms it: we saw nothing higher than 71 W, even in Furmark. The power limit cannot be raised for overclocking, either, as the maximum of the manual adjustment range is the same 75 W as the default limit.
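
If you want to verify this behavior on your own card, a minimal sketch using NVML through the pynvml Python bindings (installable as nvidia-ml-py) looks like this; on the GTX 1630 we'd expect the enforced limit and the top of the adjustable range to both read 75 W:

```python
import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)  # first GPU in the system

draw_w = pynvml.nvmlDeviceGetPowerUsage(handle) / 1000           # current draw, W
limit_w = pynvml.nvmlDeviceGetEnforcedPowerLimit(handle) / 1000  # active limit, W
min_mw, max_mw = pynvml.nvmlDeviceGetPowerManagementLimitConstraints(handle)

print(f"Current draw:     {draw_w:.1f} W")
print(f"Enforced limit:   {limit_w:.0f} W")
print(f"Adjustable range: {min_mw / 1000:.0f}-{max_mw / 1000:.0f} W")

pynvml.nvmlShutdown()
```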

Unlike the Radeon RX 6400, which only offers a narrow PCIe 4.0 x4 interface, the GTX 1630 comes with PCIe 3.0 x16 support, which works much better in older computers that top out at PCIe 3.0. If you take a look at our card photos, you'll notice that some traces near the slot connector are missing; these carry only non-essential ground and debug connections, and omitting them saves a few cents. The interface is still a real PCIe x16 link.
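
The difference is easy to quantify with a quick sketch of per-direction PCIe link bandwidth (8 GT/s per lane for PCIe 3.0, 16 GT/s for PCIe 4.0, both with 128b/130b encoding):

```python
# Per-direction PCIe bandwidth after 128b/130b encoding overhead.
def pcie_bandwidth_gb_s(gen: int, lanes: int) -> float:
    gt_per_s = {3: 8.0, 4: 16.0}[gen]
    return gt_per_s * (128 / 130) / 8 * lanes  # GT/s -> GB/s per lane, times lanes

print(f"GTX 1630, PCIe 3.0 x16:          {pcie_bandwidth_gb_s(3, 16):.1f} GB/s")  # ~15.8
print(f"RX 6400, PCIe 4.0 x4:            {pcie_bandwidth_gb_s(4, 4):.1f} GB/s")   # ~7.9
print(f"RX 6400 in a PCIe 3.0 slot (x4): {pcie_bandwidth_gb_s(3, 4):.1f} GB/s")   # ~3.9
```

In a PCIe 3.0 system, the RX 6400 is left with a quarter of the link bandwidth the GTX 1630 gets, which is exactly where the x16 interface pays off.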

During AMD's RX 6400 launch, the lack of AV1 video decode on the Navi 24 GPU was a big topic of discussion, with some even declaring the card unusable for any media PC duty. AV1 is a video codec that will gain ground in the coming years, replacing H.264 and H.265 in the process. While it would be nice to have, I'm not convinced AV1 will be a "required" capability in the foreseeable future. Video streaming services have automagic fallbacks to older codecs, and I'm sure they won't cut off a large portion of their audience by requiring AV1. The GeForce GTX 1630 supports H.264 and H.265 video encode and decode acceleration, but no AV1 (as expected). The GT 1030 has no video encode support at all, so the GTX 1630 is an improvement from that perspective.
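
To illustrate the fallback logic, here's a sketch of how a streaming player might negotiate a codec; the names and structure are purely illustrative, not any real service's API, and the decode sets follow the capabilities discussed in this review:

```python
# Hardware decode capabilities, per this review: the GTX 1630's NVDEC handles
# H.264/H.265 but not AV1; the GT 1030 lacks H.265 decode acceleration.
GTX_1630_DECODE = {"h264", "h265"}
GT_1030_DECODE = {"h264"}

SERVER_PREFERENCE = ["av1", "h265", "h264"]  # newest codec first

def negotiate_codec(client_decodes: set[str]) -> str:
    """Return the best codec the client can decode in hardware (illustrative)."""
    for codec in SERVER_PREFERENCE:
        if codec in client_decodes:
            return codec
    raise RuntimeError("no common codec")

print(negotiate_codec(GTX_1630_DECODE))  # h265
print(negotiate_codec(GT_1030_DECODE))   # h264
```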

Overclocking worked very well nonetheless. We gained more than 11% in real-life performance with no significant increase in heat output or power consumption. Unfortunately, memory overclocking is broken beyond a certain point: while 1675 MHz is stable all day, 1676 MHz will crash the card immediately. It seems the VBIOS doesn't have proper timing profiles for memory speeds above 1675 MHz, possibly because it isn't tuned for the Hynix GDDR6 memory chips installed instead of Micron GDDR6.
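
Conceptually, finding such a clock wall is a simple step-wise scan. In the sketch below, apply_memory_clock() and run_stress_test() are hypothetical stand-ins for your overclocking tool and stability workload; here they merely simulate the behavior we observed on our sample:

```python
OBSERVED_WALL_MHZ = 1675  # our sample: stable at 1675 MHz, crashes at 1676 MHz

def apply_memory_clock(mhz: int) -> None:
    pass  # hypothetical: set the clock via your vendor OC tool of choice

def run_stress_test(mhz: int) -> bool:
    return mhz <= OBSERVED_WALL_MHZ  # hypothetical: a real test runs a workload

def find_max_stable_clock(start_mhz: int, limit_mhz: int, step_mhz: int = 1) -> int:
    """Raise the memory clock step by step, return the last clock that passed."""
    stable = start_mhz
    for clock in range(start_mhz + step_mhz, limit_mhz + 1, step_mhz):
        apply_memory_clock(clock)
        if not run_stress_test(clock):
            break  # back off to the last stable clock
        stable = clock
    return stable

print(find_max_stable_clock(1500, 1800))  # 1675 on our sample
```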

Neither NVIDIA nor Gainward could provide any pricing (or other materials) for this review. I was asked whether I wanted to review the GTX 1630 under an embargo of June 28, 6 AM, and the card showed up soon thereafter without any additional information. I reached out to NVIDIA, but they couldn't provide any additional materials, not even a press driver or a reviewer's guide, so we're not aware of NVIDIA's actual philosophy behind this launch. A price point of $150 has been floating around for the GTX 1630. Based on the testing in this review, $150 is way too high, and I can't imagine it would attract many buyers. The GTX 1630 only offers a fraction of the performance of competing cards in this price range; take a look at our Performance per Dollar charts. I've plotted some additional price points, down to $75, at which point this card becomes interesting. Yup, shocking, but that's the value it offers. At sub-$100, I'd be willing to consider the GTX 1630 for a media PC or a productivity system that doesn't have integrated graphics. Very light gaming is barely within reach for the GTX 1630; for that scenario, with money being tight, I'd rather opt for one of the following cards, in order: RX 570 4 GB ($120 used), GTX 1060 6 GB ($140 used), RX 6500 XT ($175), GTX 1660 Super ($210), or RTX 2060 ($250).