
We Tested NVIDIA's new ChatRTX: Your Own GPU-accelerated AI Assistant with Photo Recognition, Speech Input, Updated Models

btarunr

Editor & Senior Moderator
NVIDIA today unveiled ChatRTX, an AI assistant that runs locally on your machine and is accelerated by your GeForce RTX GPU. NVIDIA originally launched it as "Chat with RTX" in February 2024, when it was regarded more as a public tech demo; we reviewed that version in our feature article. The ChatRTX rebranding is probably meant to make the name echo ChatGPT, which is what the application aims to be, except that it runs completely on your machine and is exhaustively customizable. The most obvious advantage of a locally run AI assistant is privacy: your prompts are processed locally, accelerated by your GPU. The second is that you aren't held back by the performance bottlenecks of cloud-based assistants.

ChatRTX is a major update over the Chat with RTX tech demo from February. To begin with, the application has several stability refinements over Chat with RTX, which felt a little rough around the edges. NVIDIA has significantly updated the LLMs included with the application, including Mistral 7B INT4 and Llama 2 7B INT4. Support has also been added for additional LLMs, including Gemma, a local LLM trained by Google, based on the same technology used to create Google's flagship Gemini model. ChatRTX now also supports ChatGLM3, for both English and Chinese prompts. Perhaps the biggest upgrade to ChatRTX is its ability to recognize images on your machine, as it incorporates CLIP (contrastive language-image pre-training) from OpenAI. CLIP is a model that matches images against natural-language descriptions, which lets you interact with your image library without the need for metadata. ChatRTX doesn't just take text input: you can speak to it. It now accepts natural voice input, as it integrates Whisper, OpenAI's speech-to-text model.
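CLIP-style image search works by embedding images and text queries into a shared vector space and ranking images by cosine similarity to the query, which is why no metadata is needed. Here is a toy sketch of just the ranking step, with made-up short vectors standing in for real CLIP embeddings (which are typically 512-dimensional and produced by the model itself):

```python
import numpy as np

def cosine_similarity(a, b):
    # Cosine similarity between two embedding vectors
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def rank_images(query_embedding, image_embeddings):
    # Return image names sorted by similarity to the query, best match first
    scores = {name: cosine_similarity(query_embedding, emb)
              for name, emb in image_embeddings.items()}
    return sorted(scores, key=scores.get, reverse=True)

# Toy embeddings standing in for real CLIP outputs
images = {
    "beach.jpg":    np.array([0.9, 0.1, 0.0]),
    "mountain.jpg": np.array([0.1, 0.9, 0.1]),
    "city.jpg":     np.array([0.0, 0.2, 0.9]),
}
query = np.array([0.8, 0.2, 0.1])  # stand-in embedding for "a sunny beach"

print(rank_images(query, images))  # beach.jpg ranks first
```

In a real pipeline the vectors come from CLIP's image and text encoders; the ranking logic is the same.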



DOWNLOAD: NVIDIA ChatRTX

As with the original Chat with RTX tech demo, the new ChatRTX application's biggest feature is its ability to let users switch between AI models, or to build a dataset from text and images on your local machine. You can point it to a folder with documents such as plaintext, Word (DOC), and PDF files, as well as images, and it will train itself to answer queries related to that dataset.
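In tools of this kind, "training itself" on a folder usually means retrieval-augmented generation (RAG) rather than actual model training: documents are split into chunks, indexed, and the most relevant chunks are prepended to the prompt at query time. A rough sketch of the retrieval half, using simple word-overlap scoring as a stand-in for the vector search a real pipeline would use:

```python
def chunk_text(text, size=50):
    # Split a document into fixed-size word chunks for indexing
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

def retrieve(query, chunks, top_k=1):
    # Score chunks by word overlap with the query (a crude stand-in
    # for embedding similarity search) and keep the best matches
    q = set(query.lower().split())
    scored = sorted(chunks, key=lambda c: len(q & set(c.lower().split())),
                    reverse=True)
    return scored[:top_k]

docs = [
    "The warranty on the graphics card covers three years of use.",
    "Driver updates are released monthly and include game optimizations.",
]
chunks = [c for d in docs for c in chunk_text(d)]
context = retrieve("how long is the warranty", chunks)[0]
prompt = f"Context: {context}\n\nQuestion: how long is the warranty"
print(prompt)  # the model answers from the retrieved context
```

The final prompt, context plus question, is what actually reaches the LLM; the model itself is never modified.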


Some major limitations remain with ChatRTX that we had hoped would be fixed since the February release, chief among them context: the ability to ask follow-up questions. Apparently, follow-ups are harder to implement than they seem, as the model has to connect the new question to the previous ones and to its own responses to them. It's also inaccurate in attributing its responses to the right text files. The browser-based frontend only supports Chrome and Edge; it's buggy with Firefox.
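Supporting follow-ups typically means carrying the conversation history into every new prompt, so the model can resolve references like "it" or "that one". A minimal sketch of that bookkeeping, with a hypothetical `generate()` callable standing in for the local model:

```python
class Conversation:
    # Accumulates turns so each new prompt includes prior context;
    # this is what lets a model answer follow-up questions.
    def __init__(self):
        self.history = []

    def build_prompt(self, question):
        lines = [f"User: {q}\nAssistant: {a}" for q, a in self.history]
        lines.append(f"User: {question}\nAssistant:")
        return "\n".join(lines)

    def ask(self, question, generate):
        prompt = self.build_prompt(question)
        answer = generate(prompt)  # the local LLM call goes here
        self.history.append((question, answer))
        return answer

# A stub model stands in for the real LLM backend
chat = Conversation()
chat.ask("What is CLIP?", lambda p: "An image-text model.")
print(chat.build_prompt("Who made it?"))  # includes the earlier Q and A
```

The catch, as the article notes, is that history grows with every turn, eating into the model's context window, which is part of why follow-ups are harder than they look.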

View at TechPowerUp Main Site
 
V cool, gonna try it out later today
 
The level of trickery only increases.

All that's left is for Nvidia to create an assistant with the appearance of an Anime character to further captivate lonely nerds and keep them on a leash. If anyone wants, feel free to write down the idea for a film.
 
The level of trickery only increases.

All that's left is for Nvidia to create an assistant with the appearance of an Anime character to further captivate lonely nerds and keep them on a leash. If anyone wants, feel free to write down the idea for a film.
Her (film) - Wikipedia
 
And so it begins....

Have rope, will strangle....

First they captured the AI cloud-base, now they are wiggling their way into your pc's insides, which will spawn the command & control of all pc's everywhere, all the time, all at once...

And then, all that will remain is "Hello SkyNet, how can I help you infiltrate & destroy humanity today?"

No thanks, cause resistance is NOT futile !
 
Nice, I will have to check this out. Wonder how much VRAM matters for this. 16GB seems to be the bare minimum for any sort of AI work from my limited knowledge.
 
The level of trickery only increases.

All that's left is for Nvidia to create an assistant with the appearance of an Anime character to further captivate lonely nerds and keep them on a leash. If anyone wants, feel free to write down the idea for a film.
In my opinion, this is the reason why Cortana failed in Windows. Imagine having the HALO hologram on screen when wanting some assistance instead of some text or plain audio.
 
I find it amusing that a company whose entire business is built on graphics can't be bothered to develop a decent native GUI instead of this webserver-based crap.
Oh well... At least now people can get their wrong and nonsensical answers without risking [much of] their privacy.

Imagine having the HALO hologram on screen when wanting some assistance instead of some text or plain audio.
We had better.
[Image: Clippy, the Microsoft Office assistant]


now they are wiggling their way into your pc's insides,
I have some really bad news for you, mate...
 
This makes me crave the sweet release of death.
 
Meh, just get GPT4All. Better and not locked down to Nvidia only.
 
It would be great to be able to buy next year a gaming PC which is not an AI PC.
Good luck with that. They're stuffing "AI" into anything they can, I'm honestly amazed there aren't AI dildos yet.
 
Good luck with that. They're stuffing "AI" into anything they can, I'm honestly amazed there aren't AI dildos yet.
They exist according to the Kagi search I just did.

Glad I didn't use Google or I might start getting targeted adverts...
 
AI this, AI that... and here I am, just wanting a reasonably good and affordable GPU without needing to sell a kidney.
 
Windows-only. Shove it, Nvidia.
 
I'm honestly amazed there aren't AI dildos yet
"I am fully functional, and programmed in multiple techniques" - Lt. Cmdr Data :D

'nuff said !
 
Nah, I was pondering something more profound and ominous, envisioning the CEO in a leather jacket entering a sinister pact with pharmaceutical companies. Their aim? To employ any means necessary to tether people to their screens, fostering obesity to peddle weight loss pills as a miraculous solution. You can take it, Netflix. :P
 
Holy cow... 11.6GB...????
Fits in an x70's VRAM!

Until the next update

Nah, I was pondering something more profound and ominous, envisioning the CEO in a leather jacket entering a sinister pact with pharmaceutical companies. Their aim? To employ any means necessary to tether people to their screens, fostering obesity to peddle weight loss pills as a miraculous solution. You can take it, Netflix. :P
All that's left for you to do is find a link that says Huang tried Ozempic and you're up for internet hero status. The ingredients are all there, live and in effect already... :D Who needs Netflix?
 
Good luck with that. They're stuffing "AI" into anything they can, I'm honestly amazed there aren't AI dildos yet.

They exist according to the Kagi search I just did.

Glad I didn't use Google or I might start getting targeted adverts...

"AI" in dildos now? Is NOTHING sacred? :cry: It's a good thing I'm more of a "hands on" kind of gal... Go to hell, Skynet Dildo!
 
Fits in an x70's VRAM!

Until the next update


All that's left for you to do is find a link that says Huang tried Ozempic and you're up for internet hero status. The ingredients are all there, live and in effect already... :D Who needs Netflix?
I'll have my GPT henchman do it;
Placing creation against the creator is a cliché but it works. :toast:
 
Fits in an x70's VRAM!

Until the next update
12.9GB after decompression, so no, you need an upgrade! nGreedia will kindly help you out though. ;)
 
But won't someone think about Clippy ?

Can copilot be run locally or is that coming?
 
But won't someone think about Clippy ?

Can copilot be run locally or is that coming?
I think they can all run locally, but they eat too much RAM.
Remember the debacle around Gemini on Pixels, when Google wanted to leave out the Pixel 8 because of "hardware limitations", despite it being exactly the same as the Pro, only with 8GB of RAM instead of 12?

I believe what is going on here is companies figuring out ways to shrink their models while still offering useful functionality or otherwise compressing the models better, before declaring them ready to run on local machines.
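Shrinking a model mostly means quantization: storing weights at lower precision, such as the INT4 variants the article mentions, which cuts memory to roughly a quarter of FP16. A toy sketch of symmetric 4-bit quantization with numpy (real libraries additionally pack two 4-bit values per byte and use per-group scales):

```python
import numpy as np

def quantize_int4(weights):
    # Symmetric quantization: map floats onto integers in [-8, 7]
    scale = np.abs(weights).max() / 7.0
    q = np.clip(np.round(weights / scale), -8, 7).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    # Recover approximate float weights from the integers and scale
    return q.astype(np.float32) * scale

w = np.array([0.12, -0.5, 0.33, 0.07], dtype=np.float32)
q, scale = quantize_int4(w)
w_hat = dequantize(q, scale)
print(q)                         # small integers instead of floats
print(np.abs(w - w_hat).max())   # rounding error is bounded by scale/2
```

The accuracy cost is the rounding error each weight picks up, which is why vendors tune how aggressively they quantize before declaring a model fit for local use.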
 