
Chat with NVIDIA RTX Tech Demo

W1zzard (Administrator, Staff member)
Joined: May 14, 2004 | Messages: 28,662 (3.74/day)
System: Ryzen 7 5700X, 48 GB RAM, RTX 4080, 2x HDD RAID 1 + 3x M.2 NVMe, 30" 2560x1600 + 19" 1280x1024, Windows 10 64-bit
Chat with RTX is new, free software from NVIDIA. It empowers your PC to run complex AI models completely offline, using the GPU for acceleration. Essentially, it brings a ChatGPT-style assistant to your desktop, delivering impressive results, including the ability to analyze YouTube videos.

 
this just reads like an advert.
 
I'll set my PC for rent by the hour.
 
An advert for free, open-source software?
Hi,
That is tin-cupping for $5-7 trillion in donations, hehe.
ChatGPT is MS, so yeah, more ads from them.
 
I told you RT was made for AI, no one listened. DLSS is just an excuse for the hardware. All you RTX believers have given Nvidia a stranglehold on the AI market and the whole world is going to suffer. Good job. :banghead:
 
this just reads like an advert.
It definitely is not an advert. Nvidia sent the link and the reviewer's guide, which is pretty basic, and that's it.

Also consider that we're listing several failures and criticisms... assuming you read the review.
 
I told you RT was made for AI, no one listened. DLSS is just an excuse for the hardware. All you RTX believers have given Nvidia a stranglehold on the AI market and the whole world is going to suffer. Good job. :banghead:
It's mostly different components in the GPU. RT cores are dedicated ray-tracing hardware; AI (and DLSS) uses tensor cores (matrix-math hardware).
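As a rough illustration of the distinction, the workload tensor cores accelerate is just fused matrix multiply-accumulate (D = A·B + C), typically with reduced-precision inputs and full-precision accumulation. A minimal NumPy sketch of that pattern (the function name and shapes are illustrative, not NVIDIA's API):

```python
import numpy as np

def tensor_core_style_mma(a, b, c):
    """Compute a @ b + c the way tensor cores do it conceptually:
    reduced-precision (FP16) inputs, FP32 accumulation."""
    a16 = a.astype(np.float16)  # round inputs to half precision
    b16 = b.astype(np.float16)
    acc = a16.astype(np.float32) @ b16.astype(np.float32)  # accumulate in FP32
    return acc + c.astype(np.float32)

rng = np.random.default_rng(0)
A = rng.standard_normal((16, 16))
B = rng.standard_normal((16, 16))
C = np.zeros((16, 16))
D = tensor_core_style_mma(A, B, C)
print(D.shape)  # (16, 16)
```

Ray tracing, by contrast, is traversal and intersection testing against a BVH, which is why it gets its own separate fixed-function hardware.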
Nvidia has been at this for a long time. I saw them at a supercomputing trade show in 2009 and wondered why a video card company was there. A couple years later, I was told we needed to build some supercomputing software that worked with their hardware and I didn't understand why. I left the company, came back a couple years later and got dragged into a project where I found out why.

Nvidia has the software that provides the base for the AI software, and other things. AMD and Intel aren't even close at this point in time. AMD is working on this, but their software just is not there. They just abandoned a CUDA compatibility library that they were working on, so maybe they are working on something else. At the moment, Nvidia owns this market.

it definitely is not an advert. nvidia sent the link, the reviewer's guide, which is pretty basic and that's it

also consider that we're listing several failures and criticism .. assuming you read the review
Yeah, it's interesting software and does some useful things, but it's still not solid. It gets lots of simple things right and lots of complicated, detailed things wrong. Definitely not trustworthy for important stuff, as at least one lawyer found out when he submitted AI-written documents to the courts.
 
No, you can't talk to your graphics card and ask "how's it going?"

Six-year head start with AI acceleration and I still can't ask my GPU:

 
I found the fonts less aliased in the answers. DLSS™ effect, maybe...
 
Definitely not trustworthy for important stuff
Yeah for me that's the biggest issue for these systems, because if I can't trust it for certain answers then I won't ask it certain questions. On the other hand I use ChatGPT regularly to help with writing and it is a great utility that saves me time, but I'd never just copy and paste its output blindly into my articles.
 
Unrelated to what this does specifically, since all of these chatbots have the same issues, but it's really funny that it got the 4080 Super specs wrong.
 
This looks bad. Not supported on RTX 20, despite even the weakest card in that series having way more TOPS than the latest APUs.
Then the massive hardware requirements, something we've come to expect from Nvidia. Even they themselves see 8 GB as the absolute minimum.
Also the clunky setup process, and the fact that despite all these hardware requirements it's worse than ChatGPT 3.5, which can run through the web on a potato. So why exactly should anyone bother with this beta product?

I'm not saying ChatGPT is faultless. For example, I asked what the most expensive discrete GPU ever released was, and it thought it was the Turing-based Titan RTX at $2,500, when in truth it's actually either the dual-GPU Titan Z or the Volta-based unicorn Titan V at $3,000, and this doesn't account for way more expensive workstation cards (at least ones that have display outputs).
 
Thank you for this great review! This is a great tool for productivity.
 
Thank god it only analyzes your YouTube usage and not what one watches on CornSub...
 
I told you RT was made for AI, no one listened. DLSS is just an excuse for the hardware. All you RTX believers have given Nvidia a stranglehold on the AI market and the whole world is going to suffer. Good job. :banghead:
Don't worry, mate. People just can't see beyond Nvidia's BS. People are either naive or straight-up dumb. It's like believing the robber when he says he won't stab you and take your money.
Reminds me of "Dr. Sbaitso" in DOS, when you bought your first Sound Blaster card.

For those too young to even know what it is:

I don't remember what the first sound card in our first "family" PC was, but there was an AWE64.

Funnily, it's as if the companies took that experience to heart and kept trying to repeat it, over and over again. Eventually it turned into the "support assistant" every company now has, answering people's questions instead of real specialists. As useless now as it was in DOS times.
 
These answers make very little sense to the educated reader, even though they look plausible at first glance—one of the biggest dangers of AI-generated text.
I believe "dangerous" is an understatement; it is THE biggest problem with AI-generated anything.
 
This looks bad. Not supported on RTX 20, despite even the weakest card in that series having way more TOPS than the latest APUs.
Then the massive hardware requirements, something we've come to expect from Nvidia. Even they themselves see 8 GB as the absolute minimum.
Also the clunky setup process, and the fact that despite all these hardware requirements it's worse than ChatGPT 3.5, which can run through the web on a potato. So why exactly should anyone bother with this beta product?

I'm not saying ChatGPT is faultless. For example, I asked what the most expensive discrete GPU ever released was, and it thought it was the Turing-based Titan RTX at $2,500, when in truth it's actually either the dual-GPU Titan Z or the Volta-based unicorn Titan V at $3,000, and this doesn't account for way more expensive workstation cards (at least ones that have display outputs).
Hardware requirements aren't entirely Nvidia's fault, but the large memory requirements don't hurt when you're pushing expensive cards. I'm not sure which models they include, since I already have software to run these models (on Linux), but the smallest Mistral is a 7-billion-parameter model, which means it can be loaded on a GPU with 8 GB (barely) in 8-bit mode and is probably fine in 4-bit mode.
I have both an RTX 3060 and an RTX 4070 in my system and can run the smaller models (up to 13 billion parameters) without much problem. The 34-billion-parameter models are a bit of a tight fit. Anything above that has to run partly on CPU and partly on GPU, using something like a GGUF-format or ExLlamaV2-format model, and will run a bit (maybe a lot, depending) slower.
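A back-of-the-envelope way to see why 7B barely fits in 8 GB at 8-bit: weight memory is roughly parameter count times bytes per weight, plus some headroom for activations and KV cache. A quick sketch (the 20% overhead factor is my own guess, not a measured number):

```python
def vram_estimate_gb(params_billion, bits_per_weight, overhead=1.2):
    """Rough VRAM needed to hold the weights of an LLM, plus ~20%
    headroom for activations and KV cache (overhead is a guess)."""
    bytes_total = params_billion * 1e9 * (bits_per_weight / 8) * overhead
    return bytes_total / (1024 ** 3)

for params in (7, 13, 34):
    for bits in (16, 8, 4):
        print(f"{params}B @ {bits}-bit ≈ {vram_estimate_gb(params, bits):.1f} GB")
```

By this estimate a 7B model at 8-bit lands just under 8 GB, while 34B at 8-bit wants close to 40 GB, which matches why the bigger models are a tight fit even across two cards.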

The ChatGPT front end that runs in your browser is just a lightweight UI. The real work is done on servers they own, each with multiple cards like the H100, where even lower-end cards like the L40S are $10K+ each.
 
I'd be happier if you could use it for pervy stuff to amuse myself for 5 minutes.

Not a fan of anything AI, but I do like stand alone, offline software.
Really starting to feel the hurt from not owning physical games. Now that Steam doesn't work on Windows 7, it sucks to play with/benchmark my older GPUs that require Win7 for SLI.
 
This looks bad. Not supported on RTX 20, despite even the weakest card in that series having way more TOPS than the latest APUs.
Then the massive hardware requirements, something we've come to expect from Nvidia. Even they themselves see 8 GB as the absolute minimum.
Also the clunky setup process, and the fact that despite all these hardware requirements it's worse than ChatGPT 3.5, which can run through the web on a potato. So why exactly should anyone bother with this beta product?

I'm not saying ChatGPT is faultless. For example, I asked what the most expensive discrete GPU ever released was, and it thought it was the Turing-based Titan RTX at $2,500, when in truth it's actually either the dual-GPU Titan Z or the Volta-based unicorn Titan V at $3,000, and this doesn't account for way more expensive workstation cards (at least ones that have display outputs).
RTX Video also initially launched with support only for Gen 2 and 3 cards; Gen 1 support was added later.

This is a beta product; like DLSS, it will only get better.
 