
Would you pay more for hardware with AI capabilities?

W1zzard

Administrator
Staff member
AI capabilities are becoming increasingly integrated into hardware devices, promising enhanced performance and functionality. However, this advanced technology often comes at a premium price. Would you pay more for hardware with AI features?
 
Nah. Personally, I don't find any use for AI other than making cool images.
 
Not this generation.

All of the useful AI I've encountered so far is datacenter-hosted AI requiring hundreds of gigs or even terabytes of RAM for LLM datasets, hundreds of terabytes of fast storage for inference, and year(s) of training on multiple petabytes of datasets to become useful.

The "local AI" software I've tried has been little more than a marketing gimmick that achieves little of value other than maybe some fun image manipulation that I can't really classify as AI. Yes, you can run a few things in CUDA on your GPU but the real power is in a subscription to a cloud service where 30 hours of your GPU can be replaced by a 2-minute wait for the cloud SC to do the job instead.

For now, AI is a cloud product and there's a massive, unbridgeable gap between it and what any local NPU can achieve.
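
For a rough sense of the scale being argued about here, a back-of-envelope sketch of how much memory just the weights of an LLM need at different precisions - the model sizes and bytes-per-weight values below are illustrative assumptions, not measurements of any particular model:

```python
# Back-of-envelope memory footprint for LLM inference (weights only,
# ignoring KV-cache and activation overhead). Illustrative numbers.

GIB = 1024 ** 3

def weight_memory_gib(params_billions: float, bytes_per_weight: float) -> float:
    """Approximate memory needed just to hold the weights, in GiB."""
    return params_billions * 1e9 * bytes_per_weight / GIB

models = [7, 13, 70, 405]                              # parameters, in billions (hypothetical)
precisions = {"fp16": 2.0, "int8": 1.0, "int4": 0.5}   # bytes per weight

for p in models:
    line = ", ".join(f"{name}: {weight_memory_gib(p, b):6.1f} GiB"
                     for name, b in precisions.items())
    print(f"{p:>4}B params -> {line}")
```

By this arithmetic, a 4-bit 7B model fits on a consumer GPU, while anything in the hundreds of billions of parameters at fp16 is firmly datacenter territory - which is roughly where the disagreement later in the thread sits.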
 
I have no use for AI or any interest in it for now so no.
 
Not this generation.

All of the useful AI I've encountered so far is datacenter-hosted AI requiring TB of RAM, multi-TB of working memory on fast flash, and years of training on PB of datasets.

The "local AI" software I've tried has been little more than a marketing gimmick that achieves little of value.

For now, AI is a cloud product and there's a massive, unbridgeable gap between it and what any local NPU can achieve.
I agree!

But the moment I can buy something that runs 100% locally (via GPU) and lets me talk to my PC and USE it that way, I'm 100% in.
 
I'm dumber than some AIs, so I don't even know how to use them and get some profit out of them. So no. I'd prefer more computing power per watt instead.
 
I agree!

But the moment I can buy something that runs 100% locally (via GPU) and lets me talk to my PC and USE it that way, I'm 100% in.
I feel like we're 2-5 generations of hardware away from that. Let's see what Blackwell can do and what it costs, but it would need to be a couple of orders of magnitude faster than a 4090, I think - we have to get hours down to minutes for the big workloads, and minutes down to seconds for the real-time interaction/discussion sort of behaviour you're talking about. It can be done today, but only using those datacenter pools of multiple dedicated AI systems, either running tons of Quadro RTX/4090 cards or multiples of something AI-specific like the DGX-2.
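
To put rough numbers on "a couple of orders of magnitude": going from hours to minutes, or from minutes to interactive seconds, is roughly a 100x speedup per tier. A trivial arithmetic sketch (the workload durations are made-up placeholders, not benchmarks):

```python
# Rough speedup factors needed to move workloads between "tiers" of
# responsiveness. Durations are illustrative placeholders, not benchmarks.

def speedup(current_seconds: float, target_seconds: float) -> float:
    return current_seconds / target_seconds

cases = [
    ("big batch job: 30 h -> 5 min", 30 * 3600, 5 * 60),
    ("offline render: 10 min -> 5 s (interactive)", 10 * 60, 5),
    ("query: 30 s -> 0.3 s (conversational)", 30, 0.3),
]

for label, cur, tgt in cases:
    print(f"{label}: ~{speedup(cur, tgt):.0f}x faster needed")
```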
 
AI is starting to be a thing on phones, e.g. the Samsung Galaxy S24. I was pleasantly surprised when my Samsung Galaxy S22 was recently upgraded to One UI 6.1 and now has the same AI features as the S24 at no added cost - so it's basically running on a phone with no native AI hardware. The note/transcript assist is very impressive with the embedded voice recognition. The generative AI in photo editing is also very cool. So I voted NO in the poll. I do, however, fear AI features becoming subscription-based in the future, where you pay monthly for the features you want.
 
The most interesting thing that I have seen in AI has to be that monitor from MSI that was at CES. It has an AI chip that will tell you which area of the map the next enemy will come from. For action RPGs and DOTA clones that could be very compelling.
 
Until they call it cheating and block anyone using one of those monitors.

Personally, I have no use for it unless it can be used to make NPCs in games less boring, like the zombies in CP2077. Otherwise it's a nope from me.
 
I would pay slightly more to have it removed, with more cache or some other useful hardware in its place.
 
Not this generation.

All of the useful AI I've encountered so far is datacenter-hosted AI requiring hundreds of gigs or even terabytes of RAM for LLM datasets, hundreds of terabytes of fast storage for inference, and year(s) of training on multiple petabytes of datasets to become useful.

The "local AI" software I've tried has been little more than a marketing gimmick that achieves little of value other than maybe some fun image manipulation that I can't really classify as AI. Yes, you can run a few things in CUDA on your GPU but the real power is in a subscription to a cloud service where 30 hours of your GPU can be replaced by a 2-minute wait for the cloud SC to do the job instead.

For now, AI is a cloud product and there's a massive, unbridgeable gap between it and what any local NPU can achieve.
From what I've seen, it's the training, which is only done once, that is intensive. Inference is relatively light work that a current GPU, or even a phone, could do.
Most users won't ever do any training, so there are really two different sets of requirements.
From what I've seen, larger models only make bigger crappy artwork, or make up more detailed and authoritative-sounding, yet probably incorrect, text about an even wider range of subjects.

So I believe you are right about training models, but I disagree about the power needed to run inference locally, and to a lesser extent about the value of cloud-based AIs.
Out of curiosity, what LLM requires "hundreds of terabytes of fast storage for inference"?
What exactly is this "useful" AI you've encountered, and what do you use it for?
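
For what it's worth, this is roughly what the "inference is relatively light" claim looks like in practice with llama-cpp-python, one common way to run a small quantized model locally - the model file path is a placeholder, and the memory figure in the comments is a rough estimate:

```python
# Minimal local LLM inference sketch using llama-cpp-python.
# The GGUF file path is a placeholder; any small quantized model works
# the same way. A ~7B model quantized to 4 bits needs roughly 4-5 GB of
# RAM/VRAM, i.e. well within consumer hardware.
from llama_cpp import Llama

llm = Llama(
    model_path="models/some-7b-model.Q4_K_M.gguf",  # placeholder path
    n_ctx=2048,        # context window
    n_gpu_layers=-1,   # offload everything to the GPU if one is available
)

out = llm(
    "Summarise why local inference is cheaper than training:",
    max_tokens=128,
)
print(out["choices"][0]["text"])
```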
 
Like auto-correct, an AI will, to a large degree, get in the way.
 
Not interested in AI specific hardware (yet). Just like my jump from GCN 4.0 to RDNA3, there's a MASSIVE gap in that technology before it catches my attention with anything good.
AI stuff has been a lot of fun since early 2020, but it requires prompt-input skill and a training of thought that isn't natural to my abilities, which tells me either I've spent a great deal of time in the wrong parts of the Internets OR there's just that great of a barrier to entry to get started. My initial hardware isn't tailored to AI either. A modern GPU or AI/ML-specific peripheral is the key to gluing all of this together, and it's just not interesting enough right now. Maybe when I go to a newer GPU - and I'm not talking about a 2 TFLOPS jump in FP64, but maybe 64, as is becoming the norm in datacenters. Even then, who is buying? I would rather stick to free services until there's a proper reason to invest in this, beyond what I see going on with Blockade Labs or AI-powered MAM like Axel AI. I would be wholeheartedly interested in running that last one locally if possible, but we're just not going to get there for a while.

More importantly, AI application capable ≠ AI application advantageous. It's been the same case with gaming just to support some DirectX feature level or RT or some other performance metric. I don't care about this and I don't like that features I don't care about are becoming the main focus, getting hamfisted into these devices at an exotic premium. In my timeline, a Radeon HD 6570 was ~$55 USD and that was mildly annoying in a year where I needed to switch to a motherboard that didn't have AMD's 790GX powered IGP garbage. Keeping the specific purpose silicon separate is clutter but part of my philosophy so, gotta eat the cost.

My RX 580 was in the peak of Ethereum mining, something very new at the time which I also didn't care about because it's all cryptominer scammers crapping up the market like normies, sneakerheads and every other invader and tourist that doesn't care about our hobbies. It was double MSRP. Mind you, I already considered $200 to be insane. Now we have sub-$100 Chinese hens coming home to roost on their AliExpress 2048SP miner card and a bunch of screeching to FiNd My vBiOs. They can fry for all I care. :wtf:

My 7900 XT doesn't have any particular special feature to it, and at $710 USD it was a steal. It appears to be holding its value too. Being marketed as an AI device sounds like a plus if I care to play around with that, but how many of us actually do? How much of this AI junk is just markup on the device, and how much is that? Probably half of its MSRP.

In the future it looks like whatever becomes interesting at whatever DX feature level or AI level or whatever is going to easily be $$$$ or what we know today as 4090 price territory. I don't care for it.
 
No, there is nothing I find interesting that my graphics card can't accelerate, or that can't be done in the cloud at the moment, and I highly doubt anything interesting will be introduced within 5 years that will change that.
 
I would pay (slightly) more for hardware without "ai" capabilities. My reasoning is that this capability only exists for two reasons:
- spamming with marketing buzzwords,
- running rudimentary local models for marketing purposes - like Microsoft undoubtedly does or will do with Windows 11.
It will not be capable of running anything useful to me for many generations, if ever. Therefore it's a waste of sand.
 
This might have been an interesting question five years ago.

Today it's a joke. Apple put ML cores in their phones with the iPhone X/iPhone 8 series.

Back in 2017. So no, I didn't bother submitting a vote in this poll. But it might have been fun before the pandemic.

Many TPU discussions are several years behind the times. And a lot of AI functions are already here without any fanfare. You ever get a recent fraud alert from your credit card issuer or bank? That's AI in IRL action.

So you have a mid-level Android smartphone and you don't dabble with new Internet functions. That's fine. It doesn't mean you're running AI-free. At some point you'll be swimming in AI situations even if you never paid a dime extra for AI.
 
From what I've seen, it's the training, which is only done once, that is intensive. Inference is relatively light work that a current GPU, or even a phone, could do.
Most users won't ever do any training, so there are really two different sets of requirements.
From what I've seen, larger models only make bigger crappy artwork, or make up more detailed and authoritative-sounding, yet probably incorrect, text about an even wider range of subjects.

So I believe you are right about training models, but I disagree about the power needed to run inference locally, and to a lesser extent about the value of cloud-based AIs.
Out of curiosity, what LLM requires "hundreds of terabytes of fast storage for inference"?
What exactly is this "useful" AI you've encountered, and what do you use it for?
ChatGPT Plus is fantastic at finding pertinent details in a 1500-page technical manual, or looking for common trends across multiple research papers.

The most useful functionality for AI in my industry (AEC) is in cleaning up point-clouds. Even a small point-cloud survey can be 500 GB of raw data, and it's generally noisy data taken by lidar or photogrammetry that needs vetting to clean up. This used to be an algorithm-driven process that required a lot of human post-process cleanup and sanity-checking. AI tools can now turn a raw dataset into a useful 3D model much faster, removing a lot of human workload and rapidly converting half-terabyte files into useful meshes that take up a few hundred MB. Point-clouds are typically cloud-hosted, so being able to convert hundreds of gigs into hundreds of megs before they leave the survey repository makes the files way easier to handle and move around, too.

I'm sure there are other good uses for AI but identifying shapes, outlines, objects, and general pattern recognition is a huge one for the sector I work in. The larger the dictionary of training, the more accurate and useful the object-recognition seems to be.
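
For anyone curious, here's a minimal sketch of the classical, algorithm-driven cleanup step described above (the part the newer AI tools automate and scale), using Open3D - file names and parameter values are placeholders, and real survey data needs far more care than this:

```python
# Classical (non-AI) point-cloud cleanup sketch with Open3D: the kind of
# outlier filtering + meshing step that newer AI tools automate and scale.
# Paths and parameter values are illustrative placeholders.
import open3d as o3d

pcd = o3d.io.read_point_cloud("survey_raw.ply")          # placeholder file

# Thin the cloud and strip obvious noise points.
pcd = pcd.voxel_down_sample(voxel_size=0.02)             # 2 cm grid
pcd, _ = pcd.remove_statistical_outlier(nb_neighbors=20, std_ratio=2.0)

# Normals are required for Poisson surface reconstruction.
pcd.estimate_normals()

mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
    pcd, depth=9
)
o3d.io.write_triangle_mesh("survey_mesh.ply", mesh)
```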
 
At present, I don’t use any software that demands it. I wouldn’t rule it out in the future though.
 
Today, I would have to say "no". But I would like to remind people that even floating point operations were once offered as an extra add-on at additional cost. Also, I imagine what AI will do 5 years from now will be quite different from the cheap tricks we see today.
 
Today it's a joke. Apple put ML cores in their phones with the iPhone X/iPhone 8 series.
The only things those cores are really used for are assisting with offline voice recognition and speeding up face-unlock. I might be ignorant of other uses, but those are the only headline features that get any coverage/attention.
Many TPU discussions are several years behind the times. And a lot of AI functions are already here without any fanfare. You ever get a recent fraud alert from your credit card issuer or bank? That's AI in IRL action.
Most, maybe all of those things are datacenter AI running at Apple, Google, Microsoft, Amazon, your bank, whatever - they're not running locally on your personal device and they don't work when your device can't reach the internet. This poll is about whether you would pay extra for hardware that has its own local AI, not a discussion on whether you would pay for AI in general, the overwhelming majority of which is externally-hosted, online AI.

When a standout application/use-case arrives that makes it worth having local NPU/AI processing available, then there will be a reason to pay for it. At the moment we're playing a chicken-or-the-egg game, and the only way to break that cycle is to embed NPUs into all hardware at no additional cost so that there's enough of a hardware base to make software for locally-run AI economically viable. This is, of course, just my opinion on the subject and I'm not professing to be any kind of AI expert. If there's some kind of fantastic thing that requires an NPU that I've missed, please do inform me. I do not and cannot read all the news :)
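
As a small, concrete illustration of that chicken-and-egg problem: software that wants to use a local NPU today mostly has to probe for one and fall back to GPU/CPU (or the cloud) when it isn't there. A minimal sketch with ONNX Runtime, assuming the onnxruntime package is installed - which providers actually show up depends entirely on your hardware and build:

```python
# Probe for local acceleration with ONNX Runtime and pick a fallback order.
# Which providers appear depends on the onnxruntime build and the machine;
# the preference list here is just one reasonable ordering.
import onnxruntime as ort

available = set(ort.get_available_providers())
preferred = [
    "QNNExecutionProvider",   # Qualcomm NPUs (e.g. recent ARM laptops)
    "DmlExecutionProvider",   # DirectML: GPUs (and some NPUs) on Windows
    "CUDAExecutionProvider",  # NVIDIA GPUs
    "CPUExecutionProvider",   # always present as the last resort
]

chosen = next(p for p in preferred if p in available)
print(f"Available: {sorted(available)}")
print(f"Would run local inference on: {chosen}")

# A real app would then do something like:
# session = ort.InferenceSession("model.onnx", providers=[chosen])
# ...or skip local inference entirely and call a cloud API instead.
```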
 
I don't care about AI things. Most programs and games don't use AI accelerators, and if they do anything with AI, it will most likely be a cloud-based service, not something local.
But maybe in the future, if more programs start demanding a hardware accelerator and it becomes a standard feature on CPUs and GPUs at mainstream prices.
 