
Researchers Unveil Real-Time GPU-Only Pipeline for Fully Procedural Trees

I’m talking full integration that goes beyond separate GPU/CPU ‘tile’, ‘chiplet’, and SoC sections. One pipeline for all instructions, duplicated up to the desired power level.
This won't work; different instructions benefit from different optimizations.

This has already been tried, and it failed miserably.

Of course, what you said is the current compute scheme in use. It wasn't that way in the very beginning, when video display adapters only handled signaling and did no computing. Eventually, methods like those outlined in this article might see a return to a more integrated computing scheme between high levels of parallelism and complex instruction sets.
Yeah, and there's a REASON we stopped doing that. We hit a wall pretty quickly in terms of capability. Voodoo GPUs were not significantly more advanced than Pentiums, but the performance difference for rendering 3D models was night and day.

Some tasks benefit greatly from parallelization, others do not. That is basic computing 101.
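A minimal sketch of that point, assuming Python with multiprocessing (the functions and numbers are just illustrative): summing independent chunks splits cleanly across cores, while a recurrence where every step depends on the previous one gains nothing from extra cores.

```python
# Illustrative sketch (not from the article): why some tasks
# parallelize well and others cannot.
from multiprocessing import Pool

def chunk_sum(chunk):
    # Partial sums are independent, so they split cleanly across cores.
    return sum(chunk)

def parallel_sum(data, workers=4):
    # Stride-slice so every element lands in exactly one chunk.
    chunks = [data[i::workers] for i in range(workers)]
    with Pool(workers) as pool:
        return sum(pool.map(chunk_sum, chunks))

def serial_recurrence(n, x=0.5):
    # Each step needs the previous result; extra cores cannot shorten
    # this dependency chain no matter how many you throw at it.
    for _ in range(n):
        x = 3.9 * x * (1.0 - x)  # logistic map
    return x

if __name__ == "__main__":
    print(parallel_sum(list(range(1_000_000))))  # parallel-friendly
    print(serial_recurrence(1_000_000))          # inherently serial
```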
 
This won't work; different instructions benefit from different optimizations.

This has already been tried, and it failed miserably.


Yeah, and there's a REASON we stopped doing that. We hit a wall pretty quickly in terms of capability. Voodoo GPUs were not significantly more advanced than Pentiums, but the performance difference for rendering 3D models was night and day.

Some tasks benefit greatly from parallelization, others do not. That is basic computing 101.
All I’m saying is that a single chip that does great in both parallel and complex tasks would be cool.

By the way, I still don’t get this “well, school taught me this, so nothing else is possible” attitude. The point of basic 101 anything is to dumb things down to get a wide audience interested in something, not to define the length and breadth of what’s possible.
 
All I’m saying is that a single chip that does great in both parallel and complex tasks would be cool.
Well, it would be cool, in the same way that sustained net-positive nuclear fusion would be cool. What I'm saying is that such a chip cannot exist, based on how code functions as a concept.
By the way, I still don’t get this “well, school taught me this, so nothing else is possible” attitude. The point of basic 101 anything is to dumb things down to get a wide audience interested in something, not to define the length and breadth of what’s possible.
People bring up 101 because if you don't understand the most basic concepts, you cannot possibly understand anything more complex. You're advocating for a chip that can do two opposing things equally well, using the same design. Some software runs faster in serial, some in parallel. Different tasks prefer different processor operations. The fact that, when presented with this information, you bring up computers from 40 years ago, which were far simpler and did everything on one processor, shows your ignorance of how code functions and why a "jack of all trades" processor is not desirable in the modern age, and, more importantly, your unwillingness to learn WHY such an idea was abandoned back when PCs still used command lines as their primary user interfaces.

That is why you need to go back to "how code works 101" and learn the most basic "dumbed down" version of the subject so you gain the slightest idea of what you are talking about. Hence the 101 reference.
 
Amdahl’s law is about computers and programming as conceived by humans on Earth. While gravity exists on all planets, humans’ way of implementing computational devices is specific to our current way of thinking. It’s not universal, but a limit of our species’ understanding.
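For reference, the law itself fits in a couple of lines (a minimal sketch in Python; the 95% parallel fraction is just an illustrative number):

```python
# Amdahl's law: speedup from parallelizing a fraction p of the work
# across n processors. The serial fraction (1 - p) caps the gain
# no matter how many processors you add.
def amdahl_speedup(p, n):
    return 1.0 / ((1.0 - p) + p / n)

# Even with 95% of the work parallelizable, the ceiling is 20x:
for n in (4, 16, 256, 1_000_000):
    print(f"n = {n:>9}: {amdahl_speedup(0.95, n):6.2f}x")
```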
So what you are saying is that in some alternate universe, with different laws of physics and mathematics, your ideas would work.
 
So what you are saying is that in some alternate universe, with different laws of physics and mathematics, your ideas would work.
He is not a scientist; he is a troll.
 
He is not a scientist; he is a troll.
My PhD in Chemistry from LSU says otherwise.

So what you are saying is that in some alternate universe, with different laws of physics and mathematics, your ideas would work.
Guys, this is just a discussion about potential future tech. The three of you (Patriot, Visible Noise, and TheinsanegamerN) took the conversation nowhere fast. What's being discussed here is neither settled as impossible nor as possible. It's just a discussion on the merits of combining two human-made concepts, and it has nothing to do with alternate universes, your experience in coding, or the physical laws of the universe. But this is not a forum for experts, and clearly NO ONE here, including me, has shown any expertise whatsoever. So I'm ending it with this: our lives are short, and most of the biggest discoveries yet to come will not happen in our lifetime, or anyone's for that matter, as we will continue to progress forever. What you think is fundamental today will be the joke of some tech geek 100 years from now.
 
All I’m saying is that a single chip that does great in both parallel and complex tasks would be cool.

By the way, I still don’t get this “well, school taught me this, so nothing else is possible” attitude. The point of basic 101 anything is to dumb things down to get a wide audience interested in something, not to define the length and breadth of what’s possible.
It is not that at all; it's 20 years of tech industry experience talking, and watching architecture development try all sorts of things. Nothing is free, everything has tradeoffs, and utopia doesn't exist. For someone claiming to hold a PhD, you seem to want to defy physics badly. I will grant you that the "rules of physics" are the world as we currently best understand it, but on the topic of CPU/GPU architecture you are so far out of your depth you can't even spot expertise.

There is a means of having generic, performant hardware... they're called FPGAs. It takes insane coding expertise to make them efficient, and they still aren't the best choice for many applications.

If you want to know about the direction of compute, read up on silicon photonics and all the competing fabrics that will eventually use it: NVLink, UALink, Ultra Ethernet.
The reason the industry moved to dedicated chunks of silicon for specialized acceleration is efficiency... specific tasks can be handled by dedicated hardware and powered down when not needed.

There are certainly things that can be combined. FP32 and INT32 units in GPUs have been separated and recombined from generation to generation, and you get... a tradeoff: the chip can do more FP32 all at once, but does worse when both are needed at the same time.
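A toy issue-rate model of that tradeoff (the unit counts below are assumptions, not any real GPU): when shared units can each take FP32 or INT32, pure FP32 code sees double the peak, but every INT32 op in the mix eats into it.

```python
# Toy model (assumed numbers, no real GPU): combined FP32/INT32 units.
PEAK_UNITS = 128  # units that can each issue one FP32 *or* INT32 op

def fp32_per_cycle(int32_share):
    """FP32 throughput when int32_share of issue slots go to INT32."""
    return PEAK_UNITS * (1.0 - int32_share)

# Pure FP32 hits the full peak; a typical instruction mix does not --
# which is the "worse if both are needed at the same time" part.
for share in (0.0, 0.25, 0.5):
    print(f"INT32 share {share:4.0%}: {fp32_per_cycle(share):5.1f} FP32 ops/cycle")
```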

You struck the three of us with profound ignorance, and you want to wave inapplicable credentials around while claiming to want to learn, all while not listening.
 
Tbh, to have a computer with a truly unified architecture, with CPU, GPU, and RAM all together, we would have to abandon the von Neumann architecture that all modern PCs are based upon. It's not impossible; we just haven't yet found the right thing to ditch it for.
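A back-of-the-envelope way to see the bottleneck in question is the standard roofline model; a minimal sketch with made-up hardware numbers:

```python
# Roofline arithmetic (illustrative numbers): compute only helps if the
# memory system can feed it -- the von Neumann bottleneck in one line.
PEAK_FLOPS = 80e12  # 80 TFLOP/s of compute (assumed)
MEM_BW = 1e12       # 1 TB/s of memory bandwidth (assumed)

def attainable_tflops(flops_per_byte):
    """Attainable rate = min(compute roof, bandwidth * intensity)."""
    return min(PEAK_FLOPS, MEM_BW * flops_per_byte) / 1e12

# A streaming kernel (low intensity) is memory-bound; dense matrix
# multiply (high intensity) actually reaches the compute roof.
for intensity in (0.125, 1.0, 100.0):
    print(f"{intensity:7.3f} FLOP/byte -> {attainable_tflops(intensity):6.2f} TFLOP/s")
```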
 
This aged poorly after the Witcher 4 demo.
 
Awesome. Now if they can work through a real water simulation, like waterfalls, flowing water, etc., that would be a miracle. Water and fire simulation are still the worst of the worst implementations in any games out there, even the AAAAA ones.
 