
Half-Life 2 RTX Demo Is Out!

Are you interested in Half-Life 2 with ray tracing?

  • Yes! Bring it on! (44 votes, 42.3%)
  • Yes, worth a try. (26 votes, 25.0%)
  • No, I like the original look more. (20 votes, 19.2%)
  • Something else, comment below. (14 votes, 13.5%)

  Total voters: 104
I have to say I enjoyed playing it last night. It ran great on my system with my 14700K + 4070 Ti @ 1440p. I didn't touch any settings, just loaded in. I'll play around with settings later; it's probably set to Balanced by default. Was I blown away? Nah, it looks good, but honestly I'm ready for HLX. lol. I would rather the team convert it to Source 2.
 
I'll throw my computer at this thing to see. I don't doubt the struggle of Turing cards, though. I could barely get Portal RTX to run even after an hour of tuning, so I don't expect this to be any different. I can't really judge it if I can't run it, though.

I think the more disappointing thing, besides the 'oh wow, another RTX game that is designed to push graphics, pushes graphics, and runs badly on most hardware', is the fact that the team behind this game is so wonderfully talented, and the game is already getting dragged through the MUD. Hopefully they can see through the fog and realize it's just people upset the same way they were with Portal RTX.
Still, though, I think the team should have avoided working with NVIDIA outside of the software they used to make the game (you should look it up, btw; it's actually pretty cool, on an unrelated note), because the trailers for it are a bad look when I don't think their intention was to try and sell graphics cards. I could be misinterpreting how that whole thing is handled anyway, though.

Also, I've heard rumors about there being GPU utilization leaks.

As a diehard HL fan, I'll also judge it based on how the lighting and RT implementation is done, if I can get it playable with good enough fidelity.
 
Sorry, you get that when an immovable object meets an impenetrable wall.
Immovable objects don't have conflicts with impenetrable walls. Your analogy is as flawed as your attitude which needs desperate improvement.
Tell me again who's frustrated here.
Don't need to, everyone can see it.
It was an objective question.
No, it was you trying to stir the pot while at the same time flying in the face of reason when you know damn well what the answer is. Anyone who cannot see the advantages of lighting and other light-ray based FX is someone oblivious to reality or just being belligerent for the sake of being annoying and unpleasant.

Moving on... and back on topic.

RandomGaminginHD just did another video, with the Radeon 780M IGP.
This was an unplayable slideshow. Even at 480p it was just sad.
 
Just for shits n giggles.

[screenshots of comments attached]


If you look at the gist... performance, performance, performance. Overall looks are different, artistic style is impacted, and 'I game to play games and not look at a blurry mess'.
 
Just for shits n giggles.


If you look at the gist... performance, performance, performance. Overall looks are different, artistic style is impacted, and 'I game to play games and not look at a blurry mess'.
So, rather unsurprisingly, it's the same talking points people used for/against Portal RTX. Who woulda thunk?

Anyone who cannot see the advantages of lighting and other light-ray based FX is someone oblivious to reality or just being belligerent for the sake of being annoying and unpleasant.
I don't see RT as purely an advantage. I think stuff like Lumen is awesome, and it still looks good enough that I'm happy with it. Plus, it's pretty optimized.
I should preface this by saying I don't mean to imply anything by it, but I don't think RT has a place in every game. I think raster can still look very good and sometimes be a much better option overall for a game's sake, even if the game looks worse for it.

Obviously most games don't use RT as their only lighting, and if they do, it's either as a tech demo of sorts like these RTX games have been, benchmarking, or stupidity (Lumen, though, is fine). The future of RT is in optimized (but wisely used) RT, like Lumen kind of is (though it's only really the first half of that, but eh, whatever, it fits my opinion well enough).

I don't see RT as 'the future', not the way it's currently being used. I see stuff like Lumen, and its approach to RT, as the future. And who knows, I could still be wrong.

As for how it affects this game, and its lighting compared to the original? I'll see. I think I'm setting myself up for disappointment, though.

I haven't had a chance to throw my RT opinion out there before, so rant over.
 
NEWS FLASH: 8 GB of VRAM is still enough for gaming in 2025! Also, ray tracing is here to stay.
My 4060 LP with 8 GB of VRAM was doing just fine, getting 45 to 55 FPS last night before the game crashed with the latest Game Ready driver. I don't really see the problem, but I've kind of been stuck under the umbrella of 60 Hz gaming for a long time. Everyone wants the best graphics they can get, but at some point you just need to relax and enjoy the game for what it is, using what adjustments you can to make that happen.
 
I don't see RT as 'the future', not the way it's currently being used.
You just highlighted the key point: there's more than one way to skin a cat. Ray tracing is flexible and can be done in many different ways. The general concept of tracing light rays is the same, but how it's done is very customizable to the needs of the specific use case.

However, until a better way to replicate lighting comes along, mimicking nature with ray tracing is now the standard and isn't going anywhere.
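Just to illustrate the "same concept, different knobs" point, here's a toy sketch of the core loop in Python. The scene (one sphere, one light) and every number in it are made up purely for illustration; a real renderer customizes every step of this (bounces, samples per pixel, denoising, and so on), but the skeleton is the same: shoot a ray per pixel, find the nearest hit, shade it.

```python
# Toy sketch of the basic ray-tracing loop: one ray per pixel, nearest hit,
# simple diffuse shading. Scene and numbers are invented for illustration;
# real renderers customize every step (bounces, samples, denoising, etc.).
import math

def ray_sphere(origin, direction, center, radius):
    """Distance to the nearest hit along a normalized ray, or None on a miss."""
    oc = [o - c for o, c in zip(origin, center)]
    b = 2.0 * sum(d * o for d, o in zip(direction, oc))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4.0 * c        # a = 1 because the direction is normalized
    if disc < 0.0:
        return None
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t > 0.0 else None

def render(width=48, height=20):
    center, radius = (0.0, 0.0, -3.0), 1.0
    light = (2.0, 2.0, 0.0)
    for y in range(height):
        row = ""
        for x in range(width):
            # Build the camera ray through this pixel (pinhole camera at origin).
            px = (x + 0.5) / width * 2.0 - 1.0
            py = 1.0 - (y + 0.5) / height * 2.0
            d = (px, py, -1.0)
            norm = math.sqrt(sum(v * v for v in d))
            d = tuple(v / norm for v in d)
            t = ray_sphere((0.0, 0.0, 0.0), d, center, radius)
            if t is None:
                row += " "        # missed everything: background
                continue
            hit = tuple(t * v for v in d)
            normal = tuple((h - c) / radius for h, c in zip(hit, center))
            to_light = tuple(l - h for l, h in zip(light, hit))
            norm = math.sqrt(sum(v * v for v in to_light))
            to_light = tuple(v / norm for v in to_light)
            # Lambertian shading: brightness from the surface/light angle.
            shade = max(0.0, sum(a * b for a, b in zip(normal, to_light)))
            row += ".:-=+*#%@"[min(8, int(shade * 9))]
        print(row)

render()
```

Swap "one sphere" for a BVH full of triangles, add bounces and a denoiser, and you're in the ballpark of what the RT cores are being asked to do every frame.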

My 4060 LP with 8 GB of VRAM was doing just fine, getting 45 to 55 FPS last night before the game crashed with the latest Game Ready driver.
I'm betting real money you could get above 60 consistently if you tweak some more.
 
I'm betting real money you could get above 60 consistently if you tweak some more.
I might need some guidance with that as I'm not quite familiar with Nvidia settings.
 
I might need some guidance with that as I'm not quite familiar with Nvidia settings.
The settings I'm talking about are in game. You just need to tinker with dialing things down until you reach your target FPS while hitting the quality you like. Keep in mind, this is an unoptimized beta, so things in the engine are going to change and get better.
 
The settings I'm talking about are in game. You just need to tinker with dialing things down until you reach your target FPS while hitting the quality you like. Keep in mind, this is an unoptimized beta, so things in the engine are going to change and get better.
Ah, OK. I did go through those last night. I will double-check, but I recall being underwhelmed with the available options.
 
This is why I don't get all the kvetching about RT being slow; rasterisation used to be slow too and guess what, the hardware did eventually catch up. Everyone seems to have forgotten how truly wretched it was trying to get big-name games to run at playable framerates on graphics hardware between 1995 and 2010; since then we've been truly spoiled by how powerful that hardware has become. RT is that decade-and-a-half cycle repeating, and just like back then people were frustrated at how apparently slow things were moving, and just like back then it is getting and will get better.

Yes, there is the Moore's Law wall to contend with this time around, but we also have other solutions like frame generation to help us overcome the limitations of physics. You may not like them, but they are solutions, they do work well enough, and they too are getting and will get better. And eventually the hardware will become powerful enough to not need them, and we will discard them.
The problem back then was that technology moved so fast, and games pushed boundaries with every release, so you had to buy a new graphics card, or even do a complete system upgrade every year. I got the PC I mentioned above in 1998. The GeForce 256 came out only a year later. The difference was night and day.

The problem now is that (some) game developers treat things like RT as a tick box exercise, something they have to use at all costs, instead of something that makes their game better. And this has been going on for 6 years at least, without too much advancement in hardware. A 2080 Ti is still a decent piece of hardware, even though it's 3 generations behind. Like you said, there's the Moore's Law wall, but we're closing our eyes, trying to pretend that it isn't there.

If RT really was the future, then I'd like to see Nvidia and AMD making advancements on that front. I mean, AMD has with RDNA 4, as it runs RT miles better than a 7900 XTX, but if you look at Nvidia's performance charts, you'll see that Blackwell is gimped by RT to the same degree as Ada, Ampere and Turing were, so nothing has changed there architecturally.

Frame generation doesn't solve anything as long as it can't deal with a low frame rate input without introducing severe lag and graphical glitches. Lipstick on a pig.
 
Ah, OK. I did go through those last night. I will double-check, but I recall being underwhelmed with the available options.
The only thing I can think of that applies to everyone is AA. Turn off any form of AA, regardless of type, and things should improve. How dramatic the improvement will be is another question.

you'll see that Blackwell is gimped by RT to the same degree as Ada, Ampere and Turing were, so nothing has changed there architecturally.
There's a reason for that: there are only a few ways to calculate light-ray trajectories, and it's very mathematically intensive. There is no avoiding that fact, and at the moment there are no shortcuts that work well like there are with rasterization. To do ray tracing properly, it has to be done the long (right) way. We can't fake simulating physics accurately.

Make no mistake though, accurately simulating physics in games is the future and the way forward. RT is here to stay.
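To give a feel for what "the long way" means: the standard ray/triangle test (Möller-Trumbore) is a couple of dozen multiplies and adds per ray/triangle pair, a frame fires millions of rays, and each ray still has to test a bunch of candidate triangles even after the BVH narrows things down. Here's a rough sketch in Python, purely to show the arithmetic involved; the RT hardware does this kind of work in fixed function, and the example ray and triangle are made up.

```python
# Möller-Trumbore ray/triangle intersection: the per-pair arithmetic that
# RT hardware grinds through for every ray and every candidate triangle.
def sub(a, b):   return (a[0] - b[0], a[1] - b[1], a[2] - b[2])
def dot(a, b):   return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]
def cross(a, b): return (a[1] * b[2] - a[2] * b[1],
                         a[2] * b[0] - a[0] * b[2],
                         a[0] * b[1] - a[1] * b[0])

def ray_triangle(origin, direction, v0, v1, v2, eps=1e-8):
    """Return the hit distance t along the ray, or None if it misses."""
    edge1, edge2 = sub(v1, v0), sub(v2, v0)
    h = cross(direction, edge2)
    a = dot(edge1, h)
    if abs(a) < eps:
        return None                     # ray is parallel to the triangle
    f = 1.0 / a
    s = sub(origin, v0)
    u = f * dot(s, h)
    if u < 0.0 or u > 1.0:
        return None                     # hit point lands outside the triangle
    q = cross(s, edge1)
    v = f * dot(direction, q)
    if v < 0.0 or u + v > 1.0:
        return None
    t = f * dot(edge2, q)
    return t if t > eps else None       # hit must be in front of the origin

# One ray fired straight down -z against one triangle facing the camera.
print(ray_triangle((0, 0, 0), (0, 0, -1),
                   (-1, -1, -5), (1, -1, -5), (0, 1, -5)))   # prints 5.0
```

Multiply that by rays-per-pixel, pixels-per-frame and frames-per-second and it's obvious why there's no cheap way around it.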
 
I would only add that, in addition to RT, I want physics in games to really take off, and this seems to have gone backwards instead of forwards. If we have RT lighting that makes dynamic time of day and dynamic environments look good, then make the damn environment really dynamic. Allow us to break stuff, move it, destroy it and do whatever. The CPU power is there, I believe, so there should be no reason not to improve that aspect.

It's hard to sell RT when baked lighting would work in a game. Make that game have advanced destruction, and it will show off both RT and physics.
 
There's a reason for that: there are only a few ways to calculate light-ray trajectories, and it's very mathematically intensive. There is no avoiding that fact, and at the moment there are no shortcuts that work well like there are with rasterization. To do ray tracing properly, it has to be done the long (right) way. We can't fake simulating physics accurately.

Make no mistake though, accurately simulating physics in games is the future and the way forward. RT is here to stay.
I mean, the 9070 XT that sort of equals a 7900 XT in raster is faster than the XTX in RT. On that logic, the 5070 Ti that equals a 4070 Ti Super in raster should be much faster than that in RT, right? But it's not. That's why I'm saying that RT isn't moving anywhere. Not forward, not back, it just is... After 4 Nvidia generations now. It still eats your PC for breakfast. Nothing is changing. This is not how I envisioned RT being the future 6 years ago.

Back in the HL 1 days, new games obliterated your PC, but you bought the next graphics card a year later and you were good. This isn't the case now.
 
Back in the HL 1 days, new games obliterated your PC, but you bought the next graphics card a year later and you were good. This isn't the case now.
Yeah, it's RT, but it's also the stagnation / slowing of progress in actually making chips smaller and faster in every metric; those days are long gone. AMD definitely did great by disproportionately improving RT in RDNA 4. Nvidia has changed the RT cores in Blackwell, but there could be other reasons we don't see the benefits: there could be a bottleneck elsewhere, or the way current games are coded doesn't take advantage of them. Still disappointing. AMD got a big leap perhaps because they were further behind and still had bigger wins to conquer that were already conquered in Nvidia hardware (after all, Nvidia is still faster at RT).
 
I mean, the 9070 XT that sort of equals a 7900 XT in raster is faster than the XTX in RT. On that logic, the 5070 Ti that equals a 4070 Ti Super in raster should be much faster than that in RT, right? But it's not.
That's not a fair comparison. AMD's implementation of RT is different from Nvidia's.
Back in the HL 1 days, new games obliterated your PC, but you bought the next graphics card a year later and you were good. This isn't the case now.
That's because of the "Moore's Law" thing. As we edge closer to the atomic scale, advances in IC scaling diminish. We can't just drop the lithography node a notch and triple our compute power anymore. Those days are behind us.
After 4 Nvidia generations now. It still eats your PC for breakfast.
Yeah, that's how intense the math is.
This is not how I envisioned RT being the future 6 years ago.
100% agree with you on that one. Greater advances were expected. That's why optimizations in coding are so important right now.

My guess is that coders will find ways to do RT and related physics in games in a very optimized way. But it's going to take more time. Personally, I see someone coming up with a task-oriented ASIC specifically designed for RT/physics that will offload or perhaps supplement GPUs to good effect.
 
Yeah, it's RT, but it's also the stagnation / slowing of progress in actually making chips smaller and faster in every metric; those days are long gone. AMD definitely did great by disproportionately improving RT in RDNA 4. Nvidia has changed the RT cores in Blackwell, but there could be other reasons we don't see the benefits: there could be a bottleneck elsewhere, or the way current games are coded doesn't take advantage of them. Still disappointing. AMD got a big leap perhaps because they were further behind and still had bigger wins to conquer that were already conquered in Nvidia hardware (after all, Nvidia is still faster at RT).

My hope is that it's just a one-off for Nvidia, that it was rushed to meet AI demand and the next couple of generations will be better, or that they will come out with something to mitigate process-node advances not being as robust as in the past. My worry is that the AI bubble will need to pop first before they even care to invest in making better gaming GPUs.

Now that I am retired, I am likely in that 700-1k range for GPU upgrades, and honestly Blackwell worries me. Even though AMD made some good strides, it's not hard when you are so far behind at a specific GPU task; kind of like how Zen 1 murdered Bulldozer but was only at Haswell levels of IPC, amazing, but at the time Intel was still ahead. Hopefully they can carry the same momentum, and by the time I need my next upgrade they have a killer 700-900 USD GPU.

I guess only time will tell, but even really old games like this still struggle on cutting-edge hardware unless you really compromise the image quality. I get that it is more of a tech demo and likely not as optimized as a ground-up product would be, but just imagine how something like Hellblade 2 would perform with even this level of RT, lol.
 
Yeah, it's RT, but it's also the stagnation / slowing of progress in actually making chips smaller and faster in every metric; those days are long gone. AMD definitely did great by disproportionately improving RT in RDNA 4. Nvidia has changed the RT cores in Blackwell, but there could be other reasons we don't see the benefits: there could be a bottleneck elsewhere, or the way current games are coded doesn't take advantage of them. Still disappointing. AMD got a big leap perhaps because they were further behind and still had bigger wins to conquer that were already conquered in Nvidia hardware (after all, Nvidia is still faster at RT).
I know it sounds mad, but I don't think Nvidia has done much with the RT cores since Turing. If the 2080 lost let's say 48% performance in X game with RT on vs off, then the 3080 did, too, and the 4080 and the 5080. The only reason we see more FPS is because we see more FPS in general due to having more cores running at higher frequencies, and not because those cores have improved. But the relation between raster and RT performance has stayed the same. This is not the "RT is the future" promise that we've been fed for the last 6 years.
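To put the ratio argument in concrete numbers, here's the kind of back-of-the-envelope math I mean. The FPS figures below are invented just to illustrate the point, not benchmark results:

```python
# Hypothetical FPS numbers, invented to illustrate the "relative RT cost"
# argument: each card is faster overall, but the share of performance lost
# by turning RT on barely moves from generation to generation.
cards = {              # card: (raster-only FPS, RT-on FPS) - made up
    "2080": (100, 52),
    "3080": (140, 73),
    "4080": (190, 99),
    "5080": (210, 109),
}

for name, (raster_fps, rt_fps) in cards.items():
    rt_cost = (1.0 - rt_fps / raster_fps) * 100.0
    print(f"{name}: {raster_fps} -> {rt_fps} FPS, RT costs {rt_cost:.0f}% of the frame rate")
```

Every card in that made-up table loses roughly the same 48%, which is exactly the pattern I'm describing: the absolute FPS climbs, but the RT tax doesn't shrink.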

Sure, AMD made a huge leap with RDNA 4 on the RT front because they had more ground to cover, I give you that.

That's not a fair comparison. AMD's implementation of RT is different from Nvidia's.
And? I don't care about the implementation, I care about what I see on screen.

That's because of the "Moore's Law" thing. As we edge closer to the atomic scale, advances in IC scaling diminish. We can't just drop the lithography node a notch and triple our compute power anymore. Those days are behind us.
Exactly.

My guess is that coders will find ways to do RT and related physics in games in a very optimized way. But it's going to take more time. Personally, I see someone coming up with a task-oriented ASIC specifically designed for RT/physics that will offload or perhaps supplement GPUs to good effect.
That wouldn't be bad. Just like dedicated AI chips wouldn't be bad for those who only need that, while leaving gaming GPUs alone.
 
I know it sounds mad, but I don't think Nvidia has done much with the RT cores since Turing. If the 2080 lost let's say 48% performance in X game with RT on vs off, then the 3080 did, too, and the 4080 and the 5080. The only reason we see more FPS is because we see more FPS in general due to having more cores running at higher frequencies, and not because those cores have improved. But the relation between raster and RT performance has stayed the same. This is not the "RT is the future" promise that we've been fed for the last 6 years.
I suppose. I believe them when they say they made XYZ improvements to the RT core (double ray-triangle intersection speed? To lie about that would be a serious form of fraud), but that facet must not be what current games with RT are actually benefitting from, or there is a bottleneck on the RT load itself elsewhere in the GPU, or another variable I haven't considered, because I don't know what I don't know. Looking at the architecture breakdown of Blackwell, it's fairly typical of Nvidia from the last few generations; it's forward-looking, but it will take time for those features to actually get leveraged. It will be interesting to see whether the delta (or lack thereof) between it and Ada, and even further back, continues to widen as the years go on, or stays exactly the same. I'd certainly be interested in an expert/engineer's detailed take on Ada->Blackwell and RDNA 3->4 and the RT improvements, and why they are or aren't realised, as the best I/we can do is speculate.
Sure, AMD made a huge leap with RDNA 4 on the RT front because they had more ground to cover, I give you that.
However they did it, bloody well done to them.
 
I suppose. I believe them when they say they made XYZ improvements to the RT core (double ray-triangle intersection speed? To lie about that would be a serious form of fraud), but that facet must not be what current games with RT are actually benefitting from, or there is a bottleneck on the RT load itself elsewhere in the GPU, or another variable I haven't considered, because I don't know what I don't know. Looking at the architecture breakdown of Blackwell, it's fairly typical of Nvidia from the last few generations; it's forward-looking, but it will take time for those features to actually get leveraged. It will be interesting to see whether the delta (or lack thereof) between it and Ada, and even further back, continues to widen as the years go on, or stays exactly the same. I'd certainly be interested in an expert/engineer's detailed take on Ada->Blackwell and RDNA 3->4 and the RT improvements, and why they are or aren't realised, as the best I/we can do is speculate.

However they did it, bloody well done to them.

It's also probably best to isolate the RT cores and compare:

2080 Ti to 3080 Ti only saw an increase in RT cores of like 18%, but at least on my end I was seeing a 50% uplift in RT-heavy games.

3080 to 4080 only saw an 11% gain in RT cores but a much larger gain in RT performance.

While the overall performance of the GPU matters the most, part of it has been Nvidia being more reserved about adding more RT cores, for whatever reason.
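For reference, here's where I'm getting those percentages from, using the spec-sheet RT core counts as I understand them (one RT core per SM on these parts); treat them as ballpark figures, not gospel:

```python
# RT core counts as listed on the spec sheets (one RT core per SM on these
# parts, as far as I know) - just showing where the ~18% and ~11% come from.
rt_cores = {"2080 Ti": 68, "3080 Ti": 80, "3080": 68, "4080": 76}

def core_gain(old, new):
    """Percentage increase in RT core count going from one card to another."""
    return (rt_cores[new] / rt_cores[old] - 1.0) * 100.0

print(f"2080 Ti -> 3080 Ti: +{core_gain('2080 Ti', '3080 Ti'):.1f}% RT cores")  # ~17.6%
print(f"3080    -> 4080:    +{core_gain('3080', '4080'):.1f}% RT cores")        # ~11.8%
```

Which is why the uplift in RT-heavy games has to be coming from somewhere other than the core count alone.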
 
I suppose. I believe them when they say they made XYZ improvements to the RT core (double ray-triangle intersection speed? To lie about that would be a serious form of fraud), but that facet must not be what current games with RT are actually benefitting from, or there is a bottleneck on the RT load itself elsewhere in the GPU, or another variable I haven't considered, because I don't know what I don't know. Looking at the architecture breakdown of Blackwell, it's fairly typical of Nvidia from the last few generations; it's forward-looking, but it will take time for those features to actually get leveraged. It will be interesting to see whether the delta (or lack thereof) between it and Ada, and even further back, continues to widen as the years go on, or stays exactly the same. I'd certainly be interested in an expert/engineer's detailed take on Ada->Blackwell and RDNA 3->4 and the RT improvements, and why they are or aren't realised, as the best I/we can do is speculate.
I believe Nvidia when they say they've done XYZ to the RT cores, but where are the benefits? It's like AMD with the chiplet design in RDNA 3. It's highly advanced tech, but did it make the cards faster? Or cheaper? No. It didn't even go into the margins, as we all see they lost some massive cash on Radeon last year. A technology is only as good as the implementation of it. AMD has proved that with FX, then chiplets on RDNA 3, and now Nvidia is proving it with Blackwell, it seems.

I also don't think Blackwell is "forward thinking" in any way. Looking at the diagrams, it's still the same architecture as Turing, only with the INT/FP roles of the shader cores mixed up to be more versatile - which again, shows no gains in real life.

It's also probably best to isolate the RT cores and compare:

2080 Ti to 3080 Ti only saw an increase in RT cores of like 18%, but at least on my end I was seeing a 50% uplift in RT-heavy games.

3080 to 4080 only saw an 11% gain in RT cores but a much larger gain in RT performance.

While the overall performance of the GPU matters the most, part of it has been Nvidia being more reserved about adding more RT cores, for whatever reason.
What gain in RT performance? Do you mean gain in performance in general? RT performance alone hasn't improved much since Turing (see my explanation above).
 
I believe Nvidia when they say they've done XYZ to the RT cores, but where are the benefits? It's like AMD with the chiplet design in RDNA 3. It's highly advanced tech, but did it make the cards faster? Or cheaper? No. It didn't even go into the margins, as we all see they lost some massive cash on Radeon last year. A technology is only as good as the implementation of it. AMD has proved that with FX, then chiplets on RDNA 3, and now Nvidia is proving it with Blackwell, it seems.

I also don't think Blackwell is "forward thinking" in any way. Looking at the diagrams, it's still the same architecture as Turing, only with the INT/FP roles of the shader cores mixed up to be more versatile - which again, shows no gains in real life.

I honestly think it has more to do with them not wanting to stick a bunch of RT cores on the GPU, probably due to cost and wanting to keep margins up, than with how much they are or are not improved.

My theory is they improve them so that they can keep performance similar without having to add a bunch more.

Totally unfounded and not based in reality, but it's what I think, lol.
 
I honestly think it has more to do with them not wanting to stick a bunch of RT cores on the GPU, probably due to cost and wanting to keep margins up, than with how much they are or are not improved.

My theory is they improve them so that they can keep performance similar without having to add a bunch more.
It definitely comes down to costs vs benefits. Since AI is their cash cow, and their gaming GPUs deserve a "good enough I guess" award, why improve? Just like Intel during the quad core era.
 
It's also probably best to isolate the RT cores and compare:

2080 Ti to 3080 Ti only saw an increase in RT cores of like 18%, but at least on my end I was seeing a 50% uplift in RT-heavy games.

3080 to 4080 only saw an 11% gain in RT cores but a much larger gain in RT performance.

While the overall performance of the GPU matters the most, part of it has been Nvidia being more reserved about adding more RT cores, for whatever reason.

It's not 18%. It's less than 10%. If you compare with equal rasterization, shader count and ROPs, Nvidia has only gained at most a 6% increase in RT efficiency every generation. The 3080 Ti has much higher rasterization than a 2080 Ti. Every one of the comparisons you made here also increased rasterization; it's very hard to compare Nvidia's own generations to each other when Nvidia keeps changing the rasterization parts while only claiming their RT is better.
 