I haven't tried it out yet, but my Asus ROG Strix Z790-H board has a BIOS option that lets you toggle the E-cores on and off on demand with (I think) the Scroll Lock key, which looks interesting. I'll have to give it a test sometime. It's a pretty slick option. They also have a way to assign what the reset switch does, so they could probably map that option to the reset switch in a BIOS update, kind of like a turbo boost button. Yeah, everything old is new again: the turbo button and those pesky Atom cores.
Tested it, and it's a COMPLETE MESS!!! The game stutters and lags like no tomorrow, sometimes becoming unplayable!
Seriously now, do those guys never properly test newly released features?? Do they even have a proper testing department??
I do not understand why so much dev time is still poured into Cyberclunk
It won't get better. Game's done. I guess it's the culmination of millions spent on marketing and attracting partnerships, and having to deliver on it somehow. Man, what a shitshow. What a waste of time. No content additions, just 'polish'. Patch after patch, and then you play the game and wonder what's changed. The vast majority of systems are still clearly not up to snuff, and the base game is what it has been since 1.0.
Have you played the game since Phantom Liberty launched? A few things have been overhauled: they cleaned up the talent tree, removed useless skills like underwater combat, made your character much more mobile (you can slide, dash, vault, get in/out of your car while it's moving and shoot while doing so), and the cyberware system has been overhauled and rebalanced as well. Gangs now get into non-scripted car chases with you, you can spend time with your romance outside of quests, the metro system is now functional, bars are more interactive, there are mini-games on arcade cabinets, and the racing quest is also a repeatable mini-game. Smasher has been buffed, and PL also added a few encounters that aren't clearable with brute force alone (spoiler: one encounter is closer to the gameplay of Alien: Isolation). Sure, the base game is still fairly linear, but reworking and expanding the base storyline on top of all that would probably have taken too much dev time.
It's still not BioShock meets GTA, but 2.0 wasn't just bug polish; they added, cleaned up, and reworked quite a few things. 2.0 gives a few hints that the gameplay team is getting better at first-person games. CDPR was never exactly known for the mastery of, say, Capcom or PlatinumGames when it comes to combat design (TW3 was better than Skyrim for sure, but the combat alone isn't that memorable compared to other action RPGs; still, they've become much better at third-person games when you compare it to TW1).
If they've learned their lesson from the development hell of CP2077, they should be able to add more depth and weight to what they started with 2.0. (And they should also ignore the people who complained about TW3 being too long and too overwhelming.)
Yeah, I've never been interested in Cyberpunk... pretty much for that reason. No matter how much they polish around the edges, the foundation of the game is still broken. But it has found something of a niche as an Nvidia tech demo. And since people keep buying it, I guess they keep developing it.
I imagine from a business perspective it makes sense not to abandon the game, especially for a company like CD Projekt Red, which had such a good reputation before the release. They're probably trying to claw some of it back.
Yes. These patches are good, but surely far too late; this should have been the release state. The game is indeed a tech demo, kept alive only by life support, most likely 'incentives' from the 'green' GPU manufacturer. Otherwise CDPR would abandon this pile of shame ASAP and wouldn't even bother to fix the game, since their new projects aren't even on their in-house engine, but on UE instead.
The same thing happened back in the Crysis days, when Intel and Nvidia were showing off their most expensive hardware on it and dragged its corpse along for as long as possible with Crysis Warhead.
But even though Crysis development was cut short abruptly (which supports the tech demo idea even more), with essential bugs never fixed, it got many mods, even total conversions (MWLL). There are mods that fix the buggy netcode, the graphics rendering issues, the performance, etc., so the game is still being played. Heck, there were even standalone games built on CryEngine 2 (MWO), because Crysis laid the foundation for that to happen. It still felt like a solid game.
I strongly doubt Cyberpunk will get the same fate, replayability, and treatment outside of CDPR themselves (it's basically a locked ecosystem), or any implementation beyond Cyberpunk itself and mods solely within its context. This again proves that CP2077 is just another tech demo that companies profit from by steadily adding 'new' features for 'particular' GPU series and carrying it along with new GPU/CPU releases, just to trick people into looking at the same old game over and over to find what's changed.
My biggest gripe with this update is that they failed to fix Sonic Shock! Losing the Blood Pump every time you eat is just annoying, but you can work around it. Sonic Shock, however, is pretty much essential for doing netrunner stuff against large groups, something that sets this game apart. So this certainly won't be the last update. Waiting for the next one impatiently.
Their efforts are commendable; I wish every game studio devoted as much effort to what they made. It's definitely a good look when you consider future sales.
True, but they're not doing this because they are Samaritans. Not long ago Phantom Liberty came out, and it sold well! So supporting that for a while should not be that odd.
Plus, they are working on the new Orion project, for which they want to keep a healthy user base.
Thank you! I swear all I ever see nowadays are AMD fanboys bashing Intel and trying to make a point of how their more efficient CPU is better than Intel's, all because it consumes less power while still maintaining a competitive edge. From a gamer/hardware/overclocking enthusiast's perspective, all I care about is pure performance. The only area where efficiency really matters is with laptops, and that is purely for the increased battery life and reduced heat in such a small, confined space that lacks proper ventilation. I would much rather have a better performing CPU that sucked down some power than a lesser one that tries to brag about how it's more efficient. Most importantly, I am big into overclocking, and Intel is by far the best choice when it comes to purchasing an unlocked CPU that will allow you to tweak and overclock to your heart's content.

Also, it seems people don't realize that competition between two large CPU manufacturers is a good thing for the consumer, because it inspires and drives an ever-improving CPU product. This is needed to help prevent the stagnation of technology in one particular company. If a CPU manufacturer held a monopoly on the CPU market due to having no real competition, then said company would eventually stop striving to innovate and improve upon itself over time, or would do so at a much slower rate. The most important thing to take from all this is that no one should be loyal to one specific brand of CPU. You should always go with the CPU model/brand that performs the best for your needs at that given time and make sure it provides the features that you desire.

Lastly, it's important that you also get a CPU that is low maintenance and just works the majority of the time. What you don't want is a finicky CPU that has immature drivers and inconsistent management of which processing cores to use or park for specific content. No end user should have to resort to a program like Process Lasso just to get their CPU to use the right cores and/or CCDs for a specific task/app/game. That's just too much effort and stress for an average CPU end user to deal with, especially when they could just purchase the other brand (Intel) and not have to screw around at all for it to work like it should out of the box. The problem is that AMD does not advertise that it's going to take an end user extra effort and specific instructional steps just to get their new gaming-oriented CPU to work like it should in the first place. I'm referring to the AMD 7950X3D here; I owned one briefly before I ended up giving it and the X670E motherboard to a good friend just so I could switch back to a low-hassle Intel CPU with more mature drivers and a hardware-based core allocation technology, Thread Director. AMD uses immature, software-based core/CCD allocation/direction. I apologize ahead of time for this lengthy comment/rant, but this blind AMD fanboyism and bandwagon loyalty has got to stop in order for there to be a healthy, competitive, and more honest CPU marketplace for the consumer.
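To illustrate the kind of babysitting I'm talking about, here's roughly what a tool like Process Lasso automates, sketched in a few lines of Python with psutil. The process name and the assumption that the V-cache CCD enumerates as logical CPUs 0-15 are mine, so verify your own topology first:

```python
# Roughly what Process Lasso automates: pin a game to the V-cache CCD of a
# 7950X3D so its threads never land on the frequency CCD.
# ASSUMPTIONS: "game.exe" is a placeholder name, and the V-cache cores
# enumerate as logical CPUs 0-15 (8 cores + SMT) -- check your own topology.
import psutil

GAME_EXE = "game.exe"          # hypothetical process name
VCACHE_CPUS = list(range(16))  # logical CPUs 0-15 = assumed V-cache CCD

for proc in psutil.process_iter(["name"]):
    if proc.info["name"] == GAME_EXE:
        proc.cpu_affinity(VCACHE_CPUS)  # restrict scheduling to those CPUs
        print(f"pinned PID {proc.pid} to CPUs {VCACHE_CPUS}")
```

That's the sort of thing no gamer should have to script, or run a third-party tool for, just to get advertised performance.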
Friendly reminder that AMD doesn't sell an AM5 CPU without an iGPU, and the majority of the die is in fact made up of P-cores and the uncore. I can understand the hate for the E-cores, which are more beneficial to productivity, but I've never understood the hate around the iGPU on desktop CPUs. I've only heard hearsay about how it keeps the chip from reaching the highest possible clocks, but I haven't seen proof that KF chips are faster than K chips. I never keep my old GPUs around, so I appreciate the peace of mind of having the iGPU in case the GPU needs to be repaired or replaced. I think that's one of the reasons AMD decided to add one to all their mainstream CPUs.
Intel's upcoming Arrow Lake-S high-end gaming-oriented CPUs will have a much larger L2 cache than the current 13th/14th gen Raptor Lake CPUs, and due to the new process node (20A) and other technologies, both the P-cores and E-cores will be able to operate at higher frequencies with more efficiency and less voltage droop.

Despite what most people think about the hybrid core architecture in recent Intel CPUs, there are some games that actually perform better with the E-cores in use than without them. One such game is Battlefield 2042, which fully utilizes all the P-cores and some of the E-cores. The faster the E-core frequency, the higher and more consistent the 1% lows often are, which equates to a more stable and fluid gaming experience with fewer dips in the framerate. I have seen such a correlation when overclocking my 13900KS. The default E-core frequency on the 13900KS is 4.3 GHz, but I have OCed mine to 4.7 GHz, and in particular games I have noticed a moderate boost not only to the average framerate but especially to the 1% lows, which came up higher than before. The stock P-cores are set to 6.0 GHz for 1-2 active P-cores and 5.6 GHz for 3-8 active P-cores. My P-cores are all synced to 5.9 GHz up to a 60°C threshold and then run at 5.8 GHz past that temperature. In gaming I will rarely ever exceed 60°C unless I'm precompiling shaders on a brand new game installation. Running my 13900KS at these frequencies has netted me roughly a 7% increase in single-threaded performance over a stock 13900K and a 9% increase in multithreaded performance, all according to the latest CPU-Z built-in benchmark.

Also, the upcoming Arrow Lake-S CPUs are reported to officially support a DDR5 frequency of 6400 MT/s, whereas 13th/14th gen Raptor Lake officially supported only 5600 MT/s. Now, we all know that most 13900K/14900K owners could run at least 7200 MT/s on a 4-DIMM motherboard and 8000+ MT/s on a 2-DIMM board with the right CPU that had a good memory controller. With Arrow Lake-S's apparently stronger memory controller, we should easily start seeing 2-DIMM motherboard users getting up to 9000 MT/s DDR5 speeds without needing a golden memory controller.
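For a rough sanity check on those numbers, here's the clock math in a few lines of Python, assuming (optimistically) that performance scales linearly with frequency:

```python
# Clock math for the overclocks quoted above, assuming (optimistically)
# that performance scales linearly with frequency.
ecore_stock, ecore_oc = 4.3, 4.7  # GHz, 13900KS E-cores
pcore_stock, pcore_oc = 5.6, 5.9  # GHz, 3-8 active P-cores

print(f"E-core OC: +{(ecore_oc / ecore_stock - 1) * 100:.1f}%")  # -> +9.3%
print(f"P-core OC: +{(pcore_oc / pcore_stock - 1) * 100:.1f}%")  # -> +5.4%
```

Real gains won't track clocks one-to-one, since cache and memory don't speed up with the cores, which is why the measured CPU-Z uplifts land near, rather than exactly on, these figures.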
I tried testing the Scroll Lock legacy gaming function on my 13900KS, and it does not behave the same performance-wise as manually turning off the E-cores for a specific game. Turning off the E-cores manually yielded a much smoother experience in Star Citizen than enabling the Scroll Lock legacy gaming feature, which tells the CPU to park the E-cores rather than turning them completely off. In theory the Scroll Lock feature should give the same results as disabling the E-cores in the BIOS, but in practice I've seen different results.
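If you want to check which of the two is actually happening, a quick Python/psutil look should do it, since parked cores are still enumerated by the OS while BIOS-disabled ones disappear entirely (the index layout assumes the usual Intel enumeration with the E-cores listed after the hyperthreaded P-cores):

```python
# Quick check of what the Scroll Lock toggle actually does: parked cores are
# still enumerated by the OS, BIOS-disabled ones are not.
# ASSUMPTION: usual Intel layout on a 13900KS -- 16 P-core logical CPUs
# first, then the 16 E-core logical CPUs.
import psutil

print(psutil.cpu_count(logical=True), "logical CPUs visible")
# -> 32 with E-cores parked or active, 16 with them disabled in the BIOS

# With a game running, parked E-cores should sit near 0% load here:
for i, load in enumerate(psutil.cpu_percent(interval=1.0, percpu=True)):
    print(f"CPU {i:2d}: {load:5.1f}%")
```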
I just installed the mod from LukeFZ, and not only does FSR3 with FG work on my 7900 XTX, it also fixed the horrible ghosting, and it even finally fixed HDR when using FSR. I now get 80-100 fps with FG at 4K, FSR set to Quality and RT set to Ultra, and the game feels smooth. A minor issue is glitching cars when driving fast, but that's nothing compared to the ghosting in the FSR 2.1 that CDPR implemented.
It's a shame that one guy could do it, but a whole company still can't.
BTW it works wonderfully in Witcher 3 with RT as well.
It feels like a tech showcase for Nvidia at this point: wait for the next version of ray tracing or DLSS, and sure enough, within a short time it will be patched into the game.
Did it, too. Apparently I did something wrong, because if I picked "DLSS" it wouldn't upscale; the framerate and image quality stayed the same as native. FSR stayed as horrible as it previously was, XeSS got broken, and frame generation was a complete glitch show.
For me the latest version (0.10) didn't work: either FG didn't work, or the game just crashed. I downloaded 0.9 from his website and just copied the files. Works like a charm. I select DLSS in the settings, FG on. Only sometimes, if I change from Quality to Performance, the game crashes, but afterwards it works.
Default values from 0.9; I didn't change anything there. I'm not at my computer, but fake GPU and all the other values were false. In 0.10 that file looks different; he made some changes there. However, with 0.10 I tried changing everything there and it didn't work: either FG didn't start even though it was enabled, or the game just crashed when selecting DLSS, and then after every restart it would crash at the logos.
The only area where efficiency really matters is with laptops, and that is purely for the increased battery life and reduced heat in such a small, confined space that lacks proper ventilation.
Better by how much? The 14900K is within 5% of the 7950X3D in performance (temperatures too, but only in gaming), but consumes two or three times as much power while doing so (even in gaming). That <5% may be worth it to you, but for most people I reckon it is not.
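Just to put that in numbers, a quick sketch (illustrative figures only; the 2.5x is an assumed midpoint of that 2-3x power range):

```python
# The efficiency gap in numbers -- illustrative figures only, using the ~5%
# performance delta and the 2-3x power draw mentioned above.
perf_14900k, perf_7950x3d = 1.05, 1.00  # relative gaming performance
power_ratio = 2.5                       # assumed 14900K-to-7950X3D power draw

print(f"Relative perf/W: {perf_14900k / power_ratio:.2f} vs {perf_7950x3d:.2f}")
# -> about 0.42 vs 1.00: less than half the efficiency for ~5% more performance
```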
Most importantly, I am big into overclocking, and Intel is by far the best choice when it comes to purchasing an unlocked CPU that will allow you to tweak and overclock to your heart's content.
As if that is even advantageous these days. Intel has already pushed things so far that there is no OC headroom left, and modern boost algorithms mean that even if you OC manually you can never match those boost frequencies. That means in gaming you will actually lose performance. Not to mention that when stressed, Intel quickly reaches temperature limits that further constrain overclocking.
From TPU's review:
Overclocking the Core i9-14900K is easy, thanks to its unlocked multiplier. The biggest problem is the heat though, even at stock you'll be reaching 100°C and higher. Overclocking the 14900K means setting the thermal limit to 115°C up from 100°C, and then figuring out what's the highest voltage you can give the CPU without hitting throttling at 115°C, depending on your cooling solution. Switching from air to our Arctic AIO helped with controlling the heat, but it wasn't a huge difference. Our maximum all-core OC is 5.5 GHz on the P-Cores, plus 4.4 GHz on the E-Cores, 100% stable. This still isn't enough to beat the stock configuration in lighter applications and most games, because here the CPU will boost two cores up to 6.0 GHz.
Lastly, it's important that you also get a CPU that is low maintenance and just works the majority of the time. What you don't want is a finicky CPU that has immature drivers and inconsistent management of which processing cores to use or park for specific content. No end user should have to resort to a program like Process Lasso just to get their CPU to use the right cores and/or CCDs for a specific task/app/game.
First you say you like overclocking, and then low maintenance. These two don't go together. The best low-maintenance CPU is one that does not require overclocking and does not care about how fast your RAM is. And that is the 7800X3D, by far.
That's just too much effort and stress for an average CPU end user to deal with, especially when they could just purchase the other brand (Intel) and not have to screw around at all for it to work like it should out of the box.
And later below your post you describe how you are messing with turning off E-cores to get the best gaming performance, thereby contradicting your earlier statements. In the same way, a user could buy a 7950X3D and turn off the CCD without the V-Cache. For gaming, the 7800X3D requires none of that and is cheaper too.
The problem is that AMD does not advertise that it's going to take an end user extra effort and specific instructional steps just to get their new gaming-oriented CPU to work like it should in the first place. I'm referring to the AMD 7950X3D here; I owned one briefly before I ended up giving it and the X670E motherboard to a good friend just so I could switch back to a low-hassle Intel CPU with more mature drivers and a hardware-based core allocation technology, Thread Director. AMD uses immature, software-based core/CCD allocation/direction. I apologize ahead of time for this lengthy comment/rant, but this blind AMD fanboyism and bandwagon loyalty has got to stop in order for there to be a healthy, competitive, and more honest CPU marketplace for the consumer.
Yes, and Intel is low effort, with the overclocking and the turning off of E-cores?
For gaming the 7800X3D is best by far: it requires no overclocking, no thread directors, and does not care about RAM speeds. Not to mention the incredible efficiency. It is the best option for most users.
Of course. As long as someone only writes their side, it's fine. As soon as there is a rebuttal, it's suddenly a VS thread.
I agree with UTVOL06 that we need both AMD and Intel competing. That is how the consumer wins. It's bad if one gains utter dominance.