Maybe they could be human managed, but with a computer-monitored override for bad decisions or mistakes, i.e. if something needed to be done but the humans missed it, the computer would override and make the changes.
In the end, safety is gained by good, tested design and experienced crews. You can't design for safety perfectly until you know 100% of the science surrounding it. We don't know 100%. Close, but not 100%.
The meltdown at Chernobyl is the perfect example of why any system, no matter how we design it, is based on current knowledge, and we simply don't know what we don't know. So you can't model it. It's why our current climate models need constant adaptation, and even science itself is just the current state of affairs based on everything we know up to this point.
Chernobyl didn't fail just because of human action; it failed because of a flawed personal interpretation of the design. If a different set of brains had been at the helm that night, things could have happened differently. Another reason it failed was tunnel vision in an autocratic system. But at its core there was still a design mistake. Imagine if a computer had been the final word in that situation: it would simply have crashed into the same iceberg, without the option for human intervention.
Did you see the movie 'Don't Look Up'? It describes a similar problem with science, safety, and control, and with trying to leave it all up to computers. Worth a look!