Can you really blame anyone for turning to AI, when that garbage at least sounds like it's trying to help you?
A comfortable lie is still a lie. Everything that comes out of an LLM is a lie until proven otherwise. (“Lie” is a bit misleading, though, as they don’t have agency or intent: they’re a variation of your phone keyboard’s next-word text prediction algorithm. With added flattery and confidence.)
There’s a reason experienced people stress so hard that you shouldn’t use these tools as a shortcut past building your own knowledge. This is the outcome.
Another way to look at it is “trust, but verify”. If you’re intent on relying on probabilistic text for answers instead of bothering to learn, then at least take what it gives you and verify what it does before running it. You could become an effective sloperator with just that much common sense.
But if you’re going to give an LLM root/admin access to a production environment, then expect to be laughed at, because you had plenty of opportunities to not destroy something and actively chose not to use them.



It’s true that people on the internet can be dicks. Even more so technical people (and that’s not limited to online: those online dicks are usually IRL dicks too when talking technical stuff). But that’s a hurdle, not a barrier.
There’s little anyone here can do to help OP, as they (if I understand correctly) have already irreparably nuked their hardware. The current problem is significantly different from, and harder than, the original one, and asking randos in this community is unlikely to yield results. Hence the focus on variations of “Now… what did we learn? 🤨”
I’m not trying to help, as I’m not familiar enough with either SAS or the current problem. The same is likely true of others here.