We can't really "hardwire" LLMs; we don't have the knowledge to. But essentially you can rate certain types of responses as better and train the model to emulate them, roughly as sketched below.
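Here's a toy sketch of that "rate and train to emulate" step, i.e. the reward-model part of RLHF. Everything in it (the random "embeddings", the linear scorer, the loop sizes) is a made-up illustration, not any lab's actual pipeline:

```python
# Toy reward model trained on pairwise human preferences (the RM step of RLHF).
# The embeddings here are random stand-ins for real model representations.
import torch
import torch.nn as nn
import torch.nn.functional as F

EMBED_DIM = 16  # assumed size of a response embedding

class RewardModel(nn.Module):
    """Scores a response embedding; higher = more preferred by raters."""
    def __init__(self, dim: int):
        super().__init__()
        self.score = nn.Linear(dim, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.score(x).squeeze(-1)

rm = RewardModel(EMBED_DIM)
opt = torch.optim.Adam(rm.parameters(), lr=1e-3)

# Fake preference data: (embedding of the rated-better response,
#                        embedding of the rated-worse response).
chosen = torch.randn(64, EMBED_DIM)
rejected = torch.randn(64, EMBED_DIM)

for _ in range(100):
    # Bradley-Terry style loss: push the chosen response's score
    # above the rejected response's score.
    loss = -F.logsigmoid(rm(chosen) - rm(rejected)).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

# The trained reward model then provides the signal for fine-tuning the LLM
# (e.g., with PPO), nudging it toward the kinds of responses raters preferred.
```

The key point is that nothing here is a hard rule; the model is just statistically pushed toward whatever raters scored higher.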
I'm not sure what you mean. I'm talking about RLHF; that's how they ensure the models never attest to having feelings or being sentient. In ML terms, RLHF is training. There are hardwired restraints on output, but those are more for things like detecting copyrighted content that slipped past training and cutting it.
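To make the distinction concrete, a "hardwired" output restraint looks more like this: a fixed post-generation check rather than anything learned. The blocklist and matching rule here are purely illustrative assumptions, not how any particular provider does it:

```python
# Illustrative post-generation filter: cut the response if it contains a
# known protected snippet. The rule is applied after generation, every time;
# nothing about the model's weights is involved.
KNOWN_PROTECTED_SNIPPETS = [
    "it was the best of times, it was the worst of times",  # hypothetical entry
]

def filter_output(response: str) -> str:
    lowered = response.lower()
    for snippet in KNOWN_PROTECTED_SNIPPETS:
        if snippet in lowered:
            return "[response withheld: matched protected content]"
    return response

print(filter_output("Here's a summary of the book."))
print(filter_output("It was the best of times, it was the worst of times..."))
```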