Their expectations are high, but it's not so much orthogonal as more basic. Our brains work on add/multiply/activation; this is well known. But the composition of neural connection strengths in our brain, the thing that makes us us, is definitely not trained on any sort of final loss. Or at least not completely.
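Just to make "add/multiply/activation" concrete, here's a minimal sketch of a single artificial neuron (names and weights are purely illustrative, not from any particular framework):

```python
import numpy as np

def neuron(inputs: np.ndarray, weights: np.ndarray, bias: float) -> float:
    """A single artificial neuron: multiply, add, then apply a nonlinearity."""
    pre_activation = np.dot(inputs, weights) + bias  # multiply and add
    return max(0.0, pre_activation)                  # ReLU activation

# Example with three inputs and arbitrary illustrative weights:
x = np.array([0.5, -1.0, 2.0])
w = np.array([0.1, 0.4, -0.2])
print(neuron(x, w, bias=0.05))
```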
I'm not sure that AI has been successful recently because of its similarities to the human brain. The project of making human-like AI (in the sense of models that function similarly to the brain) has had a lot less empirical success than the project of trying to minimize loss on a dataset, whatever that takes. Like, look what happened to Hebbian learning, as you mentioned in your other comment: it's completely absent from models that are seriously trying to beat SOTA on benchmarks.
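For contrast, here's a rough sketch of the classic Hebbian rule ("neurons that fire together wire together") next to a loss-driven gradient update; the variable names are illustrative. The point is that the Hebbian update uses only local co-activity, with no loss function anywhere:

```python
import numpy as np

def hebbian_update(w, x, y, lr=0.01):
    # Purely local: strengthen weights where input and output are co-active.
    # No loss function, no gradient, no backpropagated error signal.
    return w + lr * np.outer(y, x)

def gradient_update(w, x, y_target, lr=0.01):
    # Loss-driven: adjust weights to reduce squared error against a target.
    y_pred = w @ x
    error = y_pred - y_target           # gradient of 0.5 * ||error||^2
    return w - lr * np.outer(error, x)
```

Essentially all SOTA-chasing models use the second kind of update, not the first.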
Like, it really just seems that LLMs are a very good way of doing statistics rather than the closest model we have of the brain/mind, even if there are some connections we can draw post hoc between transformers and the human brain.