I don't get your argument - what does mistaken personification have to do with this? Whether you see it as a person or a machine, trusting the output as a direct indication of the internal state is simply not a sound investigative method for a non-trivial situation.


>what does mistaken personification have to do with this

If you personify the AI, you may think it's actually trying to argue something, rather than just attempting to maximize a reward function.



