Hacker News

As long as AIs are incapable of recognizing when they are plagiarizing, as humans are generally capable of, the double standard seems entirely warranted.



> as humans are generally capable of

Citation needed. I've never plagiarised on purpose, sure, but I've caught myself at least several dozen times well after the act.


Well, the fact that you caught yourself already makes a difference. It would already change the equation if Copilot sent an email saying “Hey, that snippet xyz I suggested yesterday is actually plagiarized from repo abc. I’m truly sorry about that, I’ll do my best to be more careful in the future.”

As for “citation needed”: humans are held liable for plagiarism, so it is generally assumed that they are able to tell when they are doing it and hence can be held responsible for it.

Responsibility, or liability, is really the crux here. As long as AIs can’t be held liable for their actions (output) the way humans or legal entities can, the AI operators must be held accountable instead, and it’s arguably their responsibility to take all practical measures to prevent their AIs from plagiarizing, or from otherwise violating license terms.



