
I really don't understand how there can be a problem with how Copilot works. Any human works in just the same way. A human is trained on lots and lots of copyrighted material. Still, what a human produces in the end is not automatically a derivative work of everything that human has seen in their life before.

So, why should an AI be treated differently here? I don't understand the argument for this.

I actually see real danger in this line of thinking, namely that there would be different copyright rules for an AI than for a human intelligence. Once you allow such an arbitrary distinction, AI will get restricted more and more, far more than humans are, and that will just arbitrarily limit the usefulness of AI and effectively be a net negative for humanity as a whole.

I think we must really fight against such an undertaking, and better educate people on how Copilot actually works, so that no such misunderstanding arises.




I think there's a parallel in surveillance systems. For example, it's perfectly reasonable for a police officer conducting an investigation to follow a suspect as they drive around town. After all, it's happening in public and it's not illegal to watch what someone does in public (the caveat being when it rises to the level of stalking).

However, is it reasonable to write an AI system that monitors the time and location of all license plates seen around town, puts them into a database, and then that same officer can simply put in the suspect's license plate instead of actually following them around? Maybe, maybe not, that's not my point here. But the creation of that functionality can easily lead to its abuse.
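(Purely as a toy illustration of that scale difference, here is a minimal sketch, in Python, of what such a plate-lookup system amounts to; every name in it is made up for this comment and does not describe any real ALPR product.)

    # Toy sketch with hypothetical names: every camera logs each sighting,
    # and a single query later replaces hours of manually tailing a suspect.
    from collections import defaultdict
    from datetime import datetime

    sightings = defaultdict(list)  # plate -> list of (timestamp, location)

    def record_sighting(plate, location, when=None):
        # Called automatically for every passing car, city-wide.
        sightings[plate].append((when or datetime.now(), location))

    def track(plate):
        # One lookup reconstructs a person's movements after the fact.
        return sorted(sightings[plate])

    record_sighting("ABC-123", "5th & Main")
    record_sighting("ABC-123", "Harbor Bridge")
    print(track("ABC-123"))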

Is this exactly the same case as Copilot? Of course not, these are two wildly different systems. But I think it's an interesting parallel to consider when discussing the point of "it's okay when a human does it", because humans and algorithms operate at two very different levels of scale. The potential for abuse of the latter is far greater, and the abuse is far easier, than with something a human has to do manually.


Humans are able to recognize when they are plagiarizing someone else’s work. AIs currently aren’t.


I would argue that humans are also far from perfect here. But that is not really my argument anyway. I agree with you that this should be improved, and I don't see a big problem in improving it. I'm sure we will find ways to get this to a human level or better.

I'm mostly talking about the statement "[Copilot] relies on unprecedented open-source software piracy". This is just wrong. It learns from open-source code, just like a human does.


That's not right. Copilot has a copyright suppression feature, humans don't. It's actually the opposite.


> So, why should an AI be treated differently here? I don't understand the argument for this.

Because the AI is not a human and only humans have rights, including the right to learn.


> > So, why should an AI be treated differently here? I don't understand the argument for this.

> Because the AI is not a human and only humans have rights, including the right to learn.

Okay then: Who counts as 'human'? What's the qualifier for being a 'human'?

------

(The following questions all point to the same underlying question.)

Are you human if you have only one leg or 8 fingers due to a genetic deformity? What about albinism or sickle cell disease?

If someone had robotic implants, are they human? Is it inhuman to have an artificial leg? What about both legs?

Same scenario as above, but both arms & legs are replaced. Are they human?

Same as above, but now everything below the torso has been replaced. Same question.

Same question, but now everything below the neck.

If someone were to successfully transplant their brain into a robot body, are they still human?

Someone embeds a neural implant into their brain: Still human?

Same question, now multiple neural implants.

Same question, but now the brain-to-implant ratio is 2:1. Brain mass & neuron count haven't changed.

Same question, but with the brain-to-implant ratio now 3:1.

4:1. 5:1. 6:1. 8:1. 10:1. 15:1. 20:1. 30:1. 50:1. 100:1. 200:1. 500:1. 1000:1.

The neuron count now starts to decrease because of regular cell degradation. What's the cutoff percentage before they're considered non-human?

90%? 80%? 70%? 60%? 50%? 40%? 30%? 20%? 10%? 5%? 2%? 1%? 0.5%? 0.2%? 0.1%? 0.01%? 0.001%?

------

Where is the dividing line between 'human' and 'non-human'?


Idiotic sealioning having nothing to do with AI and neural nets.


AI is not treated differently here. If a human produced this kind of code, they would be sued as well: https://twitter.com/DocSparse/status/1581461734665367554

I am not sure how anyone can root for AI after seeing those kinds of outputs. It's like high-school-level plagiarism.


I'm not talking about cases where code is copied, as in your example. I fully agree, this should be fixed. But I don't see such a big problem here. We can do something about this and reduce those cases to a reasonable human-level minimum or below.

I explicitly say human-level because humans would also not be totally immune to this. It can happen that you unintentionally write the same code you have seen somewhere.

It can also even happen that you write the same code just by pure chance.

I'm talking about the general statement that all Copilot output is derivative work. This is just wrong, as it would be for a human as well.

I'm talking about the statement "[Copilot] relies on unprecedented open-source software piracy". This is just wrong. A human also relies on open-source software (and even private software) to learn, and this is not piracy.



