I think their point isn't that terrorists leveraging this tech is not a problem. It certainly is a problem. But the greater problem is a few large entities being the only ones who have access to or control over it.
I think it's pretty clear that terrorists or any other bad actor will find great value and utility in this tech. The article from OpenAI says 'Humans can be convinced by synthetic text,' and research at Cornell found people find it almost as convincing as New York Times articles. I would be interested in learning about the methods you guys are using to determine this. How could that be measured?
So let's assume the answer is 'Yes, this technology is dangerous.' The Middlebury program, Cornell, and more and more universities and research groups find the same thing. Then what will the recommendations be? Certainly not to release it into the wild. I think they will be to keep it locked up, to keep it in the hands of a few large and powerful companies with the resources to 'manage' such a thing.
This seems to be what the original comment is trying to illustrate, and I think the long-term implications are worth considering. The tech exists now. There is no going back. So which is worse: to let it out of the box, or to let but a few have control over it?
In spite of all that we're studying wrt abuse potential, I (and my team) generally support open-sourcing tech, and I hope that we can contribute not to "oh this is dangerous, don't release" but rather to "oh this is dangerous, it's already released, what are we going to do now?"
Great, keep up the good work! Are you able to discuss how studies like yours work? Is it along the lines of determining whether people can distinguish between human-written and AI-generated text? Sounds like a difficult question to answer.
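If it is a distinguishability study, I imagine the scoring could look something like the sketch below (entirely hypothetical counts, and just my guess at one possible analysis, not a claim about your actual methodology): show raters paired human/model texts and test whether their accuracy beats chance.

    # Hypothetical sketch: did raters distinguish human vs. generated
    # text better than a coin flip? The counts below are made up.
    from scipy.stats import binomtest

    correct = 412  # hypothetical: judgments where raters picked the human text
    trials = 800   # hypothetical: total paired judgments

    result = binomtest(correct, trials, p=0.5, alternative='greater')
    print(f"accuracy = {correct / trials:.3f}, p = {result.pvalue:.3f}")
    # A large p-value would mean raters do no better than chance,
    # i.e. the generated text is effectively indistinguishable.

If the p-value stays high as the models improve, the generated text is passing as human, which seems like exactly the kind of evidence that would feed the 'this is dangerous' conclusion.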
I suspect they will release the full model in time. It's already trending in that direction.