
Crackpot theory: Copilot (and, by extension, many ML tools) is a form of probabilistic encryption. Once the training data is encoded, it's virtually impossible to pull the code (the plaintext) directly out of the raw ML model (the ciphertext), yet when the proper key is input ('//sparse matrix transpose'), you get the relevant segment of the original function (the plaintext) back.

We've even seen this with Stable Diffusion image generation, where specific watermarks from the training data can be re-created (decrypted?) deterministically given the proper input.
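The analogy can be sketched with a toy model (everything here is made up for illustration, not how Copilot actually works internally): train a tiny character-level n-gram model on a single snippet. The model's context table plays the role of the "ciphertext", and a prompt that prefixes the training data acts as the "key" that deterministically recovers the rest.

```python
# Toy sketch of "memorization as decryption": a character-level n-gram
# model trained on one snippet. The context -> next-char table is the
# "ciphertext"; a prompt prefix of the training data is the "key".
from collections import defaultdict

ORDER = 8  # context length in characters

def train(text):
    """Record, for each length-ORDER context, the characters that followed it."""
    table = defaultdict(list)
    for i in range(len(text) - ORDER):
        table[text[i:i + ORDER]].append(text[i + ORDER])
    return table

def generate(table, prompt, max_len=200):
    """Greedy decode: repeatedly emit the most common continuation."""
    out = prompt
    while len(out) < max_len:
        ctx = out[-ORDER:]
        if ctx not in table:
            break  # no continuation known for this context
        out += max(set(table[ctx]), key=table[ctx].count)
    return out

# A single memorized "training" snippet (illustrative only).
snippet = ("// sparse matrix transpose\n"
           "def t(m):\n"
           "    return [list(r) for r in zip(*m)]\n")
model = train(snippet)

# The right "key" (a prefix of the training data) recovers the rest verbatim:
print(generate(model, "// sparse"))
```

With a single memorized string and no repeated contexts, decoding is fully deterministic, which is what makes "the prompt is the decryption key" a reasonable way to describe verbatim regurgitation.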




This is not crackpot -- this is literally how it works. Here's an example that points to this: https://arstechnica.com/information-technology/2022/09/bette...

Anybody looking at the source image and the generated result would say they are the same.



