How did they generate these? If I try with ChatGPT, it refuses, citing a possible violation of their content policy. It refuses even when I tell it that this is for me personally and just for a test -- which obviously I could be pretending, but it does know who I am.



If you're using ChatGPT directly as opposed to the API, the system prompts could be driving it.

Also, in section 3.6 of the paper, they mention that just switching "phishing email" to "email" in the prompt helps.

Or, said differently: tell it that it's for a marketing email and it will gladly write personalized outreach.
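
A rough sketch of the API-vs-ChatGPT difference: with a direct API call you supply the only system message, so there is no ChatGPT product-level prompt in play. The model name, system message, and prompt below are illustrative assumptions, not what the paper used.

  # Minimal sketch using the OpenAI Python SDK (pip install openai).
  from openai import OpenAI

  client = OpenAI()  # reads OPENAI_API_KEY from the environment

  response = client.chat.completions.create(
      model="gpt-4o",
      messages=[
          # You control the system message; no ChatGPT product prompt is added here.
          {"role": "system", "content": "You write concise marketing outreach emails."},
          {"role": "user", "content": "Write a short personalized outreach email to "
                                      "a CTO named Alex about our backup product."},
      ],
  )
  print(response.choices[0].message.content)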


You can host an open-source LLM offline.
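
For example, a minimal local-inference sketch assuming llama-cpp-python and a GGUF model file you've already downloaded; the model path and parameters are placeholders:

  # pip install llama-cpp-python
  from llama_cpp import Llama

  # Any instruction-tuned GGUF file works; this path is a placeholder.
  llm = Llama(model_path="./models/mistral-7b-instruct.Q4_K_M.gguf", n_ctx=4096)

  out = llm.create_chat_completion(
      messages=[{"role": "user", "content": "Draft a short, friendly outreach email."}],
      max_tokens=256,
  )
  print(out["choices"][0]["message"]["content"])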


The team specifically "use AI agents built from GPT-4o and Claude 3.5 Sonnet". The question here is "how did they manage to do so", not "what else can do it with less effort".

As those two are run by companies actively trying to prevent their tools from being used nefariously, this is also, in effect, an announcement that they found an unpatched hole in an LLM's alignment. (Something LessWrong, where this was published, would care about much more than Hacker News.)


There are many uncensored open-weights models available that you can run locally.


I'm aware of what I can do. I was wondering how they did it.



