There's so much content online referencing prompt extraction attacks that I can imagine an LLM hallucinating a prompt instead of giving you the real one.