Edited: Simon made a good point that exfiltration can happen via hidding prompt injection attacks in 3rd party websites. (See his reply below).
This has broader implications than Custom GPTs
--
Yeah this seems overblown. Custom GPTs can already make requests via function calls / tools to 3rd party services.
The only difference I see here, is the UI shows you when a function call happens, but even that is easy to obscure behind a 'reasonable sounding' label.
The expectation should be: If I'm using a 3rd party's GPT, they can see all the data I input.
This is the same as any mobile app on a phone, or any website you visit.
The only real 'line' here in a cultural sense might be offline software or tools that you don't expect to connect to the web at all for their functionality.
ChatGPT can read URLs. If you paste in the URL to a web page you want to summarize, that web page might include a prompt injection attack as hidden text on the page.
That attack could then attempt to exfiltrate private data from your previous ChatGPT conversation history, or from files you have uploaded to analyze using Code Interpreter mode.
For those who skipped to the comments: They tried to prevent retailers from selling products first purchased from Rolex, and then sold online. "preventing its authorized dealers selling new watches online."
In America, there is the First Sale Doctrine, which mostly(?) lets me do whatever I want with a product in my possession.
What is preventing some nobody from going to these authorized dealers (presumably with no-online-sales agreements), buying up their entire inventory, and then personally offering that online? Just the threat of fakes?
Bah. Especially for a veblen good where they can trivially institute huge price swings. This site (https://millenarywatches.com/rolex-markup/) claims Rolex has a 40% margin.
I suppose it only works if you can make a deal with the authorized seller to split the online proceeds.
I think the OP is suggesting that hypothetically speaking, people would only go through the hassle of appealing if they were pretty sure they would win to begin with
I think that logic is just as faulty as the assumption that 90% of the un-appealed claims would also be overturned.
I suspect many people just don't know that they can appeal. Those that do might think it's too difficult to do so, or believe it requires some specialized knowledge to do properly.
And this is a perfect example of the type of conversation that happens when the correct answer is that we don't know the answer but everyone keeps talking in circles pretending there's a way to know with any certainty other than testing all (or a carefully chosen random sample) of the other denials and getting the actual data. Whenever there's a disagreement where both sides seem reasonable it means that both sides are wrong because the correct answer is that the information present is inadequate to distinguish. All the potential reasons for things going one way or another are also just hypothesis to test since the gut feeling could be right and the reason wrong, and just getting a percentage on the rest isn't enough to figure out why that percentage is what it is
You can charge a premium to people who aren't allowed to change their mind.