There were parents arrested over bath-time and yard-sprinkler photos being processed at photo mats. Will the same thing happen with Apple mistakenly reporting parents?
When I worked in telecom, we had an md5sum database to check for this type of content. If you emailed/SMSed/uploaded a file with the same md5sum, your account was flagged and sent to legal for confirmation.
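An exact-hash check like that is simple to sketch. Here's a minimal Python illustration of the flow described above; the hash set and function names are hypothetical (in reality the list would come from NCMEC or the carrier's legal team), and the placeholder digest happens to be the well-known MD5 of an empty byte string:

```python
import hashlib

# Hypothetical set of known-bad MD5 digests. This placeholder is just
# the MD5 of the empty byte string, used here for demonstration only.
KNOWN_BAD_MD5 = {
    "d41d8cd98f00b204e9800998ecf8427e",
}

def md5_of(data: bytes) -> str:
    """Hex MD5 digest of a file's raw bytes."""
    return hashlib.md5(data).hexdigest()

def should_flag(file_bytes: bytes) -> bool:
    """Flag an upload for legal review if its MD5 matches a known hash."""
    return md5_of(file_bytes) in KNOWN_BAD_MD5

print(should_flag(b""))             # True: digest is in the set
print(should_flag(b"other bytes"))  # False: no match
```

Note the weakness of an exact match: changing even one byte of the file produces a completely different digest, which is why later systems moved to perceptual hashes.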
Also, if police were involved, the account was burned to DVD in the datacenter, and only a police officer would touch the DVD; no engineer touched or saw the evidence. (Chain of custody maintained.)
It's probably changed since I haven't worked in telecom in 15 years, but one thing I've read for years is that the feds knew who these people were and where they hung out online; they even ran some of the honeypots. The problem is they leave these sites up to catch the ringleaders. The feds are aware, and they have busts of criminal rings almost every month. Twitter has had accounts reported that stay up for years.
I don't think finding the criminals is the problem. It seems like every time this happens, there have been persons of interest for years; there just isn't enough law enforcement dedicated to investigating this.
For all the talk of defunding the police, I think moving some police from traffic duty to Internet crimes would have more of an impact on actual cases being closed. Those crimes lead to racketeering and other organized crime anyway.
> There were parents arrested over bath-time and yard-sprinkler photos being processed at photo mats. Will the same thing happen with Apple mistakenly reporting parents?
No, because they’re not identifying content; they’re matching it against a set of hashes of already-known CSAM that NCMEC maintains. As you go on to say, telecoms and other companies already do this. Apple just advanced the state of the art when it comes to the security and privacy guarantees involved.
A set of unverified hashes that you hope only came from NCMEC. Telecoms do this on their own devices - not yours.
Apple just opened the door for constant searches of your digital devices. If you think it will stop at CSAM, you have never read a history book: the single biggest user of the UK's camera system, originally intended for serious crimes, is housing councils checking to see who didn't clean up after their dog.
> A set of unverified hashes that you hope only came from NCMEC. Telecoms do this on their own devices - not yours.
Yes, those are the main things we’re concerned about.
> Apple just opened the door for constant searches of your digital devices.
Specifically, it opens the door to them scanning content which is then end-to-end encrypted, which is the main problem.
I think the jury is out on whether this capability will be abused. Apple has said they will reject requests to use it for other purposes, but who really knows whether they will end up being forced to add hashes that aren’t CSAM?
I agree that both of these are potential problems.
> the single biggest user of the UK's camera system, originally intended for serious crimes, is housing councils checking to see who didn't clean up after their dog.
Which camera system? Do you have a citation for that?
That's absolute nonsense, but it's one of those things where I'd be interested to try to unpick the provenance of how someone could come to believe something so ridiculous.
Anecdotally my Mum works for Coventry City Council (though she is in events planning) but has noted complaints from colleagues about “busy work” from “fussy old people who keep asking for camera footage” — though Coventry often declines.
One or two news reports of local councils maybe using CCTV doesn't back up your claim.
The UK doesn’t have some unified super-camera system used for minor crimes, as you insinuate.
The high camera counts in the UK come from including private CCTV cameras in the data. These are privately owned and not linked together, so the government is not using a network of cameras to monitor dog-poo cleanup as you claim.
I don't know about the UK, but in France they are now using CCTV to fine delivery drivers who park badly for a two-minute stop. While I agree a vehicle parked in the wrong place can be a big inconvenience and deserves a fine, I don't think it's a crime big enough to justify deploying such a surveillance system.
This makes total sense: having one cop check 100+ CCTV feeds is far more efficient than a full team walking the streets. But once you've justified the privacy cost and managed to deploy such a system, it's so easy and convenient to use it for something else.
The UK government explicitly lays out a strategy for private cameras to be bought and operated, with mandatory rules for police access to footage. [1]
This is on top of the cameras that ARE owned by government entities - 18+ city councils [2]. And it's expanding [3].
Why do you think it matters whether they are linked together? Retaining footage and handing it over to police on request (not a warrant) is a requirement. The IPA allows collecting this information in bulk (e.g. from CCTV providers) with warrants. [4]
I don't think any of your links remotely substantiate what you claimed ("the single biggest user of the UK's camera system, originally intended for serious crimes, is housing councils checking to see who didn't clean up after their dog").
What even is the "camera system originally intended for serious crimes"?
It's a lossy hash match, though. If it weren't, subtly re-encoding the image would hide it. So they're definitely going to be mistakenly matching some images.
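To see why a lossy (perceptual) hash both survives re-encoding and risks false matches, here's a toy average-hash sketch in Python. The tiny pixel grids and function names are made up for illustration; real systems like PhotoDNA or Apple's NeuralHash are far more sophisticated, but the tradeoff is the same:

```python
# Toy average hash: each pixel contributes one bit, set if the pixel is
# brighter than the image's mean. Small value shifts from re-encoding
# usually don't flip bits, so the hash stays (nearly) the same.

def average_hash(pixels: list) -> int:
    """Perceptual hash of a small grayscale grid (values 0-255)."""
    flat = [p for row in pixels for p in row]
    avg = sum(flat) / len(flat)
    bits = 0
    for p in flat:
        bits = (bits << 1) | (1 if p > avg else 0)
    return bits

def hamming(a: int, b: int) -> int:
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")

original = [[10, 200], [30, 220]]
# Slightly re-encoded copy: every pixel value shifted a little.
reencoded = [[12, 198], [28, 223]]

dist = hamming(average_hash(original), average_hash(reencoded))
print(dist)  # 0: treated as a match despite the re-encoding
```

The flip side of that tolerance is the point above: entirely unrelated images can also land within the match threshold, which is exactly how false positives arise.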
> Will Apple actually flatly refuse any and all such demands? If they do, it’s all good. If they don’t, and these features creep into surveillance for things like political dissent, copyright infringement, LGBT imagery, or adult pornography — anything at all beyond irrefutable CSAM — it’ll prove disastrous to Apple’s reputation for privacy protection. The EFF seems to see such slipping down the slope as inevitable.
What seems to be missing from this discussion is that Apple is already doing these scans on the iCloud photos they store. Therefore, the slippery slope scenario is already a threat today. What’s stopping Apple from acquiescing to a government request to scan for political content right now, or in any of the past years iCloud Photos has existed? The answer is they claim not to, and their customers believe them. Nothing changes when the scanning moves on-device. Though, as the blog mentions, I suspect this is a precursor to allowing more private data in iCloud backups that Apple cannot decrypt even when ordered to.