
The "hack" might be very simple, since I'm sure it's possible to craft images that look like harmless memes but trigger the detection for CP.


The new and improved swatting.


Couldn't the hack just be as simple as sending someone an iMessage with the images attached? Or somehow identify/modify non-illegal images to match the perceptual hash -- since it's not a cryptographic hash.
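As a rough illustration of why the second idea is plausible: perceptual hashes only need similar images to produce nearby hashes, so they give up the preimage resistance of a cryptographic hash. Below is a minimal average-hash (aHash) sketch in Python -- a toy stand-in, not Apple's NeuralHash -- that shows the basic mechanics:

    # Toy average-hash (aHash); NOT Apple's NeuralHash, just an illustration.
    # Requires Pillow: pip install Pillow
    from PIL import Image

    def average_hash(path, size=8):
        # Shrink to size x size grayscale, threshold each pixel against
        # the mean brightness, and pack the resulting bits into an int.
        img = Image.open(path).convert("L").resize((size, size))
        pixels = list(img.getdata())
        mean = sum(pixels) / len(pixels)
        bits = 0
        for p in pixels:
            bits = (bits << 1) | (1 if p > mean else 0)
        return bits

    def hamming_distance(a, b):
        # Matching is "distance below a threshold", not exact equality.
        return bin(a ^ b).count("1")

Two visually similar images land within a few bits of each other, so an attacker can keep nudging pixels of a harmless image until its hash sits near a target hash. NeuralHash is a learned embedding rather than a mean threshold, but the matching is the same "close enough" comparison.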


Does iCloud automatically upload iMessage attachments?


No, iMessage attachments are stored on the device until you save them, at which point they can be uploaded to iCloud. However, iMessages themselves may be backed up to iCloud, if that's enabled.

The difference is that photos you save are catalogued in the photo library, while message photos are kept in their threads.

Will Apple scan photos saved via iMessage backup?


I would assume yes: this would cover iMessage backups, since they are uploaded to their system.


I think so, since the iMessages are synced across devices.


It doesn't need to; the detection is client-side in the first place.


No, like many others commenting on the issue, you seem to only have a vague idea of how it works. Only photos being uploaded to iCloud are being scanned for CSAM.


Then tell us, because this is what Apple says:

The Messages app will use on-device machine learning to warn about sensitive content, while keeping private communications unreadable by Apple.

Next, iOS and iPadOS will use new applications of cryptography to help limit the spread of CSAM online, while designing for user privacy.

https://www.apple.com/child-safety/

There is no ambiguity here. Of course they will scan images in the cloud as well, but they are explicit in saying that it is (also) on the device itself.


The operative separator is “Next”

Apple is announcing 3 new ‘features’.

First one scans iMessage messages / photos on device / warns kids and partners.

Second one is the CSAM photo hash compare in iCloud upload feature.

Third one is the Siri search protection/warning feature.


Stand corrected on the first part.

But surely the iCloud upload feature runs on the device. And if it were only in the cloud, they wouldn't need to mention iOS or iPadOS at all.


But what's the practical difference between scanning photos on a server when they're uploaded to iCloud and scanning them on the device?


A world of difference. Both in practical terms and principle.

To start, once you upload something to the cloud you do - or at least are expected to - realize that it is under full control of another entity.

Because of that you might not use iCloud or you might not upload everything to iCloud.


I think you might still be confused? Only photos being uploaded to iCloud are scanned. So users can still choose not to use iCloud and avoid this.

I certainly hope you didn’t get yourself all worked up without actually understanding what you’re mad at :)


You are mistaken: the iMessage feature is for parental consent and is not used at all for the CSAM database.

It is not related to the CSAM detection feature.

Read details here: https://daringfireball.net/2021/08/apple_child_safety_initia...


And you have an overly optimistic idea that they will not enable this feature more broadly. Do you really want to trust them, when this incident shows that they do not intend to be fully forthright with such changes?


They published full technical documents of what is happening and what is changing; that is what this debate is about. It's a bit odd to argue that they are not forthright when this is all documented. They could have vaguely updated their terms of service and never mentioned the feature, but they did not.
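For what it's worth, my reading of the published technical summary boils down to roughly the toy simulation below (Python; heavily simplified: real NeuralHashes are replaced by plain ints, the blinded/private-set-intersection matching by a visible set lookup, and the threshold is a placeholder):

    # Toy simulation of the published on-device flow; not Apple's code.
    KNOWN_HASH_DB = {0x1A2B, 0x3C4D}  # stand-in for the blinded CSAM hash set
    THRESHOLD = 30                    # placeholder account-level threshold

    def make_safety_voucher(photo_hash):
        # In the real design the device cannot learn whether this matched;
        # the result is only recoverable server-side after the threshold.
        return {"hash": photo_hash, "matched": photo_hash in KNOWN_HASH_DB}

    def upload_to_icloud(photo_hashes):
        vouchers = [make_safety_voucher(h) for h in photo_hashes]
        matches = sum(v["matched"] for v in vouchers)
        # Server side: vouchers stay opaque until the account crosses the
        # threshold (threshold secret sharing in the technical summary).
        if matches >= THRESHOLD:
            print("threshold crossed: matching vouchers become reviewable")
        else:
            print(f"{matches} match(es), below threshold: nothing revealed")

    # Photos that are never uploaded to iCloud never enter this path.
    upload_to_icloud([0x1A2B, 0x9999, 0x7777])

The point stands either way: the gating on iCloud upload, the hash matching, and the threshold are spelled out in the published documents rather than hidden in the terms of service.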



