I'm only surprised it took this long for an in-the-wild attack to appear in open literature.
It certainly doesn't help that Signal themselves have discounted this attack (quoted from the IACR ePrint paper):
"We disclosed our findings to the Signal organization on October 20, 2020, and received an answer on October 28, 2020. In summary, they state that they do not treat a compromise of long-term secrets as part of their adversarial model"
If I'm reading that right, the attack assumes the attacker has (among other things) a private key (IK) stored only on the user's device, and the user's password.
Thus, carrying out this attack would seem to require hardware access to one of the victim's devices (or some other backdoor), in which case you've already lost.
Correct me if I'm wrong, but that doesn't seem particularly dangerous to me? As always, security of your physical hardware (and not falling for phishing attacks) is paramount.
No, it means that if you approve a device to link, and you later have reason to unlink the device, you can't establish absolutely that the unlinked device can no longer access messages, or decrypt messages involving an account, breaking the forward-secrecy guarantees.
That leaves burning the whole account as the only remedy for a Signal account that has accepted a link to a 'bad device' (maybe rotating safety numbers/keys would be sufficient; I am uncertain there). If you can prove the malicious link was only a link, then yeah, the attack I described is incomplete, but the general issues with linked devices, and the remedies described, are the important bits, I think.
That's not what the attack does though - they have access to your private key, so they can complete the linking protocol without your phone and add as many devices as they want (up to the allowed limit). If you add a bad device, you are screwed from that moment on, assuming you don't sync your chat history.
You can always see how many devices a user has: each has a unique integer id, so if I want to send you a message, I generate a new encrypted version for each device. If the UI does not show your devices properly then that is an oversight for sure, but I don't think that's the case anymore.
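The per-device fan-out described above can be sketched roughly like this. This is a toy model, not the real protocol: Signal actually runs a separate Double Ratchet session per device, and `encrypt_for_device` here is just a hash-based stand-in to show the structure (one independent ciphertext per linked device).

```python
import hashlib

def encrypt_for_device(shared_key: bytes, device_id: int, plaintext: bytes) -> bytes:
    # Toy stand-in: derive a per-device keystream and XOR the plaintext.
    # The real protocol maintains a Double Ratchet session per device.
    state = hashlib.sha256(shared_key + device_id.to_bytes(4, "big")).digest()
    keystream = b""
    while len(keystream) < len(plaintext):
        state = hashlib.sha256(state).digest()
        keystream += state
    return bytes(p ^ k for p, k in zip(plaintext, keystream))

def send_message(shared_key: bytes, device_ids: list[int], plaintext: bytes) -> dict[int, bytes]:
    # One independent envelope per device id of the recipient.
    return {dev: encrypt_for_device(shared_key, dev, plaintext) for dev in device_ids}

envelopes = send_message(b"session-secret", [1, 2, 3], b"hello")
assert len(envelopes) == 3            # one envelope per linked device
assert envelopes[1] != envelopes[2]   # ciphertexts differ per device
```

Because the sender enumerates device ids to build the envelopes, a hidden extra device would require the server to lie about the device list, which is the point made below about trusting the server.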
Either way, you'd have to trust that the Signal server is honest and tells you about all your devices. To avoid that, you need proofs that every Signal user has the same view of your account (keys), which is why key transparency is such an important feature.
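The consistency check that key transparency provides can be sketched as comparing digests of the account's device-key set. This is a simplification I'm making for illustration: a real key-transparency system publishes these as leaves of an auditable Merkle tree so different users can verify they were served the same view.

```python
import hashlib

def account_digest(device_keys: list[bytes]) -> bytes:
    # Hash the (sorted, so order-independent) set of device keys.
    # In a real key-transparency log this would be a leaf in a Merkle tree
    # that auditors and other clients can check for consistency.
    h = hashlib.sha256()
    for k in sorted(device_keys):
        h.update(hashlib.sha256(k).digest())
    return h.digest()

honest_view = [b"phone-key", b"desktop-key"]
tampered_view = [b"phone-key", b"desktop-key", b"attacker-key"]

# A server that shows the victim one device list and everyone else
# another produces mismatching digests, which is detectable.
assert account_digest(honest_view) != account_digest(tampered_view)
```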
It sounds like all that's needed is a device that had been linked in the past. Unlinking doesn't have the security properties you'd think it would, and there's a phishing attack that makes scanning a QR code trigger a device link (which seems really, really bad if the user doesn't even have to take much action).
Your phone (primary device) and the linked ones have to share the IK, since that is the "root of trust" for your account: with it you generate new device keys, renew them, and so on.
Those keys are backed by the Keystore on Android, and a similar system on Windows/Linux; I'd assume the same for macOS/iOS (but I don't know the details). So it's not as simple as just having access to your laptop - they'd need at least root.
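The "IK as root of trust" point above is why the paper's attack works at all: a toy sketch, with HMAC standing in for the real signature scheme (Signal uses Curve25519/XEdDSA signatures, which aren't in the Python stdlib). The names here are mine, not the protocol's. Whoever holds the IK can mint device credentials that verify, with no interaction from the phone.

```python
import hmac, hashlib, os

def sign_device_key(identity_key: bytes, device_pub: bytes) -> bytes:
    # Toy stand-in: MAC the new device's public key with the IK.
    # In the real protocol this is an asymmetric signature by the IK.
    return hmac.new(identity_key, device_pub, hashlib.sha256).digest()

def verify_device_key(identity_key: bytes, device_pub: bytes, sig: bytes) -> bool:
    return hmac.compare_digest(sign_device_key(identity_key, device_pub), sig)

ik = os.urandom(32)                      # the victim's long-term identity secret
attacker_device = os.urandom(32)         # a device key the attacker generated

# With the stolen IK alone, the attacker produces a credential that verifies:
sig = sign_device_key(ik, attacker_device)
assert verify_device_key(ik, attacker_device, sig)
```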
Phishing is always tricky, probably impossible to counter sadly - each one of us would be susceptible at the wrong moment.
I think the point is that as a user you expect revocation of trust to protect you going forward, yet it doesn't (e.g. the server shouldn't be forwarding new messages to an unlinked device). That's a design decision Signal made, but clearly it's one that leaves you open to harm. Moreover, it's a dangerous decision because after obtaining the IK in some way (e.g. from a stolen device) you're able to essentially take over the account surreptitiously, without the user ever knowing (i.e. no phishing needed). As an end user these are surprising design choices, and the fact that Signal discounted this as not part of their threat model suggests to me that their threat model has an intentional or unintentional hole; second-hand devices that aren't wiped are common, and jailbreaks exist.
This isn't intractable either. You could imagine various protocols where having the IK is insufficient for receiving new messages going forward or for impersonating the sender. A simple one would be that each new device establishes a fresh key that the server recognizes as pertaining to that device; notifications sent to a device are encrypted with its per-device key, and outbound messages are required to be similarly encrypted. There are probably better schemes than this naive approach.
Revocation of trust is always a tricky issue, you can look at TLS certificates to see what a can of worms that is.
The Signal server does not forward messages to your devices, and the list of devices someone has (including your own) can and has to be queried to communicate with them, since each device establishes unique keys signed by that IK. So it isn't as bad as having invisible devices that you'd never be aware of. That of course relies on you being able to ensure the server is honest and consistent, but that is already work in progress on their side.
I think most of the issue here doesn't lie in the protocol design but in (1) how you "detect" the failure scenarios (like here: if your phone is informed a new device was added without you pressing the Link button, you can assume something's phishy), (2) how you properly warn people when something bad happens, and (3) how you inform users such that you both share a similar mental model. You also have to achieve these things without overwhelming them.
I would be surprised if there aren't ways to design it cryptographically to ensure that an unlinked device doesn't have access to future messages. The problem with how Signal has designed it is that it's a known weakness that Signal has dismissed in the past.
"Just install this Chrome browser extension" is all it takes now. Hell, you can even access cookies and previously visited sites from within the browser. All it takes is some funky ad, or Chrome extension, or some llama-powered toolbar to gain access and be able to do exactly that.
Background services on devices have been a thing for a while too. Install an app (which you grant all permissions to when asked) and bam: a self-restarting daemon service tracking your location, search history, photos, contacts, notes, email, etc.
The attack in that paper assumes you have compromised the user's long term private identity key (IK) which is used to derive all the other keys in the signal protocol.
Outside of lab settings, the only ways to do that are:
- (1) you get root access to the user's device
- (2) you compromise a recent chat backup
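To illustrate why a compromised long-term secret cascades the way the paper describes, here is an HKDF-Expand-style derivation (RFC 5869, single-block case) using only the stdlib. This is a simplification of the actual protocol's key schedule, but it shows the core issue: anyone holding the root secret can re-derive every labeled subkey deterministically.

```python
import hmac, hashlib

def hkdf_expand(root: bytes, info: bytes, length: int = 32) -> bytes:
    # RFC 5869 HKDF-Expand, single output block:
    #   T(1) = HMAC-Hash(PRK, info || 0x01)
    # Derivation is deterministic, so possession of `root` is possession
    # of every subkey that will ever be derived from it.
    return hmac.new(root, info + b"\x01", hashlib.sha256).digest()[:length]

ik = b"compromised identity secret"

# The attacker derives exactly the same per-purpose keys as the victim:
assert hkdf_expand(ik, b"device-1") == hkdf_expand(ik, b"device-1")
# ...and distinct labels still yield independent-looking keys:
assert hkdf_expand(ik, b"device-1") != hkdf_expand(ik, b"device-2")
```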
The campaign Google found is akin to phishing, so not as problematic on a technical level. How you warn someone they might be doing something dangerous is an entire can of worms in Usable Security... but it's going to become even more relevant for Signal once adding a new linked device also copies your message history (and the last 45 days of attachments).
About the paper: if someone has gotten access to your (private) identity key, you are compromised, whether via their attack (adding a linked device) or by just being MitM'ed and having all messages decrypted. The attacker won.
The attack presented by Google is just classical phishing. In this case, if linked devices are disabled or don't exist, sure, you're safe. But if the underlying attack has a different premise (for example, "You need to update to this Signal apk here"), it could still work.