
I'm not sure you answered the question here, though: why should that sort of transaction be legal? Is there a compelling public interest in allowing it?


It's a pretty common way for companies to go through bankruptcy. If the courts can identify a viable business inside the company, where the main problem is debt that it can't feasibly pay, they will allow it to proceed with business while cancelling the debt. Since the alternative is having the whole thing go under, without much chance of creditors being made whole, there is a benefit to society: some of the people keep their jobs.


I miss having Pebble watches; they hit a sweet spot of lifetime vs. functionality. That said, what is this team going to do to avoid crashing and burning the way the original Pebble did?

Specifically, I'm referring to the debacle around the Pebble 2 variants and the first-round Pebble Core, both of which got the ball dropped on them.


They're keeping the team super lean and apparently self-financed some of the early development. Last time around they had some venture loans that apparently did them in.


tl;dr: WireGuard doesn't do per-peer PMTU: https://www.wireguard.com/todo/#per-peer-pmtu

It's due to some general strangeness with TCP/IP layers that don't forward PMTU discovery ICMP messages. You'll see the same thing in some cell networks, and WireGuard is particularly fragile here because it doesn't have a PMTU discovery mechanism of its own.

Or, to be more exact, WireGuard currently doesn't have a way to 'bubble up' a PMTU process to the inner WireGuard interface from MTU-impacting events in its outer layer.

There are hacks like https://github.com/luizluca/wireguard-ipv6-pmtu/blob/main/wi... that try to handle this by monitoring the MTUs discovered on outer routes and then applying them to the WireGuard routes.

In applications where I've had to deal with this (WireGuard over cell-modem networks), I tool my network setup to poll whatever the cell network MTU happens to be and then set the WireGuard MTU appropriately.
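For reference, a minimal sketch of that polling approach (the interface names and the overhead constant are my own assumptions for illustration, not anything WireGuard ships):

    import subprocess
    from pathlib import Path

    # Assumed interface names: "wwan0" is the cell modem, "wg0" is the tunnel.
    OUTER_IFACE = "wwan0"
    WG_IFACE = "wg0"
    # WireGuard adds 80 bytes of overhead over IPv6 (60 over IPv4); use the larger.
    WG_OVERHEAD = 80

    def iface_mtu(iface: str) -> int:
        # Linux exposes the current interface MTU via sysfs.
        return int(Path(f"/sys/class/net/{iface}/mtu").read_text())

    def sync_wg_mtu() -> None:
        # Never drop below 1280, the IPv6 minimum MTU.
        target = max(1280, iface_mtu(OUTER_IFACE) - WG_OVERHEAD)
        if iface_mtu(WG_IFACE) != target:
            subprocess.run(["ip", "link", "set", "dev", WG_IFACE,
                            "mtu", str(target)], check=True)

    if __name__ == "__main__":
        sync_wg_mtu()  # run from a timer, or whenever the modem renegotiates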

This gets really painful, though, if you want to do something like run a network that really wants a >1280 MTU over Tailscale. It's pretty much not doable, and it is, in fact, my biggest gripe with Tailscale. Yes, it's suboptimal for the whole-internet use case, but I really do want my WireGuard links to be 9000 MTU.

Maybe WireGuard will get that in the future, since it is an acknowledged problem. I bet someone at the intersection of the secure-networking and HPC spaces could even justify paying the WireGuard team to implement it.


In my experience, SQLite falls over pretty hard in a multi-user environment, due to the lack of anything like MVCC.
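Here's a minimal sketch of the failure mode, using Python's stdlib sqlite3 in the default rollback-journal mode (no WAL) with throwaway file names; an MVCC engine would give the second writer a snapshot instead of an error:

    import os, sqlite3, tempfile

    path = os.path.join(tempfile.mkdtemp(), "demo.db")

    a = sqlite3.connect(path, timeout=0.2)
    b = sqlite3.connect(path, timeout=0.2)
    a.execute("CREATE TABLE t (x INTEGER)")
    a.commit()

    a.execute("INSERT INTO t VALUES (1)")      # A now holds the write lock, uncommitted

    try:
        b.execute("INSERT INTO t VALUES (2)")  # B can't write until A commits
    except sqlite3.OperationalError as e:
        print("second writer failed:", e)      # -> "database is locked"

    a.commit()                                 # only now can B retry and succeed

WAL mode lets readers coexist with a writer, but there's still only ever one writer at a time.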

I'm aware that there are a bunch of SQLite-compatible implementations and extensions that add MVCC. How close are those to actually being SQLite? Are they just a totally different table-storage and transaction engine sitting behind SQLite's SQL parser/compiler and VM?


A modern equivalent of the 'Usenet Death Penalty' is what's really needed. Without a grassroots method to censure and more or less permanently enjoin and/or eject bad actors, you can't stop them from harvesting profit from the ecosystem to the exclusion of all other concerns.


Isn't there some way you could calculate the signal density you'd need in any of PDM/PWM/PCM to roughly equate to the quality of signal reproduction of a given tape medium, given known tape surface/speed/magnetic density?

It turns out there is: https://www.electricity-magnetism.org/magnetic-storage-devic... -- it's a bit more complicated than what I'm more familiar with, which is film-grain-to-digital-equivalent resolution.

Anyway, you could do the work to figure out the bit depth/sample rate you'd need to equal the reproduction of a given flux density/recording rate. I would bet the numbers for classic PCM/PDM devices line up surprisingly closely with certain tape systems.
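As a rough sketch of that back-of-the-envelope math (the bandwidth and SNR figures below are assumptions for a decent tape deck, not measurements):

    import math

    tape_bandwidth_hz = 20_000   # assumed usable audio bandwidth
    tape_snr_db = 70.0           # assumed weighted signal-to-noise ratio

    # Nyquist: the sample rate must be at least twice the bandwidth.
    min_sample_rate = 2 * tape_bandwidth_hz

    # PCM dynamic range is roughly 6.02*bits + 1.76 dB; invert for bit depth.
    min_bits = math.ceil((tape_snr_db - 1.76) / 6.02)

    print(f"~{min_sample_rate} Hz, ~{min_bits}-bit PCM")
    # -> ~40000 Hz, ~12-bit: comfortably within classic 44.1 kHz / 16-bit PCM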

On the flip side, once you know those numbers, you could also calculate the expected analog equivalents, sample existing analog media/playback systems, and use those to systematically characterize their quality instead of going by earfeel, as it were.


(edit: first archive link was not complete) https://archive.is/vEExp


The FCC generally requires finished devices aimed at the commercial market to be locked down against arbitrary modification. See 47 CFR 15.212(a)(2)(iv) ( https://www.ecfr.gov/current/title-47/part-15#p-15.212(a)(2)... ). As far as I can tell, that basically applies to any transmitter intended for market use.

There's a similar requirement for DFS handling on 5-7 GHz Wi-Fi as well, specifically for radar detection and disabling transmission when an operating radar is detected: 47 CFR 15.407(i)(1) ( https://www.ecfr.gov/current/title-47/part-15#p-15.407(i)(1) ) -- this version of the requirement is what triggered the Wi-Fi AP lockdown issues back in 2016.

The FCC's communicated view is generally that devices which easily permit operation outside their 'licensed boundaries' essentially become radio nuisances. A plurality of vendors will then act aggressively to lock down their devices. If they don't, the FCC will step in, and it has leveraged the FTC to block imports and impound shipments of such devices. Some other nations do something similar as well.

All that is probably ripe for a Chevron challenge, but even if you have a case that could win in court, taking the FCC to court is fraught at best. It's definitely not for the faint of heart. In the case of big companies and vendors, where most of the IP for 4G/LTE/5G lives anyway, it would almost certainly be a commercial mistake to pick that fight.

As for the political reasons they do it, it boils down to funding and the political fights that produced the system of auctioned and gatekept spectrum we have today. Tearing some of that system down is more likely to succeed than fighting it in court. I have no idea what the second- and third-order effects of that would be, though.


In any position of real leadership, unless you have exigent circumstances that dictate otherwise, you want to lead, not dictate.

In that sense, there's real value in letting a situation develop its own consensus as the actors involved communicate and finish discovering the factors and motivations at play, especially if it's clear that those actors have yet to explore that space.


Signal (and basically any app with a linked-devices workflow) has been risky for a while now. I touched on this last year (https://news.ycombinator.com/context?id=40303736) when Telegram was trash-talking Signal -- and Signal's implementation of linked devices has been problematic for a long time: https://eprint.iacr.org/2021/626.pdf.

I'm only surprised it took this long for an in-the-wild attack to appear in the open literature.

It certainly doesn't help that Signal themselves have discounted this attack (quoting from the IACR eprint paper):

    "We disclosed our findings to the Signal organization on October 20, 2020, and received an answer on October 28, 2020. In summary, they state that they do not treat a compromise of long-term secrets as part of their adversarial model"


If I'm reading that right, the attack assumes the attacker has (among other things) a private key (IK) stored only on the user's device, and the user's password.

Thus, carrying out this attack would seem to require hardware access to one of the victim's devices (or some other backdoor), in which case you've already lost.

Correct me if I'm wrong, but that doesn't seem particularly dangerous to me? As always, security of your physical hardware (and not falling for phishing attacks) is paramount.


No, it means that if you approve a device to link, and you later have reason to unlink the device, you can't establish absolutely that the unlinked device can no longer access messages, or decrypt messages involving an account, breaking the forward-secrecy guarantees.

That leaves burning the whole account as the only remedy for a Signal account that has accepted a link to a 'bad device' (maybe rotating safety numbers/keys would be sufficient; I'm uncertain there). If you can prove the malicious link was only a link, then yeah, the attack I described is incomplete, but the general issues with linked devices and the available remedies are the important bits, I think.


That's not what the attack does, though - they have access to your private key, so they can complete the linking protocol without your phone and add as many devices as they want (up to the allowed limit). If you add a bad device, you are screwed from that moment on, assuming you don't sync your chat history.

You can always see how many devices a user has: each device has a unique integer ID, so if I want to send you a message, I generate a new encrypted version for each device. If the UI doesn't show your devices properly, then that is an oversight for sure, but I don't think that's the case anymore.

Either way, you'd have to trust that the Signal server is honest and tells you about all your devices. To avoid that, you need proofs that every Signal user has the same view of your account (keys), which is why key transparency is such an important feature.


That sounds exactly like what GP wrote.


That is really quite bad.


It sounds like all that's needed is a device that had been linked in the past. Unlinking doesn't have the security properties you'd think it would, and there's a phishing attack that makes scanning a QR code trigger a device link (which seems really, really bad if the user doesn't even have to take much action).


Your phone (the primary device) and the linked ones have to share the IK, since that is the "root of trust" for your account: with it you generate new device keys, renew them, and so on.

Those keys are backed by the Keystore on Android and some similar system on Windows/Linux; I'd assume the same for macOS/iOS (but I don't know the details). So it's not as simple as just having access to your laptop -- they'd need at least root.

Phishing is always tricky, and probably impossible to counter, sadly - each one of us would be susceptible at the wrong moment.


I think the point is that as a user you expect revocation of trust to protect you going forward, yet it doesn't (e.g. the server shouldn't be forwarding new messages to an unlinked device). That's a design decision Signal made, but clearly it's one that leaves you open to harm. Moreover, it's a dangerous decision because after obtaining the IK in some way (e.g. a stolen device) you're able to essentially take over the account surreptitiously, without the user ever knowing (i.e. no phishing needed). As an end user these are surprising design choices, and the fact that Signal discounted this as not being part of their threat model suggests to me that their threat model has an intentional or unintentional hole; second-hand devices that aren't wiped are common, and jailbreaks exist.

This isn't intractable either. You could imagine various protocols where having the IK is insufficient for receiving new messages or impersonating the sender going forward. A simple one would be that each new device establishes a new key that the server recognizes as pertaining to that device, messages to a device are encrypted with its per-device key, and outbound messages are required to be similarly encrypted. There are probably better schemes than this naive approach.
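To make that naive version concrete, here's a toy model (hypothetical names, a placeholder cipher, and emphatically not Signal's actual protocol): the sender only produces ciphertexts for devices whose per-device keys are currently registered, so revoking a device cuts off future traffic even if the long-term IK has leaked.

    import os
    from dataclasses import dataclass, field

    def toy_encrypt(key: bytes, msg: bytes) -> bytes:
        # Placeholder cipher for illustration only; a real design would use an AEAD.
        return bytes(m ^ key[i % len(key)] for i, m in enumerate(msg))

    @dataclass
    class Account:
        device_keys: dict = field(default_factory=dict)  # device_id -> per-device key

        def link_device(self, device_id: str) -> bytes:
            key = os.urandom(32)                  # stand-in for a negotiated key
            self.device_keys[device_id] = key
            return key

        def revoke_device(self, device_id: str) -> None:
            self.device_keys.pop(device_id, None) # future messages never use this key

    def fan_out(account: Account, plaintext: bytes) -> dict:
        # One ciphertext per *currently registered* device key.
        return {dev: toy_encrypt(key, plaintext)
                for dev, key in account.device_keys.items()}

    acct = Account()
    acct.link_device("phone")
    acct.link_device("laptop")
    acct.revoke_device("laptop")
    print(list(fan_out(acct, b"hello")))          # -> ['phone'] only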


Revocation of trust is always a tricky issue; you can look at TLS certificates to see what a can of worms that is.

The Signal server does not forward messages to your devices, and the list of devices someone has (including your own) can and has to be queried in order to communicate with them, since each device establishes unique keys signed by that IK. So it isn't as bad as having invisible devices that you'd never be aware of. That of course relies on you being able to ensure the server is honest and consistent, but that is already work they have in progress.

I think most of the issue here doesn't lie in the protocol design but in (1) how you "detect" the failure scenarios (like here: if your phone is informed that a new device was added without you pressing the Link button, you can assume something's phishy), (2) how you properly warn people when something bad happens, and (3) how you inform users such that you both have a similar mental model. You also have to achieve these things without overwhelming them.


I would be surprised if there aren't ways to design it cryptographically to ensure that an unlinked device doesn't have access to future messages. The problem with how Signal has designed it is that this is a known weakness that Signal has dismissed in the past.


"Just install this Chrome browser extension" is all it takes now. Hell, you can even access cookies and previously visited sites from within the browser. All it takes is some funky ad, or Chrome extension, or some llama-powered toolbar to gain enough access to do exactly that.

Background services on devices have been a thing for a while too. Install an app (to which you grant all permissions when asked) and bam: a self-restarting daemon service tracking your location, search history, photos, contacts, notes, email, etc.


How is that related in any way to Signal?


My point is that anything you install on your device is a vector: it can mount MITM attacks, read your data, etc. Sidecar attacks.

This was classic phishing, though.


This is my read as well. Just double clicking here.


The attack in that paper assumes you have compromised the user's long-term private identity key (IK), which is used to derive all the other keys in the Signal protocol.

Outside of lab settings, the only ways to do that are:

- (1) you get root access to the user's device

- (2) you compromise a recent chat backup

The campaign Google found is akin to phishing, so it's not as problematic on a technical level. How to warn someone they might be doing something dangerous is an entire can of worms in usable security... but it's going to become even more relevant for Signal once adding a new linked device also copies your message history (and the last 45 days of attachments).


If one doesn't use the linked device feature, does that impact this threat surface?


About the paper: if someone has gotten access to your (private) identity key, you are compromised, either via their attack (adding a linked device) or by simply being MitM'ed and having all your messages decrypted. The attacker has won.

The attack presented by Google is just classic phishing. In this case, if linked devices are disabled or don't exist, sure, you're safe. But if the underlying attack has a different premise (for example, "You need to update to this Signal APK here"), it could still work.

