This is huge. There's been a 3rd-party Signal library for this for years -- and for some reason I can't determine, the developers have opted NOT to do this.
This is why molly.im was a lifesaver for me. Trying to move a family member from Viber to Signal, I ran into the annoying roadblock of not being able to link an Android tablet to an Android phone like Viber can - but Molly does it fine.
"If you wish to use the same phone number for both Molly and Signal, you must register Molly as a linked device. Registering the same number independently on both apps will result in only the most recently registered app staying active, while the other will go offline."
Yeah, pretty sure that's what the other commenter and I meant. Linked device, like using Signal on Desktop, or Signal on iPad. Linking wasn't available on Signal for Android for some reason.
Specifically, I'm using Signal as the main device, with Molly as the linked device on a 2nd phone.
> “The outbound and cross-bound DDoS attacks can be just as disruptive as the inbound stuff,” Dobbin said. “We’re now in a situation where ISPs are routinely seeing terabit-per-second plus outbound attacks from their networks that can cause operational problems.”
ISPs are starting to feel the pain, so perhaps in the near future they will do something about it.
I, too, am jealous of China's high speed railroads. However, on the whole, China has overbuilt their infrastructure, and that may not look so smart in 40-50 years when the maintenance bills start coming due.
Is that factually true? Some rail routes that I'm personally aware of are constantly overbooked. Some, I guess, might be overbuilt, but time will tell. I'll agree on some malls, though that's more private development than government-led initiatives.
So, perhaps 2020s China ~ 1950s US demographics. The bridges that recently collapsed in the US (2024 Baltimore/Francis Scott Key Bridge and 2007 Minneapolis I-35W Mississippi River Bridge) were built in 1972-1977 and 1964, respectively.
No one has yet compared Chinese construction times/costs to the replacement Baltimore Francis Scott Key Bridge: cost ~$2B, estimated completion October 2028. It will have 600 ft bridge towers, a 1,600 ft main span (increased from 1,209 ft), 3,300 ft total span length, and improved pier protection. Surprised they didn't add a freight rail link.
The Freestyle Pro is almost a good keyboard. The Esc and function keys are all offset to the left by one key compared to a standard layout, which drove me nuts. I have a Freestyle Edge RGB now, which I like much better. (Though I replaced the wrist rests with some from Goldtouch.)
Other than Jon at Cloudinary, everyone involved with JXL development, from creation of the standard to the libjxl library, works at Google Research in Zurich. The Chrome team in California has zero authority over them. They've also made a lot of stuff that's in Chrome, like Lossless WebP, Brotli, WOFF, the Highway SIMD library (actually created for libjxl and later spun off).
It's more likely related to security; image formats are a huge attack surface for browsers, and they are hard to remove once added.
JPEG XL was written in C++ in a completely different part of Google, without any of the memory-safe Wuffs-style code, and the Chrome team has probably had its share of trouble with half-baked compression formats (WebP).
I'd argue the thread up through the comment you are replying to is fact-free gossiping. I'm wondering if it was an invitation to repeat the fact-free gossip; the comment doesn't read that way to me. It reads as more exasperated - so exasperated they're willing to speak publicly and establish facts.
My $0.02, since the gap here on perception of the situation fascinates me:
JPEG XL as a technical project was a real nightmare, I am not surprised at all to find Mozilla is waiting for a real decoder.
If you give _any_ FAANG engineer involved in this mess a beer || truth serum, they'll have zero idea why this has so much mindshare, modulo that it sounds like something familiar (JPEG) and that people invented nonsense like "Chrome want[s] to kill it" while it has the attention of an absurd number of engineers working to get it into shipping shape.
(surprisingly, Firefox is not attributed this - they also do not support it yet, and they are not doing anything _other_ than awaiting Chrome's work for it!)
> JPEG XL as a technical project was a real nightmare
Why?
> (surprisingly, Firefox is not attributed this - they also do not support it yet, and they are not doing anything _other_ than awaiting Chrome's work for it!)
The fuck are you talking about? The jxl-rs library Firefox is waiting on is developed by mostly the exact same people who made libjxl which you say sucks so much.
In any case, JXL obviously has mindshare due to the features it has as a format, not the merits of the reference decoder.
> they'll have 0 idea why this has so much mindshare
Considering the amount of storage all of these companies are likely allocating to storing JPEGs, plus the bandwidth of it all - maybe it's the instant file-size wins?
Hard disk and bandwidth costs of JPEGs are almost certainly negligible in the era of streaming video. The biggest selling point is probably client-side latency from downloading the file.
We barely even have movement to WebP & AVIF; if this were a critical issue I would expect a lot more movement on that front, since those already exist. From what I understand, AVIF gives better compression (except for lossless) and has better decoding speed than JXL anyway.
If you look at CDNs, WebP and AVIF are very popular.
> From what I understand, AVIF gives better compression (except for lossless) and has better decoding speed than JXL anyway.
AVIF is better at low to medium quality, and JXL is better at medium to high quality. JXL decoding speed is pretty much constant regardless of how you vary the quality parameter, but AVIF gets faster and faster to decode as you reduce the quality; it's only faster to decode than JXL for low quality images. And about half of all JPEG images on the web are high quality.
The Chrome team really dislikes the concept of high-quality images on the web for some reason, which is why they only push formats that are optimized for low quality. WebP beats JPEG at low quality but is literally incapable of very high quality[1] and is worse than JPEG at high quality. AVIF is really good at low quality but fails to be much of an improvement at high quality. For high resolution combined with high quality, AVIF even manages to be worse than JPEG.
[1] Except for the lossless mode which was developed by Jyrki at Google Zurich in response to Mozilla's demand that any new web image format should have good lossless support.
> AVIF is better at low to medium quality, and JXL is better at medium to high quality.
BTW, this is no longer true. With the introduction of tune IQ (Image Quality) to libaom and SVT-AV1, AVIF can be competitive with (and oftentimes beat) JXL at the medium to high quality range (up to SSIMULACRA2 85). AVIF is also better than JPEG independently of the quality parameter.
JXL is still better for lossless and very-high quality lossy though (SSIMULACRA2 >90).
> The Chrome team really dislikes the concept of high-quality images on the web for some reason, which is why they only push formats that are optimized for low quality.
It would be more accurate to say bits per pixel (BPP) rather than quality. And that is despite the Chrome team themselves showing that 80%+ of images served online are in the medium-BPP range or above, where JPEG XL excels.
Isn't medium quality the thing to optimize for? If you are doing high quality, you've already made the tradeoff that you care about quality more than latency, so the perceived benefit of a mild latency improvement is going to be lower.
> So why would we change it? As a non-Indigenous entity, we acknowledge that it is inappropriate for the Foundation to use Indigenous themes or language.
It seems like this could be easily solved in models that support tool calling by providing them with a tool that takes a token and returns the individual graphemes.
It doesn't seem valuable for the model to memorize the graphemes in each of its tokens.
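A minimal sketch of what such a tool could look like, assuming an OpenAI-style function-calling setup; the name split_graphemes and its schema here are hypothetical, not from any existing API:

    import unicodedata

    def split_graphemes(text: str) -> list[str]:
        """Return the individual characters of text, so the model can count
        them instead of guessing from its tokenized view of the word."""
        # NFC-normalize so combining marks don't inflate the count; note this
        # splits on codepoints, which is close enough here but not identical
        # to full grapheme clusters (those need e.g. the `regex` package).
        return list(unicodedata.normalize("NFC", text))

    # Tool description the model would see (hypothetical schema):
    TOOL_SPEC = {
        "name": "split_graphemes",
        "description": "Split a word into its individual characters.",
        "parameters": {
            "type": "object",
            "properties": {"text": {"type": "string"}},
            "required": ["text"],
        },
    }

    # The model could then answer "how many r's in strawberry?" by calling
    # the tool and counting the returned list:
    print(split_graphemes("strawberry").count("r"))  # 3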
Yes, but are you going to special-case all of these pain points? The whole point of these LLMs is that they learn from training data, not from people coding logic directly. If you do this, people will come up with a dozen new ways in which the models fail; they are really not hard to find. Basically, asking them to do anything novel risks complete failure. The interesting bit is that LLMs tend to work best at "medium difficulty" problems: homework questions, implementing documented APIs, and things like that. Asking them to do anything completely novel tends to fail, as does asking them to do something so trivial that normal humans won't bother even writing it down.
It makes sense when users ask for information not available in the tokenized values though. In the abstract, a tool that changes tokenization for certain context contents when a prompt references said contents is probably necessary to solve this issue (if you consider it worth solving).
It's a fool's errand. The kinds of problems you end up coding for are the ones that are blatantly obvious and ultimately useless except as a gotcha for the AI engines. All you're doing is papering over a deficiency of the model without actually solving a problem.
This is less a deficiency of the model, and more of a deficiency of the encoder IMO. You can consider the encoder part of the model, but I think the semantics of our conversation require differentiating between the two.
Tokenization is an inherent weakness of current LLM design, so it makes sense to compensate for it. Hopefully some day tokenization will no longer be necessary.
That takes away from the notion that LLMs have emergent intelligent abilities. Right now it doesn't seem valuable for a model to count letters, even though it is a very basic measure of understanding. Will this continue in other domains? Will we be doing tool-calling for every task that's not just summarizing text?
> Will we be doing tool-calling for every task that's not just summarizing text?
spoiler: Yes. This has already become standard for production use cases where the LLM is an external-facing interface; you use an LLM to translate the user's human-language request to a machine-ready, well-defined schema (i.e. a protobuf RPC), do the bulk of the actual work with actual, deterministic code, then (optionally) use an LLM to generate a text result to display to the user. The LLM only acts as a user interface layer.
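A rough sketch of that pattern, with call_llm and the RefundRequest schema as hypothetical stand-ins (they aren't from any specific product or API):

    import json
    from dataclasses import dataclass

    @dataclass
    class RefundRequest:          # the machine-ready, well-defined schema
        order_id: str
        amount_cents: int
        reason: str

    def call_llm(prompt: str) -> str:
        """Placeholder for a real LLM call instructed to return strict JSON."""
        raise NotImplementedError

    def handle_user_message(message: str) -> str:
        # 1. LLM as input layer: free-form text -> structured request
        raw = call_llm("Extract a refund request as JSON with keys "
                       f"order_id, amount_cents, reason from: {message}")
        req = RefundRequest(**json.loads(raw))

        # 2. Deterministic code does the actual work
        approved = req.amount_cents <= 5_000   # business rule, not the LLM's call

        # 3. LLM as output layer (optional): result -> prose for the user
        return call_llm(f"Tell the user their refund for order {req.order_id} "
                        f"was {'approved' if approved else 'sent to manual review'}.")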
How is counting letters a measure of understanding, rather than a rote process?
The reason LLMs struggle with this is because they literally aren't thinking in English. Their input is tokenized before it comes to them. It's like asking a Chinese speaker "How many Rs are there in the word 草莓".
It shows understanding that words are made up of letters and that they can be counted.
Since tokens are atomic (which I didn't realize earlier), maybe it's still intelligent if it can realize it can extract the result by writing len([b for b in word if b == my_letter]) and decide on its own to return that value.
We're up to a gazillion parameters already, maybe the next step is to just ditch the tokenization step and let the LLMs encode the tokenization process internally?
When an optometrist chose to file a claim against my medical insurance (which did not pay) rather than my vision insurance (which would), the only way I was able to get them to fix it was by filing a complaint with the BBB. Multiple calls with customer service only resulted in "there is nothing we can do to change it", but somehow they figured out how to fix it when they got the BBB complaint letter.
How else can private citizens keep businesses honest? Complaining on the internet only works if you have lots of followers.
> How else can private citizens keep businesses honest?
I know this was a rhetorical question, but in many countries there is some type of Fair Trading Office, meaning a government body with power to adjudicate consumer complaints about businesses and the legal teeth to enforce its judgements.
But then you have companies like Parking Revenue Recovery Services (PRRS), who have already had to settle [1] with the AG once before, and yet the AG refuses to take action on additional complaints, for years.
PRRS sent me a sham parking fee two weeks after their settlement with the AG in 2022.
The AG's response to my complaint:
> We have investigated your complaint and based on the information we have received to date, we are taking no further action at this time.
This was three years ago. And Coloradans, faced with an AG that won't do anything for them, have taken to PRRS's non-accredited BBB page to file thousands of complaints [2].
I don't think the BBB would have any effect in this situation either, because PRRS doesn't rely on reputation for its business. They simply rely on having conveniently placed parking lots throughout the city with people needing a place to park.
This was three years ago, and here we are in 2025 and Denver is still dealing with this situation [3] and as far as I know, the AG still hasn't done anything about it.
You are formally documenting, for all future processes, that you have filed a complaint, and you should include relevant contracts, receipts, correspondence, and supporting materials. This can be enough to encourage the business to be more responsive.
So it's a low-cost first step that can trigger mediation or settlement and help resolve your dispute (and contribute to broader consumer protection). It's a quick way to signal to the business to work with you.
Submitting a letter or formal complaint can (sometimes) prompt the AG to contact the business in an attempt to mediate or resolve the issue informally. The AG also often offers informal mediation services through some division. The AG's office sometimes starts investigations based on complaints from the public.
> government body with power to adjudicate consumer complaints about businesses and the legal teeth to enforce its judgements
That would in fact be a great thing. The problem is that if it existed and did what it was supposed to do, it would only be a matter of time before Trump appointed someone to either destroy it or weaponize it against perceived enemies.
The bigger question is how can we have a body that can both protect consumers from bad businesses AND can also be itself protected from the purchased political influence of those bad businesses.
> The bigger question is how can we have a body that can both protect consumers from bad businesses AND can also be itself protected from the purchased political influence of those bad businesses
I don't have an answer, but one could be found by looking at places that have just that. In the UK we have the Citizens Advice Bureau and the Trading Standards organisation, which are safely independent.
Though I have a feeling the answer may be; "Don't live in the USA".
That’s exactly right. BBB is Yelp for boomers, and that’s it. I’ve heard plenty of older family and acquaintances wave it around like a weapon: “if they don’t honor my (ridiculous) request, I’ll sic the BBB on them!” And then… what? I’m sure someone, somewhere checks BBB before doing business with a company, but I’ve never personally seen someone do it, and without that feedback loop, the BBB is just another private review site with zero teeth.
People don't look businesses up on Facebook or X much here either, and yet many times I've witnessed my SO break through being stonewalled by some business with just a simple line, delivered in a calm voice: "okay, I'll take it to Facebook, we'll see if you like a drama there" (or "X" more recently). Just like that, 90% success rate, and no actual public drama on social media.
I don't know how that works. It's Poland; approximately nobody here ever used Twitter, nor do they use X - and yet, businesses big and small seem super sensitive to that.
I’m a bit older than you, but same. I’ve bought property and land, and it never occurred to me to check BBB for anything, ever. I know it’s a thing that exists, and that’s about the extent of it.
I don't understand why they need to "substantiate" their opinion, but the commenter they're replying to wouldn't need to substantiate their anecdotal evidence. They can "literally" claim anything.
I do think Mindless2112's experience is common with larger businesses, making the BBB potentially more effective in those cases than online review sites like Google or Yelp. Also, do those sites send complaint letters? I don't know.
There is a mile of difference between offering one specific claim based on experience, which doesn't assert that this happens with all businesses, and the other claim that BBB complaints don't work in general and the BBB is no more than a glorified review list.
They mean inherently, I think - as in, it doesn't enforce any "better business" practices. The only threat is public shaming through them, but they themselves won't actually solve anything.
I don't see anything to indicate that Mindless2112 is "lying," but it does look like they might be a bit confused. BBB really is just another review website like Yelp and Google reviews (with the added step of sending the business a letter in the mail).
This must be some sort of weird optometrist thing. Mine always asks for my medical insurance, and after I tell them they have my vision plan info, they still want my medical plan info. Then they try to bill my medical plan instead of the vision plan. I then correct the situation, but it takes time and several phone calls. Maybe it's some sort of scam where they think the medical insurance will pay more?
Interesting that traffic didn't return to completely normal levels after the incident.
I recently started using the "luci-app-https-dns-proxy" package on OpenWrt, which is preconfigured to use both Cloudflare and Google DNS, and since DoH was mostly unaffected, I didn't notice an outage. (Though if DoH had been affected, it presumably would have failed over to Google DNS anyway.)
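For reference, the default /etc/config/https-dns-proxy that ships with the package looks roughly like this (two resolver instances; option names from memory - treat it as a sketch, not an exact copy), which is why a Cloudflare outage just shifts queries to the Google instance:

    # sketch of the package's default config; see the package README for the
    # authoritative defaults
    config https-dns-proxy
        option resolver_url 'https://cloudflare-dns.com/dns-query'
        option bootstrap_dns '1.1.1.1,1.0.0.1'
        option listen_addr '127.0.0.1'
        option listen_port '5053'

    config https-dns-proxy
        option resolver_url 'https://dns.google/dns-query'
        option bootstrap_dns '8.8.8.8,8.8.4.4'
        option listen_addr '127.0.0.1'
        option listen_port '5054'

dnsmasq then gets pointed at both local listeners, so either upstream answering is enough.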
> Interesting that traffic didn't return to completely normal levels after the incident.
Anecdotally, I figured out their DNS was broken before it hit their status page and switched my upstream DNS over to Google. Haven't gotten around to switching back yet.
After trying both several times, I've since stayed with Google because Cloudflare kept returning really bad IPs for anything involving a CDN. Having users complain that stuff takes ages to load because you got matched to an IP on the opposite side of the planet is a bit problematic, especially when it rarely happens with other DNS providers. Maybe there is a way to fix this, but I admit I went for the easier option of going back to good old 8.8.8.8.
I've also changed to 9.9.9.9 and 8.8.8.8 after using 1.1.1.1 for several years because connectivity here is not very good, and being connected to the wrong data center means RTT in excess of 300 ms. Makes the web very sluggish.
Does that setup fall back to 8.8.8.8 if 9.9.9.9 fails to resolve?
Quad9 has a very aggressive blocking policy (my site with user-uploaded content was banned without anyone even reporting the malicious content; if you're a big brand name, it seems to be fine to have user-uploaded content though), which this would be a possible workaround for, but it may not treat an NXDOMAIN response as a resolver failure.
Realistically, either you ignore the privacy concerns and set up routing to multiple providers preferring the fastest, or you go all-in on privacy and route DNS over Tor over bridge.
Although, perhaps, having an external VPS with a DNS proxy could be a good middle ground?
If you're the technical type you can run Unbound locally (even on Windows) and let it forward queries with DoT. No need for either Tor or running your own external resolver.
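Something like this in unbound.conf does it (a sketch from memory - adjust the CA bundle path for your OS, and swap in whichever upstreams you trust):

    server:
        # needed so Unbound can validate the upstream TLS certificates
        tls-cert-bundle: "/etc/ssl/certs/ca-certificates.crt"

    forward-zone:
        name: "."
        forward-tls-upstream: yes
        # authenticated DoT upstreams, format ip@port#tls-auth-name
        forward-addr: 9.9.9.9@853#dns.quad9.net
        forward-addr: 8.8.8.8@853#dns.google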
And it's not a conspiracy theory - it was very suspicious when we did some testing with a small, aware group. The traffic didn't look like it was being handled anonymously on Google's side.
Yeah it's not like they have a long track record of being caught red-handed stepping all over privacy regulations and snarfing up user activity data across their entire range of free products...
> Interesting that traffic didn't return to completely normal levels after the incident.
Clients cache DNS resolutions to avoid having to do that request each time they send a request. It's plausible that some clients held on to their cache for a significant period.