LegionMammal978's comments

> Well, they said they would unkill xslt if someone would rewrite and maintain it so that it's not the abandonware horrorshow it was.

Who said this? I was never able to find any support among the browser devs for "keep XSLT with some more secure non-libxslt implementation".


Heat conduction requires a medium, but radiation works perfectly fine in a vacuum. Otherwise the Sun wouldn't be able to heat up the Earth. The problem for spacecraft is that you're limited by how much IR radiation is passively emitted from your heat sinks; you can't actively expel heat any faster.
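For a rough sense of the constraint, passive radiation follows the Stefan-Boltzmann law, so the only levers are emissivity, radiator area, and temperature:

    P_{\text{rad}} = \varepsilon \sigma A T^4,
    \qquad \sigma \approx 5.67 \times 10^{-8}\ \mathrm{W\,m^{-2}\,K^{-4}}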

There is lots and lots and lots of space on Earth where hardly anyone is living. Cheap rural areas can support extremely large datacenters, limited only by availability of utilities and workers.

We also have to build a lot more solar and nuclear in addition to the datacenters themselves, which we need to do anyway, but it would compound the land we use for energy production.

Yet a colossal number of servers on satellites would require the same energy-production facilities to be shipped into orbit (and to receive regular maintenance in orbit whenever they fail), which requires loads of land for launch facilities, as well as for processing fuel and other consumable resources. Solar might be somewhat more efficient, but not nearly so much so as to make up for the added difficulty in cooling. One could maybe postulate asteroid mining and space manufacturing to reduce the total delta-V requirement per satellite-year, but missions to asteroids have fuel requirements of their own.

If anything, I'd expect large-scale Mars datacenters before large-scale space datacenters, if we can find viable resources there.


It makes sense. I would be curious to see the price calculations done by the different space-GPU startups and Big Tech; I wonder how they are getting to a cheaper cost, or maybe it is just marketing.

Indeed, the GPL's definitions of "modify" and "propagate" restrict the license's scope to actions that would otherwise infringe on copyright if not permitted. And fair use and similar doctrines generally act as carve-outs to copyright infringement.

Things like bump functions [0] would generally do the trick.

[0] https://en.wikipedia.org/wiki/Bump_function
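For reference, the standard example from that page is

    \psi(x) =
      \begin{cases}
        \exp\!\left(-\frac{1}{1 - x^2}\right), & |x| < 1, \\
        0, & |x| \ge 1,
      \end{cases}

which is infinitely differentiable everywhere yet identically zero outside [-1, 1]; that combination is usually the property these constructions need.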


> It's always either an `unwrap` (and we know how well that can go [2])

If a mutex has been poisoned, then something must have already panicked, likely in some other thread, so you're already in trouble at that point. It's fine to panic in a critical section if something's horribly wrong; the problem comes from blindly continuing after a panic in other threads that operate on the same data. In general, you're unlikely to know what that panic was, so you have no clue if the shared data might be incompletely modified or otherwise logically corrupted.

In general, unless I were being careful to maintain fault boundaries between threads or tasks (the archetypical example being an HTTP server handling independent requests), I'd want a panic in one thread to cascade into stopping the program as soon as possible. I wouldn't want to swallow it up and keep using the same data like nothing's wrong.
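As a minimal sketch of the failure mode (hypothetical worker-thread example):

    use std::sync::{Arc, Mutex};
    use std::thread;

    fn main() {
        let data = Arc::new(Mutex::new(vec![1, 2, 3]));

        // A worker panics while holding the lock, leaving its update unfinished.
        let worker = {
            let data = Arc::clone(&data);
            thread::spawn(move || {
                let mut guard = data.lock().unwrap();
                guard.push(4);
                panic!("invariant violated mid-update");
            })
        };
        let _ = worker.join(); // the panic is confined to the worker thread

        // The mutex is now poisoned: this `unwrap` re-raises the failure here
        // instead of silently handing us possibly half-updated data.
        let guard = data.lock().unwrap();
        println!("{:?}", *guard); // never reached
    }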


> so you have no clue if the shared data might be incompletely modified or otherwise logically corrupted.

One can make a panic-tracking wrapper type if they care; it's essentially what the stdlib Mutex already does internally:

MutexGuard checks whether the thread is panicking during drop using `std::thread::panicking()`, and if so, sets a bool on the Mutex. The next acquirer checks that bool and knows the state may be corrupted. No need to bake this into the Mutex itself.
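A rough sketch of that wrapper (names are hypothetical; in practice you'd wrap a non-poisoning lock, but std's Mutex works for illustration if you discard its own poison errors):

    use std::sync::atomic::{AtomicBool, Ordering};
    use std::sync::{Mutex, MutexGuard};

    struct PoisonTracking<T> {
        inner: Mutex<T>,
        poisoned: AtomicBool,
    }

    struct TrackedGuard<'a, T> {
        guard: MutexGuard<'a, T>,
        poisoned: &'a AtomicBool,
    }

    impl<T> PoisonTracking<T> {
        fn new(value: T) -> Self {
            Self { inner: Mutex::new(value), poisoned: AtomicBool::new(false) }
        }

        // Returns the guard plus whether a previous holder panicked mid-update.
        fn lock(&self) -> (TrackedGuard<'_, T>, bool) {
            let guard = self.inner.lock().unwrap_or_else(|e| e.into_inner());
            let was_poisoned = self.poisoned.load(Ordering::Acquire);
            (TrackedGuard { guard, poisoned: &self.poisoned }, was_poisoned)
        }
    }

    impl<T> std::ops::Deref for TrackedGuard<'_, T> {
        type Target = T;
        fn deref(&self) -> &T { &self.guard }
    }

    impl<T> std::ops::DerefMut for TrackedGuard<'_, T> {
        fn deref_mut(&mut self) -> &mut T { &mut self.guard }
    }

    impl<T> Drop for TrackedGuard<'_, T> {
        fn drop(&mut self) {
            // Mirror what std's MutexGuard does: if we're unwinding from a
            // panic, flag the protected data as possibly inconsistent.
            if std::thread::panicking() {
                self.poisoned.store(true, Ordering::Release);
            }
        }
    }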


My point is that "blindly continuing" is not a great default if you "don't care". If you continue, then you first have to be aware that a multithreaded program can and will continue after a panic in the first place (most people don't think about panics at all), and you also have to know the state of the data after every possible panic, if any. Overall, you have to be quite careful if you want to continue properly, without risking downstream bugs.

The design with a verbose ".lock().unwrap()" and no easy opt-out is unfortunate, but conceptually, I see poisoning as a perfectly acceptable default for people who don't spend all their time musing over panics and their possible causes and effects.


> If a mutex has been poisoned, then something must have already panicked, likely in some other thread, so you're already in trouble at that point.

I find that in the majority of cases you're essentially dealing with one of two cases:

1) Your critical sections are tiny and you know you can't panic, in which case dealing with poisoning is just useless busywork.

2) You use a Mutex to get around Rust's "shared xor mutable" requirement. That is, you just want to temporarily grab a mutable reference and modify an object, but you don't have any particular atomicity requirements. In this case panicking is no different than if you would panic on a single thread while modifying an object through a plain old `&mut`. Here too dealing with poisoning is just useless busywork.
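In case 2, the usual way out is to just discard the poison flag, since a panic mid-update is no worse than a panic through a plain `&mut` on a single thread. A hypothetical example of that idiom:

    use std::collections::HashMap;
    use std::sync::{Mutex, PoisonError};

    // The Mutex only exists to hand out `&mut` access across threads;
    // there's no multi-step invariant that a panic could leave half-done.
    fn record_hit(stats: &Mutex<HashMap<String, u64>>, key: &str) {
        // Ignore poisoning entirely: take the guard back out of the error.
        let mut map = stats.lock().unwrap_or_else(PoisonError::into_inner);
        *map.entry(key.to_string()).or_insert(0) += 1;
    }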

> I'd want a panic in one thread to cascade into stopping the program as soon as possible.

Sure, but you don't need mutex poisoning for this.


> 1) Your critical sections are tiny and you know you can't panic, in which case dealing with poisoning is just useless busywork.

Many people underestimate how many things can panic in corner cases. I've found quite a few unsafe functions in various crates that were unsound due to integer-overflow panics that the author hadn't noticed. Knowing for a fact that your operation cannot panic is the exception rather than the rule, and while it's unfortunate that the stdlib doesn't accommodate non-poisoning mutexes, I see poisoning as a reasonable default.
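For example, even a tiny critical section like this (hypothetical, but representative) has a couple of panic paths hiding in it:

    use std::sync::Mutex;

    // Indexing panics if `i` is out of bounds, and the addition panics on
    // overflow when debug assertions (or overflow-checks) are enabled.
    fn add_to_slot(counters: &Mutex<Vec<u32>>, i: usize, delta: u32) {
        let mut v = counters.lock().unwrap();
        v[i] += delta;
    }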

(If Mutex::lock() unwrapped the error automatically, then very few people would even think about the "useless busywork" of the poison bit. For a similar example, the future types generated for async functions contain panic statements in case they are polled after completion, and no one complains about those.)

> 2) You use a Mutex to get around Rust's "shared xor mutable" requirement. That is, you just want to temporarily grab a mutable reference and modify an object, but you don't have any particular atomicity requirements.

Then I'd stick to a RefCell. Unless it's a static variable in a single-threaded program, in which case I usually just write some short wrapper functions if I find the manipulation too tedious.


If the O(n^3) schoolbook multiplication were the best that could be done, then I'd totally agree that "it's simply the nature of matrices to have a bulky multiplication process". Yet there's a whole series of algorithms (from the Strassen algorithm onward) that use ever-more-clever ways to recursively batch things up and decrease the asymptotic complexity, most of which aren't remotely practical. And for all I know, it could go on forever down to O(n^(2+ε)). Overall, I hate not being able to get a straight answer for "how hard is it, really".
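For instance, Strassen's observation that a 2x2 block product can be done with 7 multiplications instead of 8 already gives

    T(n) = 7\,T(n/2) + O(n^2)
    \quad\Longrightarrow\quad
    T(n) = O\!\left(n^{\log_2 7}\right) \approx O(n^{2.81}),

and the later algorithms keep shaving the exponent down by trading away more and more practicality.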

For anyone interested, there is an introductory survey of the currently known bounds at: https://en.wikipedia.org/wiki/Computational_complexity_of_ma...

This is a bit interesting in that it doesn't require further interaction with the attacker once the libc address has been obtained, unlike most basic ROP examples, which I've rarely seen need anything fancier than a return-to-main. The more the chain does in a single pass, the more it might need gadgets smarter than "set register to immediate and return".


It's pretty weird; my impression is that the APIs are flexible enough to implement most sane behaviors, but websites keep managing to mess it all up. Perhaps it's just one of those things that no one bothers re-testing as the codebase changes.


In my experience, the problem is two-fold. First, product managers/owners don't consider the URIs, so they end up unspecified. They say "We should have a page when the user clicks X, and then on that page, the user can open up modal Y", but none of it is specified in terms of what happens with the URIs and history.

Then a developer gets the task to create this, and they too don't push back on what exact URIs are being used, nor how the history is being treated. Either they don't have time, don't have the power to send tasks back to product, simply don't care, or just don't think of it. They happily carry on creating whatever URIs make sense to them.

No one is responsible for URLs, no one considers that part of UX and design, so no one ends up thinking about it, people implement things as they feel is right, without having a full overview over how things are supposed to fit together.

Anyway, that's just based on my experience; I'm sure there are other holes in the process that also exacerbate the issue.


As a UX designer, I see this as a failure of the UX designers. If you're a UX designer for the web, you should be aware of web technology and be thinking about these things. Even if you don't know enough to fully specify it, you should know enough to have conversations with a developer and work together to fully spec it out.

That said, I've also worked with some developers that didn't like intruding on their turf, so to speak. Though I've also worked with others that were more than happy to collaborate and very proactive about these sorts of things.

Furthermore, as UX designers, this is the sort of topic we're unlikely to be able to meaningfully discuss with PMs and other stakeholders, as it's completely non-visual; trying to bring it up with them often ends up feeling like pulling teeth, with them wondering why we're even spending time on it. So usually it just ended up being a discussion between me and the developers, with no PM oversight.


Web developers should make it a habit to ask for/require that URL structures be part of the spec.

I've had people be surprised by the request because it's something they don't usually consider, but I've never had anyone actually push back on it.


Nothing weird about it; you see people arguing right here about whether a site should add a new history entry when a filter is set.

Interacting with the URL from JS within the page load cycle is inherently complex.

For what it's worth, I'd also argue that the right behavior here is to replace.

But that of course also means that now the URL on the history stack for this particular view will always have the filter in it (as opposed to an initial visit without having touched anything).

Of course the author's case is the good/special one where they already visited the site with a filter in the URL.

But when you might be interested in using the view/page with multiple queries/filters/parameters, it might also be unexpected: for example, developers not having a dedicated search results page and instead updating the query parameters of the current URL.

Also, from the history APIs perspective, path and query parameters are interchangeable as long as the origin matches, but user expectations (and server behavior) might assign them different roles.

Still, we're commenting on a site where the main view parameter (item ID, including submission pages) is a query parameter. So this distinction is pretty arbitrary.

And the most extreme case of misusing pushState (instead of replaceState) is sites where each keystroke in some typeahead filter creates a new history entry.

All of this doesn't even touch the basic requirement that is most important and addressed in the article: being able to refresh the page without losing state and being able to bookmark things.

Manually implementing stuff like this on top of basic routing functionality (which should use pushState) in an SPA gets complex very quickly.


> But that of course also means that now the URL on the history stack for this particular view will always have the filter in it (as opposed to an initial visit without having touched anything).

I would have one state for when the user first entered the page, and then the first time they modify a filter, add a second state. From then on, keep updating/replacing that state.

This way, if the user clicks into the page and modifies a dozen things, they can

1. Refresh and keep all their filters, or share with a friend

2. Press back to basically clear all their filters (get back to the initial state of the page)

3. Press back only one more time to get back to wherever they came from


I agree, this would be a good approach.

Unless, of course, you initially visited the page with a stateful URL.


On the other hand, I've fairly often seen this idea misused: Alice asks for Y, Bob says that it's an XY problem and that Alice really wants to solve a more general problem X with solution Z, Alice says that Z doesn't work for her due to some detail of her problem, Bob browbeats Alice with "If you think Z won't work, then you're wrong, end of story", and everyone argues back and forth over Z instead of coming up with a working solution.

Sometimes the best solution is not the most widely-encouraged one.


I've seen this too. Explicitly talking about something being an XY problem is a red flag, because the goal is usually to dismiss you with a canned answer that doesn't help you.

The point of XY problems isn't to call people out on supposedly bad behaviour, it's to push them in the right direction and provide more context.


Yes, often an issue on stackoverflow. It's one of the reasons why it can be frustrating to use as you get more experienced: if an expert is at the point of asking on stackoverflow, they're probably doing something at least a little bit unusual! But people who answer on stackoverflow mostly see questions from less experienced people, and so default to operating in that mode.

I generally try to answer the Y but also indicate that it suggests there may be an X that could be better achieved some other way, and mention Z if I'm reasonably confident in what X is. It might increase the chance that the person asking just does Y anyway even if Z would be better, but frankly that's not really my business.


Bob saying "you should use Z, end of story" is just as hardheaded and unhelpful as Bob saying "X doesn't do that, end of story".


Unfortunately, still quite common. The ego is quite the tricky one.

