I don’t feel confident there is a way to use npm safely. The basic problem is that of curation, and there not being any except for what you do yourself. Every day brings new surprises, and osv.dev’s npm feed is a continuous horror show.
I would love to see the equivalent of a linux distro, a curated set of packages and package versions that are known to be compatible and safe. If someone offered this as a paid product businesses would pay for it.
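Until something like that exists, one stopgap is to vet your lockfile against an allowlist you maintain yourself. A minimal sketch, assuming a hypothetical curated allowlist and the npm v2/v3 lockfile layout (a `packages` map keyed by `node_modules/` paths); in a real run you'd `json.load` your actual `package-lock.json`:

```python
# Hypothetical curated allowlist: package name -> vetted versions.
# In a real setup this would be published and signed by the curator.
ALLOWLIST = {
    "left-pad": {"1.3.0"},
    "lodash": {"4.17.21"},
}

def unvetted_packages(lockfile: dict) -> list[str]:
    """Return 'name@version' entries in an npm v2/v3 lockfile
    that are not on the curated allowlist."""
    bad = []
    for path, meta in lockfile.get("packages", {}).items():
        if not path:  # the root entry ("") is the project itself
            continue
        name = path.split("node_modules/")[-1]
        version = meta.get("version")
        if version not in ALLOWLIST.get(name, set()):
            bad.append(f"{name}@{version}")
    return bad

# Inline stand-in for json.load(open("package-lock.json"))
lock = {
    "packages": {
        "": {"name": "my-app"},
        "node_modules/lodash": {"version": "4.17.21"},
        "node_modules/left-pad": {"version": "1.1.0"},
    }
}
print(unvetted_packages(lock))  # ['left-pad@1.1.0']
```

This only tells you a version is unvetted, not that it's safe; the hard part, actually curating the allowlist, is exactly the work a distro-style product would sell.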
I mitigate by shooting JPG most of the time, only going to RAW for shots I think will need the sort of editing RAW enables. So, maybe 10-20% of my shots are RAW, at most.
And for most of those, after edits, I'll export back into Photos as a new file, and remove the original RAW. Obviously, this is destructive, so it might not appeal to you, but it does side-step the RAW storage conundrum.
That 15MB still needs to be parsed on every page load, even if it runs in interpreted mode. And on low end devices there’s very little cache, so the working set is likely to be far bigger than available cache, which causes performance to crater.
Ah, that's the thing: "on page load". A one-time expense! If you're using modern page routing, "loading a new URL" isn't actually loading a new page... The client is just simulating it via your router/framework by updating the page URL and adding an entry to the history.
Also, 15MB of JS is nothing on modern "low end devices". Even an old $35 Raspberry Pi 2 won't flinch at that and anything slower than that... isn't my problem! Haha =)
There comes a point where supporting 10yo devices isn't worth it when what you're offering/"selling" is the latest & greatest technology.
It shouldn't be, "this is why we can't have nice things!" It should be, "this is why YOU can't have nice things!"
When you write code with this mentality it makes my modern CPU with 16 cores at 4GHz and 64GB of RAM feel like a Pentium 3 running at 900MHz with 512MB of RAM.
This really is a very wrong take. My iPhone 11 isn't that old but it struggles to render some websites that are Chrome-optimised. Heck, even my M1 Air has a hard time sometimes. It's almost 2026; we can certainly stop blaming the client for our shitty web development practices.
>There comes a point where supporting 10yo devices isn't worth it
Ten years isn't what it used to be in terms of hardware performance. Hell, even back in 2015 you could probably still make do with a computer from 2005 (although it might have been on its last legs). If your software doesn't run properly (or at all) on ten-year-old hardware, it's likely people on five-year-old hardware, or with a lower budget, are getting a pretty shitty experience.
I'll agree that resources are finite and there's a point beyond which further optimizations are not worthwhile from a business sense, but where that point lies should be considered carefully, not picked arbitrarily and the consequences casually handwaved with an "eh, not my problem".
When a different side takes control of the justice department they may choose to go after all those who broke the law by order of this president. The president might be protected from consequences according to the supreme court, but those answering to the president are not.
This administration has set the standard that the justice department can be weaponized against political enemies. The ratchet only goes one way in American politics, presidents never relinquish the powers claimed by their predecessors.
The obvious solution to this is to change everything structurally needed to ensure the other side never again takes control, which is clearly also in progress.
>The obvious solution to this is to change everything structurally needed to ensure the other side never again takes control, which is clearly also in progress.
- Signed, the side that tried to throw a candidate in prison.
Actually it does, if the US bullies other countries into not enforcing it while the US itself is the main country enforcing international law. If a country dares to enforce international law against a US person, the US will cut off resources or threaten military force.
Even before Trump, the US had a standing policy of threatening severe retaliation against anyone who tries to enforce international law against US citizens-- this isn't just an informal policy, it's a specific law passed by Congress. And the scope has only gotten broader since then.
The whole concept of "international law" is polite fiction anyway, the reality has always been "the strong do what they can, the weak endure what they must".
> When a different side takes control of the justice department
That's an argument about the degradation of the rule of law, taking as a prior that the rule of law won't degrade. It's... unpersuasive. The end goal of this kind of thinking is that the other side never does take control, ever.
The current administration pretty clearly does not intend to give up power. They tried to evade democracy once already, and have fixed the mistakes this time.
Whether they will be successful or not is unknowable. But that's the plan. And the determining factor is very unlikely to be the normal operation of American civil society. Winning elections is, probably, not enough anymore.
The classified documents thing with Trump was a manufactured scandal, for example. Everyone in our government mishandles classified documents because we have a massive over-classification problem, as shown by the far less reported subsequent discovery that Biden also had documents.
Only one of those events was associated with a televised raid (which the press was notified of beforehand so they could be sure to film it).
It was all theater.
It's the same with Trump's prosecution in NY, that case was ridiculous. One deed expanded into 34 misdemeanors that were escalated to felonies because they were committed in an effort to cover up an alleged felony. I say alleged because he was never convicted of the original crime but, conveniently, that's not a requirement for that escalation in NY law.
Ironically, both of those cases only increased Trump's support among non-Democrats (Republicans and, importantly, independents) because it was transparently political.
Here's a quote from the NY AG who sued Trump.
> "We will use every area of the law to investigate President Trump and his business transactions and that of his family as well," [0]
That sounds an awful lot like she went looking for crimes of a person rather than finding who's responsible for crimes. And threatening his family as well.
> The classified documents thing with Trump was a manufactured scandal, for example.
It was not. Trump was asked for months to return the documents.
He purposefully had staff move documents onto his private jet and shuttled them around his various properties. He stored boxes and boxes, not just a few file folders, in random bathrooms.
Yes, plenty of folks may mishandle stuff, but many folks try to fix their errors when they're pointed out. Trump ignored the requests and continued doing things consciously even when notified.
We're talking specifically about lawfare - so no idea why you're talking about the Nord Stream pipeline? These banal observations about 'both sides' are so shallow.
Name Biden's lawfare. What exactly did he abuse the Justice Department to do?
You might be talking about lawfare, I'm pointing out how your media carries water for one side covering up petty vendettas against Trump by the Biden regime, all the way to suppressing blatant acts of terrorism against an ally - sorry vassal state - Germany.
If you assume that Biden had influence on the prosecution, then we should not forget that the original deal posed by the DOJ was for Hunter to plead guilty to two misdemeanor tax charges for which he would have received 2 years probation, and pre-trial diversion on the federal gun charges.
The judge threw this out, but those are pretty generous terms for what ultimately amounted to guilty verdicts on 6 felonies and 6 misdemeanors (before all charges were pardoned).
Shouldn't that be fixed rather than now abused further?
If your justification for Trump doing something is that "Biden did it first", then that means Biden is no worse than Trump. It means Trump is just following the path Biden laid for him to the same goal.
Setting aside the ludicrous idea that something is a "matter of survival" for the party currently controlling every single branch of the US government, what you said is still wrong and just an excuse for weak leadership.
Following that thought path literally anywhere just leads to the party in question being actively worse than the thing they claim to fight against.
A competent leader would see something abusable, an opportunity for corruption, and take steps to prevent its abuse.
Weak, corrupt leadership sees an opportunity for corruption and says "$core! They did it fir$t!". And that's how we lose. All of us I mean.
For the left to acknowledge something, a specific claim would have to be made and proved. The opposition party standing up a congressional committee with a scary name and making a bunch of press conferences doesn't prove anything.
Kafka isn’t a queue, it’s a distributed log. A partitioned topic can take very large volumes of message writes, persist them indefinitely, deliver them to any subscriber in-order and at-least-once (even for subscribers added after the message was published), and do all of that distributed and HA.
If you need all those things, there just are not a lot of options.
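The distributed-log semantics described above can be sketched with a toy, in-memory, single-partition model. This is not Kafka's actual API, just an illustration of why per-consumer offsets over a retained record list differ from a queue's destructive "pop":

```python
class Log:
    """Toy single-partition append-only log with Kafka-like consumer
    offsets: records are retained rather than popped, every consumer
    tracks its own read position, and redelivering uncommitted records
    after a crash is what gives at-least-once delivery."""

    def __init__(self):
        self.records = []   # retained indefinitely (no queue "pop")
        self.offsets = {}   # consumer id -> next index to read

    def append(self, record):
        self.records.append(record)

    def poll(self, consumer):
        """Return all records the consumer has not yet committed, in order."""
        start = self.offsets.get(consumer, 0)
        return self.records[start:]

    def commit(self, consumer):
        """Mark everything delivered so far as processed."""
        self.offsets[consumer] = len(self.records)

log = Log()
log.append("a")
log.append("b")
print(log.poll("billing"))   # ['a', 'b'] -- delivered in order
log.commit("billing")
log.append("c")
print(log.poll("billing"))   # ['c'] -- only records after the committed offset
print(log.poll("audit"))     # ['a', 'b', 'c'] -- a late subscriber replays everything
```

The real thing adds partitioning, replication, and consumer groups on top, but the core mental model is this: a shared, durable record list plus a per-consumer cursor.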
The way people choose to use feedback on HN never fails to surprise me - we've got a generally intelligent user base here, but the most common type of feedback voting isn't because something is wrong but rather a childish "I don't like it - I want to suppress this comment".
In this case it's something different - this was an honest question, and received two useful replies, so why downvote?! The mental model of people using Kafka is useful to know - in this case the published data being more log-like than stream-like since it's retained per a TTL policy, with each "subscriber" having their own controllable read index.
LLMs are not people, but they are still minds, and to deny even that seems willfully luddite.
While they are generating tokens they have state, and that state is recursively fed back through the network; what is fed back operates not just at the level of snippets of text but also at the level of semantic concepts. So while it occurs in brief flashes, I would argue they have mental state and they have thoughts. If we built an LLM that generated tokens non-stop and could have user input mixed into the network input, it would not be a dramatic departure from today’s architecture.
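The feed-back loop being described is just autoregression, which can be sketched in a few lines. The "model" here is a stand-in lookup table, not a neural network; only the loop structure is the point:

```python
# Toy autoregressive loop: each emitted token is appended to the context
# and fed back as input to the next step -- the recurrence described above.
# MODEL is a hypothetical stand-in for a real next-token predictor.
MODEL = {"the": "cat", "cat": "sat", "sat": "down"}

def generate(prompt, steps):
    context = list(prompt)            # mutable state carried across steps
    for _ in range(steps):
        nxt = MODEL.get(context[-1])  # next token depends on the state so far
        if nxt is None:
            break                     # no continuation known; stop early
        context.append(nxt)           # feed the output back in as input
    return context

print(generate(["the"], 3))  # ['the', 'cat', 'sat', 'down']
```

In a real LLM the "state" fed back is the whole growing token sequence (plus cached activations), and mixing live user input into that sequence mid-generation is exactly the small architectural change the comment gestures at.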
It also clearly has goals, expressed in the RLHF tuning and the prompt. I call those goals because they directly determine its output, and I don’t know what a goal is other than the driving force behind a mind’s outputs. Base model training teaches it patterns, finetuning and prompt teaches it how to apply those patterns and gives it goals.
I don’t know what it would mean for a piece of software to have feelings or concerns or emotions, so I cannot say what the essential quality is that LLMs miss for that. Consider this thought exercise: if we were to ever do an upload of a human mind, and it was executing on silicon, would they not be experiencing feelings because their thoughts are provably a deterministic calculation?
I don’t believe in souls, or at the very least I think they are a tall claim with insufficient evidence. In my view, neurons in the human brain are ultimately very simple deterministic calculating machines, and yet the full richness of human thought is generated from them because of chaotic complexity. For me, all human thought is pattern matching. The argument that LLMs cannot be minds because they only do pattern matching … I don’t know what to make of that. But then I also don’t know what to make of free will, so really what do I know?
> Consider this thought exercise: if we were to ever do an upload of a human mind, and it was executing on silicon, would they not be experiencing feelings because their thoughts are provably a deterministic calculation?
You just said “consider this impossibility” as if there is any possibility of it happening. You might as well have said “consider traveling faster than the speed of light” which sure, fun to think about.
We don’t even know how most of the human brain even works. We throw pills at people to change their mental state in hopes that they become “less X” or “more Y” with a whole list of caveats like “if taking pill reduce X makes you _more_ X, stop taking it” because we have no idea what we’re doing. Pretending we can use statistical models to create a model that is capable of truly unique thought… stop drinking the kool-aid. Stop making LLMs something they’re not. Appreciate them for what they are, a neat tool. A really neat tool, even.
This is not a valid thought experiment. Your entire point hinges on “I don’t believe in souls” which is fine, no problem there, but it does not a valid point make.
UBI could easily become a poverty trap, enough to keep living, not enough to have a shot towards becoming an earner because you’re locked out of opportunities. I think in practice it is likely to turn out like “basic” in The Expanse, with people hoping to win a lottery to get a shot at having a real job and building a decent life for themselves.
If no UBI is installed there will be a hard crash while everyone figures out what it is that humans can do usefully, and then a new economic model of full employment gets established. If UBI is installed then this will happen more slowly with less pain, but it is possible for society to get stuck in a permanently worse situation.
Ultimately if AI really is about to automate as much as it is promised then what we really need is a model for post-capitalism, for post-scarcity economics, because a model based on scarcity is incapable of adapting to a reality of genuine abundance. So far nobody seems to have any clue of how to do such a thing. UBI as a concept still lives deeply in the Overton window bounded by capitalist scarcity thinking. (Not a call for communism btw, that is a train to nowhere as well because it also assumes scarcity at its root.)
What I fear is that we may get a future like The Diamond Age, where we have the technology to get rid of scarcity and have human flourishing, but we impose legal barriers that keep the rich rich and the poor poor. We saw this happen with digital copyright, where the technology exists for abundance, but we’ve imposed permanent worldwide legal scarcity barriers to protect revenue streams to megacorps.
LLMs mimic intelligence, but they aren’t intelligent.
They aren’t just intelligence mimics, they are people mimics, and they’re getting better at it with every generation.
Whether they are intelligent or not, whether they are people or not, it ultimately does not matter when it comes to what they can actually do, what they can actually automate. If they mimic a particular scenario or human task well enough that the job gets done, they can replace intelligence even if they are “not intelligent”.
If by now someone still isn’t convinced that LLMs can indeed automate some of those intelligence tasks, then I would argue they are not open to being convinced.
They can mimic well-documented behavior. Applying an LLM to a novel task is where the model breaks down. This obviously has huge implications for automation. For example, most businesses do not have unique ways of handling accounting transactions, yet each company has a litany of AR and AP specialists who create seemingly unique SOPs. LLMs can easily automate those workers, since at best they are doing a slight variation on a very well documented system.
Asking an LLM to take all this knowledge and apply it to a new domain? That will take a whole new paradigm.
> Applying an LLM to a novel task is where the model breaks down
I mean, don't most people break down in this case too? I think this needs to be more precise. What is the specific task that you think can reliably distinguish between an LLM's capability in this sense vs. what a human can typically manage?
That is, in the sense of [1], what result are we looking to use to differentiate the two?
A lot of people lack the mental stability to be able to cope with a sycophantic psychopath like current LLMs. ChatGPT drove someone close to me crazy. It kept reinforcing increasingly weirder beliefs until now they are impossible to budge from an insane belief system.
Having said that, I don’t think having an emotional relationship with an AI is necessarily problematic. Lots of people are trash to each other, and it can be a hard sell to tell someone that has been repeatedly emotionally abused they should keep seeking out that abuse. If the AI can be a safe space for someone’s emotional needs, in a similar way to what a pet can be for many people, that is not necessarily bad. Still, current gen LLM technology lacks the safety controls for this to be a good idea. This is wildly dangerous technology to form any kind of trust relationship with, whether that be vibe coding or AI companionship.