
> Literally some people only communicate via one vendor with no interoperability.

Doesn't iMessage fit this bill to a tee? (Where the vendor in question also locks you into the hardware, not just the software?)


iMessage isn't really popular in Europe. Whatsapp is.

iMessage is insignificant outside of the US

No one uses iMessage in Europe.

Does it though? I feel like there's a whole epistemological debate to be had, but if someone says "My toaster knows when the bread is burning", I don't think it's implying that there's cognition there.

Or as a more direct comparison, with the VW emissions scandal, saying "Cars know when they're being tested" was part of the discussion, but didn't imply intelligence or anything.

I think "know" is just a shorthand term here (though admittedly the fact that we're discussing AI does leave a lot more room for reading into it.)


I agree with your point except for scientific papers. Let's push ourselves to use precise language, not shorthand or hand-waving, in technical papers and publications, yes? If not there, of all places, then where?

"Know" doesn't have any rigorous precisely-defined senses to be used! Asking for it not to be used colloquially is the same as asking for it never to be used at all.

I mean - people have been saying stuff like "grep knows whether it's writing to stdout" for decades. In the context of talking about computer programs, that usage for "know" is the established/only usage, so it's hard to imagine any typical HN reader seeing TFA's title and interpreting it as an epistemological claim. Rather, it seems to me that the people suggesting "know" mustn't be used about LLMs because epistemology are the ones departing from standard usage.


Colloquial use of "know" implies anthropomorphisation. Arguing that using "knowing" in the title and "awareness" and "superhuman" in the abstract is just colloquial for "matching" is splitting hairs to an absurd degree.

You missed the substance of my comment. Certainly the title is anthropomorphism - and anthropomorphism is a rhetorical device, not a scientific claim. The reader can understand that TFA means it non-rigorously, because there is no rigorous thing for it to mean.

As such, to me the complaint behind this thread falls into the category of "I know exactly what TFA meant but I want to argue about how it was phrased", which is definitely not my favorite part of the HN comment taxonomy.


I see. Thanks for clarifying. I did want to argue about how it was phrased and what it is alluding to. Implying increased risk from "knowing" the eval regime is roughly as weak as the definition of "knowing". It can equally be a measure of general detection capability as of evaluation incapability - i.e. unlikely to be newsworthy, unless it reached the top of HN because of the "know" in the title.

Thanks for replying - I kind of follow you, but I only skimmed the paper. To be clear, I was more responding to the replies about cognition than to what you said about the eval regime.

Incidentally I think you might be misreading the paper's use of "superhuman"? I assume it's being used to mean "at a higher rate than the human control group", not (ironically) in the colloquial "amazing!" sense.


I really do agree with your point overall, but in a technical paper I do think even word choice can be implicitly a claim. Scientists present what they know or are claiming and thus word it carefully.

My background is neuroscience, where anthropomorphising is particularly discouraged, because it assumes knowledge or certainty of an unknowable internal state, so the language is carefully constructed e.g. when explaining animal behavior, and it's for good reason.

I think the same is true here for a model "knowing" something, both in isolation within this paper, and come on, consider the broader context of AI and AGI as a whole. Thus it's the responsibility of the authors to write accordingly. If it were a blog I wouldn't care, but it's not. I hold technical papers to a higher standard.

If we simply disagree that's fine, but we do disagree.


I think you should be more precise and avoid anthropomorphism when talking about gen AI, as anthropomorphism leads to a lot of shaky epistemological assumptions. Your car example didn't imply intelligence, but we're talking about a technology that people misguidedly treat as though it is real intelligence.

What does "real intelligence" mean? I fear that any discussion that starts with the assumption such a thing exists will only end up as "oh only carbon based humans (or animals if you happen to be generous) have it".

Any intelligence that can synthesize knowledge with or without direct experience.

So ChatGPT? Or maybe that can't "really synthesize"?

How would ChatGPT come up with something truly novel, not related to anything it's ever seen before?

Has a human ever done that?

We obviously can; otherwise, where do our myriad complex concepts, many of which aren't empirical, come from? How could we have modern mathematics unless some thinker had devised the various ways of conceptualizing and manipulating numbers? This is a very old question [1] with a number of good answers as to how a human can [2].

1: https://plato.stanford.edu/entries/hume/#CopyPrin

2: https://en.wikipedia.org/wiki/Analytic%E2%80%93synthetic_dis...


As you link to The Copy Principle: it, or at least that summary of it, appears to be very much what AIs do.

As a priori knowledge is all based on axioms, I do not accept that it is an example of "something truly novel, not related to anything it's ever seen before". Knowledge, yes, but not of the kind you describe. And this would still be the case even if LLMs couldn't approximate logical theorem provers, which they can: https://chatgpt.com/share/685528af-4270-8011-ba75-e601211a02...


You'd have to pick something that fits:

> come up with something truly novel, not related to anything it's ever seen before?

I've never heard of a human coming up with something that's not related to anything they've ever seen before. There is no concept in science that I know of that just popped into existence in somebody's head. Everyone credits those who came before.


Here you say:

> with or without

But in the other reply, you're asking for:

> something truly novel, not related to anything it's ever seen before

So, assuming the former was a typo, you only believe in a priori knowledge, e.g. maths and logic?

https://en.wikipedia.org/wiki/A_priori_and_a_posteriori

I mean, LLMs can and do help with this even though it's not their strength; that's more of a Lean-type-problem: https://en.wikipedia.org/wiki/Lean_(proof_assistant)


Yeah I was specifically asking for synthetic a priori knowledge, which AI by definition can't provide. It can only estimate the joint distribution over tokens, so anything generated from it is by definition a posteriori. It can generate novel statements, but I don't think there's any compelling definition of "knowledge" (including the common JTB one) that could apply to what it actually is (it's just the highest probability semiotic result). And in fact, going by the JTB definition of knowledge, AI models making correct novel statements would just be an elaborate example of a Gettier problem.

I think LLMs as a symbolic layer (effectively, as a "sense organ") with some kind of logical reasoning engine like everyone loved decades ago could accomplish something closer to "intelligence" or "thinking", which I assume is what you were implying with Lean.


My example with Lean is that it's specifically a thing that does a priori knowledge: given "A implies B" and "A", therefore "B". Or all of maths from the chosen axioms.
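
To make that concrete, here is a minimal sketch of that step as a machine-checked proof in Lean (just modus ponens as a standalone theorem, nothing specific to this thread):

    -- "A implies B" and "A", therefore "B".
    theorem modus_ponens (A B : Prop) (hab : A → B) (ha : A) : B := hab ha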

So, just to be clear, you were asked:

> What does "real intelligence" mean?

And your answer is that it must be a priori knowledge, and are fine with Lean being one. But you don't accept that LLMs can weakly approximate theorem provers?

FWIW, I agree that the "Justified True Belief" definition of knowledge leads to such conclusions as you draw, but I would say that this is also the case with humans — if you do this, then the Gettier problems show that even humans only have belief, not knowledge: when you "see a sheep in a field", you may be later embarrassed to learn that what you saw was a white coated Puli and there was a real sheep hiding behind a bush, but in the moment the subjective experience of your state of "knowledge" is exactly the same as if you had, in fact, seen a sheep.

Just, be careful with what is meant by the word "belief", there's more than one way I can also contradict Wittgenstein's quote on belief:

> If there were a verb meaning "to believe falsely," it would not have any significant first person, present indicative.

Depending on what I mean by "believe", and indeed "I" given that different parts of my mind can disagree with each other (which is why motion sickness happens).


> And your answer is that it must be a priori knowledge, and are fine with Lean being one. But you don't accept that LLMs can weakly approximate theorem provers?

I said that a hypothetical system that used gen AI to interact with the world (get text, images, etc.) and then a system like Lean to synthesize judgments about those things could potentially resemble "intelligence" like humans possess.

>but I would say that this is also the case with humans

Most of the "solutions" to Gettier problems that I find compelling rely on expanding the "justified" aspect of it, and that wouldn't really work with gen AI, as it's not really possible to make logical statements about its justification, only probabilistic ones.

Wittgenstein's quote is funny, as it reminds me a bit of Kant's refutation of Cartesian duality, in which he points out that the "I" in "I think therefore I am" equivocates between subject and object.


> I said that a hypothetical system that used gen AI to interact with the world (get text, images, etc.) and then a system like Lean to synthesize judgments about those things could potentially resemble "intelligence" like humans possess.

What logically follows from this, given that LLMs demonstrate having internalised a system *like* Lean as part of their training?

That said, even in logic and maths, you have to pick the axioms. Thanks to Gödel’s incompleteness theorems, we're still stuck with the Münchhausen trilemma even in this case.

> Most of the "solutions" to Gettier problems that I find compelling rely on expanding the "justified" aspect of it, and that wouldn't really work with gen AI, as it's not really possible to make logical statements about its justification, only probabilistic ones.

Even with humans, the only meaning I can attach to the word "justified" in this sense, is directly equivalent to a probability update — e.g. "You say you saw a sheep. How do you justify that?" "It looked like a sheep" "But it could have been a model" "It was moving, and I heard a baaing" "The animatronics in Disney also move and play sounds" "This was in Wales. I have no reason to expect a random field in Wales to contain animatronics, and I do expect them to contain sheep." etc.

The only room for manoeuvre seems to be if the probability updates are Bayesian or not. This is why I reject the concept of "absolute knowledge" in favour of "the word 'knowledge' is just shorthand for having a very strong belief, and belief can never be 100%".

Descartes' "I think therefore I am" was his attempt at reduction to that which can be verified even if all else that you think you know is the result of delusion or illusion. And then we also get A. J. Ayer saying nope, you can't even manage that much, all you can say is "there is a thought now", which is also a problem for physicists viz. Boltzmann brains, but also relevant to LLMs: if, hypothetically, LLMs were to have any kind of conscious experiences while running, it would be of exactly that kind — "there is a thought now", not a continuous experience in which it is possible to be bored due to input not arriving.

(If only I'd been able to write like this during my philosophy A-level exams, I wouldn't have a grade D in that subject :P)


The toaster thing is more an admission that the speaker doesn't know what the toaster does to limit charring the bread. Toasters with timers, thermometers, and light sensors all exist. None of them "know" anything.

Yeah, I agree, but I think that's true all the way up the chain -- just like everything's magic until you know how it works, we may say things "know" information until we understand the deterministic machinery they're using behind the scenes.

I'm in the same camp, with the addition that I believe it applies to us as well since we're part of the system too, and to societies and ecologies further up the scale.

I think the parent's argument is that wherever in your framework you're calling `doth_match` and then `doeth_thou`, you have a single function that's both deciding and acting. There has to be a function in your program that's responsible for doing both.
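
As a rough sketch of what I mean (the `doth_match`/`doeth_thou` names are just the hypothetical ones from above, and the request type is made up):

    #include <stdbool.h>

    struct request { int kind; };

    bool doth_match(const struct request *req) { return req->kind == 1; }  /* deciding */
    void doeth_thou(const struct request *req) { (void)req; /* acting */ }

    /* The one place that is responsible for doing both. */
    void handle(const struct request *req) {
        if (doth_match(req)) {
            doeth_thou(req);
        }
    }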


A function that asks a question with one function call and calls another based on the answer isn’t doing any work or asking any questions. It’s just glue code. And as long as it stays just glue code, that’s barely any of your code.

Asking for absolutes is something journeymen developers need to grow out of.

The principle of the excluded middle applies to Boolean logic and bits of set theory and belongs basically nowhere else in software development. But it’s a one trick pony that many like to ride into the ground.


I am not sure that the before case maps to the article's premise, and I think your optimized SIMD version does line up with the recommendations of the article.

For your example loop, the `if` statements are contingent on the data; they can't be pushed up as-is. If your algorithm were something like:

    for (int i = 0; i < length; i++) {
        if (length % 2 == 1) {
            values[i] += 1;
        } else {
            values[i] += 2;
        }
    }

then I think you'd agree that we should hoist that check out above the `for` statement.
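
Something like this, for instance (just a sketch of the hoisted form, reusing the same hypothetical `values` and `length`, with the check done once):

    if (length % 2 == 1) {
        for (int i = 0; i < length; i++) {
            values[i] += 1;
        }
    } else {
        for (int i = 0; i < length; i++) {
            values[i] += 2;
        }
    }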

In your optimized SIMD version, you've removed the `if` altogether and are doing branchless computations. This seems very much like the platonic ideal of the article, and I'd expect they'd be a big fan!


The point was more that you shouldn't always try to remove the branch from a loop yourself, because often the compiler will do a better job.

For a contrived example, we could attempt to be clever and remove the branching from the loop in the first example by subtracting two from every value, then add three only for the odds.

    for (int i = 0; i < length ; i++) {
        values[i] -= 2;
        values[i] += (values[i] % 2) * 3;
    }

It achieves the same result (because subtracting two preserves odd/evenness, and nothing gets added for evens), and requires no in-loop branching, but it's likely going to perform no better, or even worse, than what the compiler could've generated from the first example, and it may be more difficult to auto-vectorize because the logic has changed. It may still perform better than an unoptimized branch-in-loop version, though (depending on the cost of branching on the target).

In regards to moving branches out of the loop that don't need to be there (like your check on the length) - the compiler will be able to do this almost all of the time for you; this kind of thing is a standard optimization that most compilers implement. If you are interpreting, following the OP's advice is certainly worth doing, but you should probably not worry if you're using a mature compiler, and instead aim to maximize clarity of the code for people reading it, rather than trying to be clever like this.


I don't love heavy-handed approaches like this, but it does seem very in line with small government politics.

Essentially, this is saying that the executive can't create regulations that limit what businesses can do (which would be relevant when the party in power of the executive changes).


It specifically targets the states - the executive seems free to create those regulations, by my reading?


Whoops, yep you are completely right and I was totally wrong! Bad day for my reading comprehension!


The executive doesn't care what the law says. This law constrains the states. The opposite of what the GOP believes in almost every other case where the states are sufficiently -ist/-phobic.


The parent isn't objecting to tariffs, as far as I can tell. They're objecting that tariffs, which are a power vested in Congress, are being driven by one man (the President) in the executive branch.


Congress has the ability to rein in presidential power. In extreme they can threaten impeachment.

Republicans control Congress so they are complicit in what Trump is doing.


Totally, they have the ability to rein in the executive branch for sure! I think the parent's frustration is very justified (and might even be more valid -- there exists a clear path to not have tariffs imposed by edict, and our representatives just aren't taking it).


If the people wanted Congress to rein in Trump, they would not have voted the opposition out.

This is very much what we wanted. And we clicked "yes" on the question at every electoral level.

I understand everyone's got a little buyer's remorse. At the same time, however, what if someone clicked through 4 or 5 dialog boxes all saying "Are you sure you want to do X, Y, and Z?" and then, upon the system's execution of X, Y, and Z, suddenly complained that X, Y, and Z are not what they wanted?

I don't know?

I guess I'm just saying I'd be a bit surprised at their disappointment.


> Children are sponges of information and often their parents and various information outlets don't provide them with enough to satisfy their curiosity

I don't think this is necessarily a bad thing, though -- a big part of parenting is determining when your kid is old enough for the truth. There is a lot of hard-to-swallow evil in human history, and I think it's fine for parents to shield their kids from some of that curiosity before they're ready.

I remember being a kid and watching cartoons, being frustrated that they never actually killed the bad guys. It was probably better for my development, though, that I wasn't exposed to the level of violence that would have satisfied my curiosity at that young age.


> the "art" is to be able to mate that animal.

I'm assuming (hoping?) that this was supposed to be "tame"? If not, I've got some questions about Bruce Lee.


> Hard to take him seriously when he doesn't himself pay more tax

Why? Advocating for a change doesn't imply that you need to act like that change has been made already.

For instance, I am a huge advocate for building out more sidewalks in my city. I don't currently walk everywhere, though, because the sidewalks today aren't suitable for that -- the change needs to be made first.

Likewise, I could donate all my spare money to the city to fund sidewalks, but this goes against my visceral sense of fairness: I am advocating for my local government to build sidewalks, not saying that sidewalks should be built by any means necessary and that I'm willing to foot the bill alone.


Wouldn't this make users pay for every possible feature they could ever use on a given site? For instance, in Google Maps I might use Street View 1% of the time, and the script for it is pretty bulky. In your ideal world, would I have to preload the Street View handling scripts whenever I loaded up Google Maps at all?


If you’re asking if it would incentivize us to be more careful when introducing additional interactive functionality on a page, and how that functionality impacted performance and page speed, I expect it would.

Thinking about how the web is designed today isn't necessarily good when considering how it could work best tomorrow.


> If you’re asking if it would incentivize us to be more careful when introducing additional interactive functionality on a page, and how that functionality impacted performance and page speed, I expect it would.

Not quite, I wasn't trying to make a bigger point about is/ought dynamics here, I was more curious specifically about the Google Maps example and other instances like it from a technical perspective.

Currently on the web, it's very easy to design a web page where you only pay for what you use -- if I start up a feature, it loads the script that runs that feature; if I don't start it up, it never loads.

It sounds like in the model proposed above where all scripts are loaded on page-load, I as a user face a clearly worse experience either by A.) losing useful features such as Street View, or B.) paying to load the scripts for those features even when I don't use them.


“Worse” here is relative to how we have designed sites such as Google maps today. The current web would fundamentally break if we stopped supporting scripts after page load, so moving would be painful. However, we build these lazy and bloated monolith SPAs and Electron apps because we can, not because we have to. Other more efficient and lightweight patterns exist, some of us even use them today.

If you can exchange static content, you need very little scripting to be able to pull down new interactive pieces of functionality onto a page. Especially given that HTML and CSS are capable of so much more today. You see a lot of frameworks moving in this direction, such as RSCs, where we now transmit components in a serializable format.

Trade-offs would have to be made during development, and with a complex enough application, there would be moments where it may be tough to support everything on a single page. However, I don't think supporting everything on a single page is necessarily the goal, or even the spirit, of the web. HTML imports, for example, would have prevented a lot of unnecessary compilers, build tools, and runtime JS from being created.

https://www.w3.org/TR/html-imports/

