geysersam's comments | Hacker News

How can you make that claim? Have you ever used an LLM that hasn't encountered high school algebra in its training data? I don't think so.

I have at least encountered many LLMs with many schools' worth of algebra knowledge that still fail miserably at algebra problems.

Similarly, they've ingested human-centuries or more of spelling bee related text, but can't reliably count the number of Rs in strawberry. (yes, I understand tokenization is to blame for a large part of this. perhaps that kind of limitation applies to other things too?)


> Similarly, they've ingested human-centuries or more of spelling bee related text, but can't reliably count the number of Rs in strawberry

Sigh


That sigh might be a chronic condition, if it's happening even when people demonstrate a decent understanding of the causes. You may want to get that looked at.

The terms are too unclear here. Can you define what it means to "be able to parse human language"? I'm sure contemporary chatbots score higher on typical reading comprehension tests than most humans. You're certainly correct that LLMs "only" react to stimuli with a trained response, but I guess anything that isn't consciousness necessarily fits that description.

Good point, thanks for calling that out. I'm honestly not sure myself! On further reflection, it's probably a matter of degrees?

So for example, a soldier is trained, and then does what it is told. But the soldier also has a deep trove of contextual information and "decision weights" which can change its decisions, often in ways it wasn't trained for. Or perhaps to put it another way: it is capable of operating outside the parameters it was given, "if it feels like it", because the information the soldier processes at any given time may make it not follow its training.

A dog may also disobey an order after being trained, but it has a much smaller range of information it works off of, and fewer things influence its decision-making process. (genetics being a big player in the decision-making process, since they were literally bred to do what we want/defend our interests)

So perhaps a chat AI, a dog, and a soldier, are just degrees along the same spectrum. I remember reading something about how we can get AI to be about as intelligent as a 2-year-old, and that dogs are about that smart. If that's the case (and I don't know that it is; I also don't know if chat AI is actually capable of "disobeying", much less "learning" anything it isn't explicitly trained to learn), then the next question I'd have is, why isn't the AI able to act and think like a dog yet?

If we put an AI in a robot dog body and told it to act like a dog, would it? Or would it only act the way that we tell it dogs act like? Could/would it have emergent dog-like traits and spawn new dog lineages? Because as far as I'm aware, that's not how AI works yet; so to me, that would mean it's not actually doing the things we're talking about above (re: dogs/soldiers)


How much cost reduction does 30% AI-written code translate to? It's easy to imagine that AI doesn't write the most expensive lines of code, so it might correspond to only a 10% cost reduction.

10% is nothing to scoff at, but I don't think it should factor into the decision to rewrite existing packages or trust third parties if you're very security minded.


AI-written code still carries the cost of human orchestration, debugging, and review. Code is now cheap to write, but for there to be a net efficiency gain, those other tasks have to not bloat too much.


No, I don't think so, because if you make your assumptions early, then the same assumptions exist throughout the entire program, and that makes them easy to reason about.


You're right, but if you want `y` to have the same shape as `x`, I think you also need to slice it afterwards:

    y = linalg.solve(A, x[..., None])[..., 0]
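A minimal sketch of what I mean (the shapes below are just made up for illustration: a stack of small systems solved in one call):

    import numpy as np
    from numpy import linalg

    # illustrative shapes: a stack of 5 systems, each 3x3, with one RHS vector per system
    A = np.random.rand(5, 3, 3)
    x = np.random.rand(5, 3)

    # add a trailing axis so solve sees a (5, 3, 1) stack of column vectors,
    # then drop that axis again so y ends up with the same shape as x
    y = linalg.solve(A, x[..., None])[..., 0]
    print(y.shape)  # (5, 3)
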
I don't really mind numpy syntax much; it gets the job done in most scenarios. Numba complements it really well when the code is easier to express as a couple of nested loops.
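For instance (just a sketch; the kernel is a made-up example of the nested-loop style I mean, and it assumes numba is installed):

    import numpy as np
    from numba import njit

    # made-up example: nearest-pair distance, awkward to vectorize without
    # building an n x n distance matrix, but trivial as plain loops under numba
    @njit
    def min_pair_dist(points):
        n, d = points.shape
        best = np.inf
        for i in range(n):
            for j in range(i + 1, n):
                acc = 0.0
                for k in range(d):
                    diff = points[i, k] - points[j, k]
                    acc += diff * diff
                if acc < best:
                    best = acc
        return np.sqrt(best)

    points = np.random.rand(2000, 3)
    print(min_pair_dist(points))
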

I used Julia for a while, but found it easier to predict the performance of python+numpy+numba; Julia has a few footguns, and the Python ecosystem is just insanely polished.


Ah, the good old days again, what a beautiful vision. Decadence and laziness begone! Good luck running your bloated CI pipelines and test suites on megahertz hardware! /s


Bots love haiku. They're hopeless romantics.


I didn't interpret what they wrote as encouragement to suicide.


Isn't the question you're posing basically Pascal's wager?

I think the chance they're going to create a "superintelligence" is extremely small. That said, I'm sure we're going to have a lot of useful intelligence. But nothing general or self-conscious or powerful enough to be threatening for many decades, or possibly ever.

> Predicting the future is famously difficult

That's very true, but that fact unfortunately can never be used to motivate any particular action, because you can always say "what if the real threat comes from a different direction?"

We can come up with hundreds of doomsday scenarios, most don't involve AI. Acting to minimize the risk of every doomsday scenario (no matter how implausible) is doomsday scenario no. 153.


> I think the chance they're going to create a "superintelligence" is extremely small.

I'd say the chance that we never create a superintelligence is extremely small. You either have to believe that for some reason the human brain achieved the maximum intelligence possible, or that progress on AI will just stop for some reason.

Most forecasters on prediction markets are predicting AGI within a decade.


Why are you so sure that progress won't just fizzle out at 1/1000 of the performance we would classify as superintelligence?

> that progress on AI will just stop for some reason

Yeah, it might. I mean, I'm not blind and deaf; there's been tremendous progress in AI over the last decade, but there's a long way to go to anything superintelligent. If incremental improvement of the current state of the art won't bring superintelligence, can we be sure the fundamental discoveries required will ever be made? Sometimes important paradigm shifts and discoveries take a hundred years just because nobody made the right connection.

Is it certain that every mystery will be solved eventually?


Aren't we already past 1/1000th of the performance we would classify as superintelligence?

There isn't an official precise definition of superintelligence, but it's usually vaguely defined as smarter than humans. Twice as smart would be sufficient by most definitions. We can be more conservative and say we'll only consider superintelligence achieved when it gets to 10x human intelligence. Under that conservative definition, 1/1000th of the performance of superintelligence would be 1% as smart as a human.

We don't have a great way to compare intelligences. ChatGPT already beats humans on several benchmarks. It does better than college students on college-level questions. One study found it gets higher grades on essays than college students. It's not as good as humans on long, complex reasoning tasks. Overall, I'd say it's smarter than a dumb human in most ways, and smarter than a smart human in a few ways.

I'm not certain we'll ever create superintelligence. I just don't see why you think the odds are "extremely small".


I agree, the 1/1000 ratio was a bit too extreme. Like you said, almost any way that's measured, it's probably fair to say ChatGPT is already there.


Yes, this is literally Pascal's wager / Pascal's mugging.


In what way does academia have a monopoly on credentials?

You can start issuing your own credentials tomorrow.


I don't know where you live, but in the "developed" parts of the world this is illegal. There will either be some government agency or some council of credential-giving institutions and they will give you a license to issue degrees, or most likely they will not give it to you.


In the US, you can just make up degrees — but you have to be honest that you’re unaccredited.

Accreditation is regulated by NGOs that need government approval, and without it you can't receive financial aid (or participate in some programs), but you can hand out pieces of paper for completing your program.

https://www.ed.gov/laws-and-policy/higher-education-laws-and...


There are accreditation bodies. So I don't think your self-proclaimed engineering degree is going to help you get a job or a professional engineering license the way one from an ABET-accredited school would.

