
It’s one thing to be skeptical of the state of the art and only believe something when you actually see it working (a useful antidote against vaporware).

It’s another to keep making wrong assertions and predictions about the pace of advancement because of a quasi-religious belief that humans with meat-brains are somehow fundamentally superior.

Expecting what we collectively call “artificial intelligence” to mimic our own intelligence, our understanding of which is continuously being refined, does not seem like a quasi-religious belief.

Intelligence and consciousness are at the fringe of our understanding, so this skeptical approach seems like a reasonable, scientific way to categorize computer programs that are intended to be called “artificial intelligence”. We refine our hypothesis of “this is artificial intelligence” once we gain more information.

You’re free to disagree, of course, or call these early programs “artificial intelligence”, but to a lot of folks they don’t satisfy the crude hypothesis above. This doesn’t mean they aren’t in some ways intelligent (pattern recognition could be a kind or degree of intelligence; it certainly seems required).


The part I push back on is the confidence with which people claim these LLMs “are definitely not intelligent / thinking”.

We can’t even define clearly what human thinking is, yet so many folks claim “nope, LLMs are just pattern matching. Wake me up when it actually has a thought.”

And there are two points to make on that: first, again, we can’t even explain our own thoughts or rational thinking. And second, I’ve yet to see how it even matters.

The output of GPT-4, for example, is pretty much on par with your average person on certain topics. Whether or not it’s “truly thinking” under the hood is irrelevant, imo, if it gives a really good illusion of it.


> We refine our hypothesis of “this is artificial intelligence” once we gain more information.

You're basically saying skepticism is the correct approach and it doesn't matter if we make confident yet wrong predictions about the (lack of) future potential of AI.

I mean, sure, that works too. But I think that's basically admitting the goalposts are moving.


You can call it that if you want, but it’s not the same as goalpost shifting for well-definable things like “universal healthcare” or “trans rights”. We don’t collectively agree on what artificial intelligence is, so it makes sense that the definition is constantly refined, and that efforts which fall short are called out as such.
