
> Demystifying Gödel's Theorem: What It Actually Says

> If you think his theorem limits human knowledge, think again

https://www.youtube.com/watch?v=OH-ybecvuEo




Thanks for the pointer.

First, with Neil deGrasse Tyson I feel I'm in fairly OK company with my little pet-peeve fallacy ;-)

Yeah, as I said, I both get it and don't ;-)

And then the video loses me when it says that statements about the brain "being a formal method" can't be made "because" the finite brain can't hold infinity.

That's beyond me. Although the brain obviously can't enumerate infinitely many possibilities, we're still fairly capable of formal thinking, aren't we?

And many lovely formal systems fit nicely on fairly finite paper. And formal proofs can be checked on finite computers.
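(A minimal illustration of that last point, my addition rather than anything from the video: in a proof assistant like Lean 4, a very finite computer checks a fully formal derivation in milliseconds. The theorem names are made up for the example.)

    -- Two tiny machine-checked proofs over the naturals (Lean 4, core library only).
    theorem two_plus_two : 2 + 2 = 4 := rfl              -- verified by kernel reduction
    theorem add_zero_right (n : Nat) : n + 0 = n := rfl  -- holds definitionally for Nat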

So somehow the logic in the video is beyond me.

My humble point is this: if we build "intelligence" as a formal system, like some silicon running some fancy-pants LLM or what have you, and we want rigor in its construction, i.e. if we want to be able to say "this is how it works", then we need to use a subset of our brain that's capable of formal and consistent thinking. And my claim is that _that subsystem_ can't capture "itself". So we have to use "more" of our brain than that subsystem. So either the "AI" that we understand is "less" than what we need and use to understand it, or we can't understand it.
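(An aside, my own gloss rather than a quote from anywhere: the formal counterpart of that "can't capture itself" claim is Gödel's second incompleteness theorem, which in LaTeX shorthand reads:)

    \text{If } T \text{ is consistent, recursively axiomatizable, and extends } \mathsf{PA}, \text{ then } T \nvdash \mathrm{Con}(T)

That is, any sufficiently strong consistent formal system can't prove its own consistency; establishing Con(T) takes a strictly stronger system.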

I fully get that our brain is capable of more, and this "more" is obviously capable of very inconsistent outputs; HAL 9000 had that problem, too ;-)

I'm an old woman. It's late at night.

When I sat through Gödel back in the early 1990s in CS and then, in contrast, listened to the enthusiastic AI lectures, it didn't sit right with me. Maybe one of the AI profs made the tactical mistake of calling our brain "wet biological hardware" in contrast to "dry silicon hardware", but I can't shake off that analogy ;-) I hope I'm wrong :-) "Real" AI that we can trust because we can reason about its inner workings will be fun :-)


> My humble point is this: if we build "intelligence" as a formal system, like some silicon running some fancy-pants LLM or what have you, and we want rigor in its construction, i.e. if we want to be able to say "this is how it works", then we need to use a subset of our brain that's capable of formal and consistent thinking. And my claim is that _that subsystem_ can't capture "itself". So we have to use "more" of our brain than that subsystem. So either the "AI" that we understand is "less" than what we need and use to understand it, or we can't understand it.

I don't know if you've read Jacob Bronowski's The Origins of Knowledge and Imagination, but the latter part of his argument is essentially this. Formal systems are nice for determining truth, but they're limited, and there is always some situation that forces you to reinvent the formal system (edge cases, incorrect assumptions, rule limitations, ...).




