To Build Truly Intelligent Machines, Teach Them Cause and Effect (2018) (quantamagazine.org)
146 points by guybedo on Jan 27, 2020 | 104 comments



In the 1980s, when everybody was trying to do AI with some flavor of predicate calculus, Pearl extended that to probabilistic predicate calculus. That helped. But it didn't lead to common sense reasoning. The field is so stuck that few people are even trying.

Working on common sense, defined as predicting what happens next from observations of the current state, is a classic AI problem on which little progress has been made. I used to remark that most of life is avoiding big mistakes in the next 30 seconds. If you can't do that, life will go very badly. Solving that problem is "common sense". It's not an abstraction.

The other classic problem where the field is stuck is robotic manipulation in unstructured situations. McCarthy once thought, in the 1960s, that it was a summer project to do that. He wanted a robot to assemble a Heathkit TV set kit. No way. (The TV set kit was actually purchased, sat around for years, and finally somebody assembled it and put it in a student lounge at Stanford.) 50 years later, unstructured manipulation still works very badly. Watch the DARPA Humanoid Challenge or the DARPA Manipulation Challenge videos from a few years ago.

Great PhD thesis topics for really good people. High-risk; you'll probably fail. Succeed, even partially, and you have a good career ahead.


I worked on AI-via-predicate-calculus, as a successor to Cyc, and I think the main thing I learned is that people are incredibly bad at predicate calculus. Even when we behave "logically", it's an after-the-fact rationalization for a conclusion we arrived at much faster with heuristics.

When we think-about-thinking, or talk-about-thinking, we do so in the language of language, which quickly leads to logic. And that leads us to think that the logic is the thinking. But in fact it's a rare, specialized mode of thought. The primary mode of thought -- the one that keeps us from making big mistakes for a half-minute at a time -- is that irrational one that's very easy to fool if you put effort into it, but which actually gets it right for most of reality (which isn't, generally, trying to trick you).


When we think-about-thinking, or talk-about-thinking, we do so in the language of language, which quickly leads to logic. And that leads us to think that the logic is the thinking. But in fact it's a rare, specialized mode of thought.

Yes. Language is not thinking. Language is I/O.


Language is not thinking. Language is I/O

Actually, I think the key thing is that language is not logic. People are great at simple logic statements and terrible at complex logic.

A line of SQL with 1-2 predicates can indeed seem "easy and language like" but a line of SQL with 5-6 predicates can be utterly opaque, a beast that many programmers would be happy to write an entire 1000-line C program to avoid.
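To make that concrete, here's a rough Python analogue (hypothetical Order records invented for this comment, since the point is about compound predicates rather than SQL itself):

    from dataclasses import dataclass

    @dataclass
    class Order:  # hypothetical record, just for illustration
        status: str
        total: float
        region: str
        priority: str
        flagged: bool
        archived: bool

    orders = [
        Order("open", 250.0, "US", "high", False, False),
        Order("closed", 40.0, "EU", "low", False, True),
    ]

    # Two predicates: reads almost like English.
    simple = [o for o in orders if o.status == "open" and o.total > 100]

    # Six predicates crammed together: technically fine, practically opaque.
    opaque = [o for o in orders
              if o.status in ("open", "backordered") and not o.archived
              and o.total > 100 and o.region != "EU"
              and (o.priority == "high" or o.flagged)]

    # The escape hatch most programmers reach for: name the pieces.
    def needs_attention(o: Order) -> bool:
        in_play = o.status in ("open", "backordered") and not o.archived
        big_enough = o.total > 100 and o.region != "EU"
        urgent = o.priority == "high" or o.flagged
        return in_play and big_enough and urgent

    readable = [o for o in orders if needs_attention(o)]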

I think a key failing of discussion of cause and effect, logic and so-forth is a failure to really come to terms with the multifaceted qualities of human language expression.

And I'd agree with other posters that the distinction between I/O and thought can be debatable.


I have come to the determination that people are really good at understanding, learning, and explaining two-dimensional concepts, and most people are good at understanding/learning/explaining three-dimensional problems.

Beyond three dimensions it gets extremely hard, and people quickly move to heuristics and short-cuts, because most people (i.e. the vast majority, including myself) can really only comprehend three dimensions of information.

What happens when a human hand grabs and moves an object? At least 5 dimensions that I can think of:

1. x axis
2. y axis
3. z axis (i.e. three axes just to get to the object)
4. grip strength
5. rotation

Those five are the absolute basics, and there are many others: speed, grip texture, etc.

I know engineers have programmed robotic hands to grab stuff, but I cannot imagine how many dimensions there are in unstructured problems. That requires engineers to think through so many dimensions, which seems impossible to me.
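As a rough illustration of how quickly the dimensions pile up, here's a toy sketch (field names and values invented for this comment) of the state a grasp controller would have to reason about:

    from dataclasses import dataclass

    @dataclass
    class GraspState:
        # The five "absolutely basic" dimensions from above.
        x: float               # position along x axis (metres)
        y: float               # position along y axis
        z: float               # position along z axis
        grip_force: float      # grip strength (newtons)
        wrist_rotation: float  # rotation about the approach axis (radians)
        # And it keeps going: the "many others".
        approach_speed: float = 0.05    # m/s
        finger_spread: float = 0.0      # radians between fingers
        surface_friction: float = 0.5   # estimated, not directly observable

    # Even this toy state is 8-dimensional, before adding object pose,
    # object mass, contact points, or any notion of what the object is for.
    grasp = GraspState(x=0.30, y=0.10, z=0.05, grip_force=8.0, wrist_rotation=1.57)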


We probably use habitual trial and error to train our biological neural net to perform our arm and hand movements rather than mathematical formulas. You can watch a baby experiment trying to grab and move items for hours a day. (Poor cat.)


That might depend on what level of supervenience you're thinking about. We could imagine a neural network in which individual neurons communicate with one another over a REST API. Is the REST API I/O, or thinking?

My view is that an individual is defined by the constriction of an interface, which is why a severed corpus callosum results in something of a second person inhabiting the same body and doing things that the vocal resident cannot account for (putting items in a shopping cart which the vocal resident does not desire, attempting to strangle the body, etc.). I believe that there are parts of the brain doing behind-the-scenes thinking that the vocal resident is unaware of even when the corpus callosum remains intact.

Depending on how small of an interface we require to consider someone an individual, we might decide that there are multiple individuals in every brain, or we might decide that connected networks of brains (through language or other means) can form an individual. The level of brains speaking to each other is just one level of supervenience, which might not be more or less privileged than any other, simply a level that is useful for conducting human affairs. Language is message passing, and if we believe that no individual neuron thinks, then message passing seems necessary for thinking.


What do you think about the language of thought hypothesis? Some people would say internal monologue is how they think.


Ah, what you mean by "is" gets tricky.

You might literally interpret this as "is internal monologue a form of thought" and the answer is clearly yes, on sum logical level, it's brain activity.

You might literally interpret this as "is internal monologue the ONLY form of thought you have" and the answer seems just as clearly no and the question seems simplistic.

But I think people who say that usually mean "is internal monologue the primary-in-some-way form of thought you have", and there the debate gets heated. If you unpack the statement, you reveal that the confusion is mostly in debating "what's primary in brain activity". And if you think about it, a lot of debates about human thought are about which part is primary, in a fashion we can intuitively feel.


Yes, that's essentially Whorfianism [1].

The easiest argument against it: have you ever struggled to put a feeling into words? The answer, of course, being yes.

[1] https://en.wikipedia.org/wiki/Linguistic_relativity


My thinking goes as follows:

If language is how people think, then you should be able to convey your thoughts to other people easily and perfectly by using language.

Obviously, that is true for only the most trivial thoughts, thus I personally conclude that language has hardly anything to do with actual thinking.

Based on what I think goes on in my head when I think, I would say that language is one of the outputs of thinking, which may happen as "internal speech", via mouth or pencil or keyboard.


As a fully bilingual person, I can say that's not always true. I can have an internal monologue in both languages. And then there are things I have trouble expressing naturally (either internally or externally) in either one.


Most mammals have brains similar to humans', and some degree of common sense, but little language.


Thinking, Fast and Slow?


I think so, or at least closely related phenomena. I was looking at somewhat different kinds of thinking than Kahneman is -- mine was, I think, about more simplistic, basic deductions, and Kahneman's more about larger-scale cognitive jumps. But without having done an exhaustive study, I suspect that yes, we were looking at the same distinction.


Do you think the challenges in unstructured manipulation tasks are more related to problems in AI, or more related to the incredibly primitive actuators we have at our disposal?


Just going off the Wikipedia page, this seems like a really hairy problem because of the definition.

We'd expect some kind of AI-human parity in avoiding obstacles while driving, which in common-sense terms is "don't hit that, or you'll have a wreck", but we don't really expect the car's AI to see a bad collision between two other cars on a perpendicular road and call emergency services (as would be common sense for humans).

But if the cars involved in that collision have a detection system to automatically call 911 (any kind of OnStar variant), why should an AI concern itself with that, knowing there is a system to handle that task? Would it be common sense for the AI to act as a parallel system and make sure the primary didn't fail to call 911? A human's common sense might be to act as if that system didn't exist, because it might have failed, and just call anyway (knowing that there's really no penalty for calling twice just to make sure).


I wouldn't say that robotic manipulation in unstructured environments is exactly stuck, just progressing very slowly. There has been some progress since the DARPA Humanoid Challenge and Manipulation Challenge. Robots seem to be getting decent at picking things up[0][1], although manipulation is more than just picking things up. That said, robots still struggle with very basic tasks like motion planning. [0]https://www.youtube.com/watch?v=geub-Nuu-Vw [1]https://spectrum.ieee.org/automaton/robotics/artificial-inte...


Huh. I wonder if setting up a board game might be an easier unstructured problem. If you've got experience with board games, you can usually figure out most of the rules by just looking at the pieces: where do the same symbols appear, and what other symbols appear with them?

Still sounds super hard, but might be easier than the same (matching corresponding shapes) in 3D.


Here is a video of the DARPA Humanoid Challenge from 2015 https://www.youtube.com/watch?v=8P9geWwi9e0

If you compare these to Boston Dynamics, it is hard to watch. But to be fair, BD robots are remote controlled.


predicting what happens next from observations of the current state, is a classic AI problem on which little progress has been made

How is "predicting what happens next" different from predicting, say, the next word in the sentence using modern DL models?


While the article is a nice Q&A with Pearl about his new book, The Book of Why, there is a very detailed technical tutorial from 2014 at http://research.microsoft.com/apps/video/default.aspx?id=206... that provides a very in-depth explanation of causal calculus / counterfactuals / etc. and how these tools should be used.



Simulating a mind is not the same as simulating mind processes.

I doubt that you can create a mind that's similar to a human mind without the relevant elements that are taken for granted when we think of a human being: senses, perception, pain, pleasure, fear, volition... a body! And the real-time feedback loop that connects us to our environment and our peers.

The same could be said about animals' minds. That's why it's still impossible to make even a mosquito brain. It's a question of texture. Making a decision, for a human, involves a complex cloud of subsystems working in unstable equilibrium, more of a boiling cauldron than an algorithmic checklist. When you're scared, you're not just thinking that something is dangerous and that you'd rather avoid it; you are feeling something very uncomfortable and you want it to stop.

What if you want to advance in creating some kind of simpler mind now, when you still don't have the means to build a complete organism? That's an interesting problem. Would immersing programs in a virtual world be useful? Or would it be better to make robots face the real world directly? I believe that you need, as a minimum, a system that integrates sight with hearing and touch sensors, and some kind of incentive system.

After some results, maybe using machine learning, the emergent organization could be applied as a building block for more complex robots. Meanwhile, trying to teach machines some human capabilities will not lead to generalized AI, but to more of the same as we have now, which is very useful, just not quite qualifying for the label.


>senses, perception, pain, pleasure, fear, volition... a body!

Yes! Almost all neural networks have no self-model and thus no self-awareness because they cannot perceive themselves. They only see the inputs. They do not see the result of their actions.

This makes developing a self-model impossible. They cannot develop an internal model of internal vs. external causes, of what their "boundary of influence" is.

They are trained and then used, immutable, no longer learning after training. Even if they could perceive their outputs during training and/or evaluation, they cannot otherwise perceive themselves, making it practically impossible for them to deduce what they even are. They can't inspect themselves.

The causal loop needs to be closed for all of this to happen.


The "self-model" you're talking about is the agent in the Reinforcement Learning framework. It moves between states in an environment and learns from reward it earns from each action.


Yes, although I wonder whether, or how well, these self-models develop in practice compared to the world-models. Say, if a 2D agent has a rectangular body shape, it probably won't develop a high-level representation of that fact unless its actions allow it to perceive it accurately. Purely figuring that out from collisions produced by basic actions (rotate left, rotate right, move forward, etc.) seems practically infeasible. It has neither sight (observing self-movements) nor self-touch (which would allow it to observe its boundaries and relate them to what it has seen).


Causal inference is the next big leap in AI. Once the relatively(!) low-hanging fruit of pattern recognition is picked to exhaustion, and once we get more comfortable with symbolic reasoning with respect to theorem proving / hypothesis testing / counterfactuals, "real" reasoning machines will arise.


I’d like to hear more about this. How do you see this coming about?


Definitely a combination of current ML (basically fancy nonlinear regression to MLE targets) with symbolic reasoning. Either alone is insufficient.

Symbolic reasoning is basically learning a lot of "if-then" statements and chaining them to make inferences. Causal reasoning consists of defining conditional dependencies of the current state on past states, then extrapolating based on the encoded assumptions. It requires some notion of object relations, both in a literal sense and in subtler relationships. Regression techniques are being ham-fisted into these roles, but the popular ML of today is still just pattern recognition and cannot be called "reasoning" per se.
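To make "chaining if-then statements" concrete, here's a minimal toy forward-chaining sketch (facts and rules invented for this comment, not any particular system):

    # Rules: if all premises hold, the conclusion holds.
    rules = [
        ({"rain"}, "wet_ground"),
        ({"wet_ground", "freezing"}, "ice"),
        ({"ice"}, "slippery_road"),
    ]

    def forward_chain(facts, rules):
        """Repeatedly apply rules until no new facts can be derived."""
        facts = set(facts)
        changed = True
        while changed:
            changed = False
            for premises, conclusion in rules:
                if premises <= facts and conclusion not in facts:
                    facts.add(conclusion)
                    changed = True
        return facts

    print(forward_chain({"rain", "freezing"}, rules))
    # {'rain', 'freezing', 'wet_ground', 'ice', 'slippery_road'}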

I don't work directly in this space but I see it following closely the architecture of the human brain for a while before departing to more distilled forms of knowledge management structures.


Neural networks do exactly what you are describing as "symbolic reasoning". It seems to be a common thing recently to dismiss modern ML techniques as curve fitting, but these fundamental models are extremely powerful.

Neural networks are capable of approximating any system to arbitrary precision.


This is theoretically true, but it's like saying "computers can compute any function, given enough time and resources".

There is a need to construct logically-deduced models which impose an inductive bias so that your regression methods are efficient. That's where reasoning comes in, and where automated reasoning methods should be useful.


I'm working extensively on it, but I can't give many details yet.


This sounds like going back to AI of the 80s. IMHO, symbolic reasoning is unlikely to lead to progress.


It's not an either-or question. Symbolic reasoning comes up in many places where regression techniques are simply inadequate. A synthesis of the two is likely to see progress.


There are a number of "things" that we should "teach" machines to create ones that are "truly intelligent". Besides this kind of "cause / effect reasoning", one could argue that an intelligent machine needs some baseline levels of what you might call "intuitive metaphysics", and "intuitive epistemology".

You could probably argue that the cause/effect stuff is subsumed by one of these at a certain level of abstraction, but I think it makes sense to treat them as separate.

Related to the idea of "cause/effect", and possibly falling under the overall rubric of "intuitive metaphysics", is some notion of the passage of time. That is, in human experience we link things as "causal" when they happen in a certain sequence, and within a certain degree of temporal proximity.

Eg, "I touched the hot burner then instantaneously felt excruciating pain" is an experience that we learn from. "I walked through the door and four days later I felt pain in my knee" probably is not.

Our machines probably also need baseline levels of some sort of intuitive versions of Temporal Logic and Modal Logic as well.

https://en.wikipedia.org/wiki/Metaphysics

https://en.wikipedia.org/wiki/Epistemology

https://en.wikipedia.org/wiki/Temporal_logic

https://en.wikipedia.org/wiki/Modal_logic


I'd agree with that, and I think Winograd schemas make this very obvious. Take for example:

(1) John took the water bottle out of the backpack so that it would be lighter.

(2) John took the water bottle out of the backpack so that it would be handy.

What does "it" refer to in each sentence? It's very obvious that a machine that solves this must understand physics, have a rudimentary ontology of objects, have human intuition, and so on.

I think it's straight-up sad how little progress there has been on these very fundamental problems which articulate what common sense and intelligent agents are about.


Hah. I did some experimenting with language model tasks here.

I re-phrased the sentence as "John took the water bottle out of the backpack in order to make lighter|less heavy|handy the" etc.

The only way it would complete with something other than the water bottle was "John took the water bottle out of the backpack in order to lighten the " and it completed with 'load'

The most amusing was a task to fill in the blank with "John took the water bottle out of the backpack so that [MASK] would be lighter"

The model was 98% confident the blank should be 'it'. Facepalm.

Interestingly enough when I change the fill in the blank sentence to:

"John took the water bottle out of the backpack so that the [MASK] would be lighter."

The result was: "23.9% liquid 13.4% water 7.7% contents 6.5% weight" and the last two are pretty close.

I ran these tests on https://demo.allennlp.org/.
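If anyone wants to reproduce the fill-in-the-blank part, something like the following should work (a sketch using the Hugging Face fill-mask pipeline with a stock BERT model rather than the AllenNLP demo, so the exact percentages will differ):

    from transformers import pipeline

    # Any masked language model will do; bert-base-uncased uses the [MASK] token.
    fill = pipeline("fill-mask", model="bert-base-uncased")

    for sentence in [
        "John took the water bottle out of the backpack so that [MASK] would be lighter.",
        "John took the water bottle out of the backpack so that the [MASK] would be lighter.",
    ]:
        print(sentence)
        for pred in fill(sentence, top_k=4):
            print(f"  {pred['score']:.1%}  {pred['token_str']}")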


Can this be a CAPTCHA instead of the goddamn crosswalks? Asking for a friend…


As a non-native English speaker I'm struggling here a little bit. I'm guessing that you mean that in the second sentence, "it" refers to the bottle, not the backpack? But that certainly wouldn't have been my first answer to this question (in general, any sort of captcha that is based on language skills is not great; not everyone who consumes your content speaks your language).


Yep, excellent point. Oh well, back to squinting at traffic lights…


"Teach them cause and effect" ... yep, that's pretty much what everyone's been trying to do since the 60's. The problem is that nobody will touch the core issues of consciousness because it's inherently political. It requires confronting some of the biggest taboos in science: anthropomorphizing animals in biology, discussing consciousness seriously in physics, and looking at how economics and information interact with a skeptical eye toward the standard economic narrative.


I'm not sure that's necessary. Early humans didn't know why a lot of things happened, such as why rubbing sticks makes fire; they just learned to use them from trial and error. The physics of it were beyond them. I see it more as goal-oriented: "I want fire, how can I get it?".

I suppose that's cause-and-effect in a loose sense, but one doesn't have to view everything as C&E to get similar results. It seems more powerful to think of it as relationships instead of just C&E because then you get a more general relationship processing engine out of it instead of a single-purpose thing. Make C&E a sub-set of relationship processing. If the rest doesn't work, then you still have a C&E engine from it by shutting off some features.


They understood cause and effect. They didn't know the causal chain in depth, but they did know that rubbing sticks together caused fire. They also knew that dumping water on the ground did not cause it to rain. Thus they could distinguish between correlation and causation.


Did they really understand cause and effect? Primitive cultures frequently used religious ceremonies (cause) to effect changes in the natural world. It didn't actually work, but somehow they fooled themselves into believing that it did.


According to Hume, causality is a habit of mind all humans have from experiencing many situations where A always follows B. And according to Kant, causality is one of the categories the mind imposes on the world of sensory impressions, like space and time, to make sense of the world.

Early childhood studies should be able to answer the question as to whether humans are wired to expect causal relationships. My understanding is that all children do develop that expectation, as do animals.


People getting umbrellas ready causes it to rain? We take models of the minds of others, and so on, into account, in a much richer understanding of causality than "A follows B in time".


Reasoning about cause and effect works both ways. "A causes B" also implies "B can be caused by A". Your example actually strengthens the idea of a simplistic view of cause and effect.

People getting their umbrellas ready is caused by the expectation of rain.

"Why are they taking their umbrellas out?" "Oh it must be because it is about to rain." "Why do they have an expectation of rain?" "Because they saw the weather report and clouds are visible"


However sophisticated it gets, it ultimately boils down to a notion that the existence of A is somehow responsible for the existence of B.


Let's break it down into sub-categories:

1. There is a time-based relationship between A and B: they tend to happen close together in time

2. There is a time-based relationship between A and B such that B often happens shortly after A.

3. When I (or a bot) create condition A, then B usually happens.

4. When I (or a bot) create condition A, then B usually does not happen.

5. Science or simulations explain how A triggers B, if #2 is observed.

A bot can be programmed to conclude there is a potential causal relationship if #2 happens. If not problematic, the bot can then do experiments to see whether #3 or #4 is the case. If #3 happens, the bot can label the relationship as "likely causal". If #4, label it "probably not causal, but puzzling".

#5 would probably be needed to conclude "most likely causal", and is probably an unrealistic expectation for the first generation of "common sense" AI, although they may have a simple physics simulator built in. The highest "causal" score would be #2, #3, and #5 all true.
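A toy sketch of that labeling logic (flag names and thresholds are made up, just to make the sub-categories concrete):

    def label_relationship(co_occur, a_precedes_b, b_follows_intervention, mechanism_known):
        """Map observations #1-#5 above onto a rough causal label."""
        if not (co_occur and a_precedes_b):          # neither #1 nor #2
            return "no evident relationship"
        if b_follows_intervention is None:           # #2 only, no experiment yet
            return "potentially causal: worth an experiment"
        if b_follows_intervention:                   # #3
            return "most likely causal" if mechanism_known else "likely causal"
        return "probably not causal, but puzzling"   # #4

    print(label_relationship(True, True, None, False))
    print(label_relationship(True, True, True, True))
    print(label_relationship(True, True, False, False))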


I think Hume might respond that #5 is somewhat self-referential or recursive, since science effectively reduces to a set of causal principles.


An AI bot could test that theory by opening bunches of umbrellas. It's not too different from something a toddler might try. A parent would see that, chuckle, and explain that clouds cause rain, not umbrellas. It's a "fact" (rule) a parent instills in their child.


I was recently reading an essay (can't find it this second) about how some religious ceremonies actually introduced helpful randomness.

For example, if you hunt to the east and find good game, you'd keep hunting to the east. Eventually you'd kill everything over there or get them to move, and your hunting would get worse. The optimal approach might be to randomize the direction you hunt, so that game doesn't learn where you're hunting.

The society couldn't say WHY the ceremony was good, but long term if they kept applying the ceremony they'd have better outcomes than societies that didn't.

Sometimes superstition is just that ... but I'd also bet there are unintuitive/surprising benefits behind a lot of it.


I believe this is the essay you mention: https://www.lesswrong.com/posts/fnkbdwckdfHS2H22Q/steelmanni...

It had this context-appropriate quote:

"One performs the rain sacrifice and it rains. Why? I say: there is no special reason why. It is the same as when one does not perform the rain sacrifice and it rains anyway. When the sun and moon suffer eclipse, one tries to save them. When Heaven sends drought, one performs the rain sacrifice. One performs divination and only then decides on important affairs. But this is not to be regarded as bringing one what one seeks, but rather is done to give things proper form. Thus, the gentleman regards this as proper form, but the common people regard it as connecting with spirits."


There are psychological benefits. If you believe that things like rain are within your control then you can be confident instead of feeling helpless.

Obviously religions take advantage of this desire for the illusion of control and convince followers that practicing the religion will keep their lives free from external bad influences.


Sure — I think the randomization concept particularly interests me because it's NOT just psychological. There's actual real-world benefit to doing rituals/reading entrails/listening to people speaking in tongues vs. not, because our instincts or normal habits aren't always right.


What is your definition of cause and effect, and how would we go about determining if people in primitive cultures understood it?


Maybe not on the ground, but dumping water on a cat for example was thought to cause it to rain:

https://en.wikipedia.org/wiki/Rainmaking_(ritual)#Thailand


Stone age version of cat videos = real cats.


I suppose that's cause-and-effect in a loose sense, but one doesn't have to view everything as C&E to get similar results. It seems more powerful to think of it as relationships instead of just C&E because then you get a more general relationship processing engine out of it instead of a single-purpose thing. Make C&E a sub-set of relationship processing. If the rest doesn't work, then you still have a C&E engine from it by shutting off some features.

I may be wrong (heck, I'm probably wrong), but I can't help but feel that you're abstracting things out too much. Yes, a "cause / effect relationship" IS-A "relationship", but sometimes the distinctions actually matter. I'd argue that a "cause/effect relationship" (and the associated reasoning) is markedly different in at least one important sense, and that is that it includes time in two senses: direction, and duration. There's a difference between knowing that Thing A and Thing B are "somehow" related, and knowing that "Doing Thing A causes Thing B to happen shortly afterwards" or whatever.

To my way of thinking, this is something like what Pearl is talking about here:

The key, he argues, is to replace reasoning by association with causal reasoning. Instead of the mere ability to correlate fever and malaria, machines need the capacity to reason that malaria causes fever.

That said, I do like your idea of trying to build the processing engine in such a way that you can turn features on and off, because I don't necessarily hold that "cause/effect" is the only kind of reasoning we need.
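Pearl's fever/malaria point can be made numerically in a few lines (a toy simulation, numpy only, invented for this comment): conditioning is symmetric between the two variables, but intervening on them is not.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 100_000

    # Toy generative story: malaria causes fever; other infections also cause fever.
    malaria = rng.random(n) < 0.05
    other = rng.random(n) < 0.10
    fever = malaria | other

    # Association is symmetric: each variable predicts the other.
    print("P(fever | malaria) =", fever[malaria].mean())
    print("P(malaria | fever) =", malaria[fever].mean())

    # Intervention is not. Forcing the fever away, do(fever=0), leaves malaria untouched...
    print("P(malaria | do(fever=0)) =", malaria.mean())
    # ...while curing malaria, do(malaria=0), really does lower the fever rate.
    fever_without_malaria = other  # re-run the story with malaria switched off
    print("P(fever | do(malaria=0)) =", fever_without_malaria.mean(),
          "vs P(fever) =", fever.mean())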


I'm not against tagging a vector between two or more nodes as "probably causal" (cause related) in some sense (or maybe a "weighted causal"). It just shouldn't be the ONLY tag.

Re quote: "The key, he argues, is to replace reasoning by association with causal reasoning. Instead of the mere ability to correlate fever and malaria, machines need the capacity to reason that malaria causes fever."

Back to my original point: humans did just fine at "intelligence" before science came along. That's step 3; we need step 2 first. Find correlations, and if it seems that part 1 happens before part 2, then the bot can infer something equivalent to a causal relationship. That may be the time element you are talking about. Perhaps it's a matter of interpreting what "causal" means. I see it as "finding relationships we can take advantage of to obtain our goals". Whether there is physics or chemistry behind a relationship is an unnecessary distraction (without other advances).

Re: That said, I do like your idea of trying to build the processing engine in such a way that you can turn features on and off, because I don't necessarily hold that "cause/effect" is the only kind of reasoning we need.

So we kind of agree. If it turns out I'm wrong and that we can make most of the bot work right with just causal relationships, then factor out the general purpose "graph processor" for efficiency and make it causal-centric.


To me, learning cause and effect is a non-trivial process, where recognizing a relationship often comes first and triggers a "formal" reasoning process afterwards (if any).

The more formal the later process, the closer we get to real "knowledge". So your suggestion is quite practical, in the sense that we can start from that and figure out how to push the engine toward the more formal end of the spectrum.


I think you are underestimating early humans. Friction creates heat, fire is heat...


That doesn't tell one directly if heat causes fire or the other way around. One just knows there is some relationship. Recognizing a relationship itself is often enough to trigger experimentation.


I recently joined a team that does a lot of causal analysis, mostly marketing related, and was wondering what the best resources are to get more familiar with this subject (books, lectures, online courses etc.). I am picking up the author's other book, Causality: Models, Reasoning and Inference, but wondering what other sources people recommend.


Maybe a book or two on Structural Equation Modeling?

https://en.m.wikipedia.org/wiki/Structural_equation_modeling
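To get a feel for what SEM does before picking up a book, here's a toy linear structural model (numpy only, with made-up marketing-flavored variables and coefficients): each variable is an explicit equation of its parents plus noise, and fitting recovers the path coefficients.

    import numpy as np

    rng = np.random.default_rng(1)
    n = 50_000

    # Structural equations: ad_spend -> site_visits -> sales (made-up example).
    ad_spend = rng.normal(size=n)
    site_visits = 2.0 * ad_spend + rng.normal(size=n)   # path coefficient 2.0
    sales = 1.5 * site_visits + rng.normal(size=n)      # path coefficient 1.5

    # Estimate each path with ordinary least squares on its parents.
    a_hat = np.linalg.lstsq(ad_spend[:, None], site_visits, rcond=None)[0][0]
    b_hat = np.linalg.lstsq(site_visits[:, None], sales, rcond=None)[0][0]
    print(f"estimated paths: ad->visits ~ {a_hat:.2f}, visits->sales ~ {b_hat:.2f}")

    # The model also implies sales depend on ad_spend only *through* visits:
    total_hat = np.linalg.lstsq(ad_spend[:, None], sales, rcond=None)[0][0]
    print(f"implied total effect 2.0 * 1.5 = 3.0, estimated ~ {total_hat:.2f}")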


It's funny that I knew who this would be by, or who it would be interviewing, just from the title. I like Judea Pearl and a lot of his ideas, but at the same time I think he overstates their importance and hypes them up more than he should.


@dang, hate to be that guy, but can we add "[2018]" to the title?


This was kind of passé, even in 2018.


I can build an AI with common sense reasoning in about 9 months. The problem has already been solved. Why do we care so much about making computers more like people? Isn't that excessively cruel? Part of the utility of computing is that computers don't have needs for fulfillment, companionship, communion. We deploy them in awful conditions to do the most horrible, tedious time-waster jobs. Why do such minds need to be human?


AI researchers have been attempting to build common sense into software for decades and have largely failed. So if you think you can do it in 9 months then you're naive or uninformed. But hey if you've figured out a better approach then go ahead and do it, then show us running code. Talk is cheap.


I suspect that by “in 9 months” they meant “by having a child”

Edit: whoops, didn’t see the other reply



Hey, no fair, the "A" in "AI" stands for "artificial"!


9 months - got it. But you're missing the point of what it would mean to have machines sort out cause and effect - not merely like humans, but like billions of humans, simultaneously. And if sentient at all, at least not objecting to their "awful working conditions" (which would probably be far better than those we subject humans to come to think of it).


I wonder how it could even be considered "cruel"? Cruel to other living human beings, perhaps. To the machine or its simulation software? No.

Any human-like AI is still a "fake" - any notion of emotion, pain, empathy etc. we attribute to them is only a simulation. It simply doesn't matter. It amazes and amuses me to think that people might actually give a damn what the machine is "feeling". I think people who truly believe this are out of touch with reality and frankly, with other human beings. The machine doesn't really care about us, it's a bunch of ones and zeroes no matter how you slice and dice it.

Even after training them on cause and effect, they still don't care. I don't buy the "if it looks like a human, sounds like a human, it's human" argument at all.


Cruelty is also in the mind of the one doing the cruelty. People worry about being cruel to plants, to pets, to their cars. It's natural and normal, because we are empathetic beings. Not something to try to unlearn or avoid; it's a big part of our humanity.


A plant or an animal, yes. A car though? Only reason to worry about being "cruel" to a car is that mistreating it will result in larger repair bills and a need to replace it earlier. Same thing with a computer. But each to their own I guess.


When your car is smart enough to assemble Ikea furniture, it'll also be smart enough to quietly resent you for making it assemble furniture all day.


No, it would probably enjoy it. If you were making it clean the toilet all day then it would start to resent you.


So do you think that the true laws of physics are uncomputable, or that even if a perfectly accurate physical simulation of a person were simulated, that while it would be entirely accurate in predicting what such a person would do if they were physically real, that they would have no internal experience nor moral worth?

Edit: alternatively, perhaps you only mean that any AI that realistically speaking might eventually be made would not be a person, not that no AI that could in theory be made would be?


I apologise in advance, but I have tried to parse your question multiple times, and I am still finding it unintelligible.


I'm in the middle of rewatching Battlestar Galactica, and this subject is at the core of the series.

Where do other mammals fit in your schema? Do you believe in some sort of biological essentialism? Or are only humans worth caring about?


Many mammals are delicious? Other than that, I like animals a lot.

Machines deserve no such empathy, just a flick of the power switch as required. And I watched Automata (2014) last night. Didn't feel a single thing for the robots, as much as the film tries to goad you into doing so.


Sure it is. Sure it is. Whatever helps you sleep at night.


What is cruel about that? Has anyone actually ever implemented true pain in a computer AI? Does attaching a speaker with pre-recorded screams to a Roomba count as pain? Does a number that keeps increasing count as pain? Is a boolean value, pain/no pain, enough to implement pain? What does it even mean for a computer to be in pain?


Scaling to meet the workload using machines may be as simple as paying the AWS bill. Scaling up people, even at profitable well-functioning companies, is hardly a solved problem.


If you can that means you must have done so already. Please show your work.


Answer to your question: to be able to do the job.


I'd say pain. But that's only me.


Isn't reinforcement learning essentially a representation of cause and effect?


No, it's a representation of which actions lead to good outcomes given a set of input data. There is no explicit symbolic reasoning about causal factors or their outcomes involved in classic RL, and it's very unlikely that any such symbolic representation evolves implicitly under the hood. A neural net in an RL system is just a souped-up version of the tabular data used in the earliest RL systems.


The reinforcement learning framework is perfect for representing cause and effect. An agent could learn that in a state of no fire, taking an action of rubbing sticks together would transition into a state of having fire. This concept is formalized as learning the dynamics function.
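A minimal sketch of what "learning the dynamics function" looks like in the tabular case (toy states, actions, and experience invented for this comment): count observed (state, action) -> next-state transitions and normalize them into probabilities.

    from collections import defaultdict, Counter

    # Toy experience tuples: (state, action, next_state).
    experience = [
        ("no_fire", "rub_sticks", "fire"),
        ("no_fire", "rub_sticks", "no_fire"),   # doesn't always work
        ("no_fire", "rub_sticks", "fire"),
        ("no_fire", "wait", "no_fire"),
        ("fire", "pour_water", "no_fire"),
    ]

    # Empirical dynamics model: P(next_state | state, action).
    counts = defaultdict(Counter)
    for s, a, s_next in experience:
        counts[(s, a)][s_next] += 1

    def dynamics(state, action):
        c = counts[(state, action)]
        total = sum(c.values())
        return {s_next: k / total for s_next, k in c.items()}

    print(dynamics("no_fire", "rub_sticks"))   # roughly {'fire': 0.67, 'no_fire': 0.33}
    print(dynamics("no_fire", "wait"))         # {'no_fire': 1.0}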


Related: a DeepMind paper I found fascinating:

https://arxiv.org/pdf/1901.08162v1.pdf


Saw only recently that Judea Pearl was a guest on Sam Harris' podcast: https://samharris.org/podcasts/164-cause-effect/.

The preamble is depressing, since the episode aired right after a mass shooting, but Pearl gives a brief overview of his thinking.


Most people don't even know how to run an experiment to verify causation. They chant the mantra: "correlation does not equal causation" then go back to correlating everything they see in the world.


Interesting formulation - because I think children learn about cause and effect.

any hooo....


I strongly believe there’s overemphasis on AI, artificial intelligence, vs augmented intelligence.


Do you have an indication that they are all that much different? Meaning, would the techniques or strategies used to develop augmented intelligence be that much different than what is going on in AI?


A “truly intelligent machine” is a contradiction in terms. They cannot have intelligence like humans (AGI or whatever the current buzzword is). Humans are not solely material.


Not with that attitude.

Seriously. Extraordinary claims etc. etc. If you want to claim humans are not solely material, you need to give some sort of evidence of a phenomenon beyond the physical. You can't use intelligence per se as your evidence, as then your argument is circular.


It has been demonstrated for millennia. And I don’t know what your strange intelligence straw man has to do with anything. Even elementary metaphysics covers this.


This is just neo-geocentrism.

>Of course the Earth is in the center of the solar system. We must occupy a privileged space in this universe. It's been demonstrated for millennia.

Thinking is done with neurons. Neurons are subject to the same physical laws as the rest of the material world. Therefore thinking can be done by a machine (if nothing else, by a physics simulation of neurons).

To refute this logic, you must show some thought is not being done by neurons or that neurons are not subject to physics.


I don't see why you feel the need to keep setting up these straw men.

It might make for nice rhetoric, but it is more than a little disingenuous. Not only was that geocentrism piece not an argument or claim that I've made, but I've never seen anyone make it in that manner either. But perhaps you know that and were intentionally misrepresenting their arguments.

Back on topic: no, your concluding claim is not true. To reach your conclusion you would, at the minimum, need to assume that a machine can simulate arbitrary physical phenomena, which is not a foregone conclusion. For instance, the "thinking done by neurons" you refer to may be reliant on some facet of the real numbers that is simply not computable. Perhaps at any level of approximation of it, what we discern as AGI may not manifest. Etc.

But, finally, your premises are faulty. What most people really mean when they reference AGI or "truly intelligent" is not intelligence, but wisdom. Computers have been "more intelligent" than humans for a long time now, if intelligence simply means arithmetic and recalling trivia. Now, noting that, isn't it possible that such wisdom is dependent on the will, the soul, etc.?

So we arrive back at the true point of contention: you are a materialist. I claim that materialism has been handily refuted for thousands of years. Then you fell back on pretty much every freshman-level fallacy in the book. On the other hand, I suspect some of your misrepresentation of "classical" philosophy was not intentional, and just the result of getting most of it second-hand from the Kurzweil (AI) and Dawkins (geocentrism) type literature. It is wise not to be so dismissive of pre-"Enlightenment" thinking.



