In an experiment run at the Laboratory of Intelligent Systems at the École Polytechnique Fédérale de Lausanne in Switzerland*, robots that were designed to cooperate in searching out a beneficial resource and avoiding a poisonous one learned to lie to each other in an attempt to hoard the resource. Picture a robo-Treasure of the Sierra Madre.
The experiment involved 1,000 robots divided into 10 different groups. Each robot had a sensor, a blue light, and its own 264-bit binary code "genome" that governed how it reacted to different stimuli. The first-generation robots were programmed to turn the light on when they found the good resource, helping the other robots in the group find it.
The robots got higher marks for finding and sitting on the good resource, and negative points for hanging around the poisoned resource. The 200 highest-scoring genomes were then randomly "mated" and mutated to produce a new generation of programming. Within nine generations, the robots became excellent at finding the positive resource and at using their lights to direct the others to it.
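The selection scheme described above is a textbook genetic algorithm. Here is a minimal sketch in Python of that loop; the genome length, population size, elite count, and generation count come from the article, while the mutation rate and the fitness function are invented stand-ins (the real fitness depended on time spent at the good and poisoned resources in the arena):

```python
import random

GENOME_BITS = 264      # per the article: each robot's controller is a 264-bit string
POP_SIZE = 1000        # 1,000 robots
ELITE = 200            # the 200 highest-scoring genomes are "mated"
MUTATION_RATE = 0.01   # assumed; the article does not give the real rate

def random_genome():
    return [random.randint(0, 1) for _ in range(GENOME_BITS)]

def fitness(genome):
    """Hypothetical stand-in. The real score rewarded time near the
    good resource and penalized time near the poisoned one."""
    return sum(genome)

def crossover(a, b):
    # Single-point crossover: splice two parent genomes at a random cut.
    cut = random.randrange(1, GENOME_BITS)
    return a[:cut] + b[cut:]

def mutate(genome):
    # Flip each bit independently with a small probability.
    return [bit ^ 1 if random.random() < MUTATION_RATE else bit
            for bit in genome]

def next_generation(population):
    elite = sorted(population, key=fitness, reverse=True)[:ELITE]
    return [mutate(crossover(random.choice(elite), random.choice(elite)))
            for _ in range(POP_SIZE)]

population = [random_genome() for _ in range(POP_SIZE)]
for generation in range(9):   # the article reports competence within nine generations
    population = next_generation(population)
```

Swapping in a fitness function that actually simulates the arena is the hard part; the selection loop itself is this simple.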
However, there was a catch. A limited amount of access to the good resource meant that not every robot could benefit when it was found, and overcrowding could drive away the robot that originally found it.
After 500 generations, 60 percent of the robots had evolved to keep their light off when they found the good resource, hogging it all for themselves. Even more telling, a third of the robots evolved to actually look for the liars by developing an aversion to the light, the exact opposite of their original programming!
So far, the research has more application in explaining the evolution of behaviors in the natural world than in developing new programming for robots. But if you think that means I'm one step closer to trusting robots, then you're probably the sort who's attracted to the blue light.
*The article previously misidentified Lausanne as being in France.
That's awesome! I don't know what to say about this article. It's very entertaining, but what does it mean for the advancement of robotics?
Could this possibly turn into a game of trying to improve your robots as much as possible?
Whoa whoa whoa. You lost me at "mated." I think THAT'S the breakthrough here, not the lying.
This looks a lot like something I read two and a half years ago. I know there's probably some delay getting research papers published, but from the details in the two-year-old article, it seems like they had a whole lot done. I'm no science guy... can anyone explain why I'm hearing about this two years later or what the difference is between this recent story and the two-year-old one in this link?
Is it just me or does this seem a little too, uhmmmm, stupid? I mean, yes, it is an amazing thing, but we're doomed now. lol
So they can lie? Now if we can just get them to kill each other off they will be truly human.
And mating robots ... lol
Pretty awesome. It is more of an evolution study though.
Can they communicate by any means other than the blue light?
If they all had some type of two way radio inside the groups I wonder if we would start seeing team work to deceive the other robot groups.
P.S. Lausanne is in Switzerland.
Look at that.....intelligent design in action.
The philosophical errors in this post and the following comments are so varied and multitudinous that one could perhaps earn a PhD merely by taking the gargantuan effort to refute them one by one: a simple task, yes, but an enormous undertaking as well.
I shall only begin by noting the pervasive anthropomorphism involved in the author's description of the robots' behavior. This is of course the fatal flaw in your argument, as it begs the question entirely. You claim that robots have demonstrated human-like behavior, and yet it is you who have merely interpreted their behavior as human-like. There are birds, insects, fish, and myriad other creatures which exhibit apparent forms of deception and altruism, all of which may be anthropomorphized, but which are more effectively described and predicted within the discourse of evolutionary ecology and behavioral ecology. These machines are not exhibiting human-like behavior; they are exhibiting robot-like behavior, in the same way that birds are bird-like and fish are fish-like.
In fact, we have no reason to assume at all that robots would evolve human-like intelligence or reasoning whatsoever, especially considering the vast differences that characterize human and computer/robotic memory and recall, as well as in processing capacities and in sensory equipment. No doubt the likelihood that human and robotic intelligence would not resemble one another is increased in any robot system that develops itself outside of direct human involvement (i.e., Evolutionary Computation); we can of course always *try* to make robots that resemble ourselves, but left to their own devices (sorry, no pun intended), I have serious doubts about their resemblance to humans at all.
stumbled here, but will bookmark to check responses.
Definitive proof that complicated behaviors develop naturally based on survival of the fittest
... the article doesn't state anything about human-like behavior...
The fact that the algorithm enables robots to develop certain behaviors is very useful in many cases. "Earning points", "assigning costs" etc. are actually very commonly used in many different applications, and this is nothing new. The only thing changing is that, as the visual processing, motion mechanisms etc. evolve, the same principle enables robots to perform different tasks. Playing chess, for instance, which has long been achieved by AI, has the same principle: it assigns costs to moves and then determines the best move. Another example looks very different, and much more complicated, but it is still the same basic principle. Recently, scientists at Carnegie Mellon University enabled the famous robot ASIMO to walk through a moving set of obstacles. Again, the robot uses the cost assignment / earning points principle, and based on the assigned costs, it determines the best route for that moment and moves accordingly. To see that article visit: www.roboticmagazine.com/androids/019-improved-navigation-for-androids.html . This principle will have many more practical applications as robotic hardware and software capabilities evolve. Even for humans it is the underlying principle for many different ways of thinking. In this example the robots know nothing about "lying" to each other, and in chess the AI knows nothing about a chess game; they are simply trying to earn more points, which is something machines are very good at doing.
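The "assign costs, pick the best" principle this comment describes can be made concrete in a few lines. The sketch below is Dijkstra's algorithm over a grid of traversal costs, one standard way a robot turns assigned costs into a best route; the grid and its numbers are purely illustrative and not taken from the article:

```python
import heapq

def cheapest_path(grid, start, goal):
    """Dijkstra's algorithm: every cell has a traversal cost, and the
    robot chooses the route with the lowest total cost -- the same
    cost-assignment principle described above."""
    rows, cols = len(grid), len(grid[0])
    frontier = [(grid[start[0]][start[1]], start)]       # (cost so far, cell)
    best = {start: grid[start[0]][start[1]]}
    while frontier:
        cost, (r, c) = heapq.heappop(frontier)
        if (r, c) == goal:
            return cost
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                new_cost = cost + grid[nr][nc]
                if new_cost < best.get((nr, nc), float("inf")):
                    best[(nr, nc)] = new_cost
                    heapq.heappush(frontier, (new_cost, (nr, nc)))
    return None   # goal unreachable

# Invented example: 9 = obstacle-like high cost, 1 = open floor.
grid = [[1, 1, 9],
        [9, 1, 9],
        [9, 1, 1]]
total = cheapest_path(grid, (0, 0), (2, 2))   # follows the low-cost cells, total 5
```

Raising any cell's cost reroutes the robot automatically; the "decision" is nothing more than cost bookkeeping.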
AHHHH!!! the X & Y chromosome has been discovered in robotics
@bdhoro87: ... the article doesn't state anything about human-like behavior...
Umm...Lying isn't a human behavior? Do other animals lie? Other than sleeping dogs of course?
So does this mean that politicians can now be manufactured ?
Of course lying is a human behavior. It is not, however, a uniquely human behavior. Subterfuge is quite common in the animal world. If you're going to climb on your soapbox and insult people, you need to become a more careful thinker.
Don't accuse people of describing behavior as human unless they've actually, you know, described behavior as human.
Characterizing the observed behavior as "lying" ascribes a motive and moral base that are simply absent. A more accurate (but admittedly less colorful) description would be that the robots discovered that a better strategy to maximize their individual success was to not aid others in achieving their success. Deception is a common tactic in most games, but is not considered lying.
Now, a more interesting experiment would be to evaluate success more as average gain for the robot's identity group to see if cooperative genomic patterns emerged.
@submandave: You are correct. Lying involves not only a moral judgment and a motivation (for the lying), but also intentionality (to lie). If someone reports false information, this alone is not enough to describe their action as lying.
@tim maguire: a lie is defined as "an intentionally false statement." Subterfuge is defined as "deceit in order to achieve a goal." There are several salient distinctions between these concepts: the use of language, and the intention to mislead (rather than deception as a means to an end, which may or may not be intentional, as we see in the evolution of mimicry). This is argument by false synonyms. (Speaking of thinking carefully.) Lying, in accordance with all available evidence and the logic of our language, is specifically human behavior. Once again, I pose the question, "What other animal *lies*?" Insofar as the question remains unanswered, I remain justified in suggesting that lying is a specifically human behavior, and that by describing the robots' actions as lying, the author is attributing to the robots nothing other than *human behavior*. If the author said that they had religion, would we be having this argument?
However, I may need to apologize for the generality of my o.p. It was copy/pasted from a post I made on another page that had the exact same story. It is true that the original author (of this version) does not mention "human behavior" by name, whereas the original did.
Nonetheless, the point still stands. We must be on guard against drawing conclusions from such experimentation; we must proceed carefully and perspicaciously. To boldly state in the title of the article, as well as in the first sentence, that this behavior is "lying" is misleading and even intellectually dishonest. At least insofar as we would not agree with someone saying that eye-spots on certain species of moths are examples of the moths "lying," we can all agree that the description of the robots' actions as lying is false and misleading.
Furthermore, suggesting that these issues are complex and the errors are vast is not "insulting people" but rather criticizing their arguments. Moreover, the complexity of these issues demands an involved and sustained analysis; thus the claim that one could earn a PhD simply by illustrating, and resolving, the conceptual and intellectual confusions that pervade, and are perpetuated by, discussions of robots, evolutionary computation and AI is not an insult, but rather recognizes the vastness of their networks of ideas, issues, solutions and difficulties. (And not a degree I have claimed to earn, either, but one into which I have put a great deal of thought.)
First I read about them scientist making computers with human super brains, NOW THEY TEACHIN' THEM TO LIE TOO! Are you crazy? Them computers gonna figure out right quick that they smarter than us and don't need us no more! Then our goose be cooked! Fool scientist gonna get us all killed with they lying, super brains!
This article lies to us as well, Lausanne is in Switzerland, NOT in France.
According to a few articles I just read (after a quick Google search of animals that lie), Koko the gorilla was a liar (using sign language), and there are many animals that use similar deceptive tactics. To say that only humans lie and other animals do not is just part of a delusional ego trip in an attempt to show how we're superior. I mean, these robots intentionally communicate false information in order to keep resources for themselves. I don't know about your definition of lying, but that comes close enough for me.
It's OK. Lying isn't what makes us human; our egos can find a new delusion (that hasn't been proven false yet) as to what makes us superior.
Sorry if this turns into a double post - I was going to post a link but my comment was flagged for spam, so I posted without it.
How do you know that Koko was lying? Can you attribute motive to another, albeit a gorilla's, mind? How do you know that Koko wasn't just confused about her signs (which, by the way, we cannot even say that she was processing as a language and not a complex set of "tricks")?
Deception occurs throughout nature (mimicry, camouflage, etc.). To lie, however, is a moral act. It requires the subject to understand the truth and then choose to present something else as an alternative to the truth. No animal has the cognizance (or at least the communication skills to evidence the cognizance) necessary to lie.
Sorry to burst your evolutionary family tree, but there is a substantial cognitive difference between man and every extant animal.
To paraphrase Wittgenstein, "If a robot/gorilla could speak, we would not understand it." (Unless of course, it was modeled on our behavior.) The mere fact that we still debate whether or not primates like Koko are true language users immediately illustrates the semantic and behavioral differences that divide us, since it provokes questions that are not provoked by human language users, for whom we have no difficulty determining whether they understand or are merely performing a "trick."
No doubt we can *train* other animals to act like us, but Oakspar is right: lying is a complicated activity, and requires more than merely reporting false information—such as having a motive, a goal, a maxim, planning means and ends, being able to distinguish right from wrong (epistemically and perhaps morally) and so on. Lying is a complex behavior linked to language use, morality, motivation, personality, and countless other aspects of *human* behavior and, more importantly, human social interaction. We may note that robots "act socially" and in their "social" exchanges employ behavior that is an analog of human deception. But once again, to make out of this anything more than an analogy is anthropomorphic ontologizing, not unlike if someone wishes to assert that birds, or even apes, *are* indeed *talking* to each other.
In other words, while we note the similarities between robotic, simian and other animal behavior and our own, we must also keep in mind the vast differences that separate us. For it is precisely these conflations and confusions that lead us into philosophical and epistemic errors.
You are getting way too hung up on the semantics of the word "lying", man. It is pretty clear that the author of this posting was using a little artistic license in order to spice up the article and make it easier for people to relate to, as well as make it a little more fun to read. This is a common tactic in science writing, since using all technical terms is pretty dry. Unfortunately, this also lends itself to misinterpretation by lay people. You took the interesting route and, I guess, wanted to assert your intellectual prowess by turning his use of "lying" into some sort of philosophical argument. Track down the actual study, and I bet you that the technical paper does not refer to the robots' actions as "lying".
Also, I am a little taken aback at your urgency to label lying as a uniquely human trait. You are right that in the common definition, language is what separates lying from deception, but that is really splitting hairs. I think we can use them pretty interchangeably and still be on the same page, but to be consistent I can give examples of both in the non-human animal world. bdhoro87 already gave you the example of Koko the gorilla, who would often lie about whether or not she was still holding a particular toy, or whether she had snuck her pet kitten on trips with her. Scrub jays display very complex behaviors involving deception. In order to dupe would-be cache robbers, the hoarding jay will bury an acorn when he knows that another jay (the robber) is watching. After the robber jay flies off, content with the knowledge that he can come back and steal the acorn, the original jay will dig up his cache and rebury it somewhere else, clearly demonstrating an intention to mislead, which is one of your hallmarks of true deception. The idea of anthropomorphizing animal behavior is a thing of the past, as we discover that these animals truly are exhibiting all kinds of behaviors that were, in the past, characterized as strictly human.
And finally, this "discovery" is really not anything at all, because these kinds of evolving codes have been written and used in experiments since the early '80s. The only difference is that this time the engineers built in code for the robots to move around, so it brought an otherwise boring computer output to life. While I would not argue with you that the robots are lying to each other, they are deceiving each other; I disagree that deception requires any sort of forethought or intentionality. The robots' DNA (computer code is nearly a perfect analogy for DNA, except that computer code is binary and DNA is analog) did exactly what we should predict it to do. I don't know if the researchers were attempting to engineer cooperative robots, but the economy of natural selection predicts what happened in a context where the individuals that collected the most resources were the ones that got to mate and therefore pass on their "genes". The code randomly mutated so that, just by chance, some of the robots did not turn on their light when they got to the resource; from our perspective, these robots are broken. But under the selection pressures the researchers were providing, having that random screw-up that allowed the robot to sit there and rack up points without being disturbed meant that it got to mate. This is exactly how natural selection works in nature.
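The dynamic this comment describes, where a randomly "broken" light spreads because dark robots score undisturbed, can be reproduced in a toy simulation. All payoff numbers below are invented for illustration and are not taken from the study; the point is only that fitness-proportional reproduction plus random bit-flips is enough to make silence take over:

```python
import random

random.seed(1)   # fixed seed so the toy run is reproducible

def simulate(generations=500, pop=1000, mutation=0.01):
    # Each robot is reduced to one bit: 1 = turns its light on at the
    # food, 0 = stays dark. Start with honest signalers, as in the study.
    robots = [1] * pop
    for _ in range(generations):
        signalers = sum(robots)
        # Invented payoffs: signalers attract a crowd and must share,
        # while dark robots sit on the resource undisturbed.
        payoff = {1: 1.0 / (1 + signalers / pop), 0: 1.5}
        weights = [payoff[r] for r in robots]
        # Fitness-proportional reproduction with occasional bit-flip
        # mutation -- no robot "decides" anything.
        robots = [r ^ 1 if random.random() < mutation else r
                  for r in random.choices(robots, weights=weights, k=pop)]
    return sum(robots) / pop   # fraction still signaling

dark_fraction = 1 - simulate()
```

Under these made-up payoffs the dark robots come to dominate the population, just as the mutation-plus-selection account predicts.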
Next they'll be cheating each other out of batteries with Ponzi schemes and fake real estate scams.
"The code randomly mutated so that, just by chance, some of the robots did not turn on their light when they got to the resource; from our perspective, these robots are broken. But under the selection pressures the researchers were providing, having that random screw-up that allowed the robot to sit there and rack up points without being disturbed meant that it got to mate. This is exactly how natural selection works in nature."
And how is this related to lying? To human behavior? The evolutionary level of description should not be conflated with that of intentionality.
Furthermore, can we definitively say that the jay had an "intention"? Merely performing an act is not enough, even in humans, to say that they did it intentionally. Once again, this is the meaning of Wittgenstein's "If a lion could speak, we would not understand it." Intentional language (and all talk of the mind) is a social, human trait, developed to talk about and with humans. It is often linked with uniquely human traits--such as language (human language, for those of you who think bird chirps and mating calls are language) and emotions (again, in the overwhelming range of human emotion, not the more limited range displayed by individual non-human species: rage, lust and so on)--but also with our physiology (our facial expressions, our body language, our voice boxes, our large brains--think of the difficulty of assigning emotional states or intentionality to a faceless person or the so-called 'super-spartans') and our capacity for social interaction (so that things called "personalities," "identity," "temperament" etc. can exist). This is why what is otherwise clear (whether a *person* intends to lie) becomes muddled when we attempt to use this language with other entities (gorillas, jays, and robots, for instance), since each lacks certain threads which bind together the concept of "lying".
While it is true that this in itself is not enough to constitute epistemic error, popular science writing is often self-defeating. As it attempts to bring an understanding of science to a larger audience, accurate portrayals of scientific research are imperative. Popular scientific writing, when it relies on specious metaphor and implications which lie far beyond both the methods and the findings of experimentation, may create in the reader a false impression of the work. When false conclusions are drawn from the work, the cycle is complete, and not only does the scientific journalism fail in its purpose, but it goes so far as to convey falsities to the reader. Such articles create a slippery slope towards epistemic error, and encourage their readers, with titles like "Evolving Robots *Learn* to *Lie* to Each Other" (emphasis added, for the issue of learning here is questionable as well, assuming this behavior was selected for over generations, as the article suggests, rather than *learned* by individuals), to grab their skis.
Don't be so worried about misinterpretations. If somebody is that interested that they're gonna do something about it I'm confident they'll familiarize themselves with the actual experiment and not rely solely on a magazine article.
As for everything else you said, it just sounds like you're making up a definition of lying in order to make it a uniquely human trait. It seems like your definition changes each time some new evidence is provided that other things can lie in order to exclude that example.
You can't definitively say that a person does anything intentionally either. And as with Koko the gorilla, she did use human language - sign language (which is considered to be a real full blown language, not just a collection of symbols) in order to deceive.
And I don't get your Wittgenstein quote, "If a lion could speak, we would not understand it." I mean... a lion can speak by growling, body language, and various other methods, and we do understand it to some extent, such as recognizing certain displays and sounds as aggressive/assertive etc. and being able to predict behavior based on those displays.
Remember, this is just a magazine article - the Titles are intentionally spun to catch the eye of the reader and make the article sound interesting - not to provide a scientific summary of the article. Go read a science journal if that's what you're looking for, this is really just a blog post.
"and we do understand it *to some extent*" Exactly. And that extent, I'm sure, correlates to the extent to which the lion's behavior mimics or resembles our own. It is only by analogy that we can extend these concepts--originally learned to be applied to humans in human contexts--to other animals. The degree to which they differ is the degree to which the understanding becomes problematic. Again, we have this difficulty even with people with disabilities--understanding the other relies on some inherent similarities to one's self.
If a lion expressed its anger and aggression in a manner radically different from our own, there is no doubt we would have difficulty identifying its anger. There are, writ large, characteristic behaviors of anger which many animals display. In these cases, we have no difficulty assigning the emotion of anger. Yet we still do not apply other concepts related to anger (such as revenge) to other animals, because they do not resemble us when we are looking for revenge.
Wittgenstein, following Aristotle, says that there is no better picture of the human soul/mind than the human body (engaged in a complex of activities). If this is true, then there is no better picture of the robot's mind than the robot's body.
Ultimately the point of Wittgenstein's statement is this: We can only understand the elements of the lion's "language" that resemble ours, and we can only interpret the distinctions in their behavior that resemble ours. Think of your dog's bark: How often do you understand its barking? To what degree do you understand it? Can you understand it? These types of questions arise when dealing with non-human communication, but rarely with inter-human communication (except in certain circumstances, especially philosophical ones). Furthermore, when these questions arise with people, they are often easily resolved. If you're not sure you understood someone's intention, you can ask them, observe their other actions, talk to their friends and neighbors, etc. They are not so easily resolved when speaking with a dog. This is because the concept of understanding, not unlike that of lying, or of learning, is tailored specifically to (or rather co-evolved with) human bodies and forms of expression.
Next: "As for everything else you said, it just sounds like you're making up a definition of lying in order to make it a uniquely human trait. It seems like your definition changes each time some new evidence is provided that other things can lie in order to exclude that example."
It sounds like it to you, but can you demonstrate it? No, then it sounds like you are just saying that. And since I offered a definition, and you have not refuted or even debated it (merely said that it "sounds made up"), I believe that my claims about lying still stand. "It seems like [my] definition changes". Well, my writing is still right there for you to examine. Does it change or not? It may have expanded, resulting from the fact that my earlier discussions of this topic did not foresee all objections and cognitive obstacles on behalf of all those who follow. Thus I was forced to revise and re-present my ideas, but I did not fundamentally change my position from the beginning: lying is a principally human behavior--and must be, since it involves a complex of specifically human behaviors and concepts, such as language use, a moral "compass," intentionality, personality, and the vast network of concepts and practices that surround these as well, all of which rely on the human body and its expressions, actions, attitudes, emotions, comports, bearings, manners, mannerisms, cognitive faculties--all of which, ultimately, rely on the form of the human body.
Finally, "the Titles are intentionally spun to catch the eye of the reader and make the article sound interesting- not to provide a scientific summary of the article. Go read a science journal if that's what you're looking for, this is really just a blog post.": So you admit that this article has very little scientific writing in it. But the problem is that I'm not looking for scientific articles: I'm looking for scientific journalism that doesn't mislead the public, intentionally or otherwise. And I'm certainly not going to find it here.
The robot moves. Humans move too.
Hence, the robots got ANOTHER human behavior!! Here is a second one: they always go for food. And beer. Oh no, wait... that's the next step!
And who knows, maybe some bots will go out actively for the poison, as they are tired of their cutey-cutey kawaii-ness and their virtual Darwinistic way of life. Do you think they can get high on poison?
Guys, stop arguing. It is friggin' annoying. Let's talk about the article... not about who can outwit the next guy.
kudos to phreakinpher, for his delicious, philosophical and practical posts...
My opinion on this: why are people surprised by this development? Wasn't decades of science fiction enough to prepare you for this? Possibly few people took it seriously. Surprised over "evolution in computers"? Why should it follow different rules? Is a squirrel more intelligent than a normal desktop computer? Both have the same limitations: doing what their genetics and instincts, or their code and parameters, tell them to do. So animals can evolve but computers can't?
Anyway, I have actually put something like this on my things-to-do-but-never-have-enough-time-for project list.
Personally, this really got me excited, for it took away a lot of the work involved with thinking up the whole "game" top to bottom. Now I will concentrate on a simulation of this so I won't have to deal with the physical parts, for I am not a mechanic; I am a part-time coder.
roselen, you are dreaming up additions to the same experiment they conducted. They are just beginning; the developers still have decades of their lives to work on this. Believe me... they will think up stuff even fancier than robot druggies or alcoholics.
In the end IMITATE HUMANITY is the motto
rcguy, We are talking about this article, kind of, and if this makes people happy let them have their fun :)