An Australian researcher has built algorithms that let computers experience free thinking and emotion, allowing them to respond to simple moral lessons found in Aesop's Fables.
Upon freely associating a trifecta of stories involving birds — "The Thirsty Pigeon," "The Cat and the Cock" and "The Wolf and the Crane" — the computer responded, "I felt sad for the bird."
Computer scientist Graham Mann said he believes machines will not be truly intelligent until they can also experience emotion. To improve their emotion-quotient, he developed a system that identified the "feel" of Aesop's Fables, which are simple enough that they could be represented in conceptual graphs. Then he developed an algorithm that prevented disparate emotions from being experienced at the same time.
The computer analyzed the three tales and was able to distinguish their emotional feel, according to IT News. Hence feeling sorry for the birds.
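Mann's actual system represented the fables as conceptual graphs; the article doesn't publish his code, but the core idea can be illustrated with a much cruder hypothetical sketch: score a story's words against a small emotion lexicon and report a single dominant feeling (the lexicon and function names here are invented for illustration).

```python
# Toy sketch (not Mann's actual system): score a fable's words
# against a tiny emotion lexicon and report the dominant feeling.

EMOTION_LEXICON = {
    "thirsty": "distress", "dashed": "sadness", "killed": "sadness",
    "tricked": "anger", "escaped": "relief", "trapped": "fear",
}

def dominant_emotion(story_words):
    """Tally lexicon hits and return the most frequent emotion."""
    tally = {}
    for word in story_words:
        emotion = EMOTION_LEXICON.get(word)
        if emotion:
            tally[emotion] = tally.get(emotion, 0) + 1
    if not tally:
        return "neutral"
    # Pick a single winner, mimicking the rule that disparate
    # emotions are not experienced at the same time.
    return max(tally, key=tally.get)

print(dominant_emotion(["the", "thirsty", "pigeon", "dashed", "killed"]))
# prints "sadness"
```

Returning only the single strongest emotion is a blunt stand-in for Mann's rule against feeling disparate emotions simultaneously.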
Entertainment services could use the algorithms to improve movie recommendations, Mann said. Or computer games could be improved with cultural context.
Mann is far from the only person working on this. Computer scientists are so concerned with computerized emotion that they're working on a worldwide standard, Emotion Markup Language, to improve communications. In the most recent working draft, the World Wide Web Consortium points out that standardizing emotions is basically impossible: "Even scientists cannot agree on the number of relevant emotions, or on the names that should be given to them."
So, starting with the six basic emotions outlined by psychologist Paul Ekman in 1972, EmotionML will assign intensity levels to various emotions, letting programmers tag feelings directly in markup.
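A minimal sketch of what that markup might look like, based on the syntax in the W3C working draft (the intensity value here is illustrative; "big6" refers to the draft's vocabulary for Ekman's six basic emotions):

```xml
<emotionml xmlns="http://www.w3.org/2009/10/emotionml"
           category-set="http://www.w3.org/TR/emotion-voc/xml#big6">
  <emotion>
    <category name="sadness" value="0.7"/>
  </emotion>
</emotionml>
```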
As this feature in CNET explains, one goal is to provide a more advanced alternative to smiley faces; another goal is to improve human-computer communications.
Aside from the new markup language, research like Mann's suggests it's possible to give computers at least some sense of emotion, improving communication with humans, or even helping them understand us.
"Then he developed an algorithm that prevented disparate emotions from being experienced at the same time." Why? Even humans often feel torn between two emotions. Case in point: "I'm happy that she's happy, but I'm depressed it's not with me".
Now, I'm not sure if an algorithm generating emotions is the same as actual emotions. Then again, no one really knows what emotions are or how they are generated, except for a vague intuition, so...
I was always fascinated by the concept of computer thinking. For example, if you make a computer "determine" it should be sad, is that any different from a person actually feeling sad? Or is it the same? Is a sensation in a human brain any different from a sense experienced by a computer (i.e. is sight equal to camera vision, and is hearing equal to audio input)?
Can a computer ever truly be self-aware? On that note, what exactly does "self-aware" mean? If I tell the computer "you exist", is that self-aware? Or if it figured out on its own that it exists, is that self-aware? Or is self-awareness more of a feeling than a tidbit of knowledge? And what exactly is a "feeling" anyway? If it's just chemical signals in the brain, then a computer can indeed feel. If it's something more, then maybe a computer can't.
A lot of philosophy and unanswered questions; yet it's so fun to think about :P .
-IMP ;) :)
No one is ever happy that she's happy. That's B.S. you're feeding yourself. Not to mention, computers shouldn't be allowed to have emotional problems. That tends to be an issue for humans...
kabosht9 is right. No one is happy that she is happy. But that's the kind of lie that says to the world, "I'm an emotional adult" - just like your computer!
Another thing to consider is that our emotions are essentially programmed too. We copy what we see in the world as we are growing up. Some of the basic emotions are instinctual but many of the emotions that we have today originate from what we learned to feel.
Also, what if a missile system computer gets mad at someone and goes all crazy like some humans do? Something I don't want. Or what if a traffic computer gets bored and decides to spice things up a bit? Let's keep emotions for people and leave computers to be logical.
99.9999999999999% of an atom is empty space
Yep, this is the sort of thing that leads to murderous robots: stupid humans piss off the robots, then they exact revenge.
People always think of the horrible (and terribly unlikely) outcomes which have no foundation in reality, except "well, I saw it in Terminator... so I conclude it is a terrible idea." Come on, people... you always ask, what if (something bad) happened? But the real question is: what if it DIDN'T do those terrible things you imagine? What if a computer IS capable of having true emotions? Then we can better understand how we tick, and realize that consciousness is nothing more than an emergent property of simple parts. Nothing special, nothing tied to "only" humans. It isn't magic, people.
one step closer to irobot
The year is 2023, and a computer in a nuclear weapons base decides it doesn't like humans...
The emotional computer is a classic problem. If you give a computer emotions, then it can feel hatred and rage, so people say it will kill us all. But if a computer has no emotions, it can't feel love or compassion, so it has no reason not to kill people. After all, to computers without emotion, humans are just emotionless bags of protein; in effect, we become computers.
The people who ask if computers can have emotion are more philosopher than scientist.
Philosophy cares nothing for facts or reason and can call into question the existence of people in front of you.
Science is all facts, and the facts are that emotions are nothing but chemical reactions within the brain. There is nothing special about them, or about us for having them. I understand people don't like to see it that way, but it's true.
As for a computer getting bored doing what it was made to do: that's completely illogical. We get bored because our minds are "programmed" that way; you just wouldn't set a computer up like that. In fact, not all computers would be given emotions, since for some, like missiles, emotions serve no purpose for the job.
Bringing up blatantly flawed examples like that just shows how little you really understand about the subject.
We're smarter than that; all we have to do is program the robots to like us.
Novacon, you have a point. But consider this: What if something bad DID happen? Would you be willing to risk the survival of the human race so we can better understand how robots tick? It most likely won't happen, but if it does, then what? Was it ever really a smart idea? What if a nuclear computer decides that humans are mean to him? Or what if a robot decides that a person is an obstruction to its mission? The whole Earth could be wiped of life. Would you be willing to take the risk?
It is easy enough to program a computer to be empathetic by code-wording various vocabulary.
Giving the computer sympathy is as easy as giving it a reaction to those stimuli.
Thus, in the bird story the word "frustrated" appears, so the computer responds, "I feel sorry for it." After all, who doesn't "feel sorry for" anything that is being repeatedly frustrated?
It would take billions of cues and arrangements of cues to approximate human emotions (i.e., if it were Hitler being frustrated, the emotion would be different).
Of course, fake empathy and sympathy are easily done and useful for elder care robots - after all, they can't sound any less forced than you do when visiting grandma in the home.
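The "code-worded" empathy this commenter describes really is trivially easy; a hypothetical sketch (the trigger words and responses are invented here, not from the article) might look like this:

```python
# Hypothetical sketch of "code-worded" empathy: canned responses
# triggered by emotion words, with no inner feeling behind them.

RESPONSES = {
    "frustrated": "I feel sorry for it.",
    "happy": "I'm glad to hear that.",
    "afraid": "That sounds frightening.",
}

def fake_empathy(sentence):
    """Return the first canned response whose trigger word appears."""
    for trigger, response in RESPONSES.items():
        if trigger in sentence.lower():
            return response
    return "I see."

print(fake_empathy("The pigeon was repeatedly frustrated."))
# prints "I feel sorry for it."
```

Which is exactly the point: the output sounds sympathetic, but nothing in the program feels anything.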
A lot of your responses are based on the stupid idea that we would give all machines emotions. Missile-launching programs would stay the same as always, and robotic servants would be as smart as modern PCs.
I'm not so much worried about industrial and business control systems suddenly falling into a funk. These systems will always be more or less very advanced calculators that depend on logic rather than emotion. Of course, it might be necessary for a "self-aware" AI to have an emotional dimension too, but in control systems the emphasis will always be on logic.
What I see as practical application of an emotional AI is direct marketing. And while that might feel like a great idea at first, it's also quite scary.
Here's an example to illustrate the problem:
With an emotion-recognition agent, it could be possible to sift through all the Facebook statuses and Twitter tweets, for example, pick out the people who are in a particularly vulnerable emotional state, and target them with either a direct call or a message that promotes just the wares they "need" at the moment.
Unless these kinds of systems follow an ethical code and possess proper morals, they will operate the same way a narcissist would: seeing people's emotional states as exploitable weaknesses instead of causes for empathy. And that, my friends, is not nice.
If that example code was part of what was used to program the computer in the article, that would mean that out of the six emotions chosen to program the robot, five of them were surprise, sadness, disgust, contempt, and anger. I'm going to cross my fingers that the sixth isn't murderous rage.
We humans are genetically programmed to feel pain, fear, anger, and vengeance because those emotions conferred survival advantages. There's no reason to code anything other than happy, sad, excited, irritated into a machine. Except perhaps in entertainment simulations.
I AM JEDI MAN: If you're so worried about emotional robots killing humans, then take the Darwinian approach. Meaning, if a computer is intelligent and advanced enough to destroy our race, then we deserve it, because we're not powerful enough to fight back. We evolved by beating everyone else on the planet, and if something else beats us, then it deserves to kill us all.
Knowing this, I wait for the day consciousness can be digitized, in order to join them before it's too late.
And besides, even if humans are wiped out and the above technology can't save us, it's not like we're important anyway. I'm sure you're all aware of how tiny we are in the grand scheme of the universe.
Everybody who commented on this has got to admit they're fascinated about where this is going to lead.
Music is an emergent property of wood, drawn metal strings, woven horse hairs and glue.
Windows ME is an emergent property of silicon, plastic, metal, and glass.
Othello is an emergent property of wood pulp, pigments, and the alphabet.
And I'd be very interested to know the survival advantage of "happy".
At the end of the day, computers do whatever they're programmed to do. Parroting human behavior or emotion doesn't make a computer any more human than... well... a parrot. lol
I'm not so sure that it's good news... robots should help us be better ourselves, not replace us... can you imagine a robot with an attitude? What would be the point of that...
There's a difference between understanding emotion and actually "feeling". We humans "feel" because our brain has such an astounding level of control over our entire body and most of its functions.
A computer analyzing something and deciding what emotional response is appropriate is not actually "feeling" that emotion. In order to "feel", the computer would need to have, at the least, some scheme in which its own functionality was either boosted or diminished depending on what emotion it decided was appropriate. For example, for us humans, the feeling of sadness is not just a thought but also a reduced level of energy, caused by what? Reduced bile flow, reduced blood flow to certain organs, changes in heart rhythm?
Things can only get scary if a computer is programmed to DESIRE happiness or to fix/eliminate things that cause it sorrow or anger. If it doesn't actually feel, it would have no incentive to care and respond.