Having a computer for a brain has its perks, but it has its drawbacks as well. Language is a tough concept for robots, as words can convey the abstract as well as the concrete and robots have trouble knowing the difference (and grasping the abstract). That makes human-machine interaction less than intuitive for humans and confusing to 'bots. But Australian researchers are hoping to change that by teaching robots to communicate verbally in a language of their own creation, the same way humans did.
Developed at the University of Queensland and the Queensland University of Technology, these "Lingodroids" are small, wheeled mobile robots that use cameras and laser range finders to navigate an environment. But they also carry microphones and speakers so they can speak to and hear each other. Using their "voices" and "ears," the two Lingodroids play a series of games that require the robots to navigate spatially by creating a shared language between them.
It works something like this. The robots roam their environment, and if one finds itself in an unfamiliar place, it will make up a word to describe it from randomly generated syllables. It communicates that word to other robots it meets there, establishing the name of the locale within the community. From this, a spatial and verbal framework is established to name places on the map.
The robots also play other games with each other. One will say the name of a place, and each will race to the spot it associates with that particular made-up word. Robots can also ask one another where they just came from, and which direction it lies from where they currently are. Over time, the robots have developed a solid, mutually agreed-upon understanding of what's what inside their shared environment. In other words, they've invented a language through which they can talk about their surroundings.
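The naming and go-to games described above can be sketched in a few lines of Python. Everything here is an illustrative assumption — the syllable set, the lexicon structure, and the grid coordinates are invented for the sketch, not taken from the researchers' actual implementation:

```python
import random

SYLLABLES = ["ku", "zo", "pi", "re", "ja", "mu", "fo"]  # hypothetical syllable set

def invent_word(n_syllables=2):
    """Coin a new place name from randomly chosen syllables."""
    return "".join(random.choice(SYLLABLES) for _ in range(n_syllables))

class Lingodroid:
    def __init__(self):
        self.lexicon = {}  # word -> (x, y) location it names

    def name_place(self, location):
        """Reuse an existing name for this place, or coin a new word for it."""
        for word, loc in self.lexicon.items():
            if loc == location:
                return word
        word = invent_word()
        self.lexicon[word] = location
        return word

    def hear_word(self, word, location):
        """Adopt another robot's word for the place where both robots stand."""
        self.lexicon.setdefault(word, location)

    def go_to(self, word):
        """'Go-to' game: return the place this robot associates with the word."""
        return self.lexicon.get(word)

# Two robots meet at the same spot and agree on its name.
a, b = Lingodroid(), Lingodroid()
spot = (3, 7)
word = a.name_place(spot)   # a coins a name, e.g. "kuzo"
b.hear_word(word, spot)     # b adopts it
assert a.go_to(word) == b.go_to(word) == spot
```

In the go-to game, both robots navigating to the same coordinates for the same word is exactly the "mutually agreed-upon understanding" the article describes.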
It's not as nuanced as learning Cantonese, but the researchers hope to teach the Lingodroids to communicate with each other about more elaborate ideas and concepts. The larger aim is to devise a better means for robots to speak verbally with each other--and eventually with human beings.
Why bother with the randomly generated syllables? The robots could just generate their words in machine code (1's and 0's) and transmit the data to each other over a wireless link. When a human gets involved, the robots demonstrate an action (or a concept through actions--robot charades!), the person says the "human term," and that word is associated with the byte-coded "word." Have I drunk too much coffee, or does this sound less complex?
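The scheme this comment proposes might look something like the following sketch. The 4-byte encoding, the class, and the `kitchen` label are all made up for illustration; the actual Lingodroids use audible syllables, presumably so the channel itself is one humans can share:

```python
import struct

class Vocabulary:
    """Hypothetical byte-coded lexicon, with human labels attached later."""
    def __init__(self):
        self.next_id = 0
        self.meanings = {}      # byte word -> location it names
        self.human_terms = {}   # byte word -> label a person supplies

    def coin(self, location):
        """Mint a new 4-byte machine 'word' for an unfamiliar place."""
        word = struct.pack(">I", self.next_id)
        self.next_id += 1
        self.meanings[word] = location
        return word

    def label(self, word, human_term):
        """After the robot acts out the concept, a person names it."""
        self.human_terms[word] = human_term

vocab = Vocabulary()
w = vocab.coin((3, 7))        # robot coins a byte-coded word for a place
vocab.label(w, "kitchen")     # human supplies the term after the 'charades'
assert vocab.human_terms[w] == "kitchen"
```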
Is randomly learning where another computer has been really useful? I would find it more interesting to know why the robots are traveling to whatever destination they decided on, and how they felt about the experience. Now that would be a leap in computer intelligence, I think!
@BubbaGump: "Is randomly learning...?" Yes, it's called trial and error; most humans gain substantial knowledge by making mistakes. The emotional concept is well beyond the scope of the project, and of the century (well, the near future anyway).
Yes, it is beyond the scope of the project. This is why it would really be a leap in computer intelligence. Hey, didn't I say that before?
As one robot crosses the floor to speak to the other robot: "Hey, did you hear about the new 3D printer?" "Yeah man, it can poop in 3D and has human parents." "Gee! I wish I had human parents." "Yeah, well, I keep crossing all over this floor and I have no way to poop. IT'S NOT FAIR!"
that's great, the first step to robots understanding human language in context-sensitive commands. "robot, i love those shoes, can you get me some?", "robot, my friend just got diagnosed with cancer, can you cure it?", "ROBOT! MAKE ME RICH!"
i can't wait till the day comes when my computer will say <YES SIR!>
@KatieSaucey - Employing and growing emotions may not be as tricky as it seems, and I think it has been within reach of any scientist in the last 50 years. It just takes a mood algorithm. In my understanding, emotions are the collective function of: chemical reactions (which can be simulated in code), pre-coded base behaviours (similar to instincts), and learned patterns of response and effect (habits). Mood is chemically based, then altered by thoughts derived from a personality profile comprised of environmental experiences and pre-code. All single emotional states seem possible to pre-code, but balancing them all would have to happen over time (i.e. experience). For example, pre-coding a bot to be afraid or selfish is possible: it is only out for "number 1". Similarly, pre-coding a bot to be ego-less and 100% selfless is possible as well. But neither of those pure states would likely lead to survival of a community of bots (unless Darwinian-inspired procreation is included in the system); it would take a mix of single-mooded bots, or a set of bots with a complexion of moods, for a dynamic bot community.
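A minimal sketch of the "mood algorithm" this comment describes, under the comment's own assumptions: a simulated mood state, a pre-coded baseline temperament standing in for instinct, and learned stimulus-response habits. Every name and number here is hypothetical:

```python
class MoodBot:
    def __init__(self, baseline):
        self.baseline = dict(baseline)   # pre-coded temperament, e.g. {"fear": 0.2}
        self.mood = dict(baseline)       # current simulated 'chemistry'
        self.habits = {}                 # stimulus -> learned mood shift

    def experience(self, stimulus, shift):
        """Learn (or reinforce) how a stimulus moves the mood."""
        prior = self.habits.get(stimulus, {})
        self.habits[stimulus] = {k: prior.get(k, 0) + v for k, v in shift.items()}

    def perceive(self, stimulus):
        """Apply the learned shift, then decay back toward the baseline."""
        for k, v in self.habits.get(stimulus, {}).items():
            self.mood[k] = self.mood.get(k, 0) + v
        for k in self.mood:
            self.mood[k] += 0.1 * (self.baseline.get(k, 0) - self.mood[k])

# A bot pre-coded to be "out for number 1" that learns to fear loud noises.
selfish = MoodBot({"fear": 0.2, "selfishness": 1.0})
selfish.experience("loud_noise", {"fear": 0.5})
selfish.perceive("loud_noise")
assert selfish.mood["fear"] > 0.2
```

The decay step is what keeps a single experience from permanently fixing the mood, which is roughly the "balancing over time" the comment says must come from experience.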
@MerlinGreenberg: Yes, I agree emotions can be simulated (or created? depending on point of view); a convincing dog and pony show could be composed. Yes, we are just chemical machines, but I don't believe anything that even comes close to "realistic", or even useful, can be accomplished right now (certainly not with 1960's hardware, as you suggest). AI has long been promised to be just around the corner, but programming a learning computer (such as a Cyberdyne Systems T-1000, haha) has proved very difficult. The system needs to put a vast quantity of information into the correct context to generate a "real" response and learn from that response. For example, take this post, and consider how much stored knowledge is required to understand the phrase "a convincing dog and pony show" in the context it is used. Once the system reaches this level of understanding, it could probably ace a Turing test. Arguably it could go on to develop a sense of self, maybe based on its self-recognisable differences from others. From there opinions might form and, voila! emotions!
TL;DR You can programme a convincing response, but there is no real emotion without self-awareness, IMHO.
@katie - You give humans too much credit :-) We are just a series of switches. But it's not the hardware that makes the difference. It is the algorithm and the breakdown of the process. The presence of a successful artificial thought is far more important than the speed at which it was created. The theory of the process is hardware agnostic. Einstein first proved his theories in chalk.
That can happen in 1812 or 2011. Honestly, I have never seen a dog and pony show, and still do not know WHY those words are used to explain the idea, but I know what the idea is because it was explained to me. I believe that computers can surpass humans, not just match their ability. Machines have the advantage of being precise and can be pre-programmed to have a constant temperament. Humans are so variable in their moods.
Try this, if you have not already. Meditate and try to analyze your own consciousness, thought by thought. Where do they come from? What is each thought about? What occurs just before each thought? What are your triggers? Etc. You might be able to see the process in action and then define a process that can be simulated. It is very fun and interesting.
Some propose that new computers of the future will be built from DNA. So say we do create a computer based on DNA. Then perhaps we are just pre-wired, pre-programmed computers too, and we do not even realize it. Our destiny has already been chosen. We feel we are in control, but in reality we are all acting out some program that was decided long, long ago.
yes, we are just biological computers (our brains), i do realise it :). computers are getting better and better at approximating intelligence; it is just a matter of time. it may come as hardware and biology are combined to create a true AI, but would it then be artificial? a new intelligent life form nonetheless