A new study that matches words with brain activity patterns could help neuroscientists understand how people think about abstract, complex concepts, researchers say. It lends a physiological definition to the concept of higher thinking, using functional magnetic resonance imaging and a computer program that condensed 3,500 Wikipedia articles.
For the last 10 months, Carnegie Mellon University’s Never-Ending Language Learning System, or NELL, has been continuously searching the web for text patterns and grouping them into different semantic categories, a system that closely mimics the way humans learn. But NELL has adopted another human behavior as well: tweeting everything it does.
It sounds a bit Google-ey, what with all the data mining across the Web, but it’s Microsoft researchers in Beijing who are crafting an online Chinese-to-English dictionary that could become a model for language learning tools bridging any two tongues. Engkoo.com pulls its database from the Web itself, cross-referencing sites that exist in both English and Chinese, searching existing online dictionaries, and mining other sources to create a rich resource for both learning and translation.
Researchers at Eindhoven University of Technology in the Netherlands are working on a spoken language for robots, built with both human brains and robot simplicity in mind. ROILA, or Robot Interaction Language, is intended to be easy for people to learn and easy for robots to understand.
The first Arabic Internet addresses went live this week, marking the first major change to the domain name system since its creation. Domain names in Arabic were added for Egypt, Saudi Arabia and the United Arab Emirates, following final approval by the Internet Corporation for Assigned Names and Numbers (ICANN).
Though it's highly uncertain that they would have anything interesting to say, for some reason we humans agonize over what our babies might be communicating with all those non-verbal cues. But though we've golfed on the moon and harnessed controlled nuclear reactions, the various moans, shrieks and squeals of our infant offspring are still more or less a mystery to us. Now a group of Japanese scientists claims to have cracked the infant code. If you're not already skeptical, read on.
The Google Goggles Android app can already copy business cards directly into the address book and provide augmented reality overlays for restaurants. But now, Google has unveiled a prototype of a real-time optical character recognition system, providing the menu translation we Chinese-food-obsessed gwailo have been craving.
A chef and a professor are teaming up to create a dining experience that capitalizes on the synaesthetic perception linking certain tastes to certain sounds. Synaesthesia is the blending of different sensory perceptions -- hearing shapes or seeing music.
Food for thought: Your brain is wired to consider various possible meanings for a word before you've even heard its final sound. Scientists at the University of Rochester reached that conclusion, and demonstrated it for the first time, by using functional MRI (fMRI), a brain-imaging tool, to observe split-second activity. In the past, scientists postulated that listeners could follow spoken language of up to five syllables per second only by drawing on a small subset of words already known to them.
Five amazing, clean technologies that will set us free, in this month's energy-focused issue. Also: how to build a better bomb detector, the robotic toys that are raising your children, a human catapult, the world's smallest arcade, and much more.