Showing posts with label affective computing. Show all posts

Monday, March 7, 2011

How Cynthia Breazeal is teaching robots how to be human
How Cynthia Breazeal is teaching robots how to be human (Wired UK): This next generation is intended for use as learning companions for children -- robust, stretchy and squashy, they don't have an official name yet but are referred to as "Tofu". Breazeal is also teaching the droids the rules of human behaviour by crowdsourcing: "If they only interact with a few people, the robots don't get the life experience we take for granted." So she created an online game called Mars Escape, where players and droids collaborate.
The lessons learned are uploaded to a robot that sits in the Boston Museum of Science, talking to visitors.
Friday, February 11, 2011
New Scientist TV: Creepy robotic head mimics a child
New Scientist TV: Creepy robotic head mimics a child: "The robot's face is made from soft silicone that mimics skin and flexible actuators underneath help it achieve childlike expressions. It's also equipped with cameras in its eyeballs, microphones and tactile sensors embedded beneath its skin. The team thinks that this system will allow humans to interact with it more naturally, compared to existing metallic robots that use electric motors and can be dangerous to get too close to."
Friday, January 21, 2011
The human touch, in robots
The human touch, in robots: One of the biggest challenges is to improve robotic ‘attention’, says Li. “In human-to-human interaction, we share a natural concept of communication—we know when the conversation starts and ends, and when we can start talking in a group. We are now trying to facilitate this kind of ability in a robot.” Li’s team is developing new algorithms and cognitive processes that could enable a robot to engage in conversation with both visual and auditory attention, accompanied by natural body language.
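To make the idea concrete, here is a toy sketch of what fusing visual and auditory attention cues might look like. All the names, weights and thresholds below are illustrative assumptions, not details from Li's system.

```python
# Toy sketch: fuse visual and auditory cues to pick an attention target.
# Weights and the 90-degree audio falloff are illustrative assumptions.

def attention_target(faces, sound_bearing, w_visual=0.6, w_audio=0.4):
    """faces: list of (person_id, bearing_degrees, face_size in 0..1);
    sound_bearing: direction of the loudest sound in degrees, or None."""
    best_id, best_score = None, float("-inf")
    for person_id, bearing, face_size in faces:
        visual = face_size  # larger (closer) faces attract more attention
        if sound_bearing is None:
            audio = 0.0
        else:
            # reward faces aligned with the sound direction
            audio = max(0.0, 1.0 - abs(bearing - sound_bearing) / 90.0)
        score = w_visual * visual + w_audio * audio
        if score > best_score:
            best_id, best_score = person_id, score
    return best_id

# Example: the person in the direction of the speech sound wins attention.
faces = [("alice", -30.0, 0.5), ("bob", 40.0, 0.4)]
print(attention_target(faces, sound_bearing=35.0))  # -> bob
```

A real system would of course track conversational state over time (who spoke last, turn-taking pauses) rather than scoring single frames, which is presumably where the "when the conversation starts and ends" problem comes in.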
Monday, January 3, 2011
Researchers develop interactive, emotion-detecting GPS robot
Researchers develop interactive, emotion-detecting GPS robot: They developed a computer system that tracks feature points on a user's face via a camera and then compares the input with entries in a database of hundreds of predefined mental states to interpret the combinations of gestures as emotions. The system also compares the tempo, pitch and emphasis of the user's voice with the same database. Body movement is also important, so posture and gestures are also interpreted using the same criteria.
The three measures were combined to form an overall picture of the user's emotional state, which is said to achieve an accuracy of around 70 per cent – about the same as most of us. Robinson, however, also wanted a system capable of expressing itself...
The first step in the process was to record some common facial expressions and then use software to generate an expressive digital character on a computer screen.
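The "three measures combined" part is essentially late fusion: each modality produces its own guess about the user's mental state, and the guesses are merged. A minimal sketch of that idea, with labels, weights and scores that are illustrative assumptions rather than anything from Robinson's actual system:

```python
# Toy sketch of late fusion: combine face, voice and posture estimates of
# the user's mental state by weighted voting. Labels, weights and scores
# are illustrative assumptions only.

from collections import defaultdict

def fuse_modalities(face, voice, posture, weights=(0.5, 0.3, 0.2)):
    """Each argument maps mental-state labels to confidence scores in [0, 1]."""
    combined = defaultdict(float)
    for scores, w in zip((face, voice, posture), weights):
        for state, p in scores.items():
            combined[state] += w * p
    # report the most likely state and its fused score
    return max(combined.items(), key=lambda kv: kv[1])

face = {"interested": 0.7, "confused": 0.2}
voice = {"interested": 0.4, "confused": 0.5}
posture = {"interested": 0.6, "bored": 0.3}
print(fuse_modalities(face, voice, posture))  # prints the winning state
```

Weighting the face channel highest is just one plausible choice; the reported ~70 per cent accuracy suggests the modalities disagree often enough that no single channel is reliable on its own.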