Mubashar_ali
FULL MEMBER
New Recruit
- Joined
- Jan 18, 2008
- Messages
- 98
- Reaction score
- 0
* 17:23 12 March 2008
* NewScientist.com news service
* Tom Simonite
A neckband that translates thought into speech by picking up nerve signals has been used to demonstrate a "voiceless" phone call for the first time.
With careful training a person can send nerve signals to their vocal cords without making a sound. These signals are picked up by the neckband and relayed wirelessly to a computer that converts them into words spoken by a computerised voice.
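The pipeline described above can be sketched very roughly: feature windows from the neckband are matched against a fixed vocabulary, and the best-matching word is handed to a synthesizer. This is a minimal illustrative sketch only; the template vectors, function names, and nearest-match approach are assumptions, not Ambient's actual method or API.

```python
import math

# Toy "templates": one feature vector per vocabulary word. In the real
# system these would be learned from training signals; the numbers here
# are purely illustrative.
TEMPLATES = {
    "hello": (0.9, 0.1, 0.2),
    "yes":   (0.1, 0.8, 0.3),
    "no":    (0.2, 0.2, 0.9),
}

def classify(window):
    """Map one nerve-signal feature window to the closest vocabulary word."""
    return min(TEMPLATES, key=lambda w: math.dist(TEMPLATES[w], window))

def decode(stream):
    """Convert a stream of feature windows into words for the synthesizer."""
    return [classify(w) for w in stream]

# Two hypothetical feature windows, each close to one stored template.
print(decode([(0.88, 0.12, 0.18), (0.15, 0.25, 0.85)]))  # ['hello', 'no']
```

A closed-vocabulary matcher like this mirrors why the demonstrated system tops out at a fixed word list: it can only ever emit words it has templates for.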
A video shows the system being used to place the first public voiceless phone call on stage at a recent conference held by microchip manufacturer Texas Instruments. Michael Callahan, co-founder of Ambient Corporation, which developed the neckband, demonstrates the device, called the Audeo.
Users needn't worry about the system voicing their inner thoughts, though. Callahan says producing signals for the Audeo to decipher requires "a level above thinking". Users must think specifically about voicing words for them to be picked up by the equipment.
The Audeo has previously been used to let people control wheelchairs using their thoughts.
"I can still talk verbally at the same time," Callahan told New Scientist. "We can differentiate between when you want to talk silently, and when you want to talk out loud." That could be useful in certain situations, he says, for example when making a private call while out in public.
The system demonstrated at the TI conference can recognise only a limited set of about 150 words and phrases, says Callahan, who likens this to the early days of speech recognition software.
At the end of the year, Ambient plans to release an improved version without a vocabulary limit. Instead of recognising whole words or phrases, it should identify the individual phonemes that make up complete words.
This version will be slower, because users will need to build up what they want to say one phoneme at a time, but it will let them say whatever they want. The phoneme-based system will be aimed at people who have lost the ability to speak due to neurological diseases such as ALS, also known as motor neurone disease.
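The phoneme-based approach can be sketched in the same hedged spirit: recognise one phoneme at a time, then assemble the sequence into a word, falling back to the raw phoneme string when no dictionary entry exists. The tiny pronouncing dictionary and ARPAbet-style symbols below are illustrative stand-ins, not the actual system.

```python
# Minimal sketch: a phoneme sequence is looked up in a pronouncing
# dictionary; unknown sequences still produce output, which is what
# removes the vocabulary limit (at the cost of speed).
PRONUNCIATIONS = {
    ("HH", "EH", "L", "OW"): "hello",
    ("N", "OW"): "no",
}

def assemble(phonemes):
    """Turn a sequence of recognised phonemes into text to be spoken.
    Falls back to the joined phoneme string if no entry matches."""
    return PRONUNCIATIONS.get(tuple(phonemes), "-".join(phonemes))

print(assemble(["HH", "EH", "L", "OW"]))  # hello
print(assemble(["K", "AE", "T"]))         # K-AE-T (no dictionary entry needed)
```

The fallback path is the point: unlike the 150-phrase demo, a phoneme recogniser can voice words it has never been trained on as whole units.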