An API is being released for download as part of the Eclipse Foundation's Voice Tools Project, which is based on the VoiceXML language for building voice-recognition systems. The API will speed the adoption of VoiceXML applications for phones, handhelds, cars, and the Web, according to IBM. The API was developed by IBM with Tellme and other participating companies.
Acknowledging that callers are sometimes stymied by voice-activated systems that do not understand what they are saying, IBM is looking to boost the quality of these applications. "What we're doing with this project is to help people build applications that won't frustrate callers," said Brent Metz, project lead for the Eclipse Voice Tools Project at IBM.
The project is intended to provide a common set of speech tooling; the API allows any vendor with a speech browser to communicate with the tools in a generic way, Metz said. With quality tools in hand, developers will be able to build more compelling applications, he said.
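For context, the applications these tools target are written in VoiceXML, the W3C's markup language for voice dialogs. A minimal sketch of a VoiceXML dialog might look like the following; the prompt text and grammar file are illustrative, not taken from the project. The `<noinput>` and `<nomatch>` handlers are the kind of detail the quality push is aimed at, since they determine what a caller hears when recognition fails.

```xml
<?xml version="1.0" encoding="UTF-8"?>
<vxml version="2.0" xmlns="http://www.w3.org/2001/vxml">
  <form id="order">
    <field name="drink">
      <prompt>Would you like coffee or tea?</prompt>
      <!-- Hypothetical SRGS grammar defining the recognizable answers -->
      <grammar type="application/srgs+xml" src="drink.grxml"/>
      <!-- Recovery prompts: played when the caller says nothing,
           or says something the grammar does not match -->
      <noinput>Sorry, I didn't hear you. Coffee or tea?</noinput>
      <nomatch>Sorry, I didn't catch that. Coffee or tea?</nomatch>
      <filled>
        <prompt>You said <value expr="drink"/>.</prompt>
      </filled>
    </field>
  </form>
</vxml>
```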
Although the tools are available for free, IBM hopes to leverage them to boost sales of its WebSphere Voice Server, which is used for deploying speech recognition applications.
IBM also has released the Multimodal Tools Project for Eclipse on IBM's alphaWorks Web site. The project enables development of multimodal, speech-enabled Web applications written in the X+V (XHTML + Voice) markup language. The tools help developers ensure that Web sites can be used on small devices with limited input options, such as mobile phones, where voice input and visual output may be preferable. Such applications could eventually let a user, for example, ask a cell phone for nearby sushi restaurants, according to IBM.
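To illustrate the multimodal idea, X+V embeds VoiceXML dialog fragments inside an ordinary XHTML page and wires them to visual elements with XML Events, so the user can either type or speak. The sketch below is a simplified illustration of that pattern; the page content, grammar file, and handler wiring are hypothetical, not drawn from IBM's tools.

```xml
<?xml version="1.0" encoding="UTF-8"?>
<html xmlns="http://www.w3.org/1999/xhtml"
      xmlns:vxml="http://www.w3.org/2001/vxml"
      xmlns:ev="http://www.w3.org/2001/xml-events">
  <head>
    <title>Restaurant finder</title>
    <!-- Embedded VoiceXML dialog, reused by the visual form below -->
    <vxml:form id="ask_cuisine">
      <vxml:field name="cuisine">
        <vxml:prompt>What kind of restaurant are you looking for?</vxml:prompt>
        <!-- Hypothetical grammar listing cuisines such as "sushi" -->
        <vxml:grammar src="cuisine.grxml" type="application/srgs+xml"/>
      </vxml:field>
    </vxml:form>
  </head>
  <body>
    <form action="/search" method="get">
      <!-- Focusing this field activates the voice dialog, so the
           answer can be spoken instead of typed -->
      <input type="text" name="cuisine"
             ev:event="focus" ev:handler="#ask_cuisine"/>
      <input type="submit" value="Find restaurants"/>
    </form>
  </body>
</html>
```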