Patent application number | Description | Published |
20080262847 | USER POSITIONABLE AUDIO ANCHORS FOR DIRECTIONAL AUDIO PLAYBACK FROM VOICE-ENABLED INTERFACES - The present invention discloses the concept and use of audio anchors within voice-enabled interfaces. Audio anchors can be user configurable points from which audio playback occurs. In the invention, a user can identify an interface position at which an audio anchor is to be established. The computing device can determine an anchor direction setting, with values that include forward playback and backward playback. Interface items can then be audibly enumerated from the audio anchor in a direction indicated by the anchor direction setting. For example, if a set of interface items are alphabetically ordered items and if an audio anchor is set at a first item beginning with a letter “G” and an anchor direction is set to indicate backward playback, then the interface items beginning with letters “A-F” can be audibly played in reverse alphabetical order. Additionally, a rate of audio playback can be user adjustable. | 10-23-2008 |
20080288256 | REDUCING RECORDING TIME WHEN CONSTRUCTING A CONCATENATIVE TTS VOICE USING A REDUCED SCRIPT AND PRE-RECORDED SPEECH ASSETS - The present invention discloses a system and a method for creating a reduced script, which is read by a voice talent to create a concatenative text-to-speech (TTS) voice. The method can automatically process pre-recorded audio to derive speech assets for a concatenative TTS voice. The pre-recorded audio can include sets of recorded phrases used by a speech user interface (SUI). A set of unfulfilled speech assets needed for full phonetic coverage of the concatenative TTS voice can be determined. A reduced script can be constructed that includes a set of phrases, which when read by a voice talent result in a reduced corpus. When the reduced corpus is automatically processed, a reduced set of speech assets results. The reduced set includes each of the unfulfilled speech assets. When this reduced corpus is combined with existing speech assets, the result will be a voice with a complete set of speech assets. | 11-20-2008 |
20090041209 | ADJUSTING MUSIC LENGTH TO EXPECTED WAITING TIME WHILE CALLER IS ON HOLD - A method of adjusting music length to expected waiting time while a caller is on hold includes choosing one or more media selections based upon their play duration and matching the selection(s) to the expected waiting time. | 02-12-2009 |
20090043583 | DYNAMIC MODIFICATION OF VOICE SELECTION BASED ON USER SPECIFIC FACTORS - The present invention discloses a solution for customizing synthetic voice characteristics in a user specific fashion. The solution can establish a communication between a user and a voice response system. A data store can be searched for a speech profile associated with the user. When a speech profile is found, a set of speech output characteristics established for the user from the profile can be determined. Parameters and settings of a text-to-speech engine can be adjusted in accordance with the determined set of speech output characteristics. During the established communication, synthetic speech can be generated using the adjusted text-to-speech engine. Thus, each detected user can hear a synthetic speech generated by a different voice specifically selected for that user. When no user profile is detected, a default voice or a voice based upon a user's speech or communication details can be used. | 02-12-2009 |
20090268883 | Dynamically Publishing Directory Information For A Plurality Of Interactive Voice Response Systems - Methods, apparatus, and products are disclosed for dynamically publishing directory information for a plurality of interactive voice response (‘IVR’) systems through an IVR directory service that include: providing a description of a web services publication interface for the IVR directory service; receiving, on behalf of one or more IVR systems, web services publication requests through the publication interface; determining, in response to the web services publication requests, directory information for each IVR system requesting publication; adding the directory information for each IVR system to an IVR system directory; generating a voice mode user interface to reflect the directory information for each IVR system added to the IVR system directory; and interacting, using the voice mode user interface, with a caller to identify a particular IVR system in dependence upon the IVR system directory and query information provided by the caller and to connect the caller with the identified IVR system. | 10-29-2009 |
20090271188 | Adjusting A Speech Engine For A Mobile Computing Device Based On Background Noise - Methods, apparatus, and products are disclosed for adjusting a speech engine for a mobile computing device based on background noise, the mobile computing device operatively coupled to a microphone, that include: sampling, through the microphone, background noise for a plurality of operating environments in which the mobile computing device operates; generating, for each operating environment, a noise model in dependence upon the sampled background noise for that operating environment; and configuring the speech engine for the mobile computing device with the noise model for the operating environment in which the mobile computing device currently operates. | 10-29-2009 |
20090271189 | Testing A Grammar Used In Speech Recognition For Reliability In A Plurality Of Operating Environments Having Different Background Noise - Methods, systems, and products for testing a grammar used in speech recognition for reliability in a plurality of operating environments having different background noise that include: receiving recorded background noise for each of the plurality of operating environments; generating a test speech utterance for recognition by a speech recognition engine using a grammar; mixing the test speech utterance with each recorded background noise, resulting in a plurality of mixed test speech utterances, each mixed test speech utterance having different background noise; performing, for each of the mixed test speech utterances, speech recognition using the grammar and the mixed test speech utterance, resulting in speech recognition results for each of the mixed test speech utterances; and evaluating, for each recorded background noise, speech recognition reliability of the grammar in dependence upon the speech recognition results for the mixed test speech utterance having that recorded background noise. | 10-29-2009 |
20090271199 | Records Disambiguation In A Multimodal Application Operating On A Multimodal Device - Methods, apparatus, and products are disclosed for record disambiguation in a multimodal application operating on a multimodal device, the multimodal device supporting multiple modes of interaction including at least a voice mode and a visual mode, that include: prompting, by the multimodal application, a user to identify a particular record among a plurality of records; receiving, by the multimodal application in response to the prompt, a voice utterance from the user; determining, by the multimodal application, that the voice utterance ambiguously identifies more than one of the plurality of records; generating, by the multimodal application, a user interaction to disambiguate the records ambiguously identified by the voice utterance in dependence upon record attributes of the records ambiguously identified by the voice utterance; and selecting, by the multimodal application for further processing, one of the records ambiguously identified by the voice utterance in dependence upon the user interaction. | 10-29-2009 |
20090271438 | Signaling Correspondence Between A Meeting Agenda And A Meeting Discussion - Methods, apparatus, and products are disclosed for signaling correspondence between a meeting agenda and a meeting discussion that include: receiving a meeting agenda specifying one or more topics for a meeting; analyzing, for each topic, one or more documents to identify topic keywords for that topic; receiving meeting discussions among participants for the meeting; identifying a current topic for the meeting in dependence upon the meeting agenda; determining a correspondence indicator in dependence upon the meeting discussions and the topic keywords for the current topic, the correspondence indicator specifying the correspondence between the meeting agenda and the meeting discussion; and rendering the correspondence indicator to the participants of the meeting. | 10-29-2009 |
20090299733 | METHODS AND SYSTEM FOR CREATING AND EDITING AN XML-BASED SPEECH SYNTHESIS DOCUMENT - A method for creating and editing an XML-based speech synthesis document for input to a text-to-speech engine is provided. The method includes recording voice utterances of a user reading a pre-selected text and parsing the recorded voice utterances into individual words and periods of silence. The method also includes recording a synthesized speech output generated by a text-to-speech engine, the synthesized speech output being an audible rendering of the pre-selected text, and parsing the synthesized speech output into individual words and periods of silence. The method further includes annotating the XML-based speech synthesis document based upon a comparison of the recorded voice utterances and the recorded synthesized speech output. | 12-03-2009 |
20100299146 | Speech Capabilities Of A Multimodal Application - Improving speech capabilities of a multimodal application including receiving, by the multimodal browser, a media file having a metadata container; retrieving, by the multimodal browser, from the metadata container a speech artifact related to content stored in the media file for inclusion in the speech engine available to the multimodal browser; determining whether the speech artifact includes a grammar rule or a pronunciation rule; if the speech artifact includes a grammar rule, modifying, by the multimodal browser, the grammar of the speech engine to include the grammar rule; and if the speech artifact includes a pronunciation rule, modifying, by the multimodal browser, the lexicon of the speech engine to include the pronunciation rule. | 11-25-2010 |
20100332234 | Dynamically Extending The Speech Prompts Of A Multimodal Application - Dynamically extending the speech prompts of a multimodal application including receiving, by the prompt generation engine, a media file having a metadata container; retrieving, by the prompt generation engine from the metadata container, a speech prompt related to content stored in the media file for inclusion in the multimodal application; and modifying, by the prompt generation engine, the multimodal application to include the speech prompt. | 12-30-2010 |
20120257730 | DYNAMICALLY PUBLISHING DIRECTORY INFORMATION FOR A PLURALITY OF INTERACTIVE VOICE RESPONSE SYSTEMS - Some example embodiments include a method of dynamically publishing directory information for a plurality of interactive voice response (‘IVR’) systems. The method includes receiving, by the IVR directory service on behalf of one of the IVR systems, a web services update request. The method includes determining, by the IVR directory service in response to the web services update request, updated directory information for the IVR system. The method includes updating the IVR system directory with the updated directory information for the IVR system. The method includes generating an updated voice mode user interface to reflect the updated IVR system directory with the updated directory information for the IVR system. The generating includes creating one or more voice dialogs in accordance with the directory information, the one or more voice dialogs specifying a call flow defining the interaction between a caller and the IVR directory service. | 10-11-2012 |
20130018658 | DYNAMICALLY EXTENDING THE SPEECH PROMPTS OF A MULTIMODAL APPLICATION - A prompt generation engine operates to dynamically extend prompts of a multimodal application. The prompt generation engine receives a media file having a metadata container. The prompt generation engine operates on a multimodal device that supports a voice mode and a non-voice mode for interacting with the multimodal device. The prompt generation engine retrieves from the metadata container a speech prompt related to content stored in the media file for inclusion in the multimodal application. The prompt generation engine modifies the multimodal application to include the speech prompt. | 01-17-2013 |
20130227417 | SYSTEMS AND METHODS FOR PROMPTING USER SPEECH IN MULTIMODAL DEVICES - A method for prompting user input for a multimodal interface including the steps of providing a multimodal interface to a user, where the interface includes a visual interface having a plurality of input regions, each having at least one input field; selecting an input region and processing a multi-token speech input provided by the user, where the processed speech input includes at least one value for at least one input field of the selected input region; and storing at least one value in at least one input field. | 08-29-2013 |
20130339033 | DYNAMICALLY EXTENDING THE SPEECH PROMPTS OF A MULTIMODAL APPLICATION - A prompt generation engine operates to dynamically extend prompts of a multimodal application. The prompt generation engine receives a media file having a metadata container. The prompt generation engine operates on a multimodal device that supports a voice mode and a non-voice mode for interacting with the multimodal device. The prompt generation engine retrieves from the metadata container a speech prompt related to content stored in the media file for inclusion in the multimodal application. The prompt generation engine modifies the multimodal application to include the speech prompt. | 12-19-2013 |
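The directional enumeration described in application 20080262847 (playing items forward or backward from a user-set audio anchor) can be sketched in a few lines. The function name, list-based interface items, and zero-based anchor index below are illustrative assumptions, not the patented implementation.

```python
def enumerate_from_anchor(items, anchor_index, direction="forward"):
    """Return interface items for audible enumeration starting at a
    user-set audio anchor, in the chosen playback direction."""
    if direction == "forward":
        return items[anchor_index:]        # the anchor item onward
    return items[:anchor_index][::-1]      # items before the anchor, reversed

# Per the abstract's example: anchoring at the first item beginning
# with "G" and playing backward yields the A-F items in reverse order.
items = ["Apple", "Banana", "Fig", "Grape", "Kiwi"]
```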
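The duration matching in application 20090041209 (choosing media selections whose play length fits the expected hold time) admits a simple greedy sketch. The longest-first strategy and the `(track_id, duration)` representation are assumptions for illustration; the patent does not specify a selection algorithm.

```python
def pick_hold_music(tracks, expected_wait):
    """Choose media selections whose combined play duration best fills
    the caller's expected waiting time, using a longest-first greedy fill.
    tracks: list of (track_id, duration_seconds) pairs."""
    remaining = expected_wait
    playlist = []
    for track_id, duration in sorted(tracks, key=lambda t: -t[1]):
        if duration <= remaining:          # take any track that still fits
            playlist.append(track_id)
            remaining -= duration
    return playlist, expected_wait - remaining  # selections and covered time
```

A dynamic-programming variant could guarantee the closest total duration, but the greedy fill keeps the sketch short.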
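The per-environment noise models of application 20090271188 (configuring the speech engine with the model for the current operating environment) can be reduced to a matching step. Representing each stored model by a single RMS level is a deliberate simplification; a real engine would compare richer spectral features.

```python
def rms(samples):
    """Root-mean-square energy of a block of audio samples."""
    return (sum(s * s for s in samples) / len(samples)) ** 0.5

def select_noise_model(models, sampled_noise):
    """Pick the stored per-environment noise model whose RMS level is
    closest to the currently sampled background noise.
    models: mapping of environment name -> stored RMS level."""
    level = rms(sampled_noise)
    return min(models, key=lambda name: abs(models[name] - level))
```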
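The grammar-reliability test of application 20090271189 (mixing one test utterance with each recorded background noise and scoring recognition per environment) can be sketched as below. The sample-list audio representation, the fixed `noise_gain`, and the `recognize` callable are all stand-ins for a real speech recognition engine and mixing pipeline.

```python
def mix_with_noise(utterance, noise, noise_gain=0.3):
    """Mix a test utterance with looped, scaled background noise samples."""
    return [s + noise_gain * noise[i % len(noise)]
            for i, s in enumerate(utterance)]

def grammar_reliability(recognize, utterance, expected, noises):
    """Fraction of noise environments in which the recognizer still
    returns the expected result for the mixed test utterance."""
    hits = sum(1 for noise in noises
               if recognize(mix_with_noise(utterance, noise)) == expected)
    return hits / len(noises)
```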