Patent application number | Description | Published |
20080208584 | Pausing A VoiceXML Dialog Of A Multimodal Application - Pausing a VoiceXML dialog of a multimodal application, including generating by the multimodal application a pause event; responsive to the pause event, temporarily pausing the dialog by the VoiceXML interpreter; generating by the multimodal application a resume event; and responsive to the resume event, resuming the dialog. Embodiments are implemented with the multimodal application operating on a multimodal device supporting multiple modes of interaction including a voice mode and one or more non-voice modes, the multimodal application is operatively coupled to a VoiceXML interpreter, and the VoiceXML interpreter is interpreting the VoiceXML dialog to be paused. | 08-28-2008 |
20080208585 | Ordering Recognition Results Produced By An Automatic Speech Recognition Engine For A Multimodal Application - Ordering recognition results produced by an automatic speech recognition (‘ASR’) engine for a multimodal application implemented with a grammar of the multimodal application in the ASR engine, with the multimodal application operating in a multimodal browser on a multimodal device supporting multiple modes of interaction including a voice mode and one or more non-voice modes, the multimodal application operatively coupled to the ASR engine through a VoiceXML interpreter, includes: receiving, in the VoiceXML interpreter from the multimodal application, a voice utterance; determining, by the VoiceXML interpreter using the ASR engine, a plurality of recognition results in dependence upon the voice utterance and the grammar; determining, by the VoiceXML interpreter according to semantic interpretation scripts of the grammar, a weight for each recognition result; and sorting, by the VoiceXML interpreter, the plurality of recognition results in dependence upon the weight for each recognition result. | 08-28-2008 |
20080208586 | Enabling Natural Language Understanding In An X+V Page Of A Multimodal Application - Enabling natural language understanding using an X+V page of a multimodal application implemented with a statistical language model (‘SLM’) grammar of the multimodal application in an automatic speech recognition (‘ASR’) engine, with the multimodal application operating in a multimodal browser on a multimodal device supporting multiple modes of interaction including a voice mode and one or more non-voice modes, the multimodal application operatively coupled to the ASR engine through a VoiceXML interpreter, including: receiving, in the ASR engine from the multimodal application, a voice utterance; generating, by the ASR engine according to the SLM grammar, at least one recognition result for the voice utterance; determining, by an action classifier for the VoiceXML interpreter, an action identifier in dependence upon the recognition result, the action identifier specifying an action to be performed by the multimodal application; and interpreting, by the VoiceXML interpreter, the multimodal application in dependence upon the action identifier. | 08-28-2008 |
20080208587 | Document Session Replay for Multimodal Applications - Methods, apparatus, and computer program products are described for document session replay for multimodal applications, including identifying, by a multimodal browser in dependence upon a log produced by a Form Interpretation Algorithm (‘FIA’) during a previous document session with a user, a speech prompt provided by a multimodal application in the previous document session; identifying, by a multimodal browser in replay mode in dependence upon the log, a response to the prompt provided by a user of the multimodal application in the previous document session; retrieving, by the multimodal browser in dependence upon the log, an X+V page of the multimodal application associated with the speech prompt and the response; rendering, by the multimodal browser, the visual elements of the retrieved X+V page; replaying, by the multimodal browser, the speech prompt; and replaying, by a multimodal browser, the response. | 08-28-2008 |
20080208588 | Invoking Tapered Prompts In A Multimodal Application - Methods, apparatus, and computer program products are described for invoking tapered prompts in a multimodal application implemented with a multimodal browser and a multimodal application operating on a multimodal device supporting multiple modes of user interaction with the multimodal application, the modes of user interaction including a voice mode and one or more non-voice modes. Embodiments include identifying, by a multimodal browser, a prompt element in a multimodal application; identifying, by the multimodal browser, one or more attributes associated with the prompt element; and playing a speech prompt according to the one or more attributes associated with the prompt element. | 08-28-2008 |
20080208589 | Presenting Supplemental Content For Digital Media Using A Multimodal Application - Presenting supplemental content for digital media using a multimodal application, implemented with a grammar of the multimodal application in an automatic speech recognition (‘ASR’) engine, with the multimodal application operating on a multimodal device supporting multiple modes of interaction including a voice mode and one or more non-voice modes, the multimodal application operatively coupled to the ASR engine, includes: rendering, by the multimodal application, a portion of the digital media; receiving, by the multimodal application, a voice utterance from a user; determining, by the multimodal application using the ASR engine, a recognition result in dependence upon the voice utterance and the grammar; identifying, by the multimodal application, supplemental content for the rendered portion of the digital media in dependence upon the recognition result; and rendering, by the multimodal application, the supplemental content. | 08-28-2008 |
20080208590 | Disambiguating A Speech Recognition Grammar In A Multimodal Application - Disambiguating a speech recognition grammar in a multimodal application, the multimodal application including voice activated hyperlinks, the voice activated hyperlinks voice enabled by a speech recognition grammar characterized by ambiguous terminal grammar elements, including maintaining by the multimodal browser a record of visibility of each voice activated hyperlink, the record of visibility including current visibility and past visibility on a display of the multimodal device of each voice activated hyperlink, the record of visibility further including an ordinal indication, for each voice activated hyperlink scrolled off display, of the sequence in which each such voice activated hyperlink was scrolled off display; recognizing by the multimodal browser speech from a user matching an ambiguous terminal element of the speech recognition grammar; selecting by the multimodal browser a voice activated hyperlink for activation, the selecting carried out in dependence upon the recognized speech and the record of visibility. | 08-28-2008 |
20080208591 | Enabling Global Grammars For A Particular Multimodal Application - Methods, apparatus, and computer program products are described for enabling global grammars for a particular multimodal application according to the present invention by loading a multimodal web page; determining whether the loaded multimodal web page is one of a plurality of multimodal web pages of the particular multimodal application. If the loaded multimodal web page is one of the plurality of multimodal web pages of the particular multimodal application, enabling global grammars typically includes loading any currently unloaded global grammars of the particular multimodal application identified in the multimodal web page and maintaining any previously loaded global grammars. If the loaded multimodal web page is not one of the plurality of multimodal web pages of the particular multimodal application, enabling global grammars typically includes unloading any currently loaded global grammars. | 08-28-2008 |
20080208592 | Configuring A Speech Engine For A Multimodal Application Based On Location - Methods, apparatus, and products are disclosed for configuring a speech engine for a multimodal application based on location. The multimodal application operates on a multimodal device supporting multiple modes of user interaction with the multimodal application. The multimodal application is operatively coupled to a speech engine. Configuring a speech engine for a multimodal application based on location includes: receiving a location change notification in a location change monitor from a device location manager, the location change notification specifying a current location of the multimodal device; identifying, by the location change monitor, location-based configuration parameters for the speech engine in dependence upon the current location of the multimodal device, the location-based configuration parameters specifying a configuration for the speech engine at the current location; and updating, by the location change monitor, a current configuration for the speech engine according to the identified location-based configuration parameters. | 08-28-2008 |
20080208593 | Altering Behavior Of A Multimodal Application Based On Location - Methods, apparatus, and products are disclosed for altering behavior of a multimodal application based on location. The multimodal application operates on a multimodal device supporting multiple modes of user interaction with the multimodal application, including a voice mode and one or more non-voice modes. The voice mode of user interaction with the multimodal application is supported by a voice interpreter. Altering behavior of a multimodal application based on location includes: receiving a location change notification in the voice interpreter from a device location manager, the device location manager operatively coupled to a position detection component of the multimodal device, the location change notification specifying a current location of the multimodal device; updating, by the voice interpreter, location-based environment parameters for the voice interpreter in dependence upon the current location of the multimodal device; and interpreting, by the voice interpreter, the multimodal application in dependence upon the location-based environment parameters. | 08-28-2008 |
20080208594 | Effecting Functions On A Multimodal Telephony Device - Methods, apparatus, and computer program products are described for effecting functions on a multimodal telephony device, implemented with the multimodal application operating on a multimodal telephony device supporting multiple modes of interaction including a voice mode and one or more non-voice modes, the multimodal application operatively coupled to an automated speech recognition engine. Embodiments include receiving the speech of a telephone call; identifying with the automated speech recognition engine action keywords in the speech of the telephone call; selecting a function of the multimodal telephony device in dependence upon the action keywords; identifying parameters for the function of the multimodal telephony device; and executing the function of the multimodal telephony device using the identified parameters. | 08-28-2008 |
20080228494 | Speech-Enabled Web Content Searching Using A Multimodal Browser - Speech-enabled web content searching using a multimodal browser implemented with one or more grammars in an automatic speech recognition (‘ASR’) engine, with the multimodal browser operating on a multimodal device supporting multiple modes of interaction including a voice mode and one or more non-voice modes, the multimodal browser operatively coupled to the ASR engine, includes: rendering, by the multimodal browser, web content; searching, by the multimodal browser, the web content for a search phrase, including yielding a matched search result, the search phrase specified by a first voice utterance received from a user and a search grammar; and performing, by the multimodal browser, an action in dependence upon the matched search result, the action specified by a second voice utterance received from the user and an action grammar. | 09-18-2008 |
20080235021 | Indexing Digitized Speech With Words Represented In The Digitized Speech - Indexing digitized speech with words represented in the digitized speech, with a multimodal digital audio editor operating on a multimodal device supporting modes of user interaction, the modes of user interaction including a voice mode and one or more non-voice modes, the multimodal digital audio editor operatively coupled to an ASR engine, including providing by the multimodal digital audio editor to the ASR engine digitized speech for recognition; receiving in the multimodal digital audio editor from the ASR engine recognized user speech including a recognized word, also including information indicating where, in the digitized speech, representation of the recognized word begins; and inserting by the multimodal digital audio editor the recognized word, in association with the information indicating where, in the digitized speech, representation of the recognized word begins, into a speech recognition grammar, the speech recognition grammar voice enabling user interface commands of the multimodal digital audio editor. | 09-25-2008 |
20080235022 | Automatic Speech Recognition With Dynamic Grammar Rules - Automatic speech recognition implemented with a speech recognition grammar of a multimodal application in an ASR engine, the multimodal application operating on a multimodal device supporting multiple modes of user interaction including a voice mode, the multimodal application operatively coupled to the ASR engine, including: matching by the ASR engine at least one static rule of the speech recognition grammar with at least one word of a voice utterance, yielding a matched value, the matched value specified by the grammar to be required for processing of a dynamic rule of the grammar; and dynamically defining at run time the dynamic rule of the grammar as a new static rule in dependence upon the matched value, the dynamic rule comprising a rule that is specified by the grammar as a rule that is not to be processed by the ASR until after the at least one static rule has been matched. | 09-25-2008 |
20080235027 | Supporting Multi-Lingual User Interaction With A Multimodal Application - Methods, apparatus, and products are disclosed for supporting multi-lingual user interaction with a multimodal application, the application including a plurality of VoiceXML dialogs, each dialog characterized by a particular language, supporting multi-lingual user interaction implemented with a plurality of speech engines, each speech engine having a grammar and characterized by a language corresponding to one of the dialogs, with the application operating on a multimodal device supporting multiple modes of interaction including a voice mode and one or more non-voice modes, the application operatively coupled to the speech engines through a VoiceXML interpreter, the VoiceXML interpreter: receiving a voice utterance from a user; determining in parallel, using the speech engines, recognition results for each dialog in dependence upon the voice utterance and the grammar for each speech engine; administering the recognition results for the dialogs; and selecting a language for user interaction in dependence upon the administered recognition results. | 09-25-2008 |
20080235029 | Speech-Enabled Predictive Text Selection For A Multimodal Application - Methods, apparatus, and products are disclosed for speech-enabled predictive text selection for a multimodal application, the multimodal application operating on a multimodal device supporting multiple modes of interaction including a voice mode and one or more non-voice modes, the multimodal application operatively coupled to an automatic speech recognition (‘ASR’) engine through a VoiceXML interpreter, including: identifying, by the VoiceXML interpreter, a text prediction event, the text prediction event characterized by one or more predictive texts for a text input field of the multimodal application; creating, by the VoiceXML interpreter, a grammar in dependence upon the predictive texts; receiving, by the VoiceXML interpreter, a voice utterance from a user; and determining, by the VoiceXML interpreter using the ASR engine, recognition results in dependence upon the voice utterance and the grammar, the recognition results representing a user selection of a particular predictive text. | 09-25-2008 |
20080249782 | Web Service Support For A Multimodal Client Processing A Multimodal Application - Web service support for a multimodal client processing a multimodal application, the multimodal client providing an execution environment for the application and operating on a multimodal device supporting multiple modes of user interaction including a voice mode and one or more non-voice modes, the application stored on an application server, includes: receiving, by the server, an application request from the client that specifies the application and device characteristics; determining, by a multimodal adapter of the server, modality requirements for the application; selecting, by the adapter, a modality web service in dependence upon the modality requirements and the characteristics for the device; determining, by the adapter, whether the device supports VoIP in dependence upon the characteristics; providing, by the server, the application to the client; and providing, by the adapter to the client in dependence upon whether the device supports VoIP, access to the modality web service for processing the application. | 10-09-2008 |
20080255850 | Providing Expressive User Interaction With A Multimodal Application - Methods, apparatus, and products are disclosed for providing expressive user interaction with a multimodal application, the multimodal application operating in a multimodal browser on a multimodal device supporting multiple modes of user interaction including a voice mode and one or more non-voice modes, the multimodal application operatively coupled to a speech engine through a VoiceXML interpreter, including: receiving, by the multimodal browser, user input from a user through a particular mode of user interaction; determining, by the multimodal browser, user output for the user in dependence upon the user input; determining, by the multimodal browser, a style for the user output in dependence upon the user input, the style specifying expressive output characteristics for at least one other mode of user interaction; and rendering, by the multimodal browser, the user output in dependence upon the style. | 10-16-2008 |
20080255851 | Speech-Enabled Content Navigation And Control Of A Distributed Multimodal Browser - Speech-enabled content navigation and control of a distributed multimodal browser is disclosed, the browser providing an execution environment for a multimodal application, the browser including a graphical user agent (‘GUA’) and a voice user agent (‘VUA’), the GUA operating on a multimodal device, the VUA operating on a voice server, that includes: transmitting, by the GUA, a link message to the VUA, the link message specifying voice commands that control the browser and an event corresponding to each voice command; receiving, by the GUA, a voice utterance from a user, the voice utterance specifying a particular voice command; transmitting, by the GUA, the voice utterance to the VUA for speech recognition by the VUA; receiving, by the GUA, an event message from the VUA, the event message specifying a particular event corresponding to the particular voice command; and controlling, by the GUA, the browser in dependence upon the particular event. | 10-16-2008 |
20090292580 | AMBIENT PROJECT MANAGEMENT - A computer-implemented method of ambient ad hoc project management can include defining a project and associating a project decay function with the project, wherein the project decay function regulates a rate at which project health declines. Responsive to detecting a project event, one or more parameters of the project decay function can be determined from the project event. Project health can be calculated according to the project decay function using the parameter(s). An indication of the project health can be output. | 11-26-2009 |
20100031151 | Enabling speech within a multimodal program using markup - A method for speech enabling an application can include the step of specifying a speech input within a speech-enabled markup. The speech-enabled markup can also specify an application operation that is to be executed responsive to the detection of the speech input. After the speech input has been defined within the speech-enabled markup, the application can be instantiated. The specified speech input can then be detected and the application operation can be responsively executed in accordance with the specified speech-enabled markup. | 02-04-2010 |
20120011443 | ENABLING SPEECH WITHIN A MULTIMODAL PROGRAM USING MARKUP - A method for speech enabling an application can include the step of specifying a speech input within a speech-enabled markup. The speech-enabled markup can also specify an application operation that is to be executed responsive to the detection of the speech input. After the speech input has been defined within the speech-enabled markup, the application can be instantiated. The specified speech input can then be detected and the application operation can be responsively executed in accordance with the specified speech-enabled markup. | 01-12-2012 |
20140278422 | INDEXING DIGITIZED SPEECH WITH WORDS REPRESENTED IN THE DIGITIZED SPEECH - Indexing digitized speech with words represented in the digitized speech, with a multimodal digital audio editor operating on a multimodal device supporting modes of user interaction, the modes of user interaction including a voice mode and one or more non-voice modes, the multimodal digital audio editor operatively coupled to an ASR engine, including providing by the multimodal digital audio editor to the ASR engine digitized speech for recognition; receiving in the multimodal digital audio editor from the ASR engine recognized user speech including a recognized word, also including information indicating where, in the digitized speech, representation of the recognized word begins; and inserting by the multimodal digital audio editor the recognized word, in association with the information indicating where, in the digitized speech, representation of the recognized word begins, into a speech recognition grammar, the speech recognition grammar voice enabling user interface commands of the multimodal digital audio editor. | 09-18-2014 |
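Several of the abstracts above (notably 20080208585) describe ordering a plurality of recognition results by a weight assigned through semantic interpretation scripts of the grammar. As a conceptual sketch only, assuming hypothetical names not drawn from any of the listed filings, the sorting step might look like this:

```python
# Conceptual sketch of ordering ASR recognition results by weight, in the
# spirit of application 20080208585. All class and function names here are
# illustrative assumptions, not terms from the patent text.
from dataclasses import dataclass


@dataclass
class RecognitionResult:
    utterance_text: str     # text recognized from the voice utterance
    confidence: float       # raw confidence score from the ASR engine
    semantic_weight: float  # weight assigned via a semantic interpretation script


def order_results(results):
    """Sort recognition results by descending semantic weight,
    breaking ties with the raw ASR confidence score."""
    return sorted(results,
                  key=lambda r: (r.semantic_weight, r.confidence),
                  reverse=True)


results = [
    RecognitionResult("call home", 0.82, 0.5),
    RecognitionResult("call Holmes", 0.79, 0.9),
    RecognitionResult("all home", 0.40, 0.1),
]
ordered = order_results(results)
```

In this sketch the semantic weight, not the engine's raw confidence, dominates the ordering, matching the abstract's emphasis on weights derived from the grammar's semantic interpretation scripts.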