Patent application number | Description | Published |
--- | --- | --- |
20120109652 | Leveraging Interaction Context to Improve Recognition Confidence Scores - On a computing device, a speech utterance is received from a user. The speech utterance is a section of a speech dialog that includes a plurality of speech utterances. One or more features from the speech utterance are identified. Each identified feature from the speech utterance is a specific characteristic of the speech utterance. One or more features from the speech dialog are identified. Each identified feature from the speech dialog is associated with one or more events in the speech dialog. The one or more events occur prior to the speech utterance. One or more identified features from the speech utterance and one or more identified features from the speech dialog are used to calculate a confidence score for the speech utterance. | 05-03-2012 |
20130080150 | Automatic Semantic Evaluation of Speech Recognition Results - A semantic error rate calculation may be provided. After receiving a spoken query from a user, the spoken query may be converted to text according to a first speech recognition hypothesis. A plurality of results associated with the converted query may be received and compared to a second plurality of results associated with the converted query. | 03-28-2013 |
20130080162 | User Query History Expansion for Improving Language Model Adaptation - Query history expansion may be provided. Upon receiving a spoken query from a user, an adapted language model may be applied to convert the spoken query to text. The adapted language model may comprise a plurality of queries interpolated from the user's previous queries and queries associated with other users. The spoken query may be executed and the results of the spoken query may be provided to the user. | 03-28-2013 |
20140350931 | LANGUAGE MODEL TRAINED USING PREDICTED QUERIES FROM STATISTICAL MACHINE TRANSLATION - A Statistical Machine Translation (SMT) model is trained using pairs of sentences that include content obtained from one or more content sources (e.g. feed(s)) with corresponding queries that have been used to access the content. A query click graph may be used to assist in determining candidate pairs for the SMT training data. All or a portion of the candidate pairs may be used to train the SMT model. After training the SMT model using the SMT training data, the SMT model is applied to content to determine predicted queries that may be used to search for the content. The predicted queries are used to train a language model, such as a query language model. The query language model may be interpolated with other language models, such as a background language model, as well as a feed language model trained using the content used in determining the predicted queries. | 11-27-2014 |
20140365218 | LANGUAGE MODEL ADAPTATION USING RESULT SELECTION - A received utterance is recognized using different language models. For example, recognition of the utterance is independently performed using a baseline language model (BLM) and using an adapted language model (ALM). A determination is made as to which of the results from the different language models is more likely to be accurate. Different features (e.g. language model scores, recognition confidences, acoustic model scores, quality measurements, . . . ) may be used to assist in making the determination. A classifier may be trained and then used in determining whether to select the results using the BLM or to select the results using the ALM. A language model may be automatically trained or re-trained that adjusts a weight of the training data used in training the model in response to differences between the two results obtained from applying the different language models. | 12-11-2014 |
20150046163 | LEVERAGING INTERACTION CONTEXT TO IMPROVE RECOGNITION CONFIDENCE SCORES - On a computing device, a speech utterance is received from a user. The speech utterance is a section of a speech dialog that includes a plurality of speech utterances. One or more features from the speech utterance are identified. Each identified feature from the speech utterance is a specific characteristic of the speech utterance. One or more features from the speech dialog are identified. Each identified feature from the speech dialog is associated with one or more events in the speech dialog. The one or more events occur prior to the speech utterance. One or more identified features from the speech utterance and one or more identified features from the speech dialog are used to calculate a confidence score for the speech utterance. | 02-12-2015 |
20150269136 | CONTEXT-AWARE RE-FORMATING OF AN INPUT - Various components provide options to re-format an input based on one or more contexts. The input is received that has been submitted to an application (e.g., messaging application, mobile application, word-processing application, web browser, search tool, etc.), and one or more outputs are identified that are possibilities to be provided as options for re-formatting. A respective score of each output is determined by applying a statistical model to a respective combination of the input and each output, the respective score comprising a plurality of context scores that quantify a plurality of contexts of the respective combination. Exemplary contexts include historical-user contexts, domain contexts, and general contexts. One or more suggested outputs are selected from among the one or more outputs based on the respective scores and are provided as options to re-format the input. | 09-24-2015 |
20150269949 | INCREMENTAL UTTERANCE DECODER COMBINATION FOR EFFICIENT AND ACCURATE DECODING - An incremental speech recognition system. The incremental speech recognition system incrementally decodes a spoken utterance using an additional utterance decoder only when the additional utterance decoder is likely to add significant benefit to the combined result. The available utterance decoders are ordered in a series based on accuracy, performance, diversity, and other factors. A recognition management engine coordinates decoding of the spoken utterance by the series of utterance decoders, combines the decoded utterances, and determines whether additional processing is likely to significantly improve the recognition result. If so, the recognition management engine engages the next utterance decoder and the cycle continues. If the accuracy cannot be significantly improved, the result is accepted and decoding stops. Accordingly, a decoded utterance with accuracy approaching the maximum for the series is obtained without decoding the spoken utterance using all utterance decoders in the series, thereby minimizing resource usage. | 09-24-2015 |
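The confidence-scoring idea in applications 20120109652 and 20150046163 (combining features of the utterance itself with features of earlier dialog events into one score) might be sketched as a simple logistic combination. The feature names and weights below are invented for illustration; the applications do not specify a particular combining function.

```python
import math

def confidence_score(utterance_features, dialog_features, weights):
    """Weighted logistic combination of per-utterance and dialog-context
    features into a single confidence score in (0, 1).
    All feature names and weights are illustrative, not from the patents."""
    features = {**utterance_features, **dialog_features}
    z = sum(weights.get(name, 0.0) * value for name, value in features.items())
    return 1.0 / (1.0 + math.exp(-z))

score = confidence_score(
    {"acoustic_score": 2.0},   # a specific characteristic of the utterance
    {"prior_reprompts": 1.0},  # an event occurring earlier in the dialog
    weights={"acoustic_score": 1.0, "prior_reprompts": -1.0},
)
```

Here a re-prompt earlier in the dialog lowers the score, reflecting the idea that interaction context carries evidence about recognition quality.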
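Applications 20130080162 and 20140350931 both rely on interpolating a query language model with other language models. A minimal sketch of linear interpolation over unigram models, assuming an interpolation weight of 0.7 (the weight and the toy vocabularies are assumptions, not values from the patents):

```python
def interpolate(query_lm, background_lm, weight=0.7):
    """Linear interpolation of two unigram language models:
    p(w) = weight * p_query(w) + (1 - weight) * p_background(w).
    Toy models only; real systems interpolate full n-gram models."""
    vocab = set(query_lm) | set(background_lm)
    return {
        w: weight * query_lm.get(w, 0.0) + (1 - weight) * background_lm.get(w, 0.0)
        for w in vocab
    }

query_lm = {"weather": 0.5, "seattle": 0.5}      # e.g. trained on predicted queries
background_lm = {"weather": 0.1, "the": 0.9}     # general-purpose background model
combined = interpolate(query_lm, background_lm, weight=0.7)
```

Because both inputs are proper distributions, the interpolated model still sums to one while shifting probability mass toward query-like terms.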
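The result-selection step of application 20140365218 (a trained classifier choosing between the baseline-LM and adapted-LM hypotheses) could look like the linear decision rule below. The feature layout, weights, and dictionary shapes are hypothetical stand-ins for whatever features the trained classifier would actually use.

```python
def select_recognition(baseline, adapted, weights, bias=0.0):
    """Choose between BLM and ADM/ALM results with a linear classifier over
    score differences. A positive margin favors the adapted model.
    Weights would come from training; here they are supplied by hand."""
    diff = [a - b for a, b in zip(adapted["scores"], baseline["scores"])]
    margin = bias + sum(w * d for w, d in zip(weights, diff))
    return adapted if margin > 0 else baseline

baseline = {"text": "call bob", "scores": [0.6, 0.5]}   # e.g. LM score, confidence
adapted = {"text": "call rob", "scores": [0.8, 0.7]}
chosen = select_recognition(baseline, adapted, weights=[1.0, 1.0])
```

With positive weights, the higher-scoring adapted-model hypothesis wins; flipping the weights would express distrust of the adapted model and fall back to the baseline.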
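The incremental decoding loop of application 20150269949 (engage the next decoder in the series only while it is likely to add significant benefit) can be sketched with a confidence-headroom stopping rule. The stopping criterion and the decoder interface here are assumptions; the application describes the decision more generally in terms of expected improvement.

```python
def incremental_decode(audio, decoders, min_gain=0.05):
    """Run utterance decoders in series, keeping the best result so far and
    stopping once the remaining headroom (1 - best confidence) falls below
    min_gain, so later decoders are skipped when they cannot help much."""
    best_text, best_conf = None, 0.0
    for decoder in decoders:
        text, conf = decoder(audio)
        if conf > best_conf:
            best_text, best_conf = text, conf
        if 1.0 - best_conf < min_gain:  # accuracy cannot be significantly improved
            break
    return best_text, best_conf

calls = []

def make_decoder(name, text, conf):
    """Stub decoder that records when it runs; real decoders would do ASR."""
    def decode(audio):
        calls.append(name)
        return text, conf
    return decode

decoders = [
    make_decoder("fast", "recognize speech", 0.90),
    make_decoder("medium", "recognize speech", 0.97),
    make_decoder("slow", "wreck a nice beach", 0.99),
]
text, conf = incremental_decode(b"pcm-audio", decoders)
```

In this run the third, most expensive decoder is never engaged: after the second decoder the headroom (0.03) is below the 0.05 threshold, which is exactly the resource saving the application targets.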