Patent application number | Description | Published |
20090217804 | MUSIC STEERING WITH AUTOMATICALLY DETECTED MUSICAL ATTRIBUTES - Described is a technology by which a playback list comprising similar songs is automatically built based on automatically detected/generated song attributes, such as by extracting numeric features of each song. The attributes may be downloaded from a remote connection, and/or may be locally generated on the playback device. To build a playlist, a seed song's attributes may be compared against attributes of other songs to determine which other songs are similar to the seed song and thus included in the playlist. Another way to build a playlist is based on similarity of songs to a set of user-provided attributes, such as those corresponding to moods or usage modes such as “resting,” “reading,” “jogging,” or “driving.” The playlist may be dynamically adjusted based on user interaction with the device, such as when a user skips a song, queues a song, or dequeues a song. | 09-03-2009 |
20100268534 | TRANSCRIPTION, ARCHIVING AND THREADING OF VOICE COMMUNICATIONS - Described is a technology that provides highly accurate speech-recognized text transcripts of conversations, particularly telephone or meeting conversations. Speech is received for recognition at high quality and separately for each user, that is, independent of any transmission. Moreover, because the speech is received separately, a personalized recognition model adapted to each user's voice and vocabulary may be used. The separately recognized text is then merged into a transcript of the communication. The transcript may be labeled with the identity of each user who spoke the corresponding speech. The output of the transcript may be dynamic as the conversation takes place, or may occur later, such as contingent upon each user agreeing to release his or her text. The transcript may be incorporated into the text or data of another program, such as to insert it as a thread in a larger email conversation or the like. | 10-21-2010 |
20120221330 | LEVERAGING SPEECH RECOGNIZER FEEDBACK FOR VOICE ACTIVITY DETECTION - A voice activity detection (VAD) module analyzes a media file, such as an audio file or a video file, to determine whether one or more frames of the media file include speech. A speech recognizer generates feedback relating to an accuracy of the VAD determination. The VAD module leverages the feedback to improve subsequent VAD determinations. The VAD module also utilizes a look-ahead window associated with the media file to adjust estimated probabilities or VAD decisions for previously processed frames. | 08-30-2012 |
20120226696 | Keyword Generation for Media Content - In various embodiments, a transcript that represents a media file is created. Keyword candidates that may represent topics and/or content associated with the media content are then extracted from the transcript. Furthermore, a keyword set may be generated for the media content utilizing a mutual information criterion. In other embodiments, one or more queries may be generated based at least in part on the transcript, and a plurality of web documents may be retrieved based at least in part on the one or more queries. Additional keyword candidates may be extracted from each web document and then ranked. A subset of the keyword candidates may then be selected to form a keyword set associated with the media content. | 09-06-2012 |
20130138436 | DISCRIMINATIVE PRETRAINING OF DEEP NEURAL NETWORKS - Discriminative pretraining technique embodiments are presented that pretrain the hidden layers of a Deep Neural Network (DNN). In general, a one-hidden-layer neural network is trained first using labels discriminatively with error back-propagation (BP). Then, after discarding an output layer in the previous one-hidden-layer neural network, another randomly initialized hidden layer is added on top of the previously trained hidden layer along with a new output layer that represents the targets for classification or recognition. The resulting multiple-hidden-layer DNN is then discriminatively trained using the same strategy, and so on until the desired number of hidden layers is reached. This produces a pretrained DNN. The discriminative pretraining technique embodiments have the advantage of bringing the DNN layer weights close to a good local optimum, while still leaving them in a range with a high gradient so that they can be fine-tuned effectively. | 05-30-2013 |
20130138589 | EXPLOITING SPARSENESS IN TRAINING DEEP NEURAL NETWORKS - Deep Neural Network (DNN) training technique embodiments are presented that train a DNN while exploiting the sparseness of non-zero hidden layer interconnection weight values. Generally, a fully connected DNN is initially trained by sweeping through a full training set a number of times. Then, for the most part, only the interconnections whose weight magnitudes exceed a minimum weight threshold are considered in further training. This minimum weight threshold can be established as a value that results in only a prescribed maximum number of interconnections being considered when setting interconnection weight values via an error back-propagation procedure during the training. It is noted that the continued DNN training tends to converge much faster than the initial training. | 05-30-2013 |
20130191126 | Subword-Based Multi-Level Pronunciation Adaptation for Recognizing Accented Speech - Techniques are described for training a speech recognition model for accented speech. A subword parse table is employed that models mispronunciations at multiple subword levels, such as the syllable, position-specific cluster, and/or phone levels. Mispronunciation probability data is then generated at each level based on inputted training data, such as phone-level annotated transcripts of accented speech. Data from different levels of the subword parse table may then be combined to determine the accented speech model. Mispronunciation probability data at each subword level is based at least in part on context at that level. In some embodiments, phone-level annotated transcripts are generated using a semi-supervised method. | 07-25-2013 |
20140142929 | DEEP NEURAL NETWORKS TRAINING FOR SPEECH AND PATTERN RECOGNITION - The use of a pipelined algorithm that performs parallelized computations to train deep neural networks (DNNs) for performing data analysis may reduce training time. The DNNs may be one of context-independent DNNs or context-dependent DNNs. The training may include partitioning training data into sample batches of a specific batch size. The partitioning may be performed based on rates of data transfers between processors that execute the pipelined algorithm, considerations of accuracy and convergence, and the execution speed of each processor. Other techniques for training may include grouping layers of the DNNs for processing on a single processor, distributing a layer of the DNNs to multiple processors for processing, or modifying an execution order of steps in the pipelined algorithm. | 05-22-2014 |
20140149468 | MUSIC STEERING WITH AUTOMATICALLY DETECTED MUSICAL ATTRIBUTES - Described is a technology by which a playback list comprising similar songs is automatically built based on automatically detected/generated song attributes, such as by extracting numeric features of each song. The attributes may be downloaded from a remote connection, and/or may be locally generated on the playback device. To build a playlist, a seed song's attributes may be compared against attributes of other songs to determine which other songs are similar to the seed song and thus included in the playlist. Another way to build a playlist is based on similarity of songs to a set of user-provided attributes, such as those corresponding to moods or usage modes such as “resting,” “reading,” “jogging,” or “driving.” The playlist may be dynamically adjusted based on user interaction with the device, such as when a user skips a song, queues a song, or dequeues a song. | 05-29-2014 |
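The seed-song approach in applications 20090217804 and 20140149468 (compare a seed song's extracted numeric features against the rest of the library and keep the closest matches) could be sketched as follows. The feature values, the use of cosine similarity, and the `top_k` cutoff are illustrative assumptions, not details taken from the abstract:

```python
import numpy as np

def build_playlist(seed, candidates, top_k=3):
    """Rank candidate songs by cosine similarity of their numeric
    feature vectors to the seed song's vector; return the top_k titles."""
    def cos(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
    ranked = sorted(candidates.items(),
                    key=lambda kv: cos(seed, kv[1]),
                    reverse=True)
    return [title for title, _ in ranked[:top_k]]

# Toy numeric attributes (e.g. tempo, energy, brightness), normalized.
seed = np.array([0.8, 0.6, 0.1])
library = {
    "song_a": np.array([0.82, 0.55, 0.12]),  # close to the seed
    "song_b": np.array([0.10, 0.90, 0.80]),  # far from the seed
    "song_c": np.array([0.75, 0.65, 0.05]),  # close to the seed
}
playlist = build_playlist(seed, library, top_k=2)
```

The user-attribute variant in the same abstracts would simply replace `seed` with a vector representing the chosen mood or usage mode.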
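The merging step in application 20100268534 (separately recognized per-speaker text combined into one speaker-labeled transcript) could be sketched as below. The `(start_time, text)` stream format and the speaker names are illustrative assumptions:

```python
def merge_transcripts(*speaker_streams):
    """Merge separately recognized per-speaker utterance lists, each a
    (speaker_name, [(start_time_sec, text), ...]) pair, into one
    time-ordered, speaker-labeled transcript."""
    tagged = []
    for name, stream in speaker_streams:
        for t, text in stream:
            tagged.append((t, name, text))
    # Sorting by start time interleaves the streams; each line keeps
    # the identity of the user who spoke it, as in the abstract.
    return [f"{name}: {text}" for t, name, text in sorted(tagged)]

alice = ("Alice", [(0.0, "Hi Bob."), (4.2, "Tuesday works.")])
bob = ("Bob", [(1.5, "Hi Alice, when should we meet?")])
transcript = merge_transcripts(alice, bob)
```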
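The look-ahead adjustment in application 20120221330 (revising a frame's speech/non-speech decision using estimated probabilities of later frames) could be sketched as below. The window size, threshold, and the specific smoothing rule are illustrative assumptions:

```python
def smooth_vad(frame_probs, lookahead=2, threshold=0.5):
    """Decide speech/non-speech per frame, letting up to `lookahead`
    future frames rescue a frame whose own probability dips."""
    decisions = []
    for i, p in enumerate(frame_probs):
        window = frame_probs[i:i + lookahead + 1]
        avg = sum(window) / len(window)
        # A frame counts as speech if either its own probability or the
        # look-ahead-averaged probability clears the threshold.
        decisions.append(p >= threshold or avg >= threshold)
    return decisions

# An isolated low-probability frame inside speech (index 1) is corrected
# by the look-ahead window; the trailing silence (indices 4-5) is not.
probs = [0.9, 0.4, 0.9, 0.9, 0.1, 0.1]
flags = smooth_vad(probs, lookahead=2)
```

The recognizer-feedback loop in the abstract would additionally adjust `threshold` (or the underlying probability model) based on recognition accuracy, which this sketch omits.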
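The mutual information criterion in application 20120226696 is not specified in the abstract; a common stand-in is a pointwise-mutual-information style score that favors terms much more frequent in the transcript than in a background corpus. The scoring formula, smoothing, and toy corpora below are all illustrative assumptions:

```python
import math
from collections import Counter

def rank_keywords(transcript_tokens, background_tokens, top_k=2):
    """Score each transcript term by p(term|doc) * log(p(term|doc) /
    p(term|background)) and return the top_k highest-scoring terms."""
    doc = Counter(transcript_tokens)
    bg = Counter(background_tokens)
    n_doc = sum(doc.values())
    n_bg = sum(bg.values())
    scores = {}
    for term, count in doc.items():
        p_doc = count / n_doc
        p_bg = (bg.get(term, 0) + 1) / (n_bg + len(bg))  # add-one smoothing
        scores[term] = p_doc * math.log(p_doc / p_bg)
    return sorted(scores, key=scores.get, reverse=True)[:top_k]

transcript = "neural network training neural network gradient the the".split()
background = "the a of the and to in the of a".split()
keywords = rank_keywords(transcript, background, top_k=2)
```

Content words score high; function words like "the" score near zero because they are no more frequent in the transcript than in the background.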
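The greedy procedure in application 20130138436 (train a one-hidden-layer net with BP, discard the output layer, stack a new random hidden layer plus a fresh output layer, retrain, repeat) could be sketched as below on a toy task. The layer sizes, learning rate, epoch count, and sigmoid activations are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def train(layers, X, Y, epochs=100, lr=0.3):
    """Plain error back-propagation over a stack of bias-free sigmoid
    layers (one weight matrix per layer). Updates layers in place."""
    sig = lambda z: 1.0 / (1.0 + np.exp(-z))
    for _ in range(epochs):
        acts = [X]                      # forward pass
        for W in layers:
            acts.append(sig(acts[-1] @ W))
        delta = (acts[-1] - Y) * acts[-1] * (1 - acts[-1])
        for i in range(len(layers) - 1, -1, -1):   # backward pass
            grad = acts[i].T @ delta
            if i > 0:
                delta = (delta @ layers[i].T) * acts[i] * (1 - acts[i])
            layers[i] -= lr * grad
    return layers

def discriminative_pretrain(X, Y, hidden_sizes):
    """Grow the hidden stack one layer at a time, discriminatively
    training the whole current net (with a fresh output layer) each
    round, then discarding that output layer before growing again."""
    hidden = []
    for h in hidden_sizes:
        prev = hidden[-1].shape[1] if hidden else X.shape[1]
        hidden.append(rng.normal(0, 0.5, (prev, h)))   # new random hidden layer
        out = rng.normal(0, 0.5, (h, Y.shape[1]))      # fresh output layer
        hidden = train(hidden + [out], X, Y)[:-1]      # keep only hidden stack
    return hidden

# XOR-style toy task with two pretrained hidden layers.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)
Y = np.array([[0], [1], [1], [0]], float)
stack = discriminative_pretrain(X, Y, hidden_sizes=[4, 4])
```

Fine-tuning the full DNN afterwards, which the abstract says these near-optimal but high-gradient weights enable, is not shown.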
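The sparseness mechanism in application 20130138589 (after initial full training, keep only interconnections whose weight magnitudes exceed a threshold chosen so that at most a prescribed number survive, then restrict further BP updates to those) could be sketched as below. The toy weight matrix and learning rate are illustrative assumptions:

```python
import numpy as np

def sparsify(W, max_connections):
    """Build a boolean mask keeping the `max_connections`
    largest-magnitude weights; the implied magnitude cutoff plays the
    role of the minimum weight threshold in the abstract. (Ties at the
    cutoff may keep slightly more than the prescribed number.)"""
    flat = np.abs(W).ravel()
    if max_connections >= flat.size:
        return np.ones_like(W, dtype=bool)
    threshold = np.sort(flat)[::-1][max_connections - 1]
    return np.abs(W) >= threshold

def masked_update(W, grad, mask, lr=0.1):
    """BP weight update that only touches surviving interconnections."""
    W -= lr * grad * mask
    W *= mask        # pruned weights stay exactly zero
    return W

W = np.array([[0.9, 0.01], [-0.7, 0.02]])
mask = sparsify(W, max_connections=2)
W = masked_update(W, grad=np.ones_like(W), mask=mask)
```

In a full implementation the mask would be applied on every continued-training step, which is where the abstract's faster convergence is claimed.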