| Patent application number | Description | Published |
| --- | --- | --- |
| 20130166587 | User Interface for Viewing Targeted Segments of Multimedia Content Based on Time-Based Metadata Search Criteria - A system and method for navigating digital media assets, including a navigation system configured to receive a search query in response to a user input and to process the query by applying it to a search index of conventional and time-based metadata for digital media assets, determining search results comprising the titles of, and start points in time within, the digital media assets that satisfy the query. The navigation system may then display the search results to the user through the user interface. The results may be displayed in a hierarchical format, wherein the title of a digital media asset is displayed and, upon selection of the title, the start points in time within that asset are displayed or played as video to the user through the user interface. | 06-27-2013 |
| 20130259375 | Systems and Methods for Semantically Classifying and Extracting Shots in Video - The present disclosure relates to systems and methods for classifying videos based on video content. For a given video file including a plurality of frames, a subset of frames is extracted for processing. Frames that are too dark, blurry, or otherwise poor classification candidates are discarded from the subset. Material classification scores, which describe the type of material content likely included in each frame, are calculated for the remaining frames in the subset. These scores are used to generate material arrangement vectors that represent the spatial arrangement of material content in each frame. The material arrangement vectors are then classified to generate a scene classification score vector for each frame, and the scene classification results are averaged (or otherwise processed) across all frames in the subset to associate the video file with one or more predefined scene categories describing the overall scene content of the video file. | 10-03-2013 |
| 20130259390 | Systems and Methods for Semantically Classifying and Normalizing Shots in Video - The present disclosure relates to systems and methods for classifying videos based on video content. For a given video file including a plurality of frames, a subset of frames is extracted for processing. Frames that are too dark, blurry, or otherwise poor classification candidates are discarded from the subset. Material classification scores, which describe the type of material content likely included in each frame, are calculated for the remaining frames in the subset. These scores are used to generate material arrangement vectors that represent the spatial arrangement of material content in each frame. The material arrangement vectors are then classified to generate a scene classification score vector for each frame, and the scene classification results are averaged (or otherwise processed) across all frames in the subset to associate the video file with one or more predefined scene categories describing the overall scene content of the video file. | 10-03-2013 |
| 20140223480 | Ranking User Search and Recommendation Results for Multimedia Assets Using Metadata Analysis - Methods and systems for presenting a user with multimedia digital content that is available to the user and has a high correlation of potential viewing interest comprise determining which multimedia assets are available to the user, then ranking the available assets by the correlation between the user's interests and demographic information and the metadata associated with each asset. A higher ranking and relevance for a multimedia asset indicates a higher likelihood of viewing interest to the user. The ranked, available assets are presented on an interactive display screen, where higher-ranked assets are featured more prominently. The user is then able to take further action with respect to each presented asset. | 08-07-2014 |
| 20140321746 | Systems and Methods for Semantically Classifying and Extracting Shots in Video - The present disclosure relates to systems and methods for classifying videos based on video content. For a given video file including a plurality of frames, a subset of frames is extracted for processing. Frames that are too dark, blurry, or otherwise poor classification candidates are discarded from the subset. Material classification scores, which describe the type of material content likely included in each frame, are calculated for the remaining frames in the subset. These scores are used to generate material arrangement vectors that represent the spatial arrangement of material content in each frame. The material arrangement vectors are then classified to generate a scene classification score vector for each frame, and the scene classification results are averaged (or otherwise processed) across all frames in the subset to associate the video file with one or more predefined scene categories describing the overall scene content of the video file. | 10-30-2014 |
| 20150356354 | Systems and Methods for Semantically Classifying and Normalizing Shots in Video - The present disclosure relates to systems and methods for classifying videos based on video content. For a given video file including a plurality of frames, a subset of frames is extracted for processing. Frames that are too dark, blurry, or otherwise poor classification candidates are discarded from the subset. Material classification scores, which describe the type of material content likely included in each frame, are calculated for the remaining frames in the subset. These scores are used to generate material arrangement vectors that represent the spatial arrangement of material content in each frame. The material arrangement vectors are then classified to generate a scene classification score vector for each frame, and the scene classification results are averaged (or otherwise processed) across all frames in the subset to associate the video file with one or more predefined scene categories describing the overall scene content of the video file. | 12-10-2015 |
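The shot-classification applications above all describe the same pipeline: sample a subset of frames, discard frames that are too dark or blurry, score spatial blocks of each remaining frame for material content, classify the resulting arrangement vectors into per-frame scene scores, and average those scores across the subset. A minimal Python sketch of that flow follows; the material/scene label sets, the block-statistics "classifiers," and the linear scene model are all illustrative stand-ins, since the abstracts do not disclose the actual categories or trained models:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical label sets -- the abstracts do not enumerate the real categories.
MATERIALS = ["grass", "sky", "skin", "man-made"]
SCENES = ["outdoor", "indoor", "sports"]

def is_usable(frame, dark_thresh=0.1, blur_thresh=1e-3):
    """Reject frames that are too dark or too blurry, per the abstracts."""
    if frame.mean() < dark_thresh:           # too dark
        return False
    gy, gx = np.gradient(frame)
    return (gx ** 2 + gy ** 2).mean() >= blur_thresh  # low gradient ~ blurry

def material_arrangement_vector(frame, grid=2):
    """Score each spatial block for each material and concatenate the scores.
    A real system would use trained material classifiers; simple block
    statistics stand in here purely to keep the sketch runnable."""
    h, w = frame.shape
    vec = []
    for i in range(grid):
        for j in range(grid):
            block = frame[i * h // grid:(i + 1) * h // grid,
                          j * w // grid:(j + 1) * w // grid]
            # Stand-in per-material scores: softmax over block statistics.
            stats = np.array([block.mean(), block.std(),
                              block.max(), 1.0 - block.min()])
            vec.extend(np.exp(stats) / np.exp(stats).sum())
    return np.array(vec)

def scene_scores(arrangement_vec, W):
    """Map an arrangement vector to per-scene scores (linear + softmax)."""
    logits = W @ arrangement_vec
    return np.exp(logits) / np.exp(logits).sum()

def classify_video(frames, W, sample_every=5):
    """Sample frames, drop unusable ones, classify each, average the results."""
    subset = [f for f in frames[::sample_every] if is_usable(f)]
    per_frame = [scene_scores(material_arrangement_vector(f), W) for f in subset]
    mean_scores = np.mean(per_frame, axis=0)
    return SCENES[int(np.argmax(mean_scores))], mean_scores

# Synthetic "video": 60 random grayscale frames.
frames = [rng.random((32, 32)) for _ in range(60)]
W = rng.standard_normal((len(SCENES), 16))  # 2x2 grid * 4 materials = 16 dims
label, scores = classify_video(frames, W)
```

With a trained material classifier in place of the block statistics, the averaging step is what lets a whole file inherit one or more scene categories even when individual frames disagree.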
| Patent application number | Description | Published |
| --- | --- | --- |
| 20090092375 | Systems and Methods for Robust Video Signature with Area Augmented Matching - Systems and methods are provided for generating unique signatures for digital video files and locating video sequences within a video file, comprising calculating a frame signature for each frame of a first video and, for a second video, calculating a frame signature for each frame corresponding to the first video's frame signatures, calculating a frame distance between each pair of corresponding frame signatures, determining the video signature similarity between the videos, and searching within the resulting video signature similarity curve for a maximum corresponding to the first video within the second video. The method further applies area augmentation to the similarity curve to select that maximum from among a plurality of maxima corresponding to the first video file within the second video file. | 04-09-2009 |
| 20090094113 | Systems and Methods for Using Video Metadata to Associate Advertisements Therewith - A system for using metadata from a video signal to associate advertisements therewith, comprising (i) a segmentation system to divide the video signal into video clips; (ii) a digitizing system for digitizing the video clips; (iii) a feature extraction system for extracting audio and video features from each video clip, associating each audio and video feature with its respective clip, and saving the features into an associated metadata file; (iv) a web interface to the feature extraction system for receiving the video clips; and (v) a database in which video signals and associated metadata files are stored and indexed. The associated metadata file is provided when a video player requests the corresponding video signal, enabling selection of a relevant advertisement for presentment in conjunction with each video clip based on that clip's associated audio and video features. | 04-09-2009 |
| 20090141940 | Integrated Systems and Methods for Video-Based Object Modeling, Recognition, and Tracking - The present disclosure relates to systems and methods for modeling, recognizing, and tracking object images in video files. In one embodiment, a video file, which includes a plurality of frames, is received. An image of an object is extracted from a particular frame in the video file, and a subsequent image is also extracted from a subsequent frame. A similarity value is then calculated between the images extracted from the particular frame and the subsequent frame. If the calculated similarity value exceeds a predetermined similarity threshold, the extracted object images are assigned to an object group. The object group is used to generate an object model associated with the images in the group, wherein the model comprises image features extracted from optimal object images in the group. Optimal images from the group are also used for comparison against other object models for purposes of identifying images. | 06-04-2009 |
| 20090208106 | Systems and Methods for Semantically Classifying Shots in Video - The present disclosure relates to systems and methods for classifying videos based on video content. For a given video file including a plurality of frames, a subset of frames is extracted for processing. Frames that are too dark, blurry, or otherwise poor classification candidates are discarded from the subset. Material classification scores, which describe the type of material content likely included in each frame, are calculated for the remaining frames in the subset. These scores are used to generate material arrangement vectors that represent the spatial arrangement of material content in each frame. The material arrangement vectors are then classified to generate a scene classification score vector for each frame, and the scene classification results are averaged (or otherwise processed) across all frames in the subset to associate the video file with one or more predefined scene categories describing the overall scene content of the video file. | 08-20-2009 |
| 20090235150 | Systems and Methods for Dynamically Creating Hyperlinks Associated with Relevant Multimedia Content - The present disclosure relates to systems and methods for dynamically creating hyperlinks associated with relevant multimedia content in a computer network. A hyperlink generation module receives an electronic text file from a server and searches the file to identify keywords present in it. Once the keywords have been identified, a database is queried to identify multimedia content related to the keywords. Generally, the multimedia content is associated with metadata to enable efficient searching, and is contextually relevant to both the identified keywords and the text file. One or more hyperlinks corresponding to the keywords are then generated and inserted into the text file, providing pointers to the identified multimedia content. After insertion, the hyperlinks may be clicked by a user or viewer of the file to retrieve and display the identified multimedia content. | 09-17-2009 |
| 20090285551 | Systems and Methods for Identifying Pre-Inserted and/or Potential Advertisement Breaks in a Video Sequence - The present disclosure relates to systems and methods for identifying advertisement breaks in digital video files. Generally, an advertisement break identification module receives a digital video file and generates an edge response for each of one or more frames extracted from the file. If the edge response for a particular frame falls below a predefined threshold, the module identifies that frame as the start of an advertisement break. The module then generates edge responses for subsequent frames; once a subsequent frame's edge response exceeds the threshold, that frame is identified as the end of the advertisement break. The video file may then be manipulated or transformed, such as by associating metadata with the advertisement break for a variety of uses, removing the break from the file, etc. Optionally, various time and/or frame thresholds, as well as an audio verification process, are used to validate the identified advertisement break. | 11-19-2009 |
| 20100162286 | Systems and Methods for Analyzing Trends in Video Consumption Based on Embedded Video Metadata - Systems and methods are described for analyzing video content in conjunction with historical video consumption data and for identifying and generating relationships, rules, and correlations between the video content and viewer behavior. According to one aspect, a system receives video consumption data associated with one or more output states for one or more videos. The output states generally comprise tracked and recorded viewer behaviors during videos, such as pausing, rewinding, fast-forwarding, clicking on an advertisement (for Internet videos), and other similar actions. Next, the system receives metadata associated with the content of the videos, such as actors, places, objects, dialogue, etc. The system then analyzes the received consumption data and metadata via a multivariate analysis engine to generate an output analysis, which may be a scatter plot, chart, list, or similar output used to identify patterns associating the metadata with the output states. Finally, the system generates one or more rules incorporating the identified patterns, wherein the rules define relationships between the video content (i.e., metadata) and viewer behavior (i.e., output states). | 06-24-2010 |
| 20110134321 | Timeline Alignment for Closed-Caption Text Using Speech Recognition Transcripts - Methods, systems, and computer program products for synchronizing text with audio in a multimedia file, wherein the multimedia file is defined by a timeline having a start point, an end point, and respective points in time therebetween. An N-gram analysis compares each word of the closed-caption text associated with the multimedia file with words generated by automated speech recognition (ASR) analysis of the file's audio to create an accurate, time-based metadata file in which each closed-caption word is associated with the point on the timeline at which the word is actually spoken in the audio and occurs within the video. | 06-09-2011 |
| 20150245111 | Systems and Methods for Using Video Metadata to Associate Advertisements Therewith - A system for using metadata from a video signal to associate advertisements therewith, comprising (i) a segmentation system to divide the video signal into video clips; (ii) a digitizing system for digitizing the video clips; (iii) a feature extraction system for extracting audio and video features from each video clip, associating each audio and video feature with its respective clip, and saving the features into an associated metadata file; (iv) a web interface to the feature extraction system for receiving the video clips; and (v) a database in which video signals and associated metadata files are stored and indexed. The associated metadata file is provided when a video player requests the corresponding video signal, enabling selection of a relevant advertisement for presentment in conjunction with each video clip based on that clip's associated audio and video features. | 08-27-2015 |
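Application 20110134321 above describes anchoring closed-caption words to a timeline by matching them against the timestamped words of an ASR transcript. A small Python sketch of that idea follows, using exact N-gram matches as anchor points; the function name, the data shapes, and the choice of N=3 are illustrative assumptions, not details from the patent:

```python
def align_captions(caption_words, asr, n=3):
    """Assign each closed-caption word the timestamp of its matching ASR word,
    using exact n-gram matches between the two word streams as anchor points.
    `asr` is a list of (word, time_seconds) pairs produced by speech recognition.
    Returns a list of (word, time_or_None) pairs for the caption stream."""
    asr_words = [w for w, _ in asr]
    times = {}  # caption index -> timestamp
    for i in range(len(caption_words) - n + 1):
        gram = caption_words[i:i + n]
        for j in range(len(asr_words) - n + 1):
            if asr_words[j:j + n] == gram:
                for k in range(n):
                    # Keep the first timestamp found for each caption word.
                    times.setdefault(i + k, asr[j + k][1])
                break
    return [(w, times.get(i)) for i, w in enumerate(caption_words)]

# Usage: the ASR stream carries timings (and a spurious filler word);
# the caption stream inherits those timings via matching trigrams.
captions = "the quick brown fox jumps".split()
asr = [("uh", 0.2), ("the", 0.5), ("quick", 0.8), ("brown", 1.1),
       ("fox", 1.4), ("jumps", 1.7)]
aligned = align_captions(captions, asr, n=3)
```

A production aligner would also interpolate timestamps for caption words with no exact match and tolerate ASR transcription errors; this sketch only shows the anchoring step.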