Patent application number | Description | Published |
20100282045 | APPARATUS AND METHOD FOR DETERMINING A PROMINENT TEMPO OF AN AUDIO WORK - The prominent tempo of audio data is determined by detecting a plurality of beat rates of the audio data. One or more audio data characteristics are used to filter the beat rates and determine the prominent tempo. Systems, methods, and apparatuses to determine the prominent tempo are discussed herein. | 11-11-2010 |
20100325135 | METHODS AND APPARATUS FOR DETERMINING A MOOD PROFILE ASSOCIATED WITH MEDIA DATA - In an embodiment, a method is provided for determining a mood profile of media data. In this method, mood is determined across multiple elements of mood for the media data to create a mood profile associated with the media data. In some embodiments, the mood profile is then used to determine congruencies between the media data and one or more other pieces of media data. | 12-23-2010 |
20120033132 | DERIVING VISUAL RHYTHM FROM VIDEO SIGNALS - A method and a system for deriving visual rhythm from a video signal are described. A feature extraction module receives the video signal and extracts a two-dimensional feature from the video signal. A one-dimensional video feature computation module derives a one-dimensional feature from the extracted two-dimensional feature. A visual rhythm detector module detects a visual beat and a visual tempo from the one-dimensional feature. | 02-09-2012 |
20130178962 | USER INTERFACE TO MEDIA FILES - A user interface generator is configured to access a media file that stores acoustic data representative of sounds. The user interface generator determines a mood category of the media file, based on a mood vector calculated from the acoustic data. The mood category characterizes the media file as being evocative of a mood described by the mood category. The user interface generator generates a user interface that depicts a grid or map (e.g., a “mood grid” or a “mood map”) of multiple zones. One of the zones may occupy a position in the grid or map that corresponds to the mood category. The user interface may then be presented by the user interface generator (e.g., to a user). In the presented user interface, the zone that corresponds to the mood category may be operable (e.g., by the user) to perform one or more actions pertinent to the mood category. | 07-11-2013 |
20140074839 | USER PROFILE BASED ON CLUSTERING TIERED DESCRIPTORS - A user of a network-based system may correspond to a user profile that describes the user. The user profile may describe the user using one or more descriptors of items that correspond to the user (e.g., items owned by the user, items liked by the user, or items rated by the user). In some situations, such a user profile may be characterized as a “taste profile” that describes an array or distribution of one or more tastes, preferences, or habits of the user. Accordingly, the user profile machine within the network-based system may generate the user profile by accessing descriptors of items that correspond to the user, clustering one or more of the descriptors, and generating the user profile based on one or more clusters of the descriptors. | 03-13-2014 |
20140160352 | DERIVING VISUAL RHYTHM FROM VIDEO SIGNALS - A method and a system for deriving visual rhythm from a video signal are described. A feature extraction module receives the video signal and extracts a two-dimensional feature from the video signal. A one-dimensional video feature computation module derives a one-dimensional feature from the extracted two-dimensional feature. A visual rhythm detector module detects a visual beat and a visual tempo from the one-dimensional feature. | 06-12-2014 |
20140330848 | METHODS AND APPARATUS FOR DETERMINING A MOOD PROFILE ASSOCIATED WITH MEDIA DATA - In an embodiment, a method is provided for determining a mood profile of media data. In this method, mood is determined across multiple elements of mood for the media data to create a mood profile associated with the media data. In some embodiments, the mood profile is then used to determine congruencies between the media data and one or more other pieces of media data. | 11-06-2014 |
20150052436 | USER INTERFACE TO MEDIA FILES - A user interface generator is configured to access a media file that stores acoustic data representative of sounds. The user interface generator determines a mood category of the media file, based on a mood vector calculated from the acoustic data. The mood category characterizes the media file as being evocative of a mood described by the mood category. The user interface generator generates a user interface that depicts a grid or map (e.g., a “mood grid” or a “mood map”) of multiple zones. One of the zones may occupy a position in the grid or map that corresponds to the mood category. The user interface may then be presented by the user interface generator (e.g., to a user). In the presented user interface, the zone that corresponds to the mood category may be operable (e.g., by the user) to perform one or more actions pertinent to the mood category. | 02-19-2015 |
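The prominent-tempo abstract above (20100282045) describes detecting several candidate beat rates and then filtering them using audio characteristics. A minimal sketch of that idea follows; the autocorrelation-based candidate detection and the genre-typical BPM prior used as the filtering characteristic are illustrative assumptions, not the claimed implementation.

```python
import numpy as np

def candidate_beat_rates(onset_envelope, frame_rate=100.0,
                         bpm_range=(40.0, 240.0), n_candidates=3):
    """Detect a plurality of candidate beat rates (in BPM) from an onset
    envelope via autocorrelation (an assumed detection method)."""
    env = onset_envelope - np.mean(onset_envelope)
    ac = np.correlate(env, env, mode="full")[len(env) - 1:]
    lags = np.arange(1, len(ac))            # lag 0 excluded (trivial peak)
    bpms = 60.0 * frame_rate / lags
    mask = (bpms >= bpm_range[0]) & (bpms <= bpm_range[1])
    order = np.argsort(ac[1:][mask])[::-1]  # strongest periodicities first
    return bpms[mask][order][:n_candidates]

def prominent_tempo(candidates, genre_prior_bpm=120.0):
    """Filter the candidates with an audio-data characteristic; here a
    hypothetical genre-typical tempo prior picks the closest candidate."""
    return min(candidates, key=lambda b: abs(b - genre_prior_bpm))

# Synthetic onset envelope: an impulse every 50 frames at 100 fps = 120 BPM.
env = np.zeros(1000)
env[::50] = 1.0
tempo = prominent_tempo(candidate_beat_rates(env))
```

The filtering step is where related rates (e.g., half- and double-time candidates at 60 and 240 BPM) are disambiguated into a single prominent tempo.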
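The user-interface abstracts (20130178962 and 20150052436) describe placing a media file's mood category into a zone of a mood grid based on a mood vector. A small sketch of one plausible mapping, assuming a 2D mood vector of (valence, energy) components in [-1, 1] and a square grid; both the axis semantics and the grid size are assumptions for illustration:

```python
def mood_zone(mood_vector, grid_size=5):
    """Map a 2D mood vector (valence, energy), each in [-1, 1], to a
    (column, row) zone of a grid_size x grid_size mood grid.
    The valence/energy axes are an assumed parameterization."""
    def to_index(v):
        # Scale [-1, 1] onto [0, grid_size) and clamp to valid zone indices.
        i = int((v + 1.0) / 2.0 * grid_size)
        return min(max(i, 0), grid_size - 1)
    valence, energy = mood_vector
    return (to_index(valence), to_index(energy))

# A neutral mood vector lands in the center zone of a 5x5 grid.
center = mood_zone((0.0, 0.0))   # -> (2, 2)
```

Each zone returned here could then be rendered as an operable region of the interface, triggering actions (e.g., playlist generation) pertinent to its mood category.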
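The taste-profile abstract (20140074839) describes generating a user profile by clustering descriptors of the user's items. A minimal sketch, assuming the simplest clustering (grouping identical descriptors) and a profile expressed as a descriptor distribution; real embodiments could cluster by semantic similarity instead:

```python
from collections import Counter

def taste_profile(item_descriptors, top_n=3):
    """Build a taste profile from per-item descriptor lists by clustering
    identical descriptors and ranking clusters by relative frequency."""
    counts = Counter(d for descriptors in item_descriptors
                       for d in descriptors)
    total = sum(counts.values())
    # Each entry: (descriptor cluster, share of all descriptor occurrences).
    return [(desc, n / total) for desc, n in counts.most_common(top_n)]

# Descriptors of three items owned/liked/rated by a hypothetical user.
items = [["jazz", "mellow"], ["jazz"], ["rock"]]
profile = taste_profile(items)   # "jazz" dominates with share 0.5
```

The resulting distribution is the "array or distribution of tastes" the abstract refers to, and can feed recommendation or matching downstream.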