Patent application number | Description | Published |
20080278479 | Creating optimized gradient mesh of a vector-based image from a raster-based image - A method for creating an optimized gradient mesh of a vector-based image from a raster-based image. In one implementation, a set of boundaries for an object on a raster-based image may be received. An initial gradient mesh of the object may be created. A residual energy between the object on the raster-based image and a rendered initial gradient mesh may be minimized to generate an optimized gradient mesh. | 11-13-2008 |
20080298766 | Interactive Photo Annotation Based on Face Clustering - An interactive photo annotation method uses clustering based on facial similarities to improve annotation experience. The method uses a face recognition algorithm to extract facial features of a photo album and cluster the photos into multiple face groups based on facial similarity. The method annotates a face group collectively using annotations, such as name identifiers, in one operation. The method further allows merging and splitting of face groups. Special graphical user interfaces, such as displays in a group view area and a thumbnail area and drag-and-drop features, are used to further improve the annotation experience. | 12-04-2008 |
20080304735 | Learning object cutout from a single example - Systems and methods are described for learning visual object cutout from a single example. In one implementation, an exemplary system determines the color context near each block in a model image to create an appearance model. The system also learns color sequences that occur across visual edges in the model image to create an edge profile model. The exemplary system then infers segmentation boundaries in unknown images based on the appearance model and edge profile model. In one implementation, the exemplary system minimizes the energy in a graph-cut model where the appearance model is used for data energy and the edge profile is used to modulate edges. The system is not limited to images with nearly identical foregrounds or backgrounds. Some variations in scale, rotation, and viewpoint are allowed. | 12-11-2008 |
20080304755 | Face Annotation Framework With Partial Clustering And Interactive Labeling - Systems and methods are described for a face annotation framework with partial clustering and interactive labeling. In one implementation, an exemplary system automatically groups some images of a collection of images into clusters, each cluster mainly including images that contain a person's face associated with that cluster. After an initial user-labeling of each cluster with the person's name or other label, in which the user may also delete/label images that do not belong in the cluster, the system iteratively proposes subsequent clusters for the user to label, proposing clusters of images that when labeled, produce a maximum information gain at each iteration and minimize the total number of user interactions for labeling the entire collection of images. | 12-11-2008 |
20090087035 | Cartoon Face Generation - A face cartooning system is described. In one implementation, the system generates an attractive cartoon face or graphic of a user's facial image. The system extracts facial features separately and applies pixel-based techniques customized to each facial feature. The style of cartoon face achieved resembles the likeness of the user more than cartoons generated by conventional vector-based cartooning techniques. The cartoon faces thus achieved provide an attractive facial appearance and thus have wide applicability in art, gaming, and messaging applications in which a pleasing degree of realism is desirable without exaggerated comedy or caricature. | 04-02-2009 |
20090109236 | LOCALIZED COLOR TRANSFER - Techniques for providing localized color transfer are disclosed. In some aspects, a user may select a source region of a source image and a destination region of a destination image. The source region and the destination region may be associated by a designator to create a color transfer pair. A localized color transfer based on the color style of the source region may be implemented to modify the destination region color style. Further aspects may include optimizing the destination image to reduce discontinuities resulting from the color transfer and enabling the user to select regions of the destination image which are not modified by localized color transfer. | 04-30-2009 |
20090210939 | SKETCH-BASED PASSWORD AUTHENTICATION - A graphical password authentication method is based on sketches drawn by the user. The method extracts a template edge orientation pattern from an initial sketch of the user and an input edge orientation pattern from an input sketch of the user, compares the similarity between the two edge orientation patterns, and makes an authentication decision based on the similarity. The edge orientations are quantized, and each edge orientation pattern includes a set of quantized orientation patterns, each corresponding to one of the quantized edge orientations. The number of quantized edge orientations, as well as other parameters such as the dimension of the final orientation patterns and the acceptance threshold, can be optimized either globally or user-specifically. | 08-20-2009 |
20090252435 | CARTOON PERSONALIZATION - Embodiments that provide cartoon personalization are disclosed. In accordance with one embodiment, cartoon personalization includes selecting a face image having a pose orientation that substantially matches an original pose orientation of a character in a cartoon image. The method also includes replacing a face of the character in the cartoon image with the face image. The method further includes blending the face image with a remainder of the character in the cartoon image. | 10-08-2009 |
20090254539 | User Intention Modeling For Interactive Image Retrieval - A system performs user intention modeling for interactive image retrieval. In one implementation, the system uses a three stage iterative technique to retrieve images from a database without using any image tags or text descriptors. First, the user submits a query image and the system models the user's search intention and configures a customized search to retrieve relevant images. Second, the system extends a user interface for the user to designate visual features across the retrieved images. The designated visual features refine the intention model and reconfigure the search to retrieve images that match the remodeled intention. Third, the system extends another user interface through which the user can give natural feedback about the retrieved images. The three stages can be iterated to quickly assemble a set of images that accurately fulfills the user's search intention. The system can be used for image searching without text tags, can be used for initial text tag generation, or can be used to complement a conventional tagged-image platform. | 10-08-2009 |
20090313239 | Adaptive Visual Similarity for Text-Based Image Search Results Re-ranking - Described is a technology in which images initially ranked by some relevance estimate (e.g., according to text-based similarities) are re-ranked according to visual similarity with a user-selected image. A user-selected image is received and classified into an intention class, such as a scenery class, portrait class, and so forth. The intention class is used to determine how visual features of other images compare with visual features of the user-selected image. For example, the comparing operation may use different feature weighting depending on which intention class was determined for the user-selected image. The other images are re-ranked based upon their computed similarity to the user-selected image, and returned as query results. Retuning of the feature weights using actual user-provided relevance feedback is also described. | 12-17-2009 |
20100086214 | FACE ALIGNMENT VIA COMPONENT-BASED DISCRIMINATIVE SEARCH - Described is a technology in which face alignment data is obtained by processing an image using a component-based discriminative search algorithm. For each facial component, the search is guided by an associated directional classifier that determines how to move the facial component (if at all) to achieve better alignment relative to its corresponding facial component in the image. Also described is training of the classifiers. | 04-08-2010 |
20110179021 | DYNAMIC KEYWORD SUGGESTION AND IMAGE-SEARCH RE-RANKING - A content-based re-ranking (CBR) process may be performed on query results based on a selected keyword that is extracted from previous query results, thereby increasing the relevancy of search results. A search engine may perform the CBR process using a target image that is selected from a plurality of image search results, the CBR to identify re-ranked image search results. Keywords may be extracted from the re-ranked image search results. A portion of the keywords may be outputted as suggested keywords and made selectable by a user. Finally, a refined CBR process may be performed based on the target image and a received selection of a suggested keyword, the refined CBR to output the refined image search results. | 07-21-2011 |
20120078936 | VISUAL-CUE REFINEMENT OF USER QUERY RESULTS - Methods and computer-storage media having computer-executable instructions embodied thereon that facilitate refining query results using visual cues are provided. Query results are determined in response to an indication of a user query. One or more groups of query results are generated from the query results based on categories of query results that share similar features. Visual cues are associated with each of the query result groups. Visual cues, in association with query result groups, are presented to a user. Query results associated with a selected visual cue may be presented to a user. A refined user query may be generated based on a selected visual cue. | 03-29-2012 |
20120106853 | IMAGE SEARCH - Image search techniques are described. In one or more implementations, images in a search result are ordered based at least in part on similarity of the images, one to another. The search result having the ordered images is provided in response to a search request. | 05-03-2012 |
20120294540 | RANK ORDER-BASED IMAGE CLUSTERING - Rank order-based object image clustering may facilitate robust clustering of digital images. The rank order-based clustering of object images may include defining asymmetric distances between each object image and one or more other object images in a set of multiple object images using generated ordered lists. The rank order-based clustering may further include obtaining a rank order distance for each pairing of object images by normalizing the asymmetric distances of corresponding object images. The multiple object images are further clustered into object image clusters based on the rank order distances and an adaptive absolute distance. | 11-22-2012 |
20120301024 | DUAL-PHASE RED EYE CORRECTION - A dual-phase approach to red eye correction may prevent overly aggressive or overly conservative red eye reduction. The dual-phase approach may include detecting an eye portion in a digital image. Once the eye portion is detected, the dual-phase approach may include performing a strong red eye correction for the eye portion when the eye portion includes a strong red eye. Otherwise, the dual-phase approach may include performing a weak red eye correction for the eye portion when the eye portion includes a weak red eye. The weak red eye may be distinguished from the strong red eye based on a redness threshold that shows the weak red eye as having less redness hue than the strong red eye. | 11-29-2012 |
20130251244 | REAL TIME HEAD POSE ESTIMATION - Methods are provided for generating a low dimension pose space and using the pose space to estimate one or more head rotation angles of a user head. In one example, training image frames including a test subject head are captured under a plurality of conditions. For each frame an actual head rotation angle about a rotation axis is recorded. In each frame a face image is detected and converted to an LBP feature vector. Using principal component analysis a PCA feature vector is generated. Pose classes related to rotation angles about a rotation axis are defined. The PCA feature vectors are grouped into a pose class that corresponds to the actual rotation angle associated with the PCA feature vector. Linear discriminant analysis is applied to the pose classes to generate the low dimension pose space. | 09-26-2013 |
20140185924 | Face Alignment by Explicit Shape Regression - A two-level boosted regression function is learned using shape-indexed image features and correlation-based feature selection. The regression function is learned by explicitly minimizing the alignment errors over the training data. Image features are indexed based on a previous shape estimate, and features are selected based on correlation to a random projection. The learned regression function enforces non-parametric shape constraint. | 07-03-2014 |
20140341443 | JOINT MODELING FOR FACIAL RECOGNITION - This disclosure describes a system for jointly modeling images for use in performing facial recognition. A facial recognition system may jointly model a first image and a second image using a face prior to generate a joint distribution. Conditional joint probabilities are determined based on the joint distribution. A log likelihood ratio of the first image and the second image are calculated based on the conditional joint probabilities and the subject of the first image and the second image are verified as the same person or as different people based on results of the log likelihood ratio. | 11-20-2014 |
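The rank-order distance described in application 20120294540 (asymmetric distances over ordered neighbor lists, normalized per pair) can be illustrated with a minimal sketch. The function names and the toy neighbor lists below are illustrative assumptions, not from the filing; each object's ordered list is assumed to start with the object itself at rank 0:

```python
def asym_distance(order_a, order_b, b):
    # d(a, b): for every neighbor that a ranks at or before b,
    # sum that neighbor's rank in b's ordered list
    k = order_a.index(b)
    return sum(order_b.index(order_a[i]) for i in range(k + 1))

def rank_order_distance(order_a, order_b, a, b):
    # symmetric rank-order distance: sum of the two asymmetric
    # distances, normalized by the closer of the two mutual ranks
    d_ab = asym_distance(order_a, order_b, b)
    d_ba = asym_distance(order_b, order_a, a)
    return (d_ab + d_ba) / min(order_a.index(b), order_b.index(a))

# Toy ordered neighbor lists (self at rank 0): a and b rank each
# other first, so their rank-order distance is small; a and c rank
# each other late, so their distance is large.
orders = {
    'a': ['a', 'b', 'c', 'd'],
    'b': ['b', 'a', 'c', 'd'],
    'c': ['c', 'd', 'b', 'a'],
    'd': ['d', 'c', 'b', 'a'],
}
print(rank_order_distance(orders['a'], orders['b'], 'a', 'b'))  # 2.0
print(rank_order_distance(orders['a'], orders['c'], 'a', 'c'))  # 5.5
```

In a full pipeline these distances would then feed a clustering step that merges pairs below both the rank-order threshold and the adaptive absolute-distance threshold the abstract mentions; that step is omitted here.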
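The dual-phase branching of application 20120301024 (strong correction for strongly red eyes, weaker correction otherwise) can be sketched as follows. The redness measure, both thresholds, and both correction rules are assumptions for illustration; the filing does not specify them:

```python
import numpy as np

STRONG_THRESH = 0.8  # assumed redness level separating strong from weak red eye
MASK_THRESH = 0.1    # assumed level above which a pixel is treated as red

def redness(rgb):
    # per-pixel redness: how far red exceeds the average of green and blue
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    return np.clip(r - (g + b) / 2.0, 0.0, 1.0)

def correct_red_eye(eye):
    # eye: float RGB array in [0, 1] for a detected eye region
    score = redness(eye)
    out = eye.copy()
    mask = score > MASK_THRESH
    if score.max() > STRONG_THRESH:
        # strong phase: replace the red channel with the green/blue mean
        out[..., 0][mask] = (eye[..., 1][mask] + eye[..., 2][mask]) / 2.0
    else:
        # weak phase: only attenuate red, proportionally to the redness score
        out[..., 0][mask] = eye[..., 0][mask] - 0.5 * score[mask]
    return out
```

The point of the branch is the one the abstract makes: a single correction rule is either too aggressive on mildly red pupils or too timid on saturated ones, so the region's peak redness selects which rule applies.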
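The log likelihood ratio test of application 20140341443 can be sketched under a common joint-Gaussian assumption: each face feature is identity plus noise, with identity covariance `S_mu` and noise covariance `S_eps` (both names and the zero-mean Gaussian model are assumptions, not from the filing). Stacking a pair of features gives one joint covariance if the images show the same person and another if they do not, and the verification score is the difference of the two log densities:

```python
import numpy as np

def log_gauss(x, cov):
    # log density of a zero-mean multivariate normal at x
    d = len(x)
    _, logdet = np.linalg.slogdet(cov)
    return -0.5 * (d * np.log(2.0 * np.pi) + logdet + x @ np.linalg.solve(cov, x))

def joint_llr(x1, x2, S_mu, S_eps):
    # score the stacked pair under "same person" vs "different people"
    # covariances; positive scores favor the same-person hypothesis
    x = np.concatenate([x1, x2])
    S = S_mu + S_eps
    zero = np.zeros_like(S)
    cov_same = np.block([[S, S_mu], [S_mu, S]])  # shared identity couples the pair
    cov_diff = np.block([[S, zero], [zero, S]])  # independent identities
    return log_gauss(x, cov_same) - log_gauss(x, cov_diff)
```

In practice the two covariances would be learned from labeled training pairs and the ratio compared against a tuned threshold; here they are simply supplied by the caller.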