Patent application number | Description | Published |
20080244587 | Thread scheduling on multiprocessor systems - A thread scheduler may be used in a chip multiprocessor or symmetric multiprocessor system to schedule threads to processors. The scheduler may determine the bandwidth utilization of two threads in combination and whether that utilization exceeds a threshold value. If so, the threads may be scheduled on different processor clusters that do not share paths between the common memory and the processors. If not, the threads may be allocated on the same processor cluster, which shares cache among processors. | 10-02-2008 |
20090083790 | Video scene segmentation and categorization - In one embodiment of the invention, an apparatus and method for video browsing, summarization, and/or retrieval, based on video scene segmentation and categorization is disclosed. Video shots may be detected from video data. Key frames may be selected from the shots. A shot similarity graph may be composed based on the key frames. Using normalized cuts on the graph, scenes may be segmented. The segmented scenes may be categorized based on whether the segmented scene is a parallel or serial scene. One or more representative key frames may be selected based on the scene categorization. | 03-26-2009 |
20090147992 | THREE-LEVEL SCHEME FOR EFFICIENT BALL TRACKING - A three-level ball detection and tracking method is disclosed. The ball detection and tracking method employs three levels to generate multiple ball candidates rather than a single one. The ball detection and tracking method constructs multiple trajectories using candidate linking, then uses optimization criteria to determine the best ball trajectory. | 06-11-2009 |
20090161967 | METHOD AND APPARATUS FOR OBTAINING AND PROCESSING IMAGE FEATURES - Machine-readable media, methods, apparatus and system for obtaining and processing image features are described. In some embodiments, a Gabor representation of an image may be obtained by using a Gabor filter. A region may be determined from the Gabor representation, wherein the region comprises a plurality of Gabor pixels of the Gabor representation; and, a sub-region may be determined from the region, wherein the sub-region comprises more than one of the plurality of Gabor pixels. Then, a Gabor feature may be calculated based upon a magnitude calculation related to the sub-region and the region. | 06-25-2009 |
20090169065 | Detecting and indexing characters of videos by NCuts and page ranking - Apparatuses, systems, and computer program products that detect and/or index characters of videos are disclosed. One or more embodiments comprise an apparatus having a feature extraction module and a cast indexing module. The feature extraction module may extract features of a scale invariant feature transform (SIFT) for face sets of a video and the cast indexing module may detect one or more characters of the video via one or more associations of clusters of the features. Some alternative embodiments may include a cast ranking module to sort characters of the video, considering such factors as appearance times of the characters, appearance frequencies of the characters, and page rankings of the characters. The apparatus may associate or partition the clusters based on a normalized cut process, as well as detect the characters based on measures of distances of nodes associated with the features. Numerous embodiments may detect the characters based upon partitioning the clusters via solutions for eigenvalue systems for matrices of nodes of the clusters. | 07-02-2009 |
20090169130 | ACCELERATING THE HOUGH TRANSFORM - The present disclosure describes a method and apparatus for accelerating computation of a Hough transform of a plurality of digital images of known width and height dimensions. The method includes determining a plurality of Hough values for each pixel location based on the width and height dimensions. The method further includes generating a lookup table comprising an array of Hough values corresponding to one or more Hough parameters of at least one geometric shape in at least one digital image. Each element in the array of Hough values may be based on a value of one or more Hough parameters and at least one of a height value or a width value. The method may include receiving a plurality of digital images having known width and height dimensions. The method may further include selecting, for at least one nonzero pixel of at least one of the plurality of digital images, the Hough values from the lookup table. Of course, many alternatives, variations and modifications are possible without departing from this embodiment. | 07-02-2009 |
20090269022 | DEVICE, SYSTEM, AND METHOD FOR INDEXING DIGITAL IMAGE FRAMES - A method, apparatus and system for, for each of a plurality of image frames, assigning a pattern number to each of a set of pixel neighborhoods within the frame and assigning a relationship number to each of a plurality of sets of pattern numbers based on a probability of transitioning between different pattern numbers in the set of pattern numbers when transitioning between different pixel neighborhoods. For a subset of the plurality of frames, the subset of frames may be determined to be similar, for example, based on the similarity of the relationship numbers of the subset of the plurality of frames. Other embodiments are described and claimed. | 10-29-2009 |
20090285473 | METHOD AND APPARATUS FOR OBTAINING AND PROCESSING IMAGE FEATURES - Machine-readable media, methods, apparatus and system for obtaining and processing image features are described. In some embodiments, groups of training features derived from regions of training images may be trained to obtain a plurality of classifiers, each classifier corresponding to each group of training features. The plurality of classifiers may be used to classify groups of validation features derived from regions of validation images to obtain a plurality of weights, wherein each weight corresponds to each region of the validation images and indicates how important each region of the validation images is. Then, a weight may be discarded from the plurality of weights based upon a certain criterion. | 11-19-2009 |
20100067863 | VIDEO EDITING METHODS AND SYSTEMS - Video editing methods and systems, including methods and systems to identify video clips having similar visual characteristics. Video clips may correspond to first and second videos, which may include a professional music video and a personal video, respectively. Identified video clips of the personal video may be combined into a new video clip, and music corresponding to visually similar video clips of the music video may be associated with the corresponding video clips of the new video. Video frames of the video clips may be characterized with respect to one or more visual features, which may include one or more of facial and/or body features, salient objects, camera motion, and image quality. Characterizations may be compared between video clips on an incremental basis. Characterization of a music video may implicitly model an underlying correlation between music rhythm and changes in visual appearance. | 03-18-2010 |
20100161911 | METHOD AND APPARATUS FOR MPI PROGRAM OPTIMIZATION - Machine readable media, methods, apparatus and system for MPI program optimization. In some embodiments, shared data may be retrieved from a message passing interface (MPI) program, wherein the shared data is sharable by a plurality of processes. Then, the shared data may be allocated to a shared memory, wherein the shared memory is accessible by the plurality of processes. A single copy of the shared data may be maintained in a global buffer so that the processes of the plurality of processes can read or write the single copy of the shared data from or to the shared memory. | 06-24-2010 |
20110150275 | MODEL-BASED PLAY FIELD REGISTRATION - A method, apparatus, and system are described for model-based playfield registration. An input video image is processed. The processing of the video image includes extracting key points relating to the video image. Further, whether enough key points relating to the video image were extracted is determined, and a direct estimation of the video image is performed if enough key points have been extracted; a homography matrix of the final video image is then generated based on the direct estimation. | 06-23-2011 |
20110261187 | Extracting and Mapping Three Dimensional Features from Geo-Referenced Images - Mobile Internet devices may be used to generate Mirror World depictions. The mobile Internet devices may use inertial navigation system sensor data, combined with camera images, to develop three dimensional models. The contour of an input geometric model may be aligned with edge features of the input camera images instead of using point features of images or laser scan data. | 10-27-2011 |
20120124587 | THREAD SCHEDULING ON MULTIPROCESSOR SYSTEMS - A thread scheduler may be used in a chip multiprocessor or symmetric multiprocessor system to schedule threads to processors. The scheduler may determine the bandwidth utilization of two threads in combination and whether that utilization exceeds a threshold value. If so, the threads may be scheduled on different processor clusters that do not share paths between the common memory and the processors. If not, the threads may be allocated on the same processor cluster, which shares cache among processors. | 05-17-2012 |
20120131010 | TECHNIQUES TO DETECT VIDEO COPIES - Some embodiments include a video copy detection approach based on speeded up robust features (SURF) trajectory building, locality sensitive hashing (LSH) indexing, and spatial-temporal-scale registration. First, interest points' trajectories are extracted using SURF. Next, an efficient voting based spatial-temporal-scale registration approach is applied to estimate the optimal transformation parameters (shift and scale) and achieve the final video copy detection results by propagations of video segments in both spatial-temporal and scale directions. To accelerate detection, locality sensitive hashing (LSH) indexing is used to index trajectories for fast queries of candidate trajectories. | 05-24-2012 |
20120189197 | DEVICE, SYSTEM, AND METHOD FOR INDEXING DIGITAL IMAGE FRAMES - Methods and apparatus are disclosed to index digital frames. An example method includes identifying channel types associated with a plurality of image frames, splitting each one of the plurality of image frames into a respective color channel based on the identified channel types, applying a local binary pattern to each of the respective color channels to generate a respective pattern number, generating a spatial representation of each respective pattern number to determine transition probabilities for each channel type, and identifying a degree of similarity between the plurality of image frames based on the transition probabilities. | 07-26-2012 |
20130009943 | MULTI-CORE PROCESSOR SUPPORTING REAL-TIME 3D IMAGE RENDERING ON AN AUTOSTEREOSCOPIC DISPLAY - A multi-core processor system may support 3D image rendering on an autostereoscopic display. The 3D image rendering includes pre-processing of depth map and 3D image wrapping tasks. The pre-processing of depth map may include a foreground prior depth image smoothing technique, which may perform a depth gradient detection and a smoothing task. The depth gradient detection task may detect areas with large depth gradient and the smoothing task may transform the large depth gradient into a linearly changing shape using low-strength, low-pass filtering techniques. The 3D image wrapping may include vectorizing the code for 3D image wrapping of row pixels using an efficient single instruction multiple data (SIMD) technique. After vectorizing, an API such as OpenMP may be used to parallelize the 3D image wrapping procedure. The 3D image wrapping using OpenMP may be performed on rows of the 3D image and on images of the multiple view images. | 01-10-2013 |
20130201187 | IMAGE-BASED MULTI-VIEW 3D FACE GENERATION - Systems, devices and methods are described including recovering camera parameters and sparse key points for multiple 2D facial images and applying a multi-view stereo process to generate a dense avatar mesh using the camera parameters and sparse key points. The dense avatar mesh may then be used to generate a 3D face model and multi-view texture synthesis may be applied to generate a texture image for the 3D face model. | 08-08-2013 |
20130271451 | PARAMETERIZED 3D FACE GENERATION - Systems, devices and methods are described including receiving a semantic description and associated measurement criteria for a facial control parameter, obtaining principal component analysis (PCA) coefficients, generating 3D faces in response to the PCA coefficients, determining a measurement value for each of the 3D faces based on the measurement criteria, and determining regression parameters for the facial control parameter based on the measurement values. | 10-17-2013 |
20130272575 | OBJECT DETECTION USING EXTENDED SURF FEATURES - Systems, apparatus and methods are described including generating gradient images from an input image, where the gradient images include gradient images created using 2D filter kernels. Feature descriptors are then generated from the gradient images and object detection performed by applying the descriptors to a boosting cascade classifier that includes logistic regression base classifiers. | 10-17-2013 |
20130297650 | Using Multimedia Search to Identify Products - A product in a television program currently being watched can be identified by extracting at least one decoded frame from a television transmission. The frame can be transmitted to a separate mobile device for requesting an image search and for receiving the search results. The search results can be used to identify the product. | 11-07-2013 |
20130332834 | ANNOTATION AND/OR RECOMMENDATION OF VIDEO CONTENT METHOD AND APPARATUS - Methods, apparatuses and storage medium associated with cooperative annotation and/or recommendation by shared and personal devices. In various embodiments, at least one non-transitory computer-readable storage medium may include a number of instructions configured to enable a personal device (PD) of a user, in response to execution of the instructions by the personal device, to receive a user input selecting performance of a user function in association with a video stream being rendered on a shared video device (SVD) configured for use by multiple users, render an image frame of the video stream rendered on the shared video device at a time proximate to a time of the user input, and facilitate performance of the user function, which may include annotation of video objects. Other embodiments, including recommendation of video content, may be disclosed or claimed. | 12-12-2013 |
20140003662 | REDUCED IMAGE QUALITY FOR VIDEO DATA BACKGROUND REGIONS | 01-02-2014 |
20140026157 | FACE RECOGNITION CONTROL AND SOCIAL NETWORKING - Methods, apparatuses, and articles associated with face recognition login, social network and video chat are disclosed herein. In various embodiments, an apparatus may include a networking interface, and a face recognition based controller configured to determine whether a user is watching a television, based on image frames of a video signal generated by a camera. The controller may be further configured to transmit a login request, via the network interface, to a server associated with a social network, on determination that the user is watching the television, to log the user into the social network, and enabling video chat. Other embodiments may be disclosed and/or claimed. | 01-23-2014 |
20140035934 | Avatar Facial Expression Techniques - A method and apparatus for capturing and representing 3D wire-frame, color and shading of facial expressions are provided, wherein the method includes the following steps: storing a plurality of feature data sequences, each of the feature data sequences corresponding to one of the plurality of facial expressions; retrieving one of the feature data sequences based on user facial feature data; and mapping the retrieved feature data sequence to an avatar face. The method may advantageously provide improvements in execution speed and communications bandwidth. | 02-06-2014 |
20140037134 | GESTURE RECOGNITION USING DEPTH IMAGES - Methods, apparatuses, and articles associated with gesture recognition using depth images are disclosed herein. In various embodiments, an apparatus may include a face detection engine configured to determine whether a face is present in one or more gray images of respective image frames generated by a depth camera, and a hand tracking engine configured to track a hand in one or more depth images generated by the depth camera. The apparatus may further include a feature extraction and gesture inference engine configured to extract features based on results of the tracking by the hand tracking engine, and infer a hand gesture based at least in part on the extracted features. Other embodiments may also be disclosed and claimed. | 02-06-2014 |
20140043329 | METHOD OF AUGMENTED MAKEOVER WITH 3D FACE MODELING AND LANDMARK ALIGNMENT - Generation of a personalized 3D morphable model of a user's face may be performed first by capturing a 2D image of a scene by a camera. Next, the user's face may be detected in the 2D image and 2D landmark points of the user's face may be detected in the 2D image. Each of the detected 2D landmark points may be registered to a generic 3D face model. Personalized facial components may be generated in real time to represent the user's face mapped to the generic 3D face model to form the personalized 3D morphable model. The personalized 3D morphable model may be displayed to the user. This process may be repeated in real time for a live video sequence of 2D images from the camera. | 02-13-2014 |
20140050358 | METHOD OF FACIAL LANDMARK DETECTION - Detecting facial landmarks in a face detected in an image may be performed by first cropping a face rectangle region of the detected face in the image and generating an integral image based at least in part on the face rectangle region. Next, a cascade classifier may be executed for each facial landmark of the face rectangle region to produce one response image for each facial landmark based at least in part on the integral image. A plurality of Active Shape Model (ASM) initializations may be set up. ASM searching may be performed for each of the ASM initializations based at least in part on the response images, each ASM search resulting in a search result having a cost. Finally, a search result of the ASM searches having a lowest cost function may be selected, the selected search result indicating locations of the facial landmarks in the image. | 02-20-2014 |
20140055554 | SYSTEM AND METHOD FOR COMMUNICATION USING INTERACTIVE AVATAR - A video communication system that replaces actual live images of the participating users with animated avatars. A method may include selecting an avatar, initiating communication, capturing an image, detecting a face in the image, determining facial characteristics from the face, including eye movement and eyelid movement of a user indicative of direction of user gaze and blinking, respectively, converting the facial features to avatar parameters, and transmitting at least one of the avatar selection or avatar parameters. | 02-27-2014 |
20140147035 | HAND GESTURE RECOGNITION SYSTEM - A cost-effective and computationally efficient hand gesture recognition system for detecting and/or tracking a face region and/or a hand region in a series of images. A skin segmentation model is updated with skin pixel information from the face and iteratively applied to the pixels in the hand region, to more accurately identify the pixels in the hand region given current lighting conditions around the image. Shape features are then extracted from the image, and based on the shape features, a hand gesture is identified in the image. The identified hand gesture may be used to generate a command signal to control the operation of an application or system. | 05-29-2014 |
20140152758 | COMMUNICATION USING INTERACTIVE AVATARS - Generally this disclosure describes a video communication system that replaces actual live images of the participating users with animated avatars. A method may include selecting an avatar; initiating communication; detecting a user input; identifying the user input; identifying an animation command based on the user input; generating avatar parameters; and transmitting at least one of the animation command and the avatar parameters. | 06-05-2014 |
20140156398 | PERSONALIZED ADVERTISEMENT SELECTION SYSTEM AND METHOD - A system and method for selecting an advertisement to present to a consumer includes detecting facial regions in an image, identifying one or more consumer characteristics (mood, gender, age, etc.) of said consumer in the image, identifying one or more advertisements to present to the consumer based on a comparison of the consumer characteristics with an advertisement database including a plurality of advertisement profiles, and presenting a selected one of the identified advertisements to the consumer on a media device. | 06-05-2014 |
20140195983 | 3D GRAPHICAL USER INTERFACE - Systems, apparatus, articles, and methods are described including operations for a 3D graphical user interface. | 07-10-2014 |
20140198121 | SYSTEM AND METHOD FOR AVATAR GENERATION, RENDERING AND ANIMATION - A video communication system that replaces actual live images of the participating users with animated avatars. The system allows generation, rendering and animation of a two-dimensional (2-D) avatar of a user's face. The 2-D avatar represents a user's basic face shape and key facial characteristics, including, but not limited to, position and shape of the eyes, nose, mouth, and face contour. The system further allows adaptive rendering, enabling different scales of the 2-D avatar to be displayed on associated different sized displays of user devices. | 07-17-2014 |
20140214424 | VEHICLE BASED DETERMINATION OF OCCUPANT AUDIO AND VISUAL INPUT - Systems, apparatus, articles, and methods are described including operations to receive audio data and visual data from one or more occupants of a vehicle. A determination may be made regarding which of the one or more occupants of the vehicle to associate with the received audio data based at least in part on the received visual data. | 07-31-2014 |
20140218371 | FACIAL MOVEMENT BASED AVATAR ANIMATION - Avatars are animated using predetermined avatar images that are selected based on facial features of a user extracted from video of the user. A user's facial features are tracked in a live video, facial feature parameters are determined from the tracked features, and avatar images are selected based on the facial feature parameters. The selected images are then displayed or sent to another device for display. Selecting and displaying different avatar images as a user's facial movements change animates the avatar. An avatar image can be selected from a series of avatar images representing a particular facial movement, such as blinking. An avatar image can also be generated from multiple avatar feature images selected from multiple avatar feature image series associated with different regions of a user's face (eyes, mouth, nose, eyebrows), which allows different regions of the avatar to be animated independently. | 08-07-2014 |
20140218459 | COMMUNICATION USING AVATAR - Generally this disclosure describes a video communication system that replaces actual live images of the participating users with animated avatars. A method may include selecting an avatar, initiating communication, capturing an image, detecting a face in the image, extracting features from the face, converting the facial features to avatar parameters, and transmitting at least one of the avatar selection or avatar parameters. | 08-07-2014 |
20140223474 | INTERACTIVE MEDIA SYSTEMS - Generally this disclosure describes interactive media methods and systems. A method may include capturing an image, detecting at least one face in the image, determining an identity and expression corresponding to the at least one face, generating an icon for the at least one face based on the corresponding expression, and displaying the icon on a video monitor. | 08-07-2014 |
20140241574 | TRACKING AND RECOGNITION OF FACES USING SELECTED REGION CLASSIFICATION - Methods, apparatuses, and articles associated with facial tracking and recognition are disclosed. In embodiments, facial images may be detected in video or still images and tracked. After normalization of the facial images, feature data may be extracted from selected regions of the faces to compare to associated feature data in known faces. The selected regions may be determined using a boosting machine learning process over a set of known images. After extraction, individual two-class comparisons may be performed between corresponding feature data from regions on the tested facial images and from the known facial image. The individual two-class classifications may then be combined to determine a similarity score for the tested face and the known face. If the similarity score exceeds a threshold, an identification of the known face may be output or otherwise used. Additionally, tracking with voting may be performed on faces detected in video. After a threshold of votes is reached, a given tracked face may be associated with a known face. | 08-28-2014 |
20140267413 | ADAPTIVE FACIAL EXPRESSION CALIBRATION - Technologies for generating an avatar with a facial expression corresponding to a facial expression of a user include capturing a reference user image of the user on a computing device when the user is expressing a reference facial expression for registration. The computing device generates reference facial measurement data based on the captured reference user image and compares the reference facial measurement data with facial measurement data of a corresponding reference expression of the avatar to generate facial comparison data. After a user has been registered, the computing device captures a real-time facial expression of the user and generates real-time facial measurement data based on the captured real-time image. The computing device applies the facial comparison data to the real-time facial measurement data to generate modified expression data, which is used to generate an avatar with a facial expression corresponding with the facial expression of the user. | 09-18-2014 |
20140267544 | SCALABLE AVATAR MESSAGING - Technologies for distributed generation of an avatar with a facial expression corresponding to a facial expression of a user include capturing real-time video of a user of a local computing device. The computing device extracts facial parameters of the user's facial expression using the captured video and transmits the extracted facial parameters to a server. The server generates an avatar video of an avatar having a facial expression corresponding to the user's facial expression as a function of the extracted facial parameters and transmits the avatar video to a remote computing device. | 09-18-2014 |
20140289176 | Method and Apparatus for Extracting Entity Names and Their Relations - According to one embodiment of the invention, a method includes generating a person-name Information Gain (IG)-Tree and a relation IG-Tree from annotated data. The method also includes tagging and partial parsing of an input document. The names of persons are extracted within the input document using the person-name IG-Tree. Additionally, names of organizations are extracted within the input document. The method also includes extracting entity names that are not names of persons and organizations within the input document. Further, the relations between the identified entity names are extracted using the relation IG-Tree. | 09-25-2014 |
20140300539 | GESTURE RECOGNITION USING DEPTH IMAGES - Methods, apparatuses, and articles associated with gesture recognition using depth images are disclosed herein. In various embodiments, an apparatus may include a face detection engine configured to determine whether a face is present in one or more gray images of respective image frames generated by a depth camera, and a hand tracking engine configured to track a hand in one or more depth images generated by the depth camera. The apparatus may further include a feature extraction and gesture inference engine configured to extract features based on results of the tracking by the hand tracking engine, and infer a hand gesture based at least in part on the extracted features. Other embodiments may also be disclosed and claimed. | 10-09-2014 |
20140369608 | IMAGE PROCESSING INCLUDING ADJOIN FEATURE BASED OBJECT DETECTION, AND/OR BILATERAL SYMMETRIC OBJECT SEGMENTATION - Apparatuses, methods and storage medium associated with processing an image are disclosed herein. In embodiments, a method for processing one or more images may include generating a plurality of pairs of keypoint features for a pair of images. Each pair of keypoint features may include a keypoint feature from each image. Further, for each pair of keypoint features, corresponding adjoin features may be generated. Additionally, for each pair of keypoint features, whether the adjoin features are similar may be determined. Whether the pair of images have at least one similar object may also be determined, based at least in part on a result of the determination of similarity between the corresponding adjoin features. Other embodiments may be described and claimed. | 12-18-2014 |
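To illustrate the scheduling policy summarized in 20080244587 and 20120124587, a minimal sketch follows; the cluster names, bandwidth figures, and threshold are hypothetical illustrations, not details taken from the applications:

```python
def schedule_pair(bw_a, bw_b, threshold, clusters=("cluster0", "cluster1")):
    """Place two threads based on their combined memory-bandwidth demand.

    If the pair is memory-bound (combined demand exceeds the threshold),
    separate the threads onto clusters with disjoint paths to memory;
    otherwise co-locate them on one cluster so they can share cache.
    The bandwidth units and threshold value are illustrative assumptions.
    """
    if bw_a + bw_b > threshold:
        return clusters[0], clusters[1]  # disjoint memory paths
    return clusters[0], clusters[0]      # shared cache
```

For example, two threads demanding 6 and 5 units against a threshold of 10 would be split across clusters, while a 2-and-3 pair would share one cluster's cache.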
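The shot-detection step underlying the scene segmentation in 20090083790 is commonly built on comparing consecutive frame histograms; the sketch below assumes a histogram-intersection measure and an illustrative cut threshold, neither of which is specified by the application:

```python
def detect_shot_boundaries(frame_histograms, cut_threshold=0.5):
    """Flag a shot boundary wherever consecutive frame histograms
    differ sharply under histogram intersection.

    frame_histograms: one color/intensity histogram (list of counts)
    per frame. Returns the indices of frames that start a new shot.
    """
    boundaries = []
    for i in range(1, len(frame_histograms)):
        prev, cur = frame_histograms[i - 1], frame_histograms[i]
        intersection = sum(min(a, b) for a, b in zip(prev, cur))
        total = sum(prev) or 1  # guard against an empty histogram
        if intersection / total < 1.0 - cut_threshold:
            boundaries.append(i)
    return boundaries
```

A sequence whose histograms jump from one color distribution to another mid-stream would yield a single boundary at the jump.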
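The lookup-table idea in 20090169130 (precomputing per-coordinate Hough contributions so that the per-pixel vote needs no trigonometry) can be sketched as follows; the table layout and 180-step angle resolution are illustrative assumptions rather than the claimed design:

```python
import math

def build_hough_tables(width, height, n_theta=180):
    """Precompute x*cos(theta) and y*sin(theta) for every coordinate
    and angle, so per-pixel accumulation reduces to table lookups."""
    cos_t = [math.cos(math.pi * t / n_theta) for t in range(n_theta)]
    sin_t = [math.sin(math.pi * t / n_theta) for t in range(n_theta)]
    x_tab = [[x * c for c in cos_t] for x in range(width)]
    y_tab = [[y * s for s in sin_t] for y in range(height)]
    return x_tab, y_tab

def hough_lines(binary_image, x_tab, y_tab, n_theta=180):
    """Accumulate line votes in (theta, rho) space using only lookups."""
    height, width = len(binary_image), len(binary_image[0])
    diag = int(math.hypot(width, height)) + 1
    # rho lies in [-diag, diag); offset it to a non-negative index
    acc = [[0] * (2 * diag) for _ in range(n_theta)]
    for y in range(height):
        for x in range(width):
            if binary_image[y][x]:
                for t in range(n_theta):
                    rho = int(round(x_tab[x][t] + y_tab[y][t])) + diag
                    acc[t][rho] += 1
    return acc
```

For a 20x20 image containing the horizontal line y = 5, the accumulator peaks at theta = 90 degrees with rho = 5 and receives one vote per line pixel.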
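The pattern numbers assigned to pixel neighborhoods in 20090269022 and 20120189197 resemble a local binary pattern; a minimal sketch follows, assuming a standard 8-neighbour LBP and a 256-bin histogram as the per-frame signature (the applications' actual pattern and relationship numbering may differ):

```python
def lbp_image(gray):
    """Compute the 8-neighbour local binary pattern for each interior
    pixel: each neighbour at least as bright as the centre contributes
    one bit, yielding a pattern number in 0..255."""
    h, w = len(gray), len(gray[0])
    # neighbour offsets, clockwise from the top-left corner
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    patterns = [[0] * (w - 2) for _ in range(h - 2)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            centre = gray[y][x]
            code = 0
            for bit, (dy, dx) in enumerate(offsets):
                if gray[y + dy][x + dx] >= centre:
                    code |= 1 << bit
            patterns[y - 1][x - 1] = code
    return patterns

def lbp_histogram(patterns):
    """256-bin histogram of pattern numbers, usable as a frame signature
    for comparing frames by histogram similarity."""
    hist = [0] * 256
    for row in patterns:
        for code in row:
            hist[code] += 1
    return hist
```

A uniform frame yields the all-ones pattern (255) everywhere, while an isolated bright pixel yields pattern 0 at its location; comparing two frames' histograms gives a cheap similarity measure.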