Patent application number | Description | Published |
--- | --- | --- |
20090147992 | THREE-LEVEL SCHEME FOR EFFICIENT BALL TRACKING - A three-level ball detection and tracking method is disclosed. The method employs three levels to generate multiple ball candidates rather than a single one, constructs multiple trajectories by candidate linking, and then uses optimization criteria to determine the best ball trajectory. | 06-11-2009 |
20090328047 | DEVICE, SYSTEM, AND METHOD OF EXECUTING MULTITHREADED APPLICATIONS - Device, system, and method of executing multithreaded applications. Some embodiments include a task scheduler to receive application information related to one or more parameters of at least one multithreaded application to be executed by a multi-core processor including a plurality of cores and, based on the application information and based on architecture information related to an arrangement of the plurality of cores, to assign one or more tasks of the multithreaded application to one or more cores of the plurality of cores. Other embodiments are described and claimed. | 12-31-2009 |
20100329560 | Human pose estimation in visual computing - The present invention discloses a method of estimating human pose comprising: modeling a human body as a tree structure; optimizing said tree structure through importance proposal probabilities and part priorities; performing foreground detection to create image region observations; and performing image segmentation to provide image edge observations. | 12-30-2010 |
20130201187 | IMAGE-BASED MULTI-VIEW 3D FACE GENERATION - Systems, devices and methods are described including recovering camera parameters and sparse key points for multiple 2D facial images and applying a multi-view stereo process to generate a dense avatar mesh using the camera parameters and sparse key points. The dense avatar mesh may then be used to generate a 3D face model and multi-view texture synthesis may be applied to generate a texture image for the 3D face model. | 08-08-2013 |
20130271451 | PARAMETERIZED 3D FACE GENERATION - Systems, devices and methods are described including receiving a semantic description and associated measurement criteria for a facial control parameter, obtaining principal component analysis (PCA) coefficients, generating 3D faces in response to the PCA coefficients, determining a measurement value for each of the 3D faces based on the measurement criteria, and determining regression parameters for the facial control parameter based on the measurement values. | 10-17-2013 |
20130276007 | Facilitating Television Based Interaction with Social Networking Tools - Video analysis may be used to determine who is watching television and their level of interest in the current programming. Lists of favorite programs may be derived for each of a plurality of viewers of programming on the same television receiver. | 10-17-2013 |
20130276029 | Using Gestures to Capture Multimedia Clips - In response to a gestural command, a video currently being watched can be identified by extracting at least one decoded frame from a television transmission. The frame can be transmitted to a separate mobile device for requesting an image search and for receiving the search results. The search results can be used to obtain more information. The user's social networking friends can also be contacted to obtain more information about the clip. | 10-17-2013 |
20130278504 | DYNAMIC GESTURE BASED SHORT-RANGE HUMAN-MACHINE INTERACTION - Systems, devices and methods are described including starting a gesture recognition engine in response to detecting an initiation gesture and using the gesture recognition engine to determine a hand posture and a hand trajectory in various depth images. The gesture recognition engine may then use the hand posture and the hand trajectory to recognize a dynamic hand gesture and provide corresponding user interface command. | 10-24-2013 |
20130297650 | Using Multimedia Search to Identify Products - A product in a television program currently being watched can be identified by extracting at least one decoded frame from a television transmission. The frame can be transmitted to a separate mobile device for requesting an image search and for receiving the search results. The search results can be used to identify the product. | 11-07-2013 |
20130332834 | ANNOTATION AND/OR RECOMMENDATION OF VIDEO CONTENT METHOD AND APPARATUS - Methods, apparatuses and storage medium associated with cooperative annotation and/or recommendation by shared and personal devices. In various embodiments, at least one non-transitory computer-readable storage medium may include a number of instructions configured to enable a personal device (PD) of a user, in response to execution of the instructions by the personal device, to receive a user input selecting performance of a user function in association with a video stream being rendered on a shared video device (SVD) configured for use by multiple users, render an image frame of the video stream rendered on the shared video device at a time proximate to a time of the user input, and facilitate performance of the user function, which may include annotation of video objects. Other embodiments, including recommendation of video content, may be disclosed or claimed. | 12-12-2013 |
20130336556 | Human Pose Estimation in Visual Computing - The present invention discloses a method of estimating human pose comprising: modeling a human body as a tree structure; optimizing said tree structure through importance proposal probabilities and part priorities; performing foreground detection to create image region observations; and performing image segmentation to provide image edge observations. | 12-19-2013 |
20140035934 | Avatar Facial Expression Techniques - A method and apparatus for capturing and representing 3D wire-frame, color and shading of facial expressions are provided, wherein the method includes the following steps: storing a plurality of feature data sequences, each of the feature data sequences corresponding to one of a plurality of facial expressions; retrieving one of the feature data sequences based on user facial feature data; and mapping the retrieved feature data sequence to an avatar face. The method may advantageously provide improvements in execution speed and communications bandwidth. | 02-06-2014 |
20140037134 | GESTURE RECOGNITION USING DEPTH IMAGES - Methods, apparatuses, and articles associated with gesture recognition using depth images are disclosed herein. In various embodiments, an apparatus may include a face detection engine configured to determine whether a face is present in one or more gray images of respective image frames generated by a depth camera, and a hand tracking engine configured to track a hand in one or more depth images generated by the depth camera. The apparatus may further include a feature extraction and gesture inference engine configured to extract features based on results of the tracking by the hand tracking engine, and infer a hand gesture based at least in part on the extracted features. Other embodiments may also be disclosed and claimed. | 02-06-2014 |
20140055554 | SYSTEM AND METHOD FOR COMMUNICATION USING INTERACTIVE AVATAR - A video communication system that replaces actual live images of the participating users with animated avatars. A method may include selecting an avatar, initiating communication, capturing an image, detecting a face in the image, determining facial characteristics from the face, including eye movement and eyelid movement of a user indicative of direction of user gaze and blinking, respectively, converting the facial features to avatar parameters, and transmitting at least one of the avatar selection or avatar parameters. | 02-27-2014 |
20140147035 | HAND GESTURE RECOGNITION SYSTEM - A cost-effective and computationally efficient hand gesture recognition system for detecting and/or tracking a face region and/or a hand region in a series of images. A skin segmentation model is updated with skin pixel information from the face and iteratively applied to the pixels in the hand region, to more accurately identify the pixels in the hand region given current lighting conditions around the image. Shape features are then extracted from the image, and based on the shape features, a hand gesture is identified in the image. The identified hand gesture may be used to generate a command signal to control the operation of an application or system. | 05-29-2014 |
20140152758 | COMMUNICATION USING INTERACTIVE AVATARS - Generally this disclosure describes a video communication system that replaces actual live images of the participating users with animated avatars. A method may include selecting an avatar; initiating communication; detecting a user input; identifying the user input; identifying an animation command based on the user input; generating avatar parameters; and transmitting at least one of the animation command and the avatar parameters. | 06-05-2014 |
20140198121 | SYSTEM AND METHOD FOR AVATAR GENERATION, RENDERING AND ANIMATION - A video communication system that replaces actual live images of the participating users with animated avatars. The system allows generation, rendering and animation of a two-dimensional (2-D) avatar of a user's face. The 2-D avatar represents a user's basic face shape and key facial characteristics, including, but not limited to, position and shape of the eyes, nose, mouth, and face contour. The system further allows adaptive rendering, enabling different scales of the 2-D avatar to be displayed on different-sized displays of user devices. | 07-17-2014 |
20140218371 | FACIAL MOVEMENT BASED AVATAR ANIMATION - Avatars are animated using predetermined avatar images that are selected based on facial features of a user extracted from video of the user. A user's facial features are tracked in a live video, facial feature parameters are determined from the tracked features, and avatar images are selected based on the facial feature parameters. The selected images are then displayed or sent to another device for display. Selecting and displaying different avatar images as a user's facial movements change animates the avatar. An avatar image can be selected from a series of avatar images representing a particular facial movement, such as blinking. An avatar image can also be generated from multiple avatar feature images selected from multiple avatar feature image series associated with different regions of a user's face (eyes, mouth, nose, eyebrows), which allows different regions of the avatar to be animated independently. | 08-07-2014 |
20140267413 | ADAPTIVE FACIAL EXPRESSION CALIBRATION - Technologies for generating an avatar with a facial expression corresponding to a facial expression of a user include capturing a reference user image of the user on a computing device when the user is expressing a reference facial expression for registration. The computing device generates reference facial measurement data based on the captured reference user image and compares the reference facial measurement data with facial measurement data of a corresponding reference expression of the avatar to generate facial comparison data. After a user has been registered, the computing device captures a real-time facial expression of the user and generates real-time facial measurement data based on the captured real-time image. The computing device applies the facial comparison data to the real-time facial measurement data to generate modified expression data, which is used to generate an avatar with a facial expression corresponding with the facial expression of the user. | 09-18-2014 |
20140267544 | SCALABLE AVATAR MESSAGING - Technologies for distributed generation of an avatar with a facial expression corresponding to a facial expression of a user include capturing real-time video of a user of a local computing device. The computing device extracts facial parameters of the user's facial expression using the captured video and transmits the extracted facial parameters to a server. The server generates an avatar video of an avatar having a facial expression corresponding to the user's facial expression as a function of the extracted facial parameters and transmits the avatar video to a remote computing device. | 09-18-2014 |
20140300539 | GESTURE RECOGNITION USING DEPTH IMAGES - Methods, apparatuses, and articles associated with gesture recognition using depth images are disclosed herein. In various embodiments, an apparatus may include a face detection engine configured to determine whether a face is present in one or more gray images of respective image frames generated by a depth camera, and a hand tracking engine configured to track a hand in one or more depth images generated by the depth camera. The apparatus may further include a feature extraction and gesture inference engine configured to extract features based on results of the tracking by the hand tracking engine, and infer a hand gesture based at least in part on the extracted features. Other embodiments may also be disclosed and claimed. | 10-09-2014 |
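The candidate-linking step described in application 20090147992 (constructing trajectories from multiple per-frame ball candidates, then choosing the best one by an optimization criterion) can be illustrated with a toy dynamic program. This is purely an illustrative sketch, not the patented three-level scheme: the function name `best_trajectory`, the choice of Euclidean inter-frame distance as the optimization criterion, and the data layout are all assumptions for the example.

```python
# Toy sketch: pick one ball candidate per frame so the resulting trajectory
# minimizes total inter-frame motion, via dynamic programming over candidate
# links. Not the patented method; an illustration of candidate linking only.

def best_trajectory(candidates):
    """candidates: list of frames; each frame is a list of (x, y) candidate
    positions. Returns the lowest-total-motion trajectory as (x, y) tuples."""
    def dist(a, b):
        return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

    # cost[i][j]: minimal cost of any trajectory ending at candidate j of frame i
    cost = [[0.0] * len(candidates[0])]
    back = []  # back-pointers for recovering the chosen links
    for i in range(1, len(candidates)):
        row_cost, row_back = [], []
        for pos in candidates[i]:
            c, k = min((cost[i - 1][k] + dist(prev, pos), k)
                       for k, prev in enumerate(candidates[i - 1]))
            row_cost.append(c)
            row_back.append(k)
        cost.append(row_cost)
        back.append(row_back)

    # Trace back from the cheapest final candidate.
    j = min(range(len(cost[-1])), key=lambda idx: cost[-1][idx])
    path = [j]
    for row in reversed(back):
        j = row[j]
        path.append(j)
    path.reverse()
    return [candidates[i][j] for i, j in enumerate(path)]
```

For example, with a spurious candidate far from the true ball in the first two frames, the program links the smooth path `[(0, 0), (1, 0), (2, 0)]` rather than jumping through the outlier.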
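Several of the avatar entries above (e.g. 20140218371) describe selecting predetermined avatar images from tracked facial feature parameters so that successive selections animate the avatar. A minimal sketch of that idea, assuming a single eye-openness parameter and hypothetical image names (the function, parameter range, and file names are all invented for illustration and do not come from the filings):

```python
# Toy sketch: map a tracked facial-feature parameter (eye openness in [0, 1])
# to one frame of a predetermined avatar image series; selecting different
# frames as the tracked value changes animates a blink.

def select_avatar_image(series, eye_openness):
    """series: avatar image names ordered from eyes fully closed to fully
    open; eye_openness: tracked parameter, nominally in [0, 1]."""
    eye_openness = min(max(eye_openness, 0.0), 1.0)  # clamp noisy tracker input
    index = round(eye_openness * (len(series) - 1))
    return series[index]

# Hypothetical image series for the eye region of the avatar.
blink_series = ["eyes_closed.png", "eyes_half.png", "eyes_open.png"]
```

Because each facial region (eyes, mouth, eyebrows) can have its own series and its own tracked parameter, the same selection step applied per region animates the regions independently, as the 20140218371 abstract describes.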