Patent application number | Description | Published |
20080316214 | PREFIX SUM PASS TO LINEARIZE A-BUFFER STORAGE - The architecture implements A-buffer in hardware by extending hardware to efficiently store a variable amount of data for each pixel. In operation, a prepass is performed to generate the counts of the fragments per pixel in a count buffer, followed by a prefix sum pass on the generated count buffer to calculate locations in a fragment buffer in which to store all the fragments linearly. An index is generated for a given pixel in the prefix sum pass and stored in a location buffer. Access to the pixel fragments is then accomplished using the index. Linear storage of the data allows for a fast rendering pass that stores all the fragments to a memory buffer without needing to look at the contents of the fragments. This is then followed by a resolve pass on the fragment buffer to generate the final image. | 12-25-2008 |
20100197390 | POSE TRACKING PIPELINE - A method of tracking a target includes receiving from a source an observed depth image of a scene including the target. Each pixel of the observed depth image is labeled as either a foreground pixel belonging to the target or a background pixel not belonging to the target. Each foreground pixel is labeled with body part information indicating a likelihood that that foreground pixel belongs to one or more body parts of the target. The target is modeled with a skeleton including a plurality of skeletal points, each skeletal point including a three dimensional position derived from body part information of one or more foreground pixels. | 08-05-2010 |
20110080336 | Human Tracking System - An image such as a depth image of a scene may be received, observed, or captured by a device. A grid of voxels may then be generated based on the depth image such that the depth image may be downsampled. A background included in the grid of voxels may also be removed to isolate one or more voxels associated with a foreground object such as a human target. A location or position of one or more extremities of the isolated human target may be determined and a model may be adjusted based on the location or position of the one or more extremities. | 04-07-2011 |
20110080475 | Methods And Systems For Determining And Tracking Extremities Of A Target - An image such as a depth image of a scene may be received, observed, or captured by a device. A grid of voxels may then be generated based on the depth image such that the depth image may be downsampled. A background included in the grid of voxels may also be removed to isolate one or more voxels associated with a foreground object such as a human target. A location or position of one or more extremities of the isolated human target may then be determined. | 04-07-2011 |
20110081044 | Systems And Methods For Removing A Background Of An Image - An image such as a depth image of a scene may be received, observed, or captured by a device. A grid of voxels may then be generated based on the depth image such that the depth image may be downsampled. A background included in the grid of voxels may then be discarded to isolate one or more voxels associated with a foreground object such as a human target and the isolated voxels associated with the foreground object may be processed. | 04-07-2011 |
20110081045 | Systems And Methods For Tracking A Model - An image such as a depth image of a scene may be received, observed, or captured by a device. A grid of voxels may then be generated based on the depth image such that the depth image may be downsampled. A model may be adjusted based on a location or position of one or more extremities estimated or determined for a human target in the grid of voxels. The model may also be adjusted based on a default location or position of the model in a default pose such as a T-pose, a DaVinci pose, and/or a natural pose. | 04-07-2011 |
20110102438 | Systems And Methods For Processing An Image For Target Tracking - An image such as a depth image of a scene may be received, observed, or captured by a device. The image may then be processed. For example, the image may be downsampled; a shadow, noise, and/or a missing portion in the image may be determined; pixels in the image that may be outside a range defined by a capture device associated with the image may be determined; and a portion of the image associated with a floor may be detected. Additionally, a target in the image may be determined and scanned. A refined image may then be rendered based on the processed image. The refined image may then be processed to, for example, track a user. | 05-05-2011
20110150271 | MOTION DETECTION USING DEPTH IMAGES - A sensor system creates a sequence of depth images that are used to detect and track motion of objects within range of the sensor system. A reference image is created and updated based on a moving average (or other function) of a set of depth images. A new depth image is compared to the reference image to create a motion image, which is an image file (or other data structure) with data representing motion. The new depth image is also used to update the reference image. The data in the motion image is grouped and associated with one or more objects being tracked. The tracking of the objects is updated by the grouped data in the motion image. The new positions of the objects are used to update an application. For example, a video game system will update the position of images displayed in the video based on the new positions of the objects. In one implementation, avatars can be moved based on movement of the user in front of a camera. | 06-23-2011
20110234589 | SYSTEMS AND METHODS FOR TRACKING A MODEL - An image such as a depth image of a scene may be received, observed, or captured by a device. A grid of voxels may then be generated based on the depth image such that the depth image may be downsampled. A model may be adjusted based on a location or position of one or more extremities estimated or determined for a human target in the grid of voxels. The model may also be adjusted based on a default location or position of the model in a default pose such as a T-pose, a DaVinci pose, and/or a natural pose. | 09-29-2011 |
20120038657 | GPU TEXTURE TILE DETAIL CONTROL - Systems and associated methods for processing textures in a graphical processing unit (GPU) are disclosed. Textures may be managed on a per region (e.g., tile) basis, which allows efficient use of texture memory. Moreover, very large textures may be used. Techniques provide for both texture streaming, as well as sparse textures. A GPU texture unit may be used to intelligently clamp LOD based on a shader specified value. The texture unit may provide feedback to the shader to allow the shader to react conditionally based on whether clamping was used, etc. Per region (e.g., per-tile) independent mipmap stacks may be used to allow very large textures. | 02-16-2012 |
20120057753 | SYSTEMS AND METHODS FOR TRACKING A MODEL - An image such as a depth image of a scene may be received, observed, or captured by a device. A grid of voxels may then be generated based on the depth image such that the depth image may be downsampled. A model may be adjusted based on a location or position of one or more extremities estimated or determined for a human target in the grid of voxels. The model may also be adjusted based on a default location or position of the model in a default pose such as a T-pose, a DaVinci pose, and/or a natural pose. | 03-08-2012 |
20120128208 | Human Tracking System - An image such as a depth image of a scene may be received, observed, or captured by a device. A grid of voxels may then be generated based on the depth image such that the depth image may be downsampled. A background included in the grid of voxels may also be removed to isolate one or more voxels associated with a foreground object such as a human target. A location or position of one or more extremities of the isolated human target may be determined and a model may be adjusted based on the location or position of the one or more extremities. | 05-24-2012 |
20120146902 | ORIENTING THE POSITION OF A SENSOR - Techniques are provided for re-orienting a field of view of a depth camera having one or more sensors. The depth camera may have one or more sensors for generating a depth image and may also have an RGB camera. In some embodiments, the field of view is re-oriented based on the depth image. The position of the sensor(s) may be altered to change the field of view automatically based on an analysis of objects in the depth image. The re-orientation process may be repeated until a desired orientation of the sensor is determined. Input from the RGB camera might be used to validate a final orientation of the depth camera, but is not required during the process of determining a new possible orientation of the field of view. | 06-14-2012
20120157207 | POSE TRACKING PIPELINE - A method of tracking a target includes receiving from a source a depth image of a scene including a human subject. The depth image includes a depth for each of a plurality of pixels. The method further includes identifying pixels of the depth image that belong to the human subject and deriving from the identified pixels of the depth image one or more machine readable data structures representing the human subject as a body model including a plurality of shapes. | 06-21-2012
20120177254 | MOTION DETECTION USING DEPTH IMAGES - A sensor system creates a sequence of depth images that are used to detect and track motion of objects within range of the sensor system. A reference image is created and updated based on a moving average (or other function) of a set of depth images. A new depth image is compared to the reference image to create a motion image, which is an image file (or other data structure) with data representing motion. The new depth image is also used to update the reference image. The data in the motion image is grouped and associated with one or more objects being tracked. The tracking of the objects is updated by the grouped data in the motion image. The new positions of the objects are used to update an application. | 07-12-2012
20120306735 | THREE-DIMENSIONAL FOREGROUND SELECTION FOR VISION SYSTEM - A method for controlling a computer system includes acquiring video of a subject, and obtaining from the video a time-resolved sequence of depth maps. An area targeting motion is selected from each depth map in the sequence. Then, a section of the depth map bounded by the area and lying in front of a plane is selected. This section of the depth map is used for fitting a geometric model of the subject. | 12-06-2012 |
20120309517 | THREE-DIMENSIONAL BACKGROUND REMOVAL FOR VISION SYSTEM - A method for controlling a computer system includes acquiring video of a subject, and obtaining from the video a time-resolved sequence of depth maps. A geometric model of the subject is fit to each depth map in the sequence and tracked into a subsequent depth map in the sequence. From the subsequent depth map, a background section is selected for exclusion. The background section is one that lacks coherent motion and is located more than a threshold distance from the coordinates of the tracked geometric model. Then, a subsequent geometric model of the subject is fit to the depth map with the background section excluded. | 12-06-2012
20130028476 | POSE TRACKING PIPELINE - A method of tracking a target includes receiving from a source a depth image of a scene including a human subject. The depth image includes a depth for each of a plurality of pixels. The method further includes identifying pixels of the depth image that belong to the human subject and deriving from the identified pixels of the depth image one or more machine readable data structures representing the human subject as a body model including a plurality of shapes. | 01-31-2013
20130070058 | SYSTEMS AND METHODS FOR TRACKING A MODEL - An image such as a depth image of a scene may be received, observed, or captured by a device. A grid of voxels may then be generated based on the depth image such that the depth image may be downsampled. A model may be adjusted based on a location or position of one or more extremities estimated or determined for a human target in the grid of voxels. The model may also be adjusted based on a default location or position of the model in a default pose such as a T-pose, a DaVinci pose, and/or a natural pose. | 03-21-2013 |
20130129155 | MOTION DETECTION USING DEPTH IMAGES - A sensor system creates a sequence of depth images that are used to detect and track motion of objects within range of the sensor system. A reference image is created and updated based on a moving average (or other function) of a set of depth images. A new depth image is compared to the reference image to create a motion image, which is an image file (or other data structure) with data representing motion. The new depth image is also used to update the reference image. The data in the motion image is grouped and associated with one or more objects being tracked. The tracking of the objects is updated by the grouped data in the motion image. The new positions of the objects are used to update an application. | 05-23-2013
20130241833 | POSE TRACKING PIPELINE - A method of tracking a target includes receiving from a source a depth image of a scene including a human subject. The depth image includes a depth for each of a plurality of pixels. The method further includes identifying pixels of the depth image that belong to the human subject and deriving from the identified pixels of the depth image one or more machine readable data structures representing the human subject as a body model including a plurality of shapes. | 09-19-2013
20140022161 | HUMAN TRACKING SYSTEM - An image such as a depth image of a scene may be received, observed, or captured by a device. A grid of voxels may then be generated based on the depth image such that the depth image may be downsampled. A background included in the grid of voxels may also be removed to isolate one or more voxels associated with a foreground object such as a human target. A location or position of one or more extremities of the isolated human target may be determined and a model may be adjusted based on the location or position of the one or more extremities. | 01-23-2014 |
20140044309 | HUMAN TRACKING SYSTEM - An image such as a depth image of a scene may be received, observed, or captured by a device. A grid of voxels may then be generated based on the depth image such that the depth image may be downsampled. A background included in the grid of voxels may also be removed to isolate one or more voxels associated with a foreground object such as a human target. A location or position of one or more extremities of the isolated human target may be determined and a model may be adjusted based on the location or position of the one or more extremities. | 02-13-2014 |
20140078141 | POSE TRACKING PIPELINE - A method of tracking a subject includes receiving from a source a depth image of a scene including the subject. The depth image includes a depth for each of a plurality of pixels. The method further includes identifying pixels of the depth image that image the subject and deriving from the identified pixels of the depth image one or more machine readable data structures representing the subject as a model including a plurality of shapes. | 03-20-2014 |
20140112547 | SYSTEMS AND METHODS FOR REMOVING A BACKGROUND OF AN IMAGE - An image such as a depth image of a scene may be received, observed, or captured by a device. A grid of voxels may then be generated based on the depth image such that the depth image may be downsampled. A background included in the grid of voxels may then be discarded to isolate one or more voxels associated with a foreground object such as a human target and the isolated voxels associated with the foreground object may be processed. | 04-24-2014 |
20140354775 | EDGE PRESERVING DEPTH FILTERING - A scene is illuminated with modulated illumination light that reflects from surfaces in the scene as modulated reflection light. Each of a plurality of pixels of a depth camera receive the modulated reflection light and observe a phase difference between the modulated illumination light and the modulated reflection light. For each of the plurality of pixels, an edginess of that pixel is recognized, and the phase difference of that pixel is smoothed as a function of the edginess of that pixel. | 12-04-2014 |
20140375557 | HUMAN TRACKING SYSTEM - An image such as a depth image of a scene may be received, observed, or captured by a device. A grid of voxels may then be generated based on the depth image such that the depth image may be downsampled. A background included in the grid of voxels may also be removed to isolate one or more voxels associated with a foreground object such as a human target. A location or position of one or more extremities of the isolated human target may be determined and a model may be adjusted based on the location or position of the one or more extremities. | 12-25-2014 |
20150086108 | IDENTIFICATION USING DEPTH-BASED HEAD-DETECTION DATA - A candidate human head is found in depth video using a head detector. A head region of light intensity video is spatially resolved with a three-dimensional location of the candidate human head in the depth video. Facial recognition is performed on the head region of the light intensity video using a face recognizer. | 03-26-2015 |
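The prefix-sum pass described in application 20080316214 can be sketched compactly: a prepass counts fragments per pixel, and an exclusive prefix sum over those counts yields each pixel's starting offset in a linear fragment buffer. The function name and the choice of an exclusive (rather than inclusive) scan are illustrative assumptions, not the patented implementation.

```python
# Sketch of the prefix-sum (scan) pass from application 20080316214:
# per-pixel fragment counts -> per-pixel starting offsets in a linear
# fragment buffer, plus the total buffer size to allocate.

def linearize_fragment_storage(counts):
    """Given the count buffer from the prepass (fragments per pixel),
    return the location buffer (starting index per pixel) and the
    total number of fragment slots needed."""
    locations = []
    running = 0
    for c in counts:
        locations.append(running)  # exclusive prefix sum
        running += c
    return locations, running

counts = [2, 0, 3, 1]  # fragments per pixel from the count prepass
locations, total = linearize_fragment_storage(counts)
print(locations, total)  # [0, 2, 2, 5] 6
```

With the location buffer in hand, the rendering pass can write every fragment to `fragment_buffer[locations[p] + k]` without inspecting fragment contents, and the resolve pass reads each pixel's run `locations[p] .. locations[p] + counts[p]` to produce the final image.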
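Several of the filings above (e.g., 20110080336, 20110080475, 20110081044) begin by generating a grid of voxels from the depth image so that the image is downsampled. A minimal sketch of that step, assuming a fixed block size and a mean-of-valid-pixels reduction (both assumptions for illustration, not the claimed method):

```python
# Sketch of voxel-grid downsampling of a depth image, as in applications
# 20110080336 / 20110080475. Block size and the averaging reduction are
# illustrative assumptions.

def downsample_to_voxels(depth, width, block):
    """Collapse each block x block patch of a row-major depth image into
    one voxel whose value is the mean depth of valid (non-zero) pixels."""
    height = len(depth) // width
    grid = []
    for by in range(0, height, block):
        row = []
        for bx in range(0, width, block):
            samples = [depth[(by + y) * width + (bx + x)]
                       for y in range(block) for x in range(block)
                       if depth[(by + y) * width + (bx + x)] > 0]
            # Invalid (zero-depth) pixels are excluded from the average.
            row.append(sum(samples) / len(samples) if samples else 0.0)
        grid.append(row)
    return grid

depth = [10, 10, 20, 20,
         10, 10, 20, 20,
          0, 30, 40,  0,
         30, 30, 40, 40]
print(downsample_to_voxels(depth, 4, 2))  # [[10.0, 20.0], [30.0, 40.0]]
```

Later stages in those abstracts (background removal, extremity estimation, model adjustment) then operate on this much smaller voxel grid instead of the full-resolution depth image.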
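The motion-detection filings (20110150271, 20120177254, 20130129155) describe maintaining a reference image as a moving average of depth frames and differencing each new frame against it. A sketch of that loop, where the blend weight and motion threshold are assumed values chosen for illustration:

```python
# Sketch of the reference-image update and motion image from application
# 20110150271. ALPHA and THRESHOLD are illustrative assumptions.

ALPHA = 0.1        # moving-average weight given to each new depth frame
THRESHOLD = 50.0   # depth difference treated as motion (e.g., millimeters)

def update_reference(reference, depth):
    """Blend the new depth image into the running reference image."""
    return [(1 - ALPHA) * r + ALPHA * d for r, d in zip(reference, depth)]

def motion_image(reference, depth):
    """Mark pixels whose depth differs from the reference beyond a threshold."""
    return [1 if abs(d - r) > THRESHOLD else 0 for r, d in zip(reference, depth)]

reference = [1000.0, 1000.0, 1000.0]
frame = [1000.0, 1200.0, 990.0]
print(motion_image(reference, frame))  # [0, 1, 0]
reference = update_reference(reference, frame)
```

Per the abstracts, the nonzero entries of the motion image would then be grouped, associated with tracked objects, and used to update an application such as a game's avatar positions.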
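Application 20140354775 smooths each pixel's phase difference by an amount that depends on that pixel's edginess, so depth discontinuities are not blurred away. A one-dimensional sketch of that idea, where the gradient-based edginess measure and the linear blend are assumptions for illustration rather than the claimed filter:

```python
# Sketch of edge-preserving smoothing as in application 20140354775:
# high edginess suppresses smoothing; flat regions are averaged.
# The edginess estimate and blend are illustrative assumptions.

def smooth_row(phase):
    """Smooth a 1-D row of phase differences, preserving edges."""
    out = list(phase)
    for i in range(1, len(phase) - 1):
        neighborhood_mean = (phase[i - 1] + phase[i] + phase[i + 1]) / 3
        # Edginess: normalized local gradient clamped to [0, 1]; a strong
        # edge keeps the original value, a flat region takes the mean.
        edginess = min(1.0, abs(phase[i + 1] - phase[i - 1]) / 2.0)
        out[i] = edginess * phase[i] + (1 - edginess) * neighborhood_mean
    return out

row = [0.10, 0.11, 0.10, 0.90, 0.91]
print(smooth_row(row))
```

In the two-dimensional case the same per-pixel blend would use a 2-D neighborhood and gradient, and the smoothed phase differences would then be converted to depth as usual for a time-of-flight camera.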