Patent application number | Description | Published |
20100332425 | Method for Clustering Samples with Weakly Supervised Kernel Mean Shift Matrices - A method clusters samples using a mean shift procedure. A kernel matrix is determined from the samples in a first dimension. A constraint matrix and a scaling matrix are determined from a constraint set. The kernel matrix is projected to a feature space having a second dimension using the constraint matrix, wherein the second dimension is higher than the first dimension. Then, the samples are clustered according to the kernel matrix. | 12-30-2010 |
20110157178 | Method and System for Determining Poses of Objects - A pose for an object in a scene is determined by first rendering sets of virtual images of a model of the object using a virtual camera. Each set of virtual images is for a different known pose of the model, and a virtual depth edge map is constructed from each virtual image and stored in a database. A set of real images of the object at an unknown pose is acquired by a real camera, and a real depth edge map is constructed for each real image. The real depth edge maps are compared with the virtual depth edge maps using a cost function to determine the known pose that best matches the unknown pose, wherein the matching is based on locations and orientations of pixels in the depth edge maps. | 06-30-2011 |
20110200229 | Object Detecting with 1D Range Sensors - Moving objects are classified based on maximum margin classification and discriminative probabilistic sequential modeling of range data acquired by a set of one or more 1D laser line scanners. The range data, in the form of 2D images, is pre-processed and then classified. The classifier is composed of appearance classifiers, sequence classifiers with different inference techniques, and state machine enforcement of a structure of the objects. | 08-18-2011 |
20120275702 | Method for Segmenting Images Using Superpixels and Entropy Rate Clustering - An image is segmented into superpixels by constructing a graph with vertices connected by edges, wherein each vertex corresponds to a pixel in the image, and each edge is associated with a weight indicating a similarity of the corresponding pixels. A subset of edges in the graph is selected to segment the graph into subgraphs, wherein the selecting maximizes an objective function based on an entropy rate and a balancing term. The edges with maximum gains are added to the graph until a number of subgraphs is equal to a threshold. | 11-01-2012 |
20130010067 | Camera and Method for Focus Based Depth Reconstruction of Dynamic Scenes - A dynamic scene is reconstructed as depths and an extended depth of field video by first acquiring, with a camera including a lens and sensor, a focal stack of the dynamic scene while changing a focal depth. An optical flow between the frames of the focal stack is determined, and the frames are warped according to the optical flow to align the frames and to generate a virtual static focal stack. Finally, a depth map and a texture map for each virtual static focal stack are generated using depth from defocus, wherein the texture map corresponds to an EDOF image. | 01-10-2013 |
20120269384 | Object Detection in Depth Images - A method for detecting an object in a depth image includes determining a detection window covering a region in the depth image, wherein a location of the detection window is based on a location of a candidate pixel in the depth image, wherein a size of the detection window is based on a depth value of the candidate pixel and a size of the object. A foreground region in the detection window is segmented based on the depth value of the candidate pixel and the size of the object. A feature vector is determined based on depth values of the pixels in the foreground region and the feature vector is classified to detect the object. | 10-25-2012 |
20130156262 | Voting-Based Pose Estimation for 3D Sensors - A pose of an object is estimated by first defining a set of pair features as pairs of geometric primitives, wherein the geometric primitives include oriented surface points, oriented boundary points, and boundary line segments. Model pair features are determined based on the set of pair features for a model of the object. Scene pair features are determined based on the set of pair features from data acquired by a 3D sensor, and then the model pair features are matched with the scene pair features to estimate the pose of the object. | 06-20-2013 |
20130223734 | Upscaling Natural Images - A natural input image is upscaled, first by interpolation. Second, edges in the interpolated image are sharpened by a non-parametric patch transform. The result is decomposed into an edge layer and a detail layer. Only pixels in the detail layer are enhanced, and the enhanced detail layer is merged with the edge layer to produce a high resolution version of the input image. | 08-29-2013 |
20130259371 | Appearance and Context Based Object Classification in Images - Objects in an image are classified by applying an appearance classifier to the image to determine candidates of the objects and statistics associated with the candidates, wherein the appearance classifier uses a set of windows, and the candidates are in selected windows. Then, a context classifier is applied only to the selected windows of the image to determine an identity and location of objects in the image. | 10-03-2013 |
20140037146 | Method and System for Generating Structured Light with Spatio-Temporal Patterns for 3D Scene Reconstruction - A structured light pattern including a set of patterns in a sequence is generated by initializing a base pattern. The base pattern includes a sequence of colored stripes such that each subsequence of the colored stripes is unique for a particular size of the subsequence. The base pattern is shifted hierarchically, spatially and temporally a predetermined number of times to generate the set of patterns, wherein each pattern is different spatially and temporally. A unique location of each pixel in a set of images acquired of a scene is determined, while projecting the set of patterns onto the scene, wherein there is one image for each pattern. | 02-06-2014 |
20140015992 | Specular Edge Extraction Using Multi-Flash Imaging - A method and system extract features from an image acquired of an object with a specular surface by first acquiring an image while illuminating the object with a hue circle generated by a set of lights flashed simultaneously. The lights have different colors and are arranged circularly around a lens of a camera. The features correspond to locations of pixels whose neighborhood includes a subset of the colors of the lights. | 01-16-2014 |
20140219547 | Method for Increasing Resolutions of Depth Images - A resolution of a low resolution depth image is increased by applying joint geodesic upsampling to a high resolution image to obtain a geodesic distance map. Depths in the low resolution depth image are interpolated using the geodesic distance map to obtain a high resolution depth image. The high resolution image can be a gray scale or color image, or a binary boundary map. The low resolution depth image can be acquired by any type of depth sensor. | 08-07-2014 |
20140300599 | Method for Factorizing Images of a Scene into Basis Images - A set of nonnegative lighting basis images representing a scene illuminated by a set of stationary light sources is recovered from a set of input images of the scene that were acquired by a stationary camera. Each image is illuminated by a combination of the light sources, and at least two images in the set are illuminated by different combinations. The set of input images is decomposed into the nonnegative lighting basis images and a set of indicator coefficients, wherein each lighting basis image corresponds to an appearance of the scene illuminated by one of the light sources, and wherein each indicator coefficient indicates a contribution of one of the light sources to one of the input images. | 10-09-2014 |
20140300600 | Method for Detecting 3D Geometric Boundaries in Images of Scenes Subject to Varying Lighting - Three-dimensional (3D) geometric boundaries are detected in images of a scene that undergoes varying lighting conditions caused by light sources in different positions, from a set of input images of the scene illuminated by at least two different lighting conditions. The images are aligned, e.g., acquired by a stationary camera, so that pixels at the same location in all of the input images correspond to the same point in the scene. For each location, a patch of corresponding pixels centered at the location is extracted from each input image. For each location, a confidence value that there is a 3D geometric boundary at the location is determined. | 10-09-2014 |
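Several of the listings above build on the classical mean shift procedure (20100332425 extends it with weakly supervised kernel matrices and constraint sets). As a point of reference, a minimal sketch of plain Euclidean mean shift with a Gaussian kernel is below; it illustrates only the basic mode-seeking step, not the patent's constraint or kernel-projection machinery, and all names are illustrative.

```python
# Minimal mean shift sketch: each sample is iteratively shifted toward
# the mode of the kernel density estimate. Plain Euclidean version,
# without the weakly supervised constraints of 20100332425.
import numpy as np

def mean_shift(samples, bandwidth=1.0, iters=50, tol=1e-5):
    """Shift every sample toward its local density mode."""
    modes = samples.astype(float).copy()
    for _ in range(iters):
        # Gaussian kernel weights between current modes and all samples
        d2 = ((modes[:, None, :] - samples[None, :, :]) ** 2).sum(-1)
        w = np.exp(-d2 / (2.0 * bandwidth ** 2))
        # Weighted mean of the samples is the next position of each mode
        new = (w[:, :, None] * samples[None, :, :]).sum(1) / w.sum(1)[:, None]
        if np.abs(new - modes).max() < tol:
            modes = new
            break
        modes = new
    return modes

# Two well-separated 1-D clusters collapse to two modes; samples are
# then labeled by which mode they converged to.
pts = np.array([[0.0], [0.1], [0.2], [5.0], [5.1], [5.2]])
modes = mean_shift(pts, bandwidth=0.5)
labels = (modes[:, 0] > 2.5).astype(int)
```

The kernelized variant in the patent replaces the Euclidean distances above with a kernel matrix projected into a higher-dimensional feature space under the constraint matrix.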
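The depth upsampling entry (20140219547) propagates low-resolution depths along geodesic paths whose cost follows intensity edges in the high-resolution image. A hypothetical sketch of the idea is below: it uses Dijkstra's algorithm on the pixel grid and a hard nearest-seed assignment, simplifying the patent's interpolation step; the function name, edge-cost weighting `alpha`, and seed format are all assumptions.

```python
# Sketch of geodesic depth propagation in the spirit of joint geodesic
# upsampling: each high-res pixel takes the depth of the geodesically
# nearest low-res depth sample, where path cost grows across intensity
# edges of the high-res image.
import heapq
import numpy as np

def geodesic_upsample(hi_img, seeds, alpha=10.0):
    """hi_img: 2D intensity image; seeds: {(row, col): depth}."""
    h, w = hi_img.shape
    dist = np.full((h, w), np.inf)
    depth = np.zeros((h, w))
    pq = []
    for (r, c), d in seeds.items():
        dist[r, c] = 0.0
        depth[r, c] = d
        heapq.heappush(pq, (0.0, r, c))
    while pq:
        g, r, c = heapq.heappop(pq)
        if g > dist[r, c]:
            continue  # stale queue entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < h and 0 <= nc < w:
                # Geodesic edge cost: spatial step plus intensity jump
                step = 1.0 + alpha * abs(hi_img[nr, nc] - hi_img[r, c])
                if g + step < dist[nr, nc]:
                    dist[nr, nc] = g + step
                    depth[nr, nc] = depth[r, c]
                    heapq.heappush(pq, (g + step, nr, nc))
    return depth

# A vertical intensity edge splits the image; each side inherits the
# depth of its own seed, because crossing the edge is expensive.
img = np.zeros((4, 6))
img[:, 3:] = 1.0
up = geodesic_upsample(img, {(1, 0): 2.0, (1, 5): 7.0})
```

Using geodesic rather than Euclidean distance is what keeps the upsampled depth discontinuities aligned with the image's boundaries.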