Patent application number | Description | Published |
--- | --- | --- |
20090116728 | Method and System for Locating and Picking Objects Using Active Illumination - A method and system determine the 3D pose of an object in a scene. Depth edges are determined from a set of images acquired of a scene including multiple objects while varying the illumination in the scene. The depth edges are linked to form contours. The images are segmented into regions according to the contours. An occlusion graph is constructed using the regions. The occlusion graph includes a source node representing an unoccluded region of an unoccluded object in the scene. The contour associated with the unoccluded region is compared with a set of silhouettes of the objects, in which each silhouette has a known pose. The known pose of the best matching silhouette is selected as the pose of the unoccluded object. | 05-07-2009 |
20090273843 | Apparatus and Method for Reducing Glare in Images - Glare is reduced by acquiring an input image with a camera having a lens and a sensor, in which a pin-hole mask is placed in close proximity to the sensor. The mask localizes the glare at readily identifiable pixels, which can then be filtered to produce a glare-reduced output image. | 11-05-2009 |
20100098323 | Method and Apparatus for Determining 3D Shapes of Objects - An apparatus and method determine the 3D shape of an object in a scene. The object is illuminated to cast multiple silhouettes on a diffusing screen coplanar with and in close proximity to a mask. A single image acquired of the diffusing screen is partitioned into subviews according to the silhouettes. A visual hull of the object is then constructed according to isosurfaces of the binary images to approximate the 3D shape of the object. | 04-22-2010 |
20100265386 | 4D Light Field Cameras - A camera acquires a 4D light field of a scene. The camera includes a lens and a sensor. A mask is arranged in a straight optical path between the lens and the sensor. The mask includes an attenuation pattern to spatially modulate the 4D light field acquired of the scene by the sensor. The pattern has a low spatial frequency when the mask is arranged near the lens, and a high spatial frequency when the mask is arranged near the sensor. | 10-21-2010 |
20110075020 | Increasing Temporal Resolution of Signals - Embodiments of the invention disclose a system and a method for increasing the temporal resolution of a substantially periodic signal. The method acquires the signal as an input sequence of frames having a first temporal resolution, wherein the frames in the input sequence are encoded according to an encoding pattern; and transforms the input sequence of frames into an output sequence of frames having a second temporal resolution, such that the second temporal resolution is greater than the first temporal resolution, wherein the transforming is based on a sparsity of the signal in the Fourier domain. | 03-31-2011 |
20110123122 | System and Method for Determining Poses of Objects - During pre-processing, a 3D model of the object is rendered for various poses by arranging virtual point light sources around the lens of a virtual camera. The resulting shadows are used to obtain oriented depth edges of the object illuminated from multiple directions. The oriented depth edges are stored in a database. A camera acquires images of the scene by casting shadows onto the scene from different directions. The scene can include one or more objects arranged in arbitrary poses with respect to each other. The poses of the objects are determined by comparing the oriented depth edges obtained from the acquired images to the oriented depth edges stored in the database. The comparing evaluates, at each pixel, a cost function based on chamfer matching, which can be sped up using downhill simplex optimization. | 05-26-2011 |
20110235916 | Determining Points of Parabolic Curvature on Surfaces of Specular Objects - Embodiments of the invention disclose a system and a method for determining points of parabolic curvature on the surface of a specular object from a set of images of the object acquired by a camera under relative motion between the camera-object pair and the environment. The method determines directions of image gradients at each pixel of each image in the set of images, wherein pixels from different images corresponding to an identical point on the surface of the object form corresponding pixels. The corresponding pixels having a substantially constant direction of the image gradients are selected as pixels representing points of parabolic curvature. | 09-29-2011 |
20110242341 | Method and System for Generating High Temporal Resolution Video from Low Temporal Resolution Videos - Embodiments of the invention disclose a system and a method for generating an output video having a first temporal resolution from input videos acquired synchronously of a scene by at least three cameras, wherein each input video has a second temporal resolution, wherein the second temporal resolution is less than the first temporal resolution. The method obtains frames of each input video, wherein the frames are sampled according to a code selected such that the integration time of the corresponding camera is greater than the frame time of the output video. Next, the method combines intensities of pixels of corresponding frames into a linear system, and solves the linear system independently for each corresponding frame to generate the output video. | 10-06-2011 |
20110243442 | Video Camera for Acquiring Images with Varying Spatio-Temporal Resolutions - A sequence of images of a scene having varying spatio-temporal resolutions is acquired by the sensor of a camera. Adjacent pixels of the sensor are partitioned into multiple sets of pixels. The integration time for acquiring each set of pixels is partitioned into multiple time intervals. The images are acquired while some of the pixels in each set are ON for some of the intervals, while other pixels are OFF. Then, the pixels are combined into a space-time volume of voxels, wherein the voxels have varying spatial resolutions and varying temporal resolutions. | 10-06-2011 |
20110316968 | Digital Refocusing for Wide-Angle Images Using Axial-Cone Cameras - A single camera acquires an input image of a scene as observed in an array of spheres, wherein pixels in the input image corresponding to each sphere form a sphere image. A set of virtual cameras is defined for each sphere on a line joining the center of the sphere and the center of projection of the camera, wherein each virtual camera has a different virtual viewpoint and an associated cone of rays, appearing as a circle of pixels on its virtual image plane. A projective texture mapping of each sphere image is applied to all of the virtual cameras on the virtual image plane to produce a virtual camera image comprising a circle of pixels. Each virtual camera image for each sphere is then projected to a refocusing geometry using a refocus viewpoint to produce wide-angle lightfield views, which are averaged to produce a refocused wide-angle image. | 12-29-2011 |
20120002304 | Method and System for Determining Projections in Non-Central Catadioptric Optical Systems - Embodiments of the invention disclose a system and a method for determining a three-dimensional (3D) location of a folding point of a ray between a point in a scene (PS) and a center of projection (COP) of a camera of a catadioptric system. One embodiment maps the catadioptric system, including the 3D locations of the PS and the COP, onto a two-dimensional (2D) plane defined by the axis of symmetry of a folding optical element and the PS to produce a conic and 2D locations of the PS and the COP on the 2D plane, and determines a 2D location of the folding point on the 2D plane based on the conic and the 2D locations of the PS and the COP. Next, the embodiment determines the 3D location of the folding point from the 2D location of the folding point on the 2D plane. | 01-05-2012 |
20120250977 | Method and System for Determining Projections in Non-Central Catadioptric Optical Systems - A three-dimensional (3D) location of a reflection point of a ray between a point in a scene (PS) and a center of projection (COP) of a camera of a catadioptric system is determined. The catadioptric system is non-central and includes the camera and a reflector, wherein a surface of the reflector is a quadric surface rotationally symmetric around an axis of symmetry. The 3D location of the reflection point is determined based on a law of reflection, an equation of the reflector, and an equation describing a reflection plane defined by the COP, the PS, and a point of intersection of a normal to the reflector at the reflection point with the axis of symmetry. | 10-04-2012 |
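Application 20110123122 above scores candidate poses with a cost function based on chamfer matching. As a rough illustration only (not the patented implementation; the function and variable names are hypothetical), the chamfer distance between a template edge map and a scene edge map can be sketched with a distance transform:

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def chamfer_distance(template_edges, scene_edges):
    """Mean distance from each template edge pixel to the nearest
    scene edge pixel, computed via a distance transform."""
    # distance_transform_edt measures distance to the nearest zero,
    # so invert the scene edge map: edge pixels become zeros.
    dt = distance_transform_edt(~scene_edges)
    # Sample the distance transform at the template's edge pixels.
    return dt[template_edges].mean()

# Toy example: a horizontal edge segment in an 8x8 scene.
scene = np.zeros((8, 8), dtype=bool)
scene[2, 2:6] = True
template = scene.copy()
print(chamfer_distance(template, scene))  # 0.0 (perfect match)
```

A pose search then minimizes this distance over candidate template transformations, which is where a method such as downhill simplex optimization can be applied.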
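Application 20110242341 above combines coded low-frame-rate measurements into a linear system solved per pixel. A minimal toy sketch of that idea (the binary code and dimensions are made up for illustration, not taken from the patent): each observed intensity is the sum of high-rate subframe intensities selected by a code row, and stacking enough observations lets the subframes be recovered by least squares.

```python
import numpy as np

rng = np.random.default_rng(0)
x_true = rng.random(4)            # high-temporal-resolution intensities (one pixel)

# Hypothetical binary integration codes: each row sums 2+ subframes,
# i.e. the integration time exceeds one output-frame time.
A = np.array([[1, 1, 0, 0],
              [0, 1, 1, 0],
              [0, 0, 1, 1],
              [1, 1, 1, 0]], dtype=float)

b = A @ x_true                    # observed low-rate pixel intensities
x_rec, *_ = np.linalg.lstsq(A, b, rcond=None)
print(np.allclose(x_rec, x_true))  # True
```

The same solve is repeated independently for every pixel position to assemble the high-temporal-resolution output video.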