Patent application number | Description | Published |
20110060556 | Method for Registering 3D Points with 3D Planes - Three-dimensional points acquired of an object in a sensor coordinate system are registered with planes modeling the object in a world coordinate system by determining correspondences between the points and the planes. Points are transformed to an intermediate sensor coordinate system using the correspondences and transformation parameters. The planes are transformed to an intermediate world coordinate system using world rotation and translation parameters. Intermediate rotation and translation parameters are determined by applying coplanarity constraints and orthogonality constraints to the points in the intermediate sensor coordinate system and the planes in the intermediate world coordinate system. Then, rotation and translation parameters between the sensor and world coordinate systems are determined to register the points with the planes. | 03-10-2011 |
20110141251 | Method and System for Segmenting Moving Objects from Images Using Foreground Extraction - A set of images is acquired of a scene by a camera. The scene includes a moving object, and a relative difference of a motion of the camera and a motion of the object is substantially zero. Statistical properties of pixels in the images are determined, and a statistical method is applied to the statistical properties to identify pixels corresponding to the object. | 06-16-2011 |
20110211729 | Method for Generating Visual Hulls for 3D Objects as Sets of Convex Polyhedra from Polygonal Silhouettes - A visual hull for a 3D object is generated by using a set of silhouettes extracted from a set of images. First, a set of convex polyhedra is generated as a coarse 3D model of the object. Then for each image, the convex polyhedra are refined by projecting them to the image and determining the intersections with the silhouette in the image. The visual hull of the object is represented as the union of the convex polyhedra. | 09-01-2011 |
20110246130 | Localization in Industrial Robotics Using Rao-Blackwellized Particle Filtering - Embodiments of the invention disclose a system and a method for determining a pose of a probe relative to an object by probing the object with the probe, comprising steps of: determining a probability of the pose using Rao-Blackwellized particle filtering, wherein a probability of a location of the pose is represented by a location of each particle, and a probability of an orientation of the pose is represented by a Gaussian distribution over the orientation of each particle conditioned on the location of the particle, wherein the determining is performed for each subsequent probing until the probability of the pose concentrates around a particular pose; and estimating the pose of the probe relative to the object based on the particular pose. | 10-06-2011 |
20110276307 | Method and System for Registering an Object with a Probe Using Entropy-Based Motion Selection and Rao-Blackwellized Particle Filtering - A probe is registered with an object by probing the object with the probe at multiple poses, wherein each pose of the probe includes a location and an orientation. A probability distribution of a current location of the probe is represented by a set of particles, and a probability distribution of a current orientation of the probe is represented by a Gaussian distribution for each particle conditioned on the current location. A set of candidate motions is chosen, and for each candidate motion, an expected uncertainty based on the set of particles is determined. The candidate motion with a least expected uncertainty is selected as a next motion of the probe, the probe is moved according to the next motion, and the set of particles is updated using the next pose of the probe. | 11-10-2011 |
20110316968 | Digital Refocusing for Wide-Angle Images Using Axial-Cone Cameras - A single camera acquires an input image of a scene as observed in an array of spheres, wherein pixels in the input image corresponding to each sphere form a sphere image. A set of virtual cameras are defined for each sphere on a line joining a center of the sphere and a center of projection of the camera, wherein each virtual camera has a different virtual viewpoint and an associated cone of rays, appearing as a circle of pixels on its virtual image plane. A projective texture mapping of each sphere image is applied to all of the virtual cameras on the virtual image plane to produce a virtual camera image comprising a circle of pixels. Each virtual camera image for each sphere is then projected to a refocusing geometry using a refocus viewpoint to produce wide-angle lightfield views, which are averaged to produce a refocused wide-angle image. | 12-29-2011 |
20120002304 | Method and System for Determining Projections in Non-Central Catadioptric Optical Systems - Embodiments of the invention disclose a system and a method for determining a three-dimensional (3D) location of a folding point of a ray between a point in a scene (PS) and a center of projection (COP) of a camera of a catadioptric system. One embodiment maps the catadioptric system, including 3D locations of the PS and the COP, onto a two-dimensional (2D) plane defined by an axis of symmetry of a folding optical element and the PS to produce a conic and 2D locations of the PS and COP on the 2D plane, and determines a 2D location of the folding point on the 2D plane based on the conic and the 2D locations of the PS and the COP. Next, the embodiment determines the 3D location of the folding point from the 2D location of the folding point on the 2D plane. | 01-05-2012 |
20120250977 | Method and System for Determining Projections in Non-Central Catadioptric Optical Systems - A three-dimensional (3D) location of a reflection point of a ray between a point in a scene (PS) and a center of projection (COP) of a camera of a catadioptric system is determined. The catadioptric system is non-central and includes the camera and a reflector, wherein a surface of the reflector is a quadric surface rotationally symmetric around an axis of symmetry. The 3D location of the reflection point is determined based on a law of reflection, an equation of the reflector, and an equation describing a reflection plane defined by the COP, the PS, and a point of intersection of a normal to the reflector at the reflection point with the axis of symmetry. | 10-04-2012 |
20130010067 | Camera and Method for Focus Based Depth Reconstruction of Dynamic Scenes - A dynamic scene is reconstructed as depths and an extended depth of field (EDOF) video by first acquiring, with a camera including a lens and sensor, a focal stack of the dynamic scene while changing a focal depth. An optical flow between the frames of the focal stack is determined, and the frames are warped according to the optical flow to align the frames and to generate a virtual static focal stack. Finally, a depth map and a texture map for each virtual static focal stack are generated using depth from defocus, wherein the texture map corresponds to an EDOF image. | 01-10-2013 |
20130156262 | Voting-Based Pose Estimation for 3D Sensors - A pose of an object is estimated by first defining a set of pair features as pairs of geometric primitives, wherein the geometric primitives include oriented surface points, oriented boundary points, and boundary line segments. Model pair features are determined based on the set of pair features for a model of the object. Scene pair features are determined based on the set of pair features from data acquired by a 3D sensor, and then the model pair features are matched with the scene pair features to estimate the pose of the object. | 06-20-2013 |
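The registration of 3D points to 3D planes (application 20110060556) can be illustrated with a generic linearized least-squares solver that minimizes point-to-plane distances. This is a minimal sketch of the standard technique under a small-angle approximation, not the patent's intermediate-coordinate-system formulation with coplanarity and orthogonality constraints; all names are invented for illustration.

```python
import numpy as np

def register_points_to_planes(points, normals, offsets, iters=20):
    """Estimate rotation R and translation t aligning 3D points to
    corresponding planes (n . x = d) by iterated linearized least squares."""
    R = np.eye(3)
    t = np.zeros(3)
    for _ in range(iters):
        p = points @ R.T + t                        # current point positions
        # Signed distance of each point from its corresponding plane.
        r = np.einsum('ij,ij->i', normals, p) - offsets
        # Jacobian w.r.t. a small rotation vector w and translation dt:
        # n . (w x p) = (p x n) . w, so columns are [p x n, n].
        J = np.hstack([np.cross(p, normals), normals])
        delta, *_ = np.linalg.lstsq(J, -r, rcond=None)
        w, dt = delta[:3], delta[3:]
        # Convert the small rotation vector to a rotation matrix (Rodrigues).
        theta = np.linalg.norm(w)
        if theta > 1e-12:
            k = w / theta
            K = np.array([[0, -k[2], k[1]],
                          [k[2], 0, -k[0]],
                          [-k[1], k[0], 0]])
            dR = np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)
        else:
            dR = np.eye(3)
        R = dR @ R
        t = dR @ t + dt
    return R, t
```

With noise-free correspondences and at least six well-conditioned planes, the iteration converges to the ground-truth transform.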
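The Rao-Blackwellized structure in applications 20110246130 and 20110276307 — particles over location, with a Gaussian over orientation conditioned on each particle's location — can be sketched in one dimension. This is a toy model, not the patent's 6-DOF probe formulation: the measurement model z = location + orientation + noise and all parameters are invented for illustration, and the Gaussian is updated analytically with a Kalman step while only the location is sampled.

```python
import numpy as np

def rbpf_update(locs, weights, means, vars_, z, meas_var):
    """One Rao-Blackwellized update. Each particle i carries a location
    locs[i], a weight, and a Gaussian N(means[i], vars_[i]) over an
    orientation variable conditioned on that location."""
    # Marginal likelihood of z per particle: the Gaussian orientation is
    # integrated out analytically, so z ~ N(loc + mean, var + meas_var).
    pred = locs + means
    s = vars_ + meas_var
    like = np.exp(-0.5 * (z - pred) ** 2 / s) / np.sqrt(2 * np.pi * s)
    weights = weights * like
    weights /= weights.sum()
    # Kalman update of each particle's conditional orientation Gaussian.
    k = vars_ / s
    means = means + k * (z - pred)
    vars_ = (1 - k) * vars_
    return weights, means, vars_
```

Repeated probing concentrates the weights around the true location while each particle's Gaussian tracks the orientation consistent with that location, mirroring the "probing until the probability concentrates" loop in the abstract.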
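The focal-stack reconstruction in application 20130010067 produces a depth map and an EDOF texture per aligned stack. A crude stand-in for depth from defocus is depth from focus: pick, per pixel, the frame with the highest local sharpness. The Laplacian sharpness measure and all names here are illustrative assumptions, and the sketch presumes the frames are already flow-aligned.

```python
import numpy as np

def depth_from_focal_stack(stack, focal_depths):
    """Per-pixel depth and all-in-focus texture from an aligned focal
    stack, using squared Laplacian energy as the sharpness measure."""
    sharp = []
    for frame in stack:
        lap = (np.roll(frame, 1, 0) + np.roll(frame, -1, 0) +
               np.roll(frame, 1, 1) + np.roll(frame, -1, 1) - 4 * frame)
        sharp.append(lap ** 2)
    best = np.argmax(np.stack(sharp), axis=0)     # index of sharpest frame
    depth = np.asarray(focal_depths)[best]        # map frame index -> depth
    texture = np.choose(best, stack)              # EDOF-style composite
    return depth, texture
```

A real depth-from-defocus method models the blur kernel explicitly; this sketch only shows how a focal stack yields both a depth map and a texture map.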
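The voting scheme in application 20130156262 matches pair features between a model and 3D sensor data. The sketch below covers only the oriented-surface-point pair feature (distance plus three angles), quantized and hashed; the boundary-point and line-segment features, the quantization steps, and the pose recovery from votes are omitted, and the bin sizes are arbitrary assumptions.

```python
import numpy as np
from collections import defaultdict

def pair_feature(p1, n1, p2, n2, dstep=0.05, astep=np.deg2rad(12)):
    """Quantized pair feature of two oriented surface points:
    (|d|, angle(n1, d), angle(n2, d), angle(n1, n2)) with d = p2 - p1."""
    d = p2 - p1
    dist = np.linalg.norm(d)
    dn = d / dist
    ang = lambda a, b: np.arccos(np.clip(np.dot(a, b), -1.0, 1.0))
    return (int(dist / dstep), int(ang(n1, dn) / astep),
            int(ang(n2, dn) / astep), int(ang(n1, n2) / astep))

def build_model_table(points, normals):
    """Hash every ordered pair of model points by its pair feature."""
    table = defaultdict(list)
    n = len(points)
    for i in range(n):
        for j in range(n):
            if i != j:
                key = pair_feature(points[i], normals[i], points[j], normals[j])
                table[key].append((i, j))
    return table

def vote(table, scene_points, scene_normals):
    """Vote for (scene point, model point) correspondences: every scene
    pair whose feature collides with a model pair casts one vote."""
    votes = defaultdict(int)
    n = len(scene_points)
    for i in range(n):
        for j in range(n):
            if i != j:
                key = pair_feature(scene_points[i], scene_normals[i],
                                   scene_points[j], scene_normals[j])
                for (mi, mj) in table[key]:
                    votes[(i, mi)] += 1
    return votes
```

When the scene is an exact copy of the model, every scene point's highest vote count goes to its own model point, which is the signal the full method turns into a pose estimate.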