Patent application number | Description | Published |
20090116728 | Method and System for Locating and Picking Objects Using Active Illumination - A method and system determines a 3D pose of an object in a scene. Depth edges are determined from a set of images acquired of a scene including multiple objects while varying illumination in the scene. The depth edges are linked to form contours. The images are segmented into regions according to the contours. An occlusion graph is constructed using the regions. The occlusion graph includes a source node representing an unoccluded region of an unoccluded object in the scene. The contour associated with the unoccluded region is compared with a set of silhouettes of the objects, in which each silhouette has a known pose. The known pose of a best matching silhouette is selected as the pose of the unoccluded object. | 05-07-2009 |
20090201498 | Agile Spectrum Imaging Apparatus and Method - An optical system performs agile spectrum imaging. The system includes a first lens for focusing light from a light source. The focused light is dispersed over a spectrum of wavelengths. A second lens focuses the dispersed light onto a mask. The mask selectively attenuates the wavelengths of the spectrum of the light source onto an image plane of the light destination. | 08-13-2009 |
20090273843 | Apparatus and Method for Reducing Glare in Images - Glare is reduced by acquiring an input image with a camera having a lens and a sensor, in which a pin-hole mask is placed in close proximity to the sensor. The mask localizes the glare at readily identifiable pixels, which can then be filtered to produce a glare-reduced output image. | 11-05-2009 |
20100079481 | METHOD AND SYSTEM FOR MARKING SCENES AND IMAGES OF SCENES WITH OPTICAL TAGS - A method and system marks a scene and images acquired of the scene with tags. A set of tags is projected into a scene while modulating an intensity of each tag according to a unique temporally varying code. Each tag is projected as an infrared signal at a known location in the scene. Sequences of infrared and color images are acquired of the scene while performing the projecting and the modulating. A subset of the tags is detected in the sequence of infrared images. Then, the sequence of color images is displayed while marking a location of each detected tag in the displayed sequence of color images, in which the marked location of the detected tag corresponds to the known location of the tag in the scene. | 04-01-2010 |
20100098323 | Method and Apparatus for Determining 3D Shapes of Objects - An apparatus and method determine a 3D shape of an object in a scene. The object is illuminated to cast multiple silhouettes on a diffusing screen coplanar and in close proximity to a mask. A single image acquired of the diffusing screen is partitioned into subviews according to the silhouettes. A visual hull of the object is then constructed according to isosurfaces of the binary images to approximate the 3D shape of the object. | 04-22-2010 |
20100246989 | Multi-Image Deblurring - Embodiments of the invention describe a method for reducing a blur in an image of a scene. First, we acquire a set of images of the scene, wherein each image in the set of images includes an object having a blur associated with a point spread function (PSF) forming a set of point spread functions (PSFs), wherein the set of PSFs is suitable for a null-filling operation. Next, we jointly invert the set of images and the set of PSFs to produce an output image having a reduced blur. | 09-30-2010 |
20100259670 | Methods and Apparatus for Coordinated Lens and Sensor Motion - In exemplary implementations of this invention, a lens and sensor of a camera are intentionally destabilized (i.e., shifted relative to the scene being imaged) in order to create defocus effects. That is, actuators in a camera move a lens and a sensor, relative to the scene being imaged, while the camera takes a photograph. This motion simulates a larger aperture size (shallower depth of field). Thus, by translating a lens and a sensor while taking a photo, a camera with a small aperture (such as a cell phone or small point-and-shoot camera) may simulate the shallow DOF that can be achieved with a professional SLR camera. This invention may be implemented in such a way that programmable defocus effects may be achieved. Also, approximately depth-invariant defocus blur size may be achieved over a range of depths, in some embodiments of this invention. | 10-14-2010 |
20100265386 | 4D Light Field Cameras - A camera acquires a 4D light field of a scene. The camera includes a lens and sensor. A mask is arranged in a straight optical path between the lens and the sensor. The mask includes an attenuation pattern to spatially modulate the 4D light field acquired of the scene by the sensor. The pattern has a low spatial frequency when the mask is arranged near the lens, and a high spatial frequency when the mask is arranged near the sensor. | 10-21-2010 |
20110017826 | Methods and apparatus for bokeh codes - In an illustrative implementation of this invention, an optical pattern that encodes binary data is printed on a transparency. For example, the pattern may comprise data matrix codes. A lenslet is placed at a distance equal to its focal length from the optical pattern, and thus collimates light from the optical pattern. The collimated light travels to a conventional camera. For example, the camera may be meters distant. The camera takes a photograph of the optical pattern at a time that the camera is not focused on the scene that it is imaging, but instead is focused at infinity. Because the light is collimated, however, a focused image is captured at the camera's focal plane. The binary data in the pattern may include information regarding the object to which the optical pattern is affixed and information from which the camera's pose may be calculated. | 01-27-2011 |
20110019056 | Bi-Directional Screen - A bidirectional screen alternately switches between a display mode showing conventional graphics and a capture mode in which the LCD backlight is disabled and the LCD displays a pinhole array or a tiled-broadband code. A large-format image sensor is placed behind the liquid crystal layer. Together, the image sensor and LCD function as a mask-based light field camera, capturing an array of images equivalent to that produced by an array of cameras spanning the display surface. The recovered multi-view orthographic imagery is used to passively estimate the depth of scene points from focus. | 01-27-2011 |
20110191073 | Methods and Apparatus for Direct-Global Separation of Light Using Angular Filtering - In an exemplary implementation of this invention, light from a scattering scene passes through a spatial light attenuation pattern and strikes a sensor plane of a camera. Based on said camera's measurements of the received light, a processing unit calculates angular samples of the received light. Light that strikes the sensor plane at certain angles comprises both scattered and directly transmitted components; whereas light that strikes at other angles comprises solely scattered light. A processing unit calculates a polynomial model for the intensity of scattered-only light that falls at the latter angles, and further estimates the direct-only component of the light that falls at the former angles. Further, a processing unit may use the estimated direct component to calculate a reconstructed 3D shape, such as a 3D shape of a finger vein pattern, using an algebraic reconstruction technique. | 08-04-2011 |
20110316968 | Digital Refocusing for Wide-Angle Images Using Axial-Cone Cameras - A single camera acquires an input image of a scene as observed in an array of spheres, wherein pixels in the input image corresponding to each sphere form a sphere image. A set of virtual cameras are defined for each sphere on a line joining a center of the sphere and a center of projection of the camera, wherein each virtual camera has a different virtual viewpoint and an associated cone of rays, appearing as a circle of pixels on its virtual image plane. A projective texture mapping of each sphere image is applied to all of the virtual cameras on the virtual image plane to produce a virtual camera image comprising a circle of pixels. Each virtual camera image for each sphere is then projected to a refocusing geometry using a refocus viewpoint to produce wide-angle lightfield views, which are averaged to produce a refocused wide-angle image. | 12-29-2011 |
20120057040 | APPARATUS AND METHOD FOR PROCESSING LIGHT FIELD DATA USING A MASK WITH AN ATTENUATION PATTERN - Provided are an apparatus and method for processing a light field image that is acquired and processed using a mask to spatially modulate a light field. The apparatus includes a lens, a mask to spatially modulate 4D light field data of a scene passing through the lens to include wideband information on the scene, a sensor to detect a 2D image corresponding to the spatially modulated 4D light field data, and a data processing unit to recover the 4D light field data from the 2D image to generate an all-in-focus image. | 03-08-2012 |
20120075423 | Methods and Apparatus for Transient Light Imaging - In illustrative implementations of this invention, multi-path analysis of transient illumination is used to reconstruct scene geometry, even of objects that are occluded from the camera. An ultrafast camera system is used. It comprises a photo-sensor (e.g., accurate in the picosecond range), a pulsed illumination source (e.g. a femtosecond laser) and a processor. The camera emits a very brief light pulse that strikes a surface and bounces. Depending on the path taken, part of the light may return to the camera after one, two, three or more bounces. The photo-sensor captures the returning light bounces in a three-dimensional time image I(x,y,t) for each pixel. The camera takes different angular samples from the same viewpoint, recording a five-dimensional STIR (Space Time Impulse Response). A processor analyzes onset information in the STIR to estimate pairwise distances between patches in the scene, and then employs isometric embedding to estimate patch coordinates. | 03-29-2012 |
20120140131 | Content-Adaptive Parallax Barriers for Automultiscopic Display - In exemplary implementations of this invention, two LCD screens display a multi-view 3D image that has both horizontal and vertical parallax, and that does not require a viewer to wear any special glasses. Each pixel in the LCDs can take on any value: the pixel can be opaque, transparent, or any shade between. For regions of the image that are adjacent to a step function (e.g., a depth discontinuity) and not adjacent to a sharp corner, the screens display local parallax barriers comprising many small slits. The barriers and the slits tend to be oriented perpendicular to the local angular gradient of the target light field. In some implementations, the display is optimized to seek to minimize the Euclidean distance between the desired light field and the actual light field that is produced. Weighted, non-negative matrix factorization (NMF) is used for this optimization. | 06-07-2012 |
20120206694 | Methods and Apparatus for Cataract Detection and Measurement - In exemplary implementations of this invention, cataracts in the human eye are assessed and mapped by measuring the perceptual impact of forward scattering on the foveal region. The same method can be used to measure scattering/blocking media inside lenses of a camera. Close-range anisotropic displays create collimated beams of light to scan through sub-apertures, scattering light as it strikes a cataract. User feedback is accepted and analyzed, to generate maps for opacity, attenuation, contrast and sub-aperture point-spread functions (PSFs). Optionally, the PSF data is used to reconstruct the individual's cataract-affected view. | 08-16-2012 |
20120300062 | Methods and apparatus for estimation of motion and size of non-line-of-sight objects - In exemplary implementations of this invention, a time of flight camera (ToF camera) can estimate the location, motion and size of a hidden moving object, even though (a) the hidden object cannot be seen directly (or through mirrors) from the vantage point of the ToF camera (including the camera's illumination source and sensor), and (b) the object is in a visually cluttered environment. The hidden object is a NLOS (non-line-of-sight) object. The time of flight camera comprises a streak camera and a laser. In these exemplary implementations, the motion and absolute locations of NLOS moving objects in cluttered environments can be estimated through tertiary reflections of pulsed illumination, using relative time differences of arrival at an array of receivers. Also, the size of NLOS moving objects can be estimated by backprojecting extremas of NLOS moving object time responses. | 11-29-2012 |
20130027668 | Near Eye Tool for Refractive Assessment - In exemplary implementations, this invention is a tool for subjective assessment of the visual acuity of a human eye. A microlens or pinhole array is placed over a high-resolution display. The eye is brought very near to the device. Patterns are displayed on the screen under some of the lenslets or pinholes. Using interactive software, a user causes the patterns that the eye sees to appear to be aligned. The software allows the user to move the apparent position of the patterns. This apparent motion is achieved by pre-warping the position and angle of the ray-bundles exiting the lenslet display. As the user aligns the apparent position of the patterns, the amount of pre-warping varies. The amount of pre-warping required in order for the user to see what appears to be a single, aligned pattern indicates the lens aberration of the eye. | 01-31-2013 |
20130100250 | Methods and apparatus for imaging of occluded objects from scattered light - In exemplary implementations of this invention, a 3D range camera “looks around a corner” to image a hidden object, using light that has bounced (reflected) off of a diffuse reflector. The camera can recover the 3D structure of the hidden object. | 04-25-2013 |
20130100339 | Methods and apparatus for ultra-fast camera - In exemplary implementations of this invention, a set of two scanning mirrors scans the one-dimensional field of view of a streak camera across a scene. The mirrors are continuously moving while the camera takes streak images. Alternately, the mirrors may move only between image captures. An illumination source or other captured event is synchronized with the camera so that for every streak image the scene looks different. The scanning assures that different parts of the scene are captured. | 04-25-2013 |
20130176704 | Polarization fields for dynamic light field display - In exemplary implementations of this invention, a flat screen device displays a 3D scene. The 3D display may be viewed by a person who is not wearing any special glasses. The flat screen device displays dynamically changing 3D imagery, with a refresh rate so fast that the device may be used for 3D movies or for interactive, 3D display. The flat screen device comprises a stack of LCD layers with two crossed polarization filters, one filter at each end of the stack. One or more processors control the voltage at each pixel of each LCD layer, in order to control the polarization state rotation induced in light at that pixel. The processor employs an algorithm that models each LCD layer as a spatially-controllable polarization rotator, rather than a conventional spatial light modulator that directly attenuates light. Color display is achieved using field sequential color illumination with monochromatic LCDs. | 07-11-2013 |
20130208241 | Methods and Apparatus for Retinal Imaging - In exemplary implementations, this invention comprises apparatus for retinal self-imaging. Visual stimuli help the user self-align his eye with a camera. Bi-ocular coupling induces the test eye to rotate into different positions. As the test eye rotates, a video is captured of different areas of the retina. Computational photography methods process this video into a mosaiced image of a large area of the retina. An LED is pressed against the skin near the eye, to provide indirect, diffuse illumination of the retina. The camera has a wide field of view, and can image part of the retina even when the eye is off-axis (when the eye's pupillary axis and camera's optical axis are not aligned). Alternately, the retina is illuminated directly through the pupil, and different parts of a large lens are used to image different parts of the retina. Alternately, a plenoptic camera is used for retinal imaging. | 08-15-2013 |
20140063077 | Tensor Displays - In exemplary implementations of this invention, an automultiscopic display device includes (1) one or more spatially addressable, light attenuating layers, and (2) a controller which is configured to perform calculations to control the device. In these calculations, tensors provide sparse, memory-efficient representations of a light field. The calculations include using weighted nonnegative tensor factorization (NTF) to solve an optimization problem. The NTF calculations can be sufficiently efficient to achieve interactive refresh rates. Either a directional backlight or a uniform backlight may be used. For example, the device may have (1) a high resolution LCD in front, and (2) a low resolution directional backlight. Or, for example, the device may have a uniform backlight and three or more LCD panels. In these examples, all of the LCDs and the directional backlight (if applicable) may be time-multiplexed. | 03-06-2014 |
20140240532 | Methods and Apparatus for Light Field Photography - In exemplary implementations of this invention, a light field camera uses a light field dictionary to reconstruct a 4D light field from a single photograph. The light field includes both angular and spatial information and has a spatial resolution equal to the spatial resolution of the imaging sensor. Light from a scene passes through a coded spatial light modulator (SLM) before reaching an imaging sensor. Computer processors reconstruct a light field. This reconstruction includes computing a sparse or compressible coefficient vector using a light field dictionary matrix. Each column vector of the dictionary matrix is a light field atom. These light field atoms each, respectively, comprise information about a small 4D region of a light field. Reconstruction quality may be improved by using an SLM that is as orthogonal as possible to the dictionary. | 08-28-2014 |
20140300869 | Methods and Apparatus for Light Field Projection - In exemplary implementations of this invention, light from a light field projector is transmitted through an angle-expanding screen to create a glasses-free, 3D display. The display can be horizontal-only parallax or full parallax. In the former case, a vertical diffuser may be positioned in the optical stack. The angle-expanding screen may comprise two planar arrays of optical elements (e.g., lenslets or lenticules) separated from each other by the sum of their focal distances. Alternatively, a light field projector may project light rays through a focusing lens onto a diffuse, transmissive screen. In this alternative approach, the light field projector may comprise two spatial light modulators (SLMs). A focused image of the first SLM, and a slightly blurred image of the second SLM, are optically combined on the diffuser, creating a combined image that has a higher spatial resolution and a higher dynamic range than either of the two SLMs. | 10-09-2014 |
20140340569 | Methods and apparatus for multi-frequency camera - In exemplary implementations of this invention, a multi-frequency ToF camera mitigates the effect of multi-path interference (MPI), and can calculate an accurate depth map despite MPI. A light source in the multi-frequency camera emits light in a temporal sequence of different frequencies. For example, the light source can emit a sequence of ten equidistant frequencies f=10 MHz, 20 MHz, 30 MHz, . . . , 100 MHz. At each frequency, a lock-in sensor within the ToF camera captures 4 frames. From these 4 frames, one or more processors compute, for each pixel in the sensor, a single complex number. The processors stack all of such complex quantities (one such complex number per pixel per frequency) and solve for the depth and intensity, using a spectral estimation technique. | 11-20-2014 |
20140347676 | Methods and Apparatus for Imaging of Occluded Objects - An active imaging system, which includes a light source and light sensor, generates structured illumination. The light sensor captures transient light response data regarding reflections of light emitted by the light source. The transient light response data is wavelength-resolved. One or more processors process the transient light response data and data regarding the structured illumination to calculate a reflectance spectra map of an occluded surface. The processors also compute a 3D geometry of the occluded surface. | 11-27-2014 |
20140367558 | Methods and Apparatus for High Speed Camera - In exemplary implementations of this invention, a camera can capture multiple millions of frames per second, such that each frame is a 2D image, rather than a streak. A light source in the camera emits ultrashort pulses of light to illuminate a scene. Scattered light from the scene returns to the camera. This incoming light strikes a photocathode, which emits electrons, which are detected by a set of phosphor blocks, which emit light, which is detected by a light sensor. Voltage is applied to plates to create an electric field that deflects the electrons. The voltage varies in a temporal “stepladder” pattern, deflecting the electrons by different amounts, such that the electrons hit different phosphor blocks at different times during the sequence. Each phosphor block (together with the light sensor) captures a separate frame in the sequence. A mask may be used to increase resolution. | 12-18-2014 |
20140368728 | Methods and Apparatus for High Speed Camera - In exemplary implementations of this invention, a light source illuminates a scene and a light sensor captures data about light that scatters from the scene. The light source emits multiple modulation frequencies, either in a temporal sequence or as a superposition of modulation frequencies. Reference signals that differ in phase are applied to respective subregions of each respective pixel. The number of subregions per pixel, and the number of reference signals per pixel, are preferably each greater than four. One or more processors calculate a full cross-correlation function for each respective pixel, by fitting light intensity measurements to a curve, the light intensity measurements being taken, respectively, by respective subregions of the respective pixel. The light sensor comprises M subregions. A lenslet is placed over each subregion, so that each subregion images the entire scene. At least one temporal sequence of frames is taken, one frame per subregion. | 12-18-2014 |
20150035880 | Methods and Apparatus for Visual Display - In exemplary implementations of this invention, light from a backlight is transmitted through two stacked LCDs and then through a diffuser. The front side of the diffuser displays a time-varying sequence of 2D images. Processors execute an optimization algorithm to compute optimal pixel states in the first and second LCDs, respectively, such that for each respective image in the sequence, the optimal pixel states minimize, subject to one or more constraints, a difference between a target image and the respective image. The processors output signals to control actual pixel states in the LCDs, based on the computed optimal pixel states. The 2D images displayed by the diffuser have a higher spatial resolution than the native spatial resolution of the LCDs. Alternatively, the diffuser may be switched off, and the device may display either (a) 2D images with a higher dynamic range than the LCDs, or (b) an automultiscopic display. | 02-05-2015 |
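The lock-in measurement described in application 20140340569 (capture four frames per modulation frequency, reduce them to one complex number per pixel, then estimate depth from phase) can be sketched as follows. This is a minimal illustration, not the patented method: the function names, the 0/90/180/270-degree phase convention, and the single-return-path simulation are assumptions for the sketch, and the phase-unwrapping / spectral-estimation step across multiple frequencies is omitted.

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def complex_measurement(frames):
    """Reduce four lock-in frames (reference phases 0, 90, 180, 270 deg)
    to one complex number per pixel; frames has shape (4, H, W)."""
    i0, i1, i2, i3 = frames
    return (i0 - i2) + 1j * (i1 - i3)

def depth_from_phase(z, freq_hz):
    """Single-frequency depth estimate from the phase of z (no unwrapping,
    so valid only within the unambiguous range c / (2 * freq_hz))."""
    phase = np.angle(z) % (2 * np.pi)
    return C * phase / (4 * np.pi * freq_hz)

# Simulate one pixel at a known depth with a single return path.
depth_true = 3.0   # meters
freq = 10e6        # 10 MHz, the lowest frequency in the abstract's example
phi = 4 * np.pi * freq * depth_true / C
frames = np.array(
    [np.cos(phi - p) for p in (0, np.pi / 2, np.pi, 3 * np.pi / 2)]
).reshape(4, 1, 1)

z = complex_measurement(frames)
print(round(float(depth_from_phase(z, freq)[0, 0]), 3))  # prints 3.0
```

In the abstract's multi-frequency scheme, one such complex number would be stacked per pixel per frequency (10 MHz through 100 MHz), and depth and intensity would then be solved jointly with a spectral estimation technique, which is what lets the method separate multiple return paths.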