Patent application number | Description | Published |
20080297621 | Strategies for extracting foreground information using flash and no-flash image pairs - A flash-based strategy is used to separate foreground information from background information within image information. In this strategy, a first image is taken without the use of flash. A second image is taken of the same subject matter with the use of flash. The foreground information in the flash image is illuminated by the flash to a much greater extent than the background information. Based on this property, the strategy applies processing to extract the foreground information from the background information. The strategy supplements the flash information by also taking into consideration motion information and color information. | 12-04-2008 |
20080298678 | CHROMATIC ABERRATION CORRECTION - A chromatic aberration (CA) correction technique is presented that substantially removes CA from an image captured by a digital camera. In general, the effects of any in-camera sharpening are reversed by applying a blurring kernel. The image is then super-sampled to approximate its state prior to the application of in-camera sampling. One of the color channels is designated as a reference channel, and an objective function is established for each of the non-reference channels. The reference color channel is assumed to be CA-free, while the objective functions are used to compute the unknown CA parameters for each non-reference channel. These parameter sets are used in a CA removal function to substantially remove the CA associated with each of the non-reference channels. The image is then re-sampled to return it to its original resolution, and a sharpening filter is applied if needed to undo the effects of the previously applied blurring kernel. | 12-04-2008 |
20080298712 | IMAGE SHARPENING WITH HALO SUPPRESSION - An image sharpening technique with halo suppression is presented. Generally, one implementation of this technique completely suppresses the haloing effect typically caused by image sharpening by restricting values to within the local minimum and maximum intensities of the unsharpened image. Thus, if the sharpened value is below the local minimum, it is replaced with the local minimum. Similarly, the local maximum is taken if the sharpened value exceeds it. In other implementations of the technique, haloing caused by image sharpening is suppressed but not completely eliminated, thereby producing a subtle haloing effect. | 12-04-2008 |
20080309660 | THREE DIMENSIONAL RENDERING OF DISPLAY INFORMATION - Game data is rendered in three dimensions in the GPU of a game console. A left camera view and a right camera view are generated from a single camera view. The left and right camera positions are derived as an offset from a default camera. The focal distance of the left and right cameras is infinity. A game developer does not have to encode dual images into a specific hardware format. When a viewer sees the two slightly offset images, the user's brain combines the two offset images into a single 3D image to give the illusion that objects either pop out from or recede into the display screen. In another embodiment, individual, private video is rendered, on a single display screen, for different viewers. Rather than rendering two similar offset images, two completely different images are rendered allowing each player to view only one of the images. | 12-18-2008 |
20090244367 | CHOOSING VIDEO DEINTERLACING INTERPOLANT BASED ON COST - Deinterlacing of video involves converting interlaced video to progressive video by interpolating a missing pixel in the interlaced video from other pixels in the video. A plurality of interpolants are provided, each of which interpolates a pixel value from other pixels that are nearby in space and/or time. The data costs of using the various interpolants are calculated. A particular one of the interpolants is chosen based on the data costs associated with the various interpolants. The chosen interpolant is used to interpolate the value of the missing pixel. The interpolated pixel value may be refined based on exemplars. The exemplars may be taken from the video that is being deinterlaced. | 10-01-2009 |
20090290810 | MATTE-BASED VIDEO RESTORATION - Matte-based video restoration technique embodiments are presented which model spatio-temporally varying film wear artifacts found in digitized copies of film media. In general, this is accomplished by employing residual color information in recovering artifact mattes. To this end, the distributions of artifact colors and their fractional contribution to each pixel of each frame being considered are extracted based on color information from the spatial and temporal neighborhoods of the pixel. The extracted information can then be used to restore the video by removing the artifacts. | 11-26-2009 |
20100054595 | Automatic Image Straightening - Tilt is reduced or eliminated in captured digital images. Edges in a first image are detected. Angles corresponding to the detected edges are determined. A dominant angle is selected from the determined angles. The first image is rotated according to the selected dominant angle to generate a second image. The second image is a de-tilted version of the first image. | 03-04-2010 |
20100142801 | Stereo Movie Editing - The stereo movie editing technique described herein combines knowledge of both multi-view stereo algorithms and human depth perception. The technique creates a digital editor, specifically for stereographic cinema. The technique employs an interface that allows intuitive manipulation of the different parameters in a stereo movie setup, such as camera locations and screen position. Using the technique it is possible to reduce or enhance well-known stereo movie effects such as cardboarding and miniaturization. The technique also provides new editing techniques such as directing the user's attention and easier transitions between scenes. | 06-10-2010 |
20100318914 | VIEWER-CENTRIC USER INTERFACE FOR STEREOSCOPIC CINEMA - Described is a user interface that displays a representation of a stereo scene, and includes interactive mechanisms for changing parameter values that determine the perceived appearance of that scene. The scene is modeled as if viewed from above, including a representation of a viewer's eyes, a representation of a viewing screen, and an indication simulating what each of the viewer eyes perceives on the viewing screen. Variable parameters may include a vergence parameter, a dolly parameter, a field-of-view parameter, an interocular parameter and a proscenium arch parameter. | 12-16-2010 |
20110109755 | HARDWARE ASSISTED IMAGE DEBLURRING - The described implementations relate to deblurring images. One system includes an imaging device configured to capture an image, a linear motion detector and a rotational motion detector. This system also includes a controller configured to receive a signal from the imaging device relating to capture of the image and to responsively cause the linear motion detector and the rotational motion detector to detect motion-related information. Finally, this particular system includes a motion calculator configured to recover camera motion associated with the image based upon the detected motion-related information and to infer the camera-motion-induced blur of the image, and an image deblurring component configured to reduce that blur from the image utilizing the inferred camera-motion-induced blur. | 05-12-2011 |
20110142370 | GENERATING A COMPOSITE IMAGE FROM VIDEO FRAMES - A method described herein includes acts of receiving a sequence of images of a scene and receiving an indication of a reference image in the sequence of images. The method further includes an act of automatically assigning one or more weights independently to each pixel in each image in the sequence of images of the scene. Additionally, the method includes an act of automatically generating a composite image based at least in part upon the one or more weights assigned to each pixel in each image in the sequence of images of the scene. | 06-16-2011 |
20110176043 | Reducing Motion-Related Artifacts in Rolling Shutter Video Information - A system is described for reducing artifacts produced by a rolling shutter capture technique in the presence of high-frequency motion, e.g., produced by large accelerations or jitter. The system operates by computing low-frequency information based on the motion of points from one frame to the next. The system then uses the low-frequency information to infer the high-frequency motion, e.g., by treating the low-frequency information as known integrals of the unknown underlying high-frequency information. The system then uses the high-frequency information to reduce the presence of artifacts. In effect, the correction aims to re-render video information as though all the pixels in each frame were imaged at the same time using a global shutter technique. An auto-calibration module can estimate the value of a capture parameter, which relates to a time interval between the capture of two subsequent rows of video information. | 07-21-2011 |
20110304687 | GENERATING SHARP IMAGES, PANORAMAS, AND VIDEOS FROM MOTION-BLURRED VIDEOS - A “Blur Remover” provides various techniques for constructing deblurred images from a sequence of motion-blurred images such as a video sequence of a scene. Significantly, this deblurring is accomplished without requiring specialized side information or camera setups. In fact, the Blur Remover receives sequential images, such as a typical video stream captured using conventional digital video capture devices, and directly processes those images to generate or construct deblurred images for use in a variety of applications. No other input beyond the video stream is required for a variety of the embodiments enabled by the Blur Remover. More specifically, the Blur Remover uses joint global motion estimation and multi-frame deblurring with optional automatic video “duty cycle” estimation to construct deblurred images from video sequences for use in a variety of applications. Further, the automatically estimated video duty cycle is also separately usable in a variety of applications. | 12-15-2011 |
20120114037 | COMPRESSING AND DECOMPRESSING MULTIPLE, LAYERED, VIDEO STREAMS EMPLOYING MULTI-DIRECTIONAL SPATIAL ENCODING - A process for compressing and decompressing non-keyframes in sequential sets of contemporaneous video frames making up multiple video streams where the video frames in a set depict substantially the same scene from different viewpoints. Each set of contemporaneous video frames has a plurality of frames designated as keyframes, with the remainder being non-keyframes. In one embodiment, the non-keyframes are compressed using a multi-directional spatial prediction technique. In another embodiment, the non-keyframes of each set of contemporaneous video frames are compressed using a combined chaining and spatial prediction compression technique. The spatial prediction compression technique employed can be a single direction technique where just one reference frame, and so one chain, is used to predict each non-keyframe, or it can be a multi-directional technique where two or more reference frames, and so chains, are used to predict each non-keyframe. | 05-10-2012 |
20120154518 | SYSTEM FOR CAPTURING PANORAMIC STEREOSCOPIC VIDEO - Systems and methods are disclosed for generating panoramic stereoscopic images. The system includes an assembly of three or more catadioptric image sensors affixed to each other in a chassis. Each image sensor generates a catadioptric image of a panorama, which may, for example, be a 360° view of a scene. Software components then process the catadioptric images into a 3D stereoscopic view of the panorama. | 06-21-2012 |
20120154548 | LEFT/RIGHT IMAGE GENERATION FOR 360-DEGREE STEREOSCOPIC VIDEO - Methods are disclosed for capturing image data from three or more image sensors, and for processing the captured image data into left views of a panorama taken from each image sensor and right views taken of the panorama from each image sensor. The left views are combined and used as the left perspective of the panorama, and the right views are combined and used as the right perspective of the panorama, in the stereoscopic view. | 06-21-2012 |
20120155759 | ESTABLISHING CLUSTERS OF USER PREFERENCES FOR IMAGE ENHANCEMENT - An image enhancement system may match images to a matrix having various enhancements of images for groups of users. The matrix may define image enhancement settings for the particular images and groups of users, and the matching may apply enhancements to a new image that closely matches a user's preferences. After the matrix is initially populated, new users and new images may be added to increase the matrix's accuracy. The image enhancement system may be deployed as a cloud service, where images may be enhanced as a standalone application or as part of a social network or image sharing website. In some embodiments, the image enhancement system may be deployed on a personal computer or as a component of an image capture device. | 06-21-2012 |
20120155786 | SEAMLESS LEFT/RIGHT VIEWS FOR 360-DEGREE STEREOSCOPIC VIDEO - A method is disclosed for stitching together first and second sets of images from three or more image sensors. The first set of images are combined into a composite left view of the panorama, and the second set of images are combined into a composite right view of the panorama. When properly stitched together, the left and right views may be presented as a stereoscopic view of the panorama. A stitching algorithm is applied which removes any disparity due to the parallax in the combined left images and in the combined right images. | 06-21-2012 |
20120224789 | NOISE SUPPRESSION IN LOW LIGHT IMAGES - A low light noise reduction mechanism may perform denoising prior to demosaicing, and may also use parameters determined during the denoising operation for performing demosaicing. The denoising operation may attempt to find several patches of an image that are similar to a first patch, and use a weighted average based on similarity to determine an appropriate value for denoising a raw digital image. The same weighted average and similar patches may be used for demosaicing the same image after the denoising operation. | 09-06-2012 |
20130095920 | GENERATING FREE VIEWPOINT VIDEO USING STEREO IMAGING - Methods and systems for generating free viewpoint video using an active infrared (IR) stereo module are provided. The method includes computing a depth map for a scene using an active IR stereo module. The depth map may be computed by projecting an IR dot pattern onto the scene, capturing stereo images from each of two or more synchronized IR cameras, detecting dots within the stereo images, computing feature descriptors corresponding to the dots in the stereo images, computing a disparity map between the stereo images, and generating the depth map using the disparity map. The method also includes generating a point cloud for the scene using the depth map, generating a mesh of the point cloud, and generating a projective texture map for the scene from the mesh of the point cloud. The method further includes generating the video for the scene using the projective texture map. | 04-18-2013 |
20130100256 | GENERATING A DEPTH MAP - Methods and systems for generating a depth map are provided. The method includes projecting an infrared (IR) dot pattern onto a scene. The method also includes capturing stereo images from each of two or more synchronized IR cameras, detecting a number of dots within the stereo images, computing a number of feature descriptors for the dots in the stereo images, and computing a disparity map between the stereo images. The method further includes generating a depth map for the scene using the disparity map. | 04-25-2013 |
20130147911 | AUTOMATIC 2D-TO-STEREOSCOPIC VIDEO CONVERSION - In general, a “Stereoscopic Video Converter” (SVC) provides various techniques for automatically converting arbitrary 2D video sequences into perceptually plausible stereoscopic or “3D” versions while optionally generating dense depth maps for every frame of the video sequence. In particular, the automated 2D-to-3D conversion process first automatically estimates scene depth for each frame of an input video sequence via a label transfer process that matches features extracted from those frames with features from a database of images and videos having known ground truth depths. The estimated depth distributions for all image frames of the input video sequence are then used by the SVC for automatically generating a “right view” of a corresponding stereoscopic image for each frame (assuming that each original input frame represents the “left view” of the stereoscopic image). | 06-13-2013 |
20130188876 | AUTOMATIC IMAGE STRAIGHTENING - Tilt is reduced or eliminated in captured digital images. Edges in a first image are detected. Angles corresponding to the detected edges are determined. A dominant angle is selected from the determined angles. The first image is rotated according to the selected dominant angle to generate a second image. The second image is a de-tilted version of the first image. | 07-25-2013 |
20130243320 | Image Completion Including Automatic Cropping - Described is a technology by which an image such as a stitched panorama is automatically cropped based upon predicted quality data with respect to filling missing pixels. The image may be completed, including by completing only those missing pixels that remain after cropping. Predicting quality data may be based on using restricted search spaces corresponding to the missing pixels. The crop is computed based upon the quality data, in which the crop is biased towards including original pixels and excluding predicted low quality pixels. Missing pixels are completed by using restricted search spaces to find replacement values for the missing pixels, and may use histogram matching for texture synthesis. | 09-19-2013 |
20140293074 | GENERATING A COMPOSITE IMAGE FROM VIDEO FRAMES - A method described herein includes acts of receiving a sequence of images of a scene and receiving an indication of a reference image in the sequence of images. The method further includes an act of automatically assigning one or more weights independently to each pixel in each image in the sequence of images of the scene. Additionally, the method includes an act of automatically generating a composite image based at least in part upon the one or more weights assigned to each pixel in each image in the sequence of images of the scene. | 10-02-2014 |
20140307047 | ACTIVE STEREO WITH ADAPTIVE SUPPORT WEIGHTS FROM A SEPARATE IMAGE - The subject disclosure is directed towards stereo matching based upon active illumination, including using a patch in a non-actively illuminated image to obtain weights that are used in patch similarity determinations in actively illuminated stereo images. To correlate pixels in actively illuminated stereo images, adaptive support weights computations may be used to determine similarity of patches corresponding to the pixels. In order to obtain meaningful adaptive support weights for the adaptive support weights computations, weights are obtained by processing a non-actively illuminated (“clean”) image. | 10-16-2014 |
20140307055 | INTENSITY-MODULATED LIGHT PATTERN FOR ACTIVE STEREO - The subject disclosure is directed towards projecting light in a pattern in which the pattern contains components (e.g., spots) having different intensities. The pattern may be based upon a grid of initial points associated with first intensities and points between the initial points with second intensities, and so on. The pattern may be rotated relative to cameras that capture the pattern, with the captured images used for active depth sensing based upon stereo matching of dots in stereo images. | 10-16-2014 |
20140307057 | SUPER-RESOLVING DEPTH MAP BY MOVING PATTERN PROJECTOR - The subject disclosure is directed towards active depth sensing based upon moving a projector or projector component to project a moving light pattern into a scene. Via the moving light pattern captured over a set of frames, e.g., by a stereo camera system, and estimating light intensity at sub-pixel locations in each stereo frame, depth information may be computed at a sub-pixel level, i.e., at a higher resolution than the native camera resolution captures. | 10-16-2014 |
20140307058 | ROBUST STEREO DEPTH SYSTEM - The subject disclosure is directed towards a high resolution, high frame rate, robust stereo depth system. The system provides depth data in varying conditions based upon stereo matching of images, including actively illuminated IR images in some implementations. A clean IR or RGB image may be captured and used with any other captured images in some implementations. Clean IR images may be obtained by using a notch filter to filter out the active illumination pattern. IR stereo cameras, a projector, broad spectrum IR LEDs and one or more other cameras may be incorporated into a single device, which may also include image processing components to internally compute depth data in the device for subsequent output. | 10-16-2014 |
20140307098 | EXTRACTING TRUE COLOR FROM A COLOR AND INFRARED SENSOR - The subject disclosure is directed towards color correcting for infrared (IR) components that are detected in the R, G, B parts of a sensor photosite. A calibration process determines true R, G, B based upon obtaining or estimating IR components in each photosite, such as by filtering techniques and/or using different IR lighting conditions. A set of tables or curves obtained via offline calibration model the correction data needed for online correction of an image. | 10-16-2014 |
20140307307 | DIFFRACTIVE OPTICAL ELEMENT WITH UNDIFFRACTED LIGHT EXPANSION FOR EYE SAFE OPERATION - Aspects of the subject disclosure are directed towards safely projecting a diffracted light pattern, such as in an infrared laser-based projection/illumination system. Non-diffracted (zero-order) light is refracted once to diffuse (defocus) the non-diffracted light to an eye safe level. Diffracted (non-zero-order) light is aberrated twice, e.g., once as part of diffraction by a diffracting optical element encoded with a Fresnel lens (which does not aberrate the non-diffracted light), and another time to cancel out the other aberration; the two aberrations may occur in either order. Various alternatives include upstream and downstream positioning of the diffracting optical element relative to a refractive optical element, and/or refraction via positive and negative lenses. | 10-16-2014 |
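The halo-suppression rule in application 20080298712 is fully specified by its abstract: sharpen, then clamp each result to the local minimum/maximum of the unsharpened image so no overshoot (halo) can form. A minimal sketch assuming NumPy; the box blur, unsharp-mask formula, and 3×3 window are stand-ins for whichever sharpening filter and neighborhood an implementation actually uses:

```python
import numpy as np

def sharpen_with_halo_suppression(img, radius=1, amount=1.0):
    """Unsharp-mask sharpening with full halo suppression: each
    sharpened value is clamped to the local min/max of the
    unsharpened image over a (2*radius+1)^2 window."""
    pad = np.pad(img, radius, mode="edge")
    windows = np.lib.stride_tricks.sliding_window_view(
        pad, (2 * radius + 1, 2 * radius + 1))
    local_min = windows.min(axis=(2, 3))
    local_max = windows.max(axis=(2, 3))
    blurred = windows.mean(axis=(2, 3))        # stand-in blur kernel
    sharpened = img + amount * (img - blurred)  # unsharp mask
    # Full suppression: restrict to [local_min, local_max].
    return np.clip(sharpened, local_min, local_max)
```

The patent's partial-suppression variants would relax the clip, e.g. blending the clamped and unclamped values to leave a subtle halo.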
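The cost-based selection in application 20090244367 amounts to evaluating several candidate interpolants for a missing pixel and keeping the cheapest. A minimal sketch assuming NumPy; three directional spatial candidates with a simple |a − b| data cost stand in for the patent's fuller candidate set (which also admits temporal candidates and exemplar-based refinement):

```python
import numpy as np

def choose_interpolant(frame, y, x):
    """Interpolate the missing pixel at (y, x) from the rows above and
    below, picking the directional candidate whose data cost
    (disagreement between the two pixels it averages) is lowest."""
    candidates = []
    for dx in (-1, 0, 1):  # left-diagonal, vertical, right-diagonal
        if 0 <= x - dx < frame.shape[1] and 0 <= x + dx < frame.shape[1]:
            a = float(frame[y - 1, x + dx])
            b = float(frame[y + 1, x - dx])
            candidates.append((abs(a - b), 0.5 * (a + b)))
    cost, value = min(candidates)  # lowest-cost interpolant wins
    return value
```

On a diagonal edge this picks the along-edge candidate, avoiding the blur a plain vertical average would produce.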
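The straightening pipeline shared by applications 20100054595 and 20130188876 (detect edges, determine their angles, select a dominant angle, rotate) hinges on picking one angle from many. A sketch of that selection step assuming NumPy; the fold into (-45°, 45°] and the 1° histogram bins are illustrative choices, and edge detection is assumed to have already produced the angle list:

```python
import numpy as np

def dominant_angle(edge_angles_deg, bin_width=1.0):
    """Pick the dominant tilt angle from detected edge angles.
    Angles are folded into (-45, 45] so horizontal and vertical edges
    with the same tilt vote together; the most populated histogram bin
    wins and its center is the rotation to undo."""
    a = (np.asarray(edge_angles_deg, dtype=float) + 45.0) % 90.0 - 45.0
    bins = np.arange(-45.0, 45.0 + bin_width, bin_width)
    hist, edges = np.histogram(a, bins=bins)
    i = int(np.argmax(hist))
    return 0.5 * (edges[i] + edges[i + 1])  # bin center
```

Rotating the image by the negative of this angle then yields the de-tilted second image the abstracts describe.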
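The denoising step in application 20120224789 is a patch-based weighted average: find patches similar to the one around the target pixel and average their center values, weighted by similarity. A single-channel, single-pixel sketch assuming NumPy; the patch/search sizes and weighting constant `h` are illustrative, and the patent applies this to the raw mosaic before demosaicing (reusing the same weights for the demosaic step):

```python
import numpy as np

def denoise_pixel(img, y, x, patch=1, search=3, h=10.0):
    """Weighted average over the search window: each candidate pixel is
    weighted by how similar its surrounding patch is to the patch
    around (y, x), via a Gaussian on mean squared difference."""
    p = patch
    ref = img[y - p:y + p + 1, x - p:x + p + 1].astype(float)
    num = den = 0.0
    for yy in range(max(p, y - search), min(img.shape[0] - p, y + search + 1)):
        for xx in range(max(p, x - search), min(img.shape[1] - p, x + search + 1)):
            cand = img[yy - p:yy + p + 1, xx - p:xx + p + 1].astype(float)
            d2 = np.mean((ref - cand) ** 2)
            w = np.exp(-d2 / (h * h))  # similarity weight
            num += w * img[yy, xx]
            den += w
    return num / den
```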
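The final step of application 20130100256 (generate the depth map from the disparity map) is the standard rectified-stereo relation depth = focal × baseline / disparity. A minimal sketch assuming NumPy; the focal length and baseline are calibration values not given in the abstract, and zero disparity is mapped to infinity:

```python
import numpy as np

def disparity_to_depth(disparity_px, focal_px, baseline_m):
    """Convert a disparity map (pixels) to a depth map (meters) for a
    rectified stereo pair: depth = focal * baseline / disparity."""
    d = np.asarray(disparity_px, dtype=float)
    # Clamp the divisor so the unused branch never divides by zero.
    return np.where(d > 0, focal_px * baseline_m / np.maximum(d, 1e-9), np.inf)
```

For example, a 10 px disparity with a 500 px focal length and 0.1 m baseline corresponds to a depth of 5 m.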