Patent application number | Description | Published |
--- | --- | --- |
20120249751 | IMAGE PAIR PROCESSING - At least one implementation determines whether two cameras are in parallel or are converging, based on an automated analysis of images from the cameras. One particular implementation determines the disparity of a foreground point and a background point. If the signs of the two disparities are the same, then the particular implementation decides that the cameras are in parallel. Otherwise, the particular implementation decides that the two cameras are converging. More generally, various implementations access a first image and a second image that form a stereo image pair. Multiple features are selected that exist in the first image and in the second image. An indicator of depth is determined for each of the multiple features. It is determined whether the first camera and the second camera were arranged in a parallel arrangement or a converging arrangement based on the values of the determined depth indicators. | 10-04-2012 |
20130084006 | Method and Apparatus for Foreground Object Detection - The present invention utilizes depth images captured by a depth camera to detect foreground/background. In one embodiment, the method comprises establishing a single background distribution model, updating the background distribution model if a new depth value for a pixel can be represented by the background distribution model, skipping update of the background distribution model if the pixel is in front of the background, and replacing the background distribution model if the pixel is behind the background. If no background distribution model exists initially, a new one is created. In one embodiment of the present invention, non-meaningful pixels are handled. In another embodiment, fluctuation of the depth value due to noise is handled by using a candidate background distribution model. In yet another embodiment, the noise for pixels around object edges is handled by using a mixture of two background distribution models. | 04-04-2013 |
20130093849 | Method and Apparatus for Customizing 3-Dimensional Effects of Stereo Content - A method and system for adjustable 3-dimensional content are described in which a viewer can adjust the depth range according to the viewer's own visual comfort level and/or viewing preference. The depth change is achieved by shifting the left and right images of stereoscopic content image pairs so that corresponding pixels in the shifted left and right images of a stereoscopic pair exhibit a new horizontal disparity sufficient to achieve the desired depth change. By shifting the left and right images in an image pair, content objects in the scene can appear closer to, or farther away from, the viewer than those same objects in the un-shifted image pair. This technique achieves a viewer-controlled customization of the sensation of depth in the stereoscopic 3-dimensional content. | 04-18-2013 |
20130141432 | COLOR CALIBRATION AND COMPENSATION FOR 3D DISPLAY SYSTEMS - A method and system for calibration and compensation of color in a three-dimensional display system includes user calibration of individual color channels using a multiplicity of grey screens while viewing with three-dimensional glasses. Look-up tables are generated to ease conversion of input pixels to color-corrected pixels, pre-distorting the color of the pixels driven by the three-dimensional display system. Input pixels are then converted using the look-up tables and color-corrected frames are displayed to a user. The pre-distortion effect allows a user to perceive colors in the three-dimensional system as intended, despite the distortions caused by the viewing glasses and other aspects of the three-dimensional display system. | 06-06-2013 |
20130162641 | METHOD OF PRESENTING THREE-DIMENSIONAL CONTENT WITH DISPARITY ADJUSTMENTS - Visual discomfort from depth jumps in 3D video content is reduced or avoided by detecting the occurrence of a depth jump and by changing the disparity of a group of received image frames including the frames at the depth jump, in order to adjust the perceived depth in a smooth transition across the group of image frames from a first disparity value to a second disparity value. Depth jumps may be detected, for instance, when content is switched from one 3D shot to another 3D shot. | 06-27-2013 |
20130266207 | METHOD FOR IDENTIFYING VIEW ORDER OF IMAGE FRAMES OF STEREO IMAGE PAIR ACCORDING TO IMAGE CHARACTERISTICS AND RELATED MACHINE READABLE MEDIUM THEREOF - A method for identifying an actual view order of image frames of a stereo image pair includes at least the following steps: receiving the image frames; obtaining image characteristics by analyzing the image frames according to an assumed view order; and identifying the actual view order by checking the image characteristics. In addition, a machine readable medium storing a program code is provided. The program causes a processor to perform at least the following steps for identifying an actual view order of image frames of a stereo image pair when executed by the processor: receiving the image frames; obtaining image characteristics by analyzing the image frames according to an assumed view order; and identifying the actual view order by checking the image characteristics. | 10-10-2013 |
20130266223 | REGION GROWING METHOD FOR DEPTH MAP/COLOR IMAGE - An exemplary region growing method includes at least the following steps: selecting a seed point of a current frame as an initial growing point of a region in the current frame; determining a background confidence value at a neighboring pixel around the seed point; and utilizing a processing unit for checking if the neighboring pixel is allowed to be included in the region according to at least the background confidence value. | 10-10-2013 |
20140169667 | REMOVING AN OBJECT FROM AN IMAGE - A method for removing an object from an image is described. The image is separated into a source region and a target region. The target region includes the object to be removed. A contour of the target region may be extracted. One or more filling candidate pixels are obtained. Multiple filling patches are obtained. Each filling patch is centered at a filling candidate pixel. A filling patch may be selected for replacement. | 06-19-2014 |
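The parallel-versus-converging decision described in 20120249751 reduces to a sign comparison of two disparity values. A minimal sketch of that comparison, assuming the abstract's two-point case (the function name and the handling of a zero product are assumptions, not taken from the application):

```python
def camera_arrangement(foreground_disparity: float, background_disparity: float) -> str:
    """Decide whether a stereo pair came from parallel or converging cameras.

    With parallel cameras all disparities share one sign; with converging
    cameras, points in front of and behind the convergence plane have
    disparities of opposite signs.
    """
    # Same (nonzero) sign => product is positive => parallel arrangement.
    if foreground_disparity * background_disparity > 0:
        return "parallel"
    # Opposite signs (or a point on the convergence plane) => converging.
    return "converging"
```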
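The single-background-model update in 20130084006 distinguishes three per-pixel cases: update, skip, or replace, plus model creation when none exists. A rough per-pixel sketch, assuming a running-mean model, a hypothetical tolerance `tol` for "can be represented," and larger depth meaning farther from the camera (none of these specifics come from the abstract):

```python
def update_background(model, depth, tol=3.0):
    """Apply one depth observation to a single background distribution model.

    `model` is a dict with a running mean and observation count; `tol` is a
    hypothetical threshold standing in for the model's representability test.
    """
    if model is None:
        # No model exists yet: create a new background model.
        return {"mean": depth, "count": 1}
    if abs(depth - model["mean"]) <= tol:
        # Depth is representable by the model: fold it into the running mean.
        model["count"] += 1
        model["mean"] += (depth - model["mean"]) / model["count"]
    elif depth < model["mean"]:
        # Pixel is in front of the background (foreground): skip the update.
        pass
    else:
        # Pixel is behind the current background: replace the model.
        model = {"mean": depth, "count": 1}
    return model
```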
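The depth adjustment in 20130093849 works by shifting the left and right images in opposite horizontal directions, which changes every corresponding pixel pair's disparity by twice the shift. A toy sketch on row lists, with zero-padding at the borders as an assumed border policy (the abstract does not specify one) and an assumed sign convention:

```python
def shift_row(row, shift):
    """Shift one image row horizontally by `shift` pixels, padding with 0."""
    if shift >= 0:
        return [0] * shift + row[: len(row) - shift]
    return row[-shift:] + [0] * (-shift)

def shift_stereo_pair(left, right, shift):
    """Shift left rows by +shift and right rows by -shift.

    Each corresponding pixel pair's horizontal disparity changes by
    2 * shift, moving scene content nearer or farther for the viewer.
    """
    return ([shift_row(r, shift) for r in left],
            [shift_row(r, -shift) for r in right])
```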
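The smooth transition in 20130162641 spreads a depth jump across a group of frames by moving the disparity from a first value to a second value gradually. A sketch assuming a linear ramp (the abstract requires a smooth transition but does not fix the curve):

```python
def disparity_ramp(first, second, num_frames):
    """Return per-frame disparity offsets easing from `first` to `second`.

    Applying these offsets to the group of frames spanning a depth jump
    spreads the jump over the whole group instead of one abrupt cut.
    """
    if num_frames == 1:
        return [second]
    step = (second - first) / (num_frames - 1)
    return [first + i * step for i in range(num_frames)]
```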