Patent application number | Description | Published |
20100232692 | CFA IMAGE WITH SYNTHETIC PANCHROMATIC IMAGE - A method for forming a final digital color image with reduced motion blur including using a processor for providing images having panchromatic pixels and color pixels corresponding to at least two color photo responses; interpolating the panchromatic pixels and color pixels to produce a full-resolution panchromatic image and a full-resolution color image; producing a full-resolution synthetic panchromatic image from the full-resolution color image; developing color correction weights in response to the synthetic panchromatic image and the panchromatic image; and using the color correction weights to modify the full-resolution color image to provide a final color digital image. | 09-16-2010 |
20100245636 | PRODUCING FULL-COLOR IMAGE USING CFA IMAGE - A method of forming a full-color output image using a color filter array image having a plurality of color channels and a panchromatic channel, comprising capturing a color filter array image having a plurality of color channels and a panchromatic channel, wherein the panchromatic channel is captured using a different exposure time than at least one of the color channels; computing an interpolated color image and an interpolated panchromatic image from the color filter array image; computing a chrominance image from the interpolated color image; and forming the full color output image using the interpolated panchromatic image and the chrominance image. | 09-30-2010 |
20100265370 | PRODUCING FULL-COLOR IMAGE WITH REDUCED MOTION BLUR - A method of forming a full-color output image using a color filter array image having a plurality of color channels and a panchromatic channel, comprising capturing a color filter array image having a plurality of color channels and a panchromatic channel, wherein the panchromatic channel is captured using a different exposure time than at least one of the color channels; computing an interpolated color image and an interpolated panchromatic image from the color filter array image; computing a transform relationship from the interpolated color image; and forming the full color output image using the interpolated panchromatic image and the transform relationship. | 10-21-2010 |
20100302418 | FOUR-CHANNEL COLOR FILTER ARRAY INTERPOLATION - A method of forming a full-color output image from a color filter array image having a plurality of color pixels having at least two different color responses and panchromatic pixels, comprising capturing a color filter array image using an image sensor including panchromatic pixels and color pixels having at least two different color responses, the pixels being arranged in a repeating pattern having a square minimal repeating unit having at least three rows and three columns, the color pixels being arranged along one of the diagonals of the minimal repeating unit, and all other pixels being panchromatic pixels; computing an interpolated panchromatic image from the color filter array image; computing an interpolated color image from the color filter array image; and forming the full color output image from the interpolated panchromatic image and the interpolated color image. | 12-02-2010 |
20100302423 | FOUR-CHANNEL COLOR FILTER ARRAY PATTERN - An image sensor for capturing a color image comprising a two dimensional array of light-sensitive pixels including panchromatic pixels and color pixels having at least two different color responses, the pixels being arranged in a repeating pattern having a square minimal repeating unit having at least three rows and three columns, the color pixels being arranged along one of the diagonals of the minimal repeating unit, and all other pixels being panchromatic pixels. | 12-02-2010 |
20100309347 | INTERPOLATION FOR FOUR-CHANNEL COLOR FILTER ARRAY - A method is described for forming a full-color output image from a color filter array image comprising capturing an image using an image sensor including panchromatic pixels and color pixels having at least two different color responses, the pixels being arranged in a rectangular minimal repeating unit wherein for a first color response, the color pixels having the first color response alternate with panchromatic pixels in at least two directions, and for each of the other color responses there is at least one row, column or diagonal of the repeating pattern that only has color pixels of the given color response and panchromatic pixels. The method further comprising, computing an interpolated panchromatic image from the color filter array image; computing an interpolated color image from the color filter array image; and forming the full color output image from the interpolated panchromatic image and the interpolated color image. | 12-09-2010 |
20100309350 | COLOR FILTER ARRAY PATTERN HAVING FOUR-CHANNELS - An image sensor for capturing a color image comprising a two dimensional array of light-sensitive pixels including panchromatic pixels and color pixels having at least three different color responses, the pixels being arranged in a rectangular minimal repeating unit having at least eight pixels and having at least two rows and two columns, wherein for a first color response, the color pixels having the first color response alternate with panchromatic pixels in at least two directions, and for each of the other color responses there is at least one row, column or diagonal of the repeating pattern that only has color pixels of the given color response and panchromatic pixels. | 12-09-2010 |
20110085745 | SEAM CARVING FOR IMAGE RESIZING - A method for modifying an input digital image having input dimensions defined by a number of input rows and input columns to form an output digital image where the number of rows or columns is reduced by one, comprising: determining an image energy map from the input image; determining a seam path responsive to the image energy map; imposing constraints on the seam path; and removing pixels along the seam path to modify the input digital image. | 04-14-2011 |
20110091132 | COMBINING SEAM CARVING AND IMAGE RESIZING - A method for resizing an input digital image to form an output digital image with an output aspect ratio, comprising: determining a number of rows or columns that need to be reduced from the input digital image; determining an image energy map for the input digital image; repeatedly determining a seam path responsive to the image energy map and removing pixels along the determined seam path to determine the output digital image, wherein the determined seam path satisfies a constraint that a directional image gradient is less than a gradient threshold for each pixel in the seam path, until either the determined number of rows or columns has been reduced or no valid seam path can be found; and cropping or scaling the output digital image to the output aspect ratio if the determined number of rows or columns was not reduced. | 04-21-2011 |
20110096205 | REDUCING SIGNAL-DEPENDENT NOISE IN DIGITAL CAMERAS - A method for producing a noise-reduced digital image captured using a digital imaging system having signal-dependent noise characteristics, comprising: capturing one or more noisy digital images of a scene, wherein the noisy digital images have signal-dependent noise characteristics; defining a functional relationship to relate the noisy digital images to a noise-reduced digital image, wherein the functional relationship includes at least two sets of unknown parameters, and wherein at least one of the sets of unknown parameters relates to the signal-dependent noise characteristics; defining an energy function responsive to the functional relationship which includes at least a data fidelity term to enforce similarities between the noisy digital images and the noise-reduced digital image, and a spatial fidelity term to encourage sharp edges in the noise-reduced digital image; and using an optimization process to determine a noise-reduced image responsive to the energy function. | 04-28-2011 |
20110187902 | DENOISING CFA IMAGES USING WEIGHTED PIXEL DIFFERENCES - A method for reducing noise in an image captured using a digital image sensor having pixels being arranged in a rectangular minimal repeating unit, comprising: computing first weighted pixel differences by combining first pixel differences between the pixel value of a central pixel and pixel values for nearby pixels of the first channel in a plurality of directions with corresponding local edge-responsive weighting values; computing second weighted pixel differences by combining second pixel differences between pixel values for pixels of at least one different channel in the plurality of directions with corresponding local edge-responsive weighting values; and computing a noise-reduced pixel value for the central pixel by combining the first and second weighted pixel differences with the pixel value for the central pixel. | 08-04-2011 |
20110188748 | ITERATIVELY DENOISING COLOR FILTER ARRAY IMAGES - A method for reducing noise in a color image captured using a digital image sensor having pixels being arranged in a rectangular minimal repeating unit. The method comprises, for a first color channel, determining noise-reduced pixel values using a first noise reducing process that includes computing weighted pixel differences by combining the pixel differences with corresponding local edge-responsive weighting values. The method further comprises a second noise reducing process that includes computing weighted chroma differences by combining chroma differences with corresponding local edge-responsive weighting values. | 08-04-2011 |
20110205402 | ZOOM LENS SYSTEM CHARACTERIZATION FOR IMAGE SHARPENING - A method for sharpening an input digital image captured using a digital camera having a zoom lens, comprising: determining a parameterized representation of lens acuity of the zoom lens as a function of at least the lens focal length and lens F/# by fitting a parameterized function to lens acuity data for the zoom lens at a plurality of lens focal lengths and lens F/#s; and using a processor to sharpen the input digital image responsive to the particular lens focal length and lens F/# corresponding to the input digital image using the parameterized representation of the lens acuity. | 08-25-2011 |
20120099793 | VIDEO SUMMARIZATION USING SPARSE BASIS FUNCTION COMBINATION - A method for determining a video summary from a video sequence including a time sequence of video frames, comprising: defining a global feature vector representing the entire video sequence; selecting a plurality of subsets of the video frames; extracting a frame feature vector for each video frame in the selected subsets of video frames; defining a set of basis functions, wherein each basis function is associated with the frame feature vectors for the video frames in a particular subset of video frames; using a data processor to automatically determine a sparse combination of the basis functions representing the global feature vector; determining a summary set of video frames responsive to the sparse combination of the basis functions; and forming the video summary responsive to the summary set of video frames. | 04-26-2012 |
20120148149 | VIDEO KEY FRAME EXTRACTION USING SPARSE REPRESENTATION - A method for identifying a set of key frames from a video sequence including a time sequence of video frames, comprising: extracting a feature vector for each video frame in a set of video frames selected from the video sequence; defining a set of basis functions that can be used to represent the extracted feature vectors, wherein each basis function is associated with a different video frame in the set of video frames; representing the feature vectors for each video frame in the set of video frames as a sparse combination of the basis functions associated with the other video frames; and analyzing the sparse combinations of the basis functions for the set of video frames to select the set of key frames. | 06-14-2012 |
20120148157 | VIDEO KEY-FRAME EXTRACTION USING BI-LEVEL SPARSITY - A method for identifying a set of key frames from a video sequence including a time sequence of video frames, the method executed at least in part by a data processor, comprising: selecting a set of video frames from the video sequence; identifying a plurality of visually homogeneous regions from each of the selected video frames; defining a set of basis functions, wherein each basis function is associated with a different visually homogeneous region; determining a feature vector for each of the selected video frames; representing each of the determined feature vectors as a sparse combination of the basis functions; for each of the determined feature vectors, determining a sparse set of video frames that contain the visually homogeneous regions corresponding to the basis functions included in the corresponding sparse combination of the basis functions; and analyzing the sparse sets of video frames to identify the set of key frames. | 06-14-2012 |
20120275701 | IDENTIFYING HIGH SALIENCY REGIONS IN DIGITAL IMAGES - A method for identifying high saliency regions in a digital image, comprising: segmenting the digital image into a plurality of segmented regions; determining a saliency value for each segmented region, merging neighboring segmented regions that share a common boundary in response to determining that one or more specified merging criteria are satisfied; and designating one or more of the segmented regions to be high saliency regions. The determination of the saliency value for a segmented region includes: determining a surround region including a set of image pixels surrounding the segmented region; analyzing the image pixels in the segmented region to determine one or more segmented region attributes; analyzing the image pixels in the surround region to determine one or more corresponding surround region attributes; determining a region saliency value responsive to differences between the one or more segmented region attributes and the corresponding surround region attributes. | 11-01-2012 |
20130177242 | SUPER-RESOLUTION IMAGE USING SELECTED EDGE PIXELS - A method of providing a super-resolution image is disclosed. The method uses a processor to perform the steps of acquiring a captured low-resolution image of a scene and resizing the low-resolution image to provide a high-resolution image. The method further includes computing local edge parameters including local edge orientations and local edge centers of gravity from the high-resolution image, selecting edge pixels in the high-resolution image responsive to the local edge parameters, and modifying the high-resolution image in response to the selected edge pixels to provide a super-resolution image. | 07-11-2013 |
20130235275 | SCENE BOUNDARY DETERMINATION USING SPARSITY-BASED MODEL - A method for determining a scene boundary location dividing a first scene and a second scene in an input video sequence. The scene boundary location is determined responsive to a merit function value, which is a function of the candidate scene boundary location. The merit function value for a particular candidate scene boundary location is determined by representing the dynamic scene content for the input video frames before and after candidate scene boundary using sparse combinations of a set of basis functions, wherein the sparse combinations of the basis functions are determined by finding a sparse vector of weighting coefficients for each of the basis functions. The weighting coefficients determined for each of the input video frames are combined to determine the merit function value. The candidate scene boundary providing the smallest merit function value is designated to be the scene boundary location. | 09-12-2013 |
20130235939 | VIDEO REPRESENTATION USING A SPARSITY-BASED MODEL - A method for representing a video sequence including a time sequence of input video frames, the input video frames including some common scene content that is common to all of the input video frames and some dynamic scene content that changes between at least some of the input video frames. Affine transforms are determined to align the common scene content in the input video frames. A common video frame including the common scene content is determined by forming a sparse combination of a first set of basis functions. A dynamic video frame is determined for each input video frame by forming a sparse combination of a second set of basis functions, wherein the dynamic video frames can be combined with the respective affine transforms and the common video frame to provide reconstructed video frames. | 09-12-2013 |
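Several of the abstracts above (20110085745 and 20110091132) describe seam carving: determining an image energy map, finding a minimum-energy seam path through it, and removing pixels along that path to reduce the image by one row or column. The following is a minimal illustrative sketch of that general technique for a grayscale NumPy image, not the claimed methods themselves; the gradient-based energy function and all function names are assumptions for illustration, and the patents' additional seam-path constraints are omitted.

```python
import numpy as np

def energy_map(img):
    # Simple image energy: sum of absolute vertical and horizontal gradients.
    gy, gx = np.gradient(img.astype(float))
    return np.abs(gx) + np.abs(gy)

def find_vertical_seam(energy):
    # Dynamic programming: cumulative minimum energy from top row to bottom row.
    h, w = energy.shape
    cost = energy.copy()
    for y in range(1, h):
        left = np.roll(cost[y - 1], 1)
        left[0] = np.inf          # no neighbor left of column 0
        right = np.roll(cost[y - 1], -1)
        right[-1] = np.inf        # no neighbor right of the last column
        cost[y] += np.minimum(np.minimum(left, cost[y - 1]), right)
    # Backtrack the minimum-cost path from bottom to top.
    seam = np.zeros(h, dtype=int)
    seam[-1] = int(np.argmin(cost[-1]))
    for y in range(h - 2, -1, -1):
        x = seam[y + 1]
        lo, hi = max(0, x - 1), min(w, x + 2)
        seam[y] = lo + int(np.argmin(cost[y, lo:hi]))
    return seam

def remove_vertical_seam(img, seam):
    # Delete one pixel per row along the seam, reducing the width by one.
    h, w = img.shape
    out = np.zeros((h, w - 1), dtype=img.dtype)
    for y in range(h):
        out[y] = np.delete(img[y], seam[y])
    return out
```

Repeating `find_vertical_seam` and `remove_vertical_seam` (and the transposed equivalent for rows) until the target size is reached, then cropping or scaling any remainder, mirrors the overall flow described in 20110091132.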