Patent application number | Description | Published |
20100309346 | AUTOMATIC TONE MAPPING FOR CAMERAS - A device, method, computer useable medium, and processor programmed to automatically generate tone mapping curves in a digital camera based on image metadata are described. By examining image metadata from a digital camera's sensor, such as the light-product, one can detect sun-lit, high-light, and low-light scenes. Once the light-product value has been calculated for a given image, a tone mapping curve can automatically be generated within the sensor and adjusted appropriately for the scene based on predetermined parameters. Further, it has been determined that independently varying the slopes of the tone mapping curve at the low end (S | 12-09-2010 |
20100309975 | IMAGE ACQUISITION AND TRANSCODING SYSTEM - A method and system are provided to encode a video sequence into a compressed bitstream. An encoder receives a video sequence from an image-capture device, together with metadata associated with the video sequence, and codes the video sequence into a first compressed bitstream using the metadata to select or revise a coding parameter associated with a coding operation. Optionally, the video sequence may be conditioned for coding by a preprocessor, which also may use the metadata to select or revise a preprocessing parameter associated with a preprocessing operation. The encoder may itself generate metadata associated with the first compressed bitstream, which may be used together with any metadata received by the encoder, to transcode the first compressed bitstream into a second compressed bitstream. The compressed bitstreams may be decoded by a decoder to generate recovered video data, and the recovered video data may be conditioned for viewing by a postprocessor, which may use the metadata to select or revise a postprocessing parameter associated with a postprocessing operation. | 12-09-2010 |
20100309987 | IMAGE ACQUISITION AND ENCODING SYSTEM - A method and system are provided to encode a video sequence into a compressed bitstream. An encoder receives a video sequence from an image-capture device, together with metadata associated with the video sequence, and codes the video sequence into a first compressed bitstream using the metadata to select or revise a coding parameter associated with a coding operation. Optionally, the video sequence may be conditioned for coding by a preprocessor, which also may use the metadata to select or revise a preprocessing parameter associated with a preprocessing operation. The encoder may itself generate metadata associated with the first compressed bitstream, which may be used together with any metadata received by the encoder, to transcode the first compressed bitstream into a second compressed bitstream. The compressed bitstreams may be decoded by a decoder to generate recovered video data, and the recovered video data may be conditioned for viewing by a postprocessor, which may use the metadata to select or revise a postprocessing parameter associated with a postprocessing operation. | 12-09-2010 |
20110032992 | METHOD AND APPARATUS FOR H.264 TO MPEG-2 VIDEO TRANSCODING - A method for transcoding from an H.264 format to an MPEG-2 format is disclosed. The method generally comprises the steps of (A) decoding an input video stream in the H.264 format to generate a picture having a plurality of macroblock pairs that used an H.264 macroblock adaptive field/frame coding; (B) determining a mode indicator for each of the macroblock pairs; and (C) coding the macroblock pairs into an output video stream in the MPEG-2 format using one of (i) an MPEG-2 field mode coding and (ii) an MPEG-2 frame mode coding as determined from the mode indicators. | 02-10-2011 |
20110074931 | SYSTEMS AND METHODS FOR AN IMAGING SYSTEM USING MULTIPLE IMAGE SENSORS - Systems and methods may employ separate image sensors for collecting different types of data. In one embodiment, separate luma, chroma and 3-D image sensors may be used. The systems and methods may involve generating an alignment transform for the image sensors, and using the 3-D data from the 3-D image sensor to process disparity compensation. The systems and methods may involve image sensing, capture, processing, rendering and/or generating images. For example, one embodiment may provide an imaging system, including: a first image sensor configured to obtain luminance data of a scene; a second image sensor configured to obtain chrominance data of the scene; a third image sensor configured to obtain three-dimensional data of the scene; and an image processor configured to receive the luminance, chrominance and three-dimensional data and to generate a composite image corresponding to the scene from that data. | 03-31-2011 |
20110090242 | SYSTEM AND METHOD FOR DEMOSAICING IMAGE DATA USING WEIGHTED GRADIENTS - Various techniques are provided herein for the demosaicing of images acquired and processed by an imaging system. The imaging system includes an image signal processor and image sensors utilizing color filter arrays (CFA) for acquiring red, green, and blue color data using one pixel array. In one embodiment, the CFA may include a Bayer pattern. During image signal processing, demosaicing may be applied to interpolate missing color samples from the raw image pattern. In one embodiment, interpolation for the green color channel may include employing edge-adaptive filters with weighted gradients of horizontal and vertical filtered values. The red and blue color channels may be interpolated using color difference samples with co-located interpolated values of the green color channel. In another embodiment, interpolation of the red and blue color channels may be performed using color ratios (e.g., versus color difference data). | 04-21-2011 |
20110090351 | TEMPORAL FILTERING TECHNIQUES FOR IMAGE SIGNAL PROCESSING - Various techniques for temporally filtering raw image data acquired by an image sensor are provided. In one embodiment, a temporal filter determines a spatial location of a current pixel and identifies at least one collocated reference pixel from a previous frame. A motion delta value is determined based at least partially upon the current pixel and its collocated reference pixel. Next, an index is determined based upon the motion delta value and a motion history value corresponding to the spatial location of the current pixel, but from the previous frame. Using the index, a first filtering coefficient may be selected from a motion table. After selecting the first filtering coefficient, an attenuation factor may be selected from a luma table based upon the value of the current pixel, and a second filtering coefficient may subsequently be determined based upon the selected attenuation factor and the first filtering coefficient. The temporally filtered output value corresponding to the current pixel may then be calculated based upon the second filtering coefficient, the current pixel, and the collocated reference pixel. | 04-21-2011 |
20110090364 | Integrated Camera Image Signal Processor and Video Encoder - An apparatus including a first circuit and a second circuit. The first circuit may be configured to perform image signal processing using encoding related information. The second circuit may be configured to encode image data using image signal processing related information, wherein said first circuit is further configured to pass said image signal processing related information to said second circuit and said second circuit is further configured to pass said encoding related information to said first circuit. | 04-21-2011 |
20110090370 | SYSTEM AND METHOD FOR SHARPENING IMAGE DATA - Various techniques relating to image sharpening are provided. In one embodiment, a luminance image is obtained based upon image data acquired by an image sensor. A multi-scale unsharp mask, which may include at least two Gaussian filters of difference radii, is applied to the luminance image to determine a plurality of unsharp values. Each of the unsharp values may be compared to a corresponding threshold and, for the unsharp values that exceed their respective thresholds, the unsharp value is multiplied by a corresponding gain and added to a base image, which may be selected as the luminance image or the output of one of the Gaussian filters. Each gained unsharp value may be summed with the base image to produce a final sharpened output. In some embodiments, an attenuated gain may be applied to unsharp values that do not exceed their respective thresholds. | 04-21-2011 |
20110090371 | SYSTEM AND METHOD FOR DETECTING AND CORRECTING DEFECTIVE PIXELS IN AN IMAGE SENSOR - Various techniques are provided for the detection and correction of defective pixels in an image sensor. In accordance with one embodiment, a static defect table storing the locations of known static defects is provided, and the location of a current pixel is compared to the static defect table. If the location of the current pixel is found in the static defect table, the current pixel is identified as a static defect and is corrected using the value of the previous pixel of the same color. If the current pixel is not identified as a static defect, a dynamic defect detection process includes comparing pixel-to-pixel gradients between the current pixel and a set of neighboring pixels against a dynamic defect threshold. If a dynamic defect is detected, a replacement value for correcting the dynamic defect may be determined by interpolating the value of two neighboring pixels on opposite sides of the current pixel in a direction exhibiting the smallest gradient. | 04-21-2011 |
20110090380 | IMAGE SIGNAL PROCESSOR FRONT-END IMAGE DATA PROCESSING SYSTEM AND METHOD - Various techniques are provided herein for processing raw image data in front-end processing logic of an image signal processing system. In one embodiment, the front-end processing logic includes a statistics processing unit configured to process raw image data acquired by an image sensor to obtain one or more sets of statistics. The statistics processing unit may first correct defective pixels in the raw image data and then correct lens shading errors in the raw image data prior to extracting the statistics information. In certain embodiments, black level compensation may be applied between the defective pixel correction and lens shading correction steps, and inverse black level compensation may be applied between the lens shading correction step and the extraction of the statistics information. The acquired statistics information may be utilized by an image signal processing pipeline for converting the raw image data into a color (e.g., RGB) and/or luma (e.g., YCbCr) image. | 04-21-2011 |
20110090381 | SYSTEM AND METHOD FOR PROCESSING IMAGE DATA USING AN IMAGE PROCESSING PIPELINE OF AN IMAGE SIGNAL PROCESSOR - Various techniques are provided herein for processing raw image data acquired using a digital image sensor in an image processing pipeline of an image signal processing system. In one embodiment, the image processing pipeline may first process the raw image data (e.g., Bayer image data) for the detection and correction of defective pixels. Next, the image processing pipeline may process the raw image data to reduce noise. Thereafter, the image processing pipeline may correct lens shading distortion in the raw image data and, subsequently, apply a demosaicing algorithm to convert the raw image data into full color image data (e.g., RGB image data). The color image data may be further processed by the image processing pipeline to correct color and gamma properties prior to being converted into a luma and chroma color space (e.g., YCbCr color space). | 04-21-2011 |
20110091101 | SYSTEM AND METHOD FOR APPLYING LENS SHADING CORRECTION DURING IMAGE PROCESSING - Various techniques for lens shading correction are provided. In one embodiment, the location of a current pixel is determined relative to a gain grid having a plurality of grid points distributed in horizontal and vertical directions. If the location of the current pixel corresponds to a grid point, a lens shading gain associated with that grid point is applied to the current pixel. If the location of the current pixel is between four grid points, bi-linear interpolation is applied to the four grid points to determine an interpolated lens shading gain. In another embodiment, a radial gain grid may be provided, and lens shading gains may be interpolated based upon grid points neighboring a current pixel in the radial and angular directions. In a further embodiment, a radial lens shading gain is determined by determining a radial distance from the center of the image to the current pixel and multiplying the radial distance by a global gain parameter based upon the color of the current pixel. The radial lens shading gain is then applied to the current pixel, along with the determined lens shading grid gain or lens shading interpolated gain. | 04-21-2011 |
20110122940 | METHOD AND APPARATUS FOR VC-1 TO MPEG-2 VIDEO TRANSCODING - A method for transcoding from a VC-1 format to an MPEG-2 format is disclosed. The method generally comprises the steps of (A) decoding an input video stream in the VC-1 format to generate a picture; (B) determining a first mode indicator for the picture; and (C) coding the picture into an output video stream in the MPEG-2 format using one of (i) an MPEG-2 field mode coding and (ii) an MPEG-2 frame mode coding as determined from the first mode indicator. | 05-26-2011 |
20120002082 | Capturing and Rendering High Dynamic Range Images - Some embodiments of the invention provide a mobile device that captures and produces images with high dynamic ranges. To capture and produce a high dynamic range image, the mobile device of some embodiments includes novel image capture and processing modules. In some embodiments, the mobile device produces a high dynamic range (HDR) image by (1) having its image capture module rapidly capture a succession of images at different image exposure durations, and (2) having its image processing module composite these images to produce the HDR image. | 01-05-2012 |
20120002727 | METHOD AND APPARATUS FOR MPEG-2 TO VC-1 VIDEO TRANSCODING - A method for transcoding from an MPEG-2 format to a VC-1 format is disclosed. The method generally comprises the steps of (A) decoding an input video stream in the MPEG-2 format to generate a picture; (B) determining a mode indicator for the picture; and (C) coding the picture into an output video stream in the VC-1 format using one of (i) a VC-1 field mode coding and (ii) a VC-1 frame mode coding as determined from the mode indicator. | 01-05-2012 |
20120002898 | Operating a Device to Capture High Dynamic Range Images - Some embodiments provide a method of operating a device to capture an image of a high dynamic range (HDR) scene. Upon the device entering an HDR mode, the method captures and stores multiple images at a first image exposure level. Upon receiving a command to capture the HDR scene, the method captures a first image at a second image exposure level. The method selects a second image from the captured plurality of images. The method composites the first and second images to produce a composite image that captures the HDR scene. In some embodiments, the method captures multiple images at multiple different exposure levels. | 01-05-2012 |
20120002899 | Aligning Images - Some embodiments provide a method of aligning a pair of images. The method defines multiple different pairs of images at multiple different resolutions. The method hierarchically aligns the original pair of images by first aligning the pair of images at the lowest resolution and then aligning each pair of images at each higher resolution based on the alignments of the pair of images at the lower resolutions. For some of the resolutions, to perform the hierarchical alignment, the method identifies, for at least one image at each resolution, portions that are suitable for performing the alignment and portions that are not suitable for performing the alignment. The method compares each pair of images at a particular resolution by using the suitable portions while excluding the unsuitable portions from the comparison. | 01-05-2012 |
20120147952 | METHOD AND APPARATUS FOR H.264 TO MPEG-2 VIDEO TRANSCODING - A method for transcoding from an H.264 format to an MPEG-2 format is disclosed. The method generally comprises the steps of (A) decoding an input video stream in the H.264 format to generate a picture having a plurality of macroblock pairs that used an H.264 macroblock adaptive field/frame coding; (B) determining a mode indicator for each of the macroblock pairs; and (C) coding the macroblock pairs into an output video stream in the MPEG-2 format using one of (i) an MPEG-2 field mode coding and (ii) an MPEG-2 frame mode coding as determined from the mode indicators. | 06-14-2012 |
20120230404 | VIDEO BITSTREAM TRANSCODING METHOD AND APPARATUS - A video transcoder is disclosed. The video transcoder generally comprises a processor and a video digital signal processor. The processor may be formed on a first die. The video digital signal processor may be formed on a second die and coupled to the processor. The video digital signal processor may have (i) a first module configured to perform a first operation in decoding an input video stream in a first format and (ii) a second module configured to perform a second operation in coding an output video stream in a second format, wherein the first operation and the second operation are performed in parallel. | 09-13-2012 |
20120230415 | METHOD AND APPARATUS FOR MPEG-2 TO H.264 VIDEO TRANSCODING - A method for transcoding from an MPEG-2 format to an H.264 format is disclosed. The method generally comprises the steps of (A) decoding an input video stream in the MPEG-2 format to generate a plurality of macroblocks; (B) determining a plurality of indicators from a pair of the macroblocks, the pair of the macroblocks being vertically adjoining; and (C) coding the pair of the macroblocks into an output video stream in the H.264 format using one of (i) a field mode coding and (ii) a frame mode coding as determined from the indicators. | 09-13-2012 |
20130002907 | METHOD AND/OR ARCHITECTURE FOR MOTION ESTIMATION USING INTEGRATED INFORMATION FROM CAMERA ISP - A camera comprising a first circuit and a second circuit. The first circuit may be configured to perform image signal processing using encoding related information. The second circuit may be configured to encode image data using image signal processing related information. The first circuit may be further configured to pass the image signal processing related information to the second circuit. The second circuit may be further configured to pass the encoding related information to the first circuit. The second circuit may be further configured to modify one or more motion estimation processes based upon the information from the first circuit. | 01-03-2013 |
20130121403 | METHOD AND APPARATUS FOR QP MODULATION BASED ON PERCEPTUAL MODELS FOR PICTURE ENCODING - A method for encoding a picture is disclosed. The method generally includes the steps of (A) generating at least one respective macroblock statistic from each of a plurality of macroblocks in the picture, (B) generating at least one global statistic from the picture and (C) generating a respective macroblock quantization parameter for each of the macroblocks based on both (i) the at least one respective macroblock statistic and (ii) said at least one global statistic. | 05-16-2013 |
20130286242 | FLASH SYNCHRONIZATION USING IMAGE SENSOR INTERFACE TIMING SIGNAL - Certain aspects of this disclosure relate to an image signal processing system that includes a flash controller that is configured to activate a flash device prior to the start of a target image frame by using a sensor timing signal. In one embodiment, the flash controller receives a delayed sensor timing signal and determines a flash activation start time by using the delayed sensor timing signal to identify a time corresponding to the end of the previous frame, increasing that time by a vertical blanking time, and then subtracting a first offset to compensate for delay between the sensor timing signal and the delayed sensor timing signal. Then, the flash controller subtracts a second offset to determine the flash activation time, thus ensuring that the flash is activated prior to receiving the first pixel of the target frame. | 10-31-2013 |
20130321671 | SYSTEMS AND METHOD FOR REDUCING FIXED PATTERN NOISE IN IMAGE DATA - The present disclosure generally relates to systems and methods for image data processing. In certain embodiments, an image processing pipeline may be configured to receive a frame of the image data having a plurality of pixels acquired using a digital image sensor. The image processing pipeline may then be configured to determine a first plurality of correction factors that may correct each pixel in the plurality of pixels for fixed pattern noise. The first plurality of correction factors may be determined based at least in part on fixed pattern noise statistics that correspond to the frame of the image data. After determining the first plurality of correction factors, the image processing pipeline may be configured to apply the first plurality of correction factors to the plurality of pixels, thereby reducing the fixed pattern noise present in the plurality of pixels. | 12-05-2013 |
20130321672 | SYSTEMS AND METHODS FOR COLLECTING FIXED PATTERN NOISE STATISTICS OF IMAGE DATA - The present disclosure generally relates to systems and methods for image data processing. In certain embodiments, an image processing pipeline may collect statistics associated with fixed pattern noise of image data by receiving a first frame of the image data comprising a plurality of pixels. The image processing pipeline may then determine a sum of a first plurality of pixel values that correspond to at least a first portion of the plurality of pixels such that each pixel in at least the first portion of the plurality of pixels is disposed along a first axis within the frame of the image data. After determining the sum of the first plurality of pixel values, the image processing pipeline may store the sum of the first plurality of pixel values in a memory such that the sum of the first plurality of pixel values represent the statistics. | 12-05-2013 |
20130321674 | Image Signal Processing Involving Geometric Distortion Correction - Systems and methods for correcting geometric distortion are provided. In one example, an electronic device may include an imaging device, which may obtain image data of a first resolution, and geometric distortion correction and scaling logic. The imaging device may include a sensor and a lens that causes some geometric distortion in the image data. The geometric distortion correction and scaling logic may scale and correct for geometric distortion in the image data by determining first pixel coordinates in uncorrected or partially corrected image data that, when resampled, would produce corrected output image data at second pixel coordinates. The geometric distortion correction and scaling logic may resample pixels of the image data around the first pixel coordinates to obtain the corrected output image data at the second pixel coordinates. The corrected output image data may be of a second resolution. | 12-05-2013 |
20130321675 | RAW SCALER WITH CHROMATIC ABERRATION CORRECTION - Systems and methods for down-scaling are provided. In one example, a method for processing image data includes determining a plurality of output pixel locations using a position value stored by a position register, using the current position value to select a center input pixel from the image data and to select an index value, selecting a set of input pixels adjacent to the center input pixel, selecting a set of filtering coefficients from a filter coefficient lookup table using the index value, filtering the set of input pixels by applying a respective one of the set of filtering coefficients to each input pixel to determine an output value for the current output pixel at the current position value, and correcting chromatic aberrations in the set of input pixels. | 12-05-2013 |
20130321676 | Green Non-Uniformity Correction - Systems and methods for correcting green channel non-uniformity (GNU) are provided. In one example, GNU may be corrected using energies between the two green channels (Gb and Gr) during green interpolation processes for red and green pixels. Accordingly, the processes may be efficiently employed through implementation using demosaic logic hardware. In addition, the green values may be corrected based on low-pass-filtered values of the green pixels (Gb and Gr). Additionally, green post-processing may provide some defective pixel correction on interpolated greens by correcting artifacts generated through enhancement algorithms. | 12-05-2013 |
20130321677 | SYSTEMS AND METHODS FOR RAW IMAGE PROCESSING - Systems and methods for processing raw image data are provided. One example of such a system may include memory to store image data in raw format from a digital imaging device and an image signal processor to process the image data. The image signal processor may include data conversion logic and a raw image processing pipeline. The data conversion logic may convert the image data into a signed format to preserve negative noise from the digital imaging device. The raw image processing pipeline may at least partly process the image data in the signed format. The raw image processing pipeline may also include, among other things, black level compensation logic, fixed pattern noise reduction logic, temporal filtering logic, defective pixel correction logic, spatial noise filtering logic, lens shading correction logic, and highlight recovery logic. | 12-05-2013 |
20130321678 | SYSTEMS AND METHODS FOR LENS SHADING CORRECTION - Systems and methods for correcting intensity drop-offs due to geometric properties of lenses are provided. In one example, a method includes receiving an input pixel of the image data, the image data acquired using an image sensor. A color component of the input pixel is determined. A gain grid having a plurality of grid points is determined by pointing to the gain grid in external memory. Each of the grid points is associated with a lens shading gain selected based upon the color component of the input pixel. A nearest set of grid points that enclose the input pixel is identified. Further, a lens shading gain is determined by interpolating the lens shading gains associated with each of the set of grid points and is applied to the input pixel. | 12-05-2013 |
20130321700 | Systems and Methods for Luma Sharpening - Systems, methods, and devices for sharpening image data are provided. One example of an image signal processing system includes a YCC processing pipeline that includes luma sharpening logic. The luma sharpening logic may sharpen the luma component while avoiding sharpening some noise. Specifically, a multi-scale unsharp mask filter may obtain unsharp signals by filtering an input luma component, and sharp component determination logic may determine sharp signals representing differences between the unsharp signals and the luma component. Sharp lookup tables may “core” the sharp signals, which may prevent some noise from being sharpened. Output logic may determine a sharpened output luma signal by combining the sharp signals with, for example, the luma component or one of the unsharp signals. | 12-05-2013 |
20130322745 | Local Image Statistics Collection - Systems and methods for generating local image statistics are provided. In one example, an image signal processing system may include a statistics pipeline with image processing logic and local image statistics collection logic. The image processing logic may receive and process pixels of raw image data. The local image statistics collection logic may generate a local histogram associated with a luminance of the pixels of a first block of pixels of the raw image data or a thumbnail in which a pixel of the thumbnail represents a downscaled version of the luminance of the pixels of the first block of pixels. The raw image data may include many other blocks of pixels of the same size as the first block of pixels. | 12-05-2013 |
20130322746 | SYSTEMS AND METHODS FOR YCC IMAGE PROCESSING - Systems and methods for processing YCC image data are provided. In one example, an electronic device includes memory to store image data in RGB or YCC format and a YCC image processing pipeline to process the image data. The YCC image processing pipeline may include receiving logic configured to receive the image data in RGB or YCC format and color space conversion logic configured to, when the image data is received in RGB format, convert the image data into YCC format. The YCC image processing logic may also include luma sharpening and chroma suppression logic; brightness, contrast, and color adjustment logic; gamma logic; chroma decimation logic; scaling logic; and chroma noise reduction logic. | 12-05-2013 |
20130322753 | SYSTEMS AND METHODS FOR LOCAL TONE MAPPING - Systems and methods for local tone mapping are provided. In one example, an electronic device includes an electronic display, an imaging device, and an image signal processor. The electronic display may display images of a first bit depth, and the imaging device may include an image sensor that obtains image data of a higher bit depth than the first bit depth. The image signal processor may process the image data, and may include local tone mapping logic that may apply a spatially varying local tone curve to a pixel of the image data to preserve local contrast when displayed on the display. The local tone mapping logic may smooth the local tone curve when the intensity difference between the pixel and another nearby pixel exceeds a threshold. | 12-05-2013 |
20140010480 | SYSTEMS AND METHODS FOR STATISTICS COLLECTION USING CLIPPED PIXEL TRACKING - Systems and methods are provided for selectively performing image statistics processing based at least partly on whether a pixel has been clipped. In one example, an image signal processor may include statistics collection logic. The statistics collection logic may include statistics image processing logic and a statistics core. The statistics image processing logic may perform initial image processing on image pixels, at least occasionally causing some of the image pixels to become clipped. The statistics core may obtain image statistics from the image pixels. The statistics core may obtain at least one of the image statistics using only pixels that have not been clipped and excluding pixels that have been clipped. | 01-09-2014 |
20140133749 | Systems And Methods For Statistics Collection Using Pixel Mask - Systems and methods are provided for collecting image statistics using a pixel mask. In one example, statistics collection logic of an image signal processor may include a pixel weighting mask and accumulation logic. The pixel weighting mask may receive a first representation of a pixel that includes a luma and chroma representation of the pixel. The pixel weighting mask may output a pixel weighting value using first and second chroma components of the luma and chroma representation of the pixel. The accumulation logic may receive the first or a second representation of the pixel and the pixel weighting value. Using these, the accumulation logic may weight the second representation of the pixel or the first representation of the pixel using the pixel weighting value to obtain a weighted pixel value, and add the weighted pixel value to a statistics count. | 05-15-2014 |
20140205005 | METHOD AND APPARATUS FOR MPEG-2 TO H.264 VIDEO TRANSCODING - A method for transcoding from an MPEG-2 format to an H.264 format is disclosed. The method generally comprises the steps of (A) decoding an input video stream in the MPEG-2 format to generate a plurality of macroblocks; (B) determining a plurality of indicators from a pair of the macroblocks, the pair of the macroblocks being vertically adjoining; and (C) coding the pair of the macroblocks into an output video stream in the H.264 format using one of (i) a field mode coding and (ii) a frame mode coding as determined from the indicators. | 07-24-2014 |
20140240587 | FLASH SYNCHRONIZATION USING IMAGE SENSOR INTERFACE TIMING SIGNAL - Certain aspects of this disclosure relate to an image signal processing system that includes a flash controller that is configured to activate a flash device prior to the start of a target image frame by using a sensor timing signal. In one embodiment, the flash controller receives a delayed sensor timing signal and determines a flash activation start time by using the delayed sensor timing signal to identify a time corresponding to the end of the previous frame, increasing that time by a vertical blanking time, and then subtracting a first offset to compensate for delay between the sensor timing signal and the delayed sensor timing signal. Then, the flash controller subtracts a second offset to determine the flash activation time, thus ensuring that the flash is activated prior to receiving the first pixel of the target frame. | 08-28-2014 |
20150062382 | OPERATING A DEVICE TO CAPTURE HIGH DYNAMIC RANGE IMAGES - Some embodiments provide a method of operating a device to capture an image of a high dynamic range (HDR) scene. Upon the device entering an HDR mode, the method captures and stores multiple images at a first image exposure level. Upon receiving a command to capture the HDR scene, the method captures a first image at a second image exposure level. The method selects a second image from the captured plurality of images. The method composites the first and second images to produce a composite image that captures the HDR scene. In some embodiments, the method captures multiple images at multiple different exposure levels. | 03-05-2015 |
20150084968 | NEIGHBOR CONTEXT CACHING IN BLOCK PROCESSING PIPELINES - Methods and apparatus for caching neighbor data in a block processing pipeline that processes blocks in knight's order with quadrow constraints. Stages of the pipeline may maintain two local buffers that contain data from neighbor blocks of a current block. A first buffer contains data from the last C blocks processed at the stage. A second buffer contains data from neighbor blocks on the last row of a previous quadrow. Data for blocks on the bottom row of a quadrow are stored to an external memory at the end of the pipeline. When a block on the top row of a quadrow is input to the pipeline, neighbor data from the bottom row of the previous quadrow is read from the external memory and passed down the pipeline, each stage storing the data in its second buffer and using the neighbor data in the second buffer when processing the block. | 03-26-2015 |
20150084969 | NEIGHBOR CONTEXT PROCESSING IN BLOCK PROCESSING PIPELINES - A block processing pipeline in which blocks are input to and processed according to row groups so that adjacent blocks on a row are not concurrently at adjacent stages of the pipeline. A stage of the pipeline may process a current block according to neighbor pixels from one or more neighbor blocks. Since adjacent blocks are not concurrently at adjacent stages, the left neighbor of the current block is at least two stages downstream from the stage. Thus, processed pixels from the left neighbor can be passed back to the stage for use in processing the current block without the need to wait for the left neighbor to complete processing at a next stage of the pipeline. In addition, the neighbor blocks may include blocks from the row above the current block. Information from these neighbor blocks may be passed to the stage from an upstream stage of the pipeline. | 03-26-2015 |
20150084970 | REFERENCE FRAME DATA PREFETCHING IN BLOCK PROCESSING PIPELINES - Block processing pipeline methods and apparatus in which pixel data from a reference frame is prefetched into a search window memory. The search window may include two or more overlapping regions of pixels from the reference frame corresponding to blocks from the rows in the input frame that are currently being processed in the pipeline. Thus, the pipeline may process blocks from multiple rows of an input frame using one set of pixel data from a reference frame that is stored in a shared search window memory. The search window may be advanced by one column of blocks by initiating a prefetch for a next column of reference data from a memory. The pipeline may also include a reference data cache that may be used to cache a portion of a reference frame and from which at least a portion of a prefetch for the search window may be satisfied. | 03-26-2015 |
20150085931 | DELAYED CHROMA PROCESSING IN BLOCK PROCESSING PIPELINES - A block processing pipeline in which macroblocks are input to and processed according to row groups so that adjacent macroblocks on a row are not concurrently at adjacent stages of the pipeline. The input method may allow chroma processing to be postponed until after luma processing. One or more upstream stages of the pipeline may process luma elements of each macroblock to generate luma results such as a best mode for processing the luma elements. Luma results may be provided to one or more downstream stages of the pipeline that process chroma elements of each macroblock. The luma results may be used to determine processing of the chroma elements. For example, if the best mode for luma is an intra-frame mode, then a chroma processing stage may determine a best intra-frame mode for chroma and reconstruct the chroma elements according to the best chroma intra-frame mode. | 03-26-2015 |
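The clipped-pixel tracking scheme in application 20140010480 can be illustrated with a minimal sketch. This is not code from the patent; the 8-bit clip range and the choice of a mean as the collected statistic are assumptions for illustration only:

```python
# Illustrative sketch (assumed 8-bit range, mean chosen as the example
# statistic): accumulate a statistic only over pixels that were not clipped.
CLIP_MAX = 255  # assumed saturation value after initial image processing

def collect_statistics(pixels):
    """Return the mean of unclipped pixels, excluding clipped ones."""
    total = 0
    count = 0
    for p in pixels:
        if 0 < p < CLIP_MAX:  # skip pixels clipped at either end of the range
            total += p
            count += 1
    return total / count if count else None

print(collect_statistics([10, 255, 40, 0, 70]))  # only 10, 40, 70 counted -> 40.0
```

The exclusion keeps saturated highlights and crushed blacks from biasing the statistic, which is the stated motivation for tracking clip status per pixel.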
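The flash activation timing described in application 20140240587 reduces to simple arithmetic on the delayed sensor timing signal. A hedged sketch follows; the function name, parameter names, and the example time units are illustrative assumptions, not part of the patent:

```python
def flash_activation_time(prev_frame_end, vblank, delay_offset, safety_offset):
    """Compute when to activate the flash so it is on before the first
    pixel of the target frame, following the scheme described above:
    end of previous frame + vertical blanking, minus a first offset for
    the delay between the sensor signal and its delayed copy, minus a
    second offset as activation margin."""
    target_frame_start = prev_frame_end + vblank
    return target_frame_start - delay_offset - safety_offset

# Example with arbitrary time units:
print(flash_activation_time(1000, 50, 5, 10))  # -> 1035, i.e. before the frame start at 1050
```

Because both offsets are subtracted, the computed activation time always precedes the start of the target frame, which is the guarantee the abstract emphasizes.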
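The first local buffer in application 20150084968 holds data from the last C blocks processed at a pipeline stage. A minimal sketch of that buffering idea, using a bounded deque, is below; the stage logic and C value are illustrative assumptions, and the knight's-order input schedule and second (quadrow) buffer are omitted:

```python
from collections import deque

def process_stage(blocks, C=3):
    """Model one pipeline stage's first local buffer: the last C blocks
    processed at the stage, available as neighbor context for the
    current block."""
    recent = deque(maxlen=C)  # bounded buffer of the last C blocks
    results = []
    for b in blocks:
        # The current block is processed with the cached neighbor data.
        results.append((b, list(recent)))
        recent.append(b)  # oldest entry is evicted automatically at capacity
    return results

out = process_stage(["b0", "b1", "b2", "b3"], C=2)
print(out[3])  # ('b3', ['b1', 'b2'])
```

A fixed-capacity buffer like this avoids external memory traffic for recently processed neighbors; only bottom-row quadrow data needs the round trip through external memory described in the abstract.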