Patent application number | Description | Published |
20090003730 | Method And System For Processing Video Data In A Multipixel Memory To Memory Compositor - A method and system for processing video data using multi-pixel scaling in a memory system are provided. The multi-pixel scaling may include reading pixel data for one or more data streams from the memory system into one or more scalers, wherein each of the data streams includes a plurality of pixels, scaling the pixels via the one or more scalers, and outputting the scaled pixels from the one or more scalers. Pixel data may be sequential or parallel. The plurality of scalers may operate in parallel, scaling sequential pixel data with independent phase control, or scaling parallel pixel data in substantially equal phase. Pixel data may be transposed, replicated, distributed and aligned prior to reading by the scalers, and may be aligned, merged and transposed after scaling. Scaling may include interpolation or sub-sampling using pixel phase, position, step size and scaler quantities. | 01-01-2009 |
20090073319 | Method and System for Processing Chroma Signals - Methods and systems for processing chroma signals are disclosed. Aspects of one method may include compensating a modulated chroma signal for distortions due to audio trap filtering. The compensated chroma signal may be demodulated to generate U and V chroma signals. The U and V chroma signals may be low pass filtered, and then YC aligned. YC alignment may comprise correcting portions of the U and V chroma signals based on the luma signal. The correction may be based on, for example, correlation between the U and V chroma signals and the Y luma signal, and/or energy level of the Y luma signal. The corrected U and V chroma signals may be chroma sharpened, where a gain parameter may be changed to change a transition slope for the U and V chroma signals. The chroma sharpened signals may also be clamped to reduce overshoot and undershoot. | 03-19-2009 |
20090089014 | PERFORMANCE MONITORS IN A MULTITHREADED PROCESSOR ARCHITECTURE - A system comprising a plurality of execution units configured to execute, at least in part, a plurality of instruction threads; a plurality of performance monitors, each performance monitor being configured to collect performance information related to the execution of at least one instruction thread; a selected thread identifier configured to provide, during operation, the selection of at least one instruction thread; and a performance manager configured to filter, utilizing the selected thread, the information collected by the plurality of performance monitors. | 04-02-2009 |
20090180028 | METHOD AND SYSTEM FOR 3-D COLOR ADJUSTMENT BASED ON COLOR REGION DEFINITION USING PWL MODULES - A video processing system may be operable to utilize one-dimensional (1-D) piecewise linear (PWL) functions to adjust chroma and/or luma parameters corresponding to pixels that are determined to fall within one or more N-dimensional color adjustment regions in a spatial representation of the pixels' chroma and luma information. The chroma and/or luma parameters comprise Y, Cb, Cr, saturation and/or hue parameters in systems using Y′CbCr color coding. The 1-D PWL functions are operable to generate adjustment data corresponding to one of the chroma and/or luma parameters, wherein the adjustment data comprise offset or gain data. The 1-D PWL functions are reprogrammable. The 1-D PWL functions may enable smooth transitions in boundary areas of at least some of the N-dimensional color adjustment regions. Determination of whether pixels fall within the color adjustment regions is based on a plurality of boundary points and/or criteria. Adjustment data corresponding to overlapped regions are aggregated. | 07-16-2009 |
20090180030 | METHOD AND SYSTEM FOR SHARPENING THE LUMA AND THE CHROMA SIGNALS - A video processing system may be operable to utilize multi-band sharpening to process luma signals for image signals. The luma signal may be decomposed into a plurality of frequency band components, wherein each component may be processed separately using different sharpening gains and/or offsets. The multi-band processed components may be combined to generate sharpened output luma signals. The multi-band sharpening may be performed utilizing peaking processing, and the input luma signal and/or LTI sharpened luma signals may be combined with the multi-band peaking sharpened signals to generate the sharpened output luma signals. Corresponding chroma signals may also be adjusted to generate sharpened output chroma signals. Luma and/or chroma sharpening operations may be further adjusted based on coring, clipping avoidance, luma statistics, color region detections, and/or curve control parameters. Sharpened output image signals may be generated based on the sharpened output luma signals and the sharpened output chroma signals. | 07-16-2009 |
20090220150 | METHOD AND SYSTEM FOR AUTOMATIC CORRECTION OF FLESH-TONES (SKIN-TONES) - Flesh-tones corrections may be performed to correct color shifts that may occur in transmitted video frames wherein chroma information corresponding to flesh-tone video pixels may be distorted. A target region may be determined based on a determined flesh-tones region within a spatial representation of chroma in video color space, such as Y′CrCb. The flesh-tones correction may utilize one or more methodologies based on an elliptical shape and/or a triangular shape algorithm(s). A video processing system may be utilized to analyze chroma information of received video pixels and/or to perform flesh-tones corrections by shifting the chroma value of received video pixels towards good flesh-tones regions to compensate for possible distortions. The video processing system may perform conversion calculation and/or shift operations dynamically. The video processing system may also utilize lookup tables (LUTs) to convert received chroma values. The LUTs may be programmable to enable modifying and/or updating of the system. | 09-03-2009 |
20090262240 | SYSTEM AND METHOD FOR PROVIDING GRAPHICS USING GRAPHICAL ENGINE - Systems and methods that provide graphics using a graphical engine are provided. In one example, a system may provide layered graphics in a video environment. The system may include a bus, a graphical engine and a graphical pipeline. The graphical engine may be coupled to the bus and may be adapted to composite a plurality of graphical layers into a composite graphical layer. The graphical engine may include a memory that stores the composite graphical layer. The graphical pipeline may be coupled to the bus and may be adapted to transport the composite graphical layer. | 10-22-2009 |
20090296822 | Reduced Memory Mode Video Decode - A method and system to decode a video stream are provided. The method comprises receiving macroblocks, filtering and decimating the macroblocks to create decimated macroblocks and storing the decimated macroblocks. The method further comprises creating a decimated reference block from one or more decimated macroblocks of a decimated reference picture and interpolating selected pixels of the decimated reference block to create an interpolated reference block. The method further comprises pre-processing selected columns of the interpolated reference block to create a processed reference block for motion compensation. | 12-03-2009 |
20100013993 | PULLDOWN FIELD DETECTOR - A system and method for detecting the presence and location of pull-down fields in a video field stream. Various aspects of the present invention may comprise method steps and circuit structure for generating an array of variance indications, each of which represents a degree of variance between two video fields in the video field stream. Various aspects may comprise comparing the array of variance indications to a pattern to detect a pull-down field in the video field stream. Various aspects may comprise comparing corresponding portions of video fields and generating a histogram of differences between the corresponding portions. Various aspects may comprise generating an indication of variance of the histogram and analyzing the indication of variance. Various aspects may comprise analyzing an array of such indications of variance and may comprise comparing the array of such indications to a pattern or plurality of patterns. | 01-21-2010 |
20100066902 | FILTER MODULE FOR A VIDEO DECODING SYSTEM - Systems and methods are disclosed for filter modules in a video display system or network. One embodiment relates to a method for operating a filter module in a video display network comprising determining a picture type, display type and operation of the display network. The method further comprises determining, in real time, a filter configuration from a plurality of possible filter configurations based on the determined picture type, display type and operation. | 03-18-2010 |
20100086060 | MPEG FIELD DATA-DRIVEN DISPLAY - A system and method that support display of video fields using related data encoded in data structures. Each data structure is associated with one video field and contains all the information associated with the display of the video field. The data structure is encoded with the video field that is displayed exactly one field prior to the field associated with the data structure. In an embodiment of the present invention, the data structure contains all the information associated with the display of a video field, regardless of whether certain data changes from one field to the next. | 04-08-2010 |
20100188583 | SYSTEM AND METHOD FOR VIDEO PROCESSING DEMONSTRATION - Systems and methods for processing a video signal are disclosed and may include degrading a received video signal utilizing one or more of a plurality of video signal degrading methods. The degraded video signal may be processed to generate an improved video signal. At least a portion of the degraded video signal and a corresponding portion of the improved video signal may be displayed. Random noise may be added to the received video signal to generate the degraded video signal. Noise within the degraded video signal may be reduced to generate the improved video signal utilizing digital noise reduction and/or analog noise reduction. The received video signal may be compressed and decompressed to generate the random noise. The received video signal may be softened to generate the degraded video signal. The degraded video signal may be sharpened to generate the improved video signal. | 07-29-2010 |
20100215284 | System and Method for Implementing Graphics and Video Scaling Algorithm Using Interpolation Based on Symmetrical Polyphase Filtering - A video processing system may implement a video scaling algorithm using interpolation based on polyphase filtering. A video or graphics scaler may be utilized to scale luma, chroma, and/or alpha information in a video image. The scaler may comprise a first polyphase sub-filtering with zero phase shift that generates an in-phase filtered output from an input video image and a second polyphase sub-filtering that generates an out-of-phase filtered output from the input video image. The video scaler may also comprise an interpolator that may generate a scaled video image based on the generated in-phase and out-of-phase filtered outputs and a scaling factor. The scaling factor may be determined based on an input video size (M) and a desired output video size (N). The interpolation of the generated in-phase and out-of-phase filtered outputs in the video scaler may be implemented by utilizing a Farrow structure. | 08-26-2010 |
20110032331 | METHOD AND SYSTEM FOR 3D VIDEO FORMAT CONVERSION - A 3-dimensional (3D) video receiver may be operable to deinterlace a decompressed 3D video frame having a 3D video interlaced format to generate a first 3D video frame having a first 3D video progressive format. The generated first 3D video frame having the first 3D video progressive format may be converted to generate a second 3D video frame having a second 3D video progressive format. The generated first 3D video frame having the first 3D video progressive format may be scaled to generate the second 3D video frame having the second 3D video progressive format. When the 3D video receiver operates in an electronic program guide mode or a graphics over video mode, the generated second 3D video frame may be blended with graphics. The second 3D video frame comprising a 50 Hz frame rate may be frame-rate upconverted to a third 3D video frame comprising a 60 Hz frame rate. | 02-10-2011 |
20110032332 | METHOD AND SYSTEM FOR MULTIPLE PROGRESSIVE 3D VIDEO FORMAT CONVERSION - A 3-dimensional (3D) video receiver may be operable to scale a decompressed 3D video frame having a first 3D video progressive format to generate a 3D video frame having a second 3D video progressive format, where the second 3D video progressive format comprises a high-definition multimedia interface (HDMI) format. When operating in an electronic program guide mode or a graphics over video mode, the 3D video frame having the second 3D video progressive format may be blended with graphics. The 3D video frame having the second 3D video progressive format may be converted to generate a 3D video frame having a 3D video interlaced format by performing a pulldown. The 3D video frame having the second 3D video progressive format at a 50 Hz frame rate may be frame-rate upconverted to generate a 3D video frame having a third 3D video progressive format at a 60 Hz frame rate. | 02-10-2011 |
20110032333 | METHOD AND SYSTEM FOR 3D VIDEO FORMAT CONVERSION WITH INVERSE TELECINE - A 3-dimensional (3D) video receiver may be operable to convert a decompressed 3D video frame having a 3D video interlaced format to generate a first 3D video frame having a first 3D video progressive format by performing an inverse pulldown. The generated first 3D video frame having the first 3D video progressive format may be converted to generate a second 3D video frame having a second 3D video progressive format. The generated first 3D video frame having the first 3D video progressive format may be scaled to generate the second 3D video frame having the second 3D video progressive format. When the 3D video receiver is operating in an electronic program guide (EPG) mode or in a graphics over video mode, the generated second 3D video frame having the second 3D video progressive format may be blended with graphics. | 02-10-2011 |
20110087487 | METHOD AND SYSTEM FOR MEMORY USAGE IN REAL-TIME AUDIO SYSTEMS - System and method for encoding, transmitting and decoding audio data. Audio bitstream syntax is re-organized to allow system optimizations that work well with memory latency and memory burst operations. Multiple small entropy coding tables are stored in RAM and loaded to on-chip memory as needed. Audio prediction is pipelined in the bitstream syntax. Intra frames, independent of other frames in the bitstream, are included in the bitstream for error recovery and channel change. New algorithms are implemented in legacy syntax by including the new information in the user data space of the audio frame. The new decoder can use projection to determine where the new information is and read ahead in the stream. Audio prediction from the immediately previous frame is restricted. Audio prediction is performed across channels within a single audio frame. A variable re-order function comprises storing channels of data to DRAM in the order they are decoded and reading them out in presentation order. | 04-14-2011 |
20110134211 | METHOD AND SYSTEM FOR HANDLING MULTIPLE 3-D VIDEO FORMATS - Aspects of a method and system for handling multiple 3-D video formats are provided. A video processing system may receive one or more video frames comprising first 3-D view pixel data and second 3-D view pixel data suitable for generating a three-dimensional (3-D) video frame. The video processing system may be operable to determine an arrangement of the first 3-D view pixel data and the second 3-D view pixel data in the one or more video frames. In instances that the determined arrangement is not a desired arrangement, the video processing system may be operable to convert the one or more video frames to the desired arrangement. Either or both of the determined arrangement and the desired arrangement may comprise a series of two single-view frames. Either or both of the determined arrangement and the desired arrangement may comprise a single frame comprising the first 3-D view pixel data and the second 3-D view pixel data. | 06-09-2011 |
20110134212 | METHOD AND SYSTEM FOR PROCESSING 3-D VIDEO - A video processing system may receive a first frame comprising pixel data for a first 3-D view of an image, which may be referred to as first 3-D view pixel data, and receive a second frame comprising pixel data for a second 3-D view of the image, which may be referred to as second 3-D view pixel data. The system may generate a multi-view frame comprising the first 3-D view pixel data and the second 3-D view pixel data. The system may make a decision for performing processing of the image, wherein the decision is generated based on one or both of the first 3-D view pixel data and/or the second 3-D view pixel data. The system may process the 3-D multi-view frame based on the decision. The image processing operation may comprise, for example, deinterlacing, filtering, and cadence processing such as 3:2 pulldown. | 06-09-2011 |
20110134216 | METHOD AND SYSTEM FOR MIXING VIDEO AND GRAPHICS - A method and system are provided in which a video processor may select a 2D video output format or a 3D video output format. The video processor may generate composited video data by combining video data from a video source, and one or both of video data from additional video sources and graphics data from graphics source(s). The video processor may select the order in which such combination is to occur. The video data from the various video sources may comprise one or both of 2D video data and 3D video data. The graphics data from the graphics sources may comprise one or both of 2D graphics data and 3D graphics data. The video processor may perform 2D-to-3D and/or 3D-to-2D format conversion when appropriate to generate the composited video data in accordance with the selected output video format. | 06-09-2011 |
20110134217 | METHOD AND SYSTEM FOR SCALING 3D VIDEO - A method and system are provided in which an integrated circuit (IC) comprises multiple devices that may be selectively interconnected to route and process 3D video data. The IC may be operable to determine whether to scale the 3D video data before the 3D video data is captured to memory or after the captured 3D video data is retrieved from memory, and selectively interconnect one or more of the devices based on the determination. The selective interconnection may be based on input and output formats of the 3D video data, and on a scaling factor. The input format may be a left-and-right (L/R) format or an over-and-under (O/U) format. Similarly, the output format may be a L/R format or an O/U format. The selective interconnection may be based on input and output pixel rates of the 3D video data. Moreover, the selective interconnection may be determined on a picture-by-picture basis. | 06-09-2011 |
20110134218 | METHOD AND SYSTEM FOR UTILIZING MOSAIC MODE TO CREATE 3D VIDEO - A method and system are provided in which a video feeder may receive video data from multiple sources. The video data from one or more of those sources may comprise three-dimensional (3D) video data. The video data from each source may be stored in corresponding different areas in memory during a capture time for a single picture. Each of the different areas in memory may correspond to a different window of multiple windows in an output video picture. The video data from each source may be stored in memory in a 2D format or in a 3D format, based on a format of the output video picture. When a 3D format is to be used, left-eye and right-eye information may be stored in different portions of memory. The video data may be read from memory to a single buffer during a feed time for a single picture. | 06-09-2011 |
20120212673 | Method and System for Processing Video Data in a Multipixel Memory to Memory Compositor - A method and system for processing video data using multi-pixel scaling in a memory system are provided. The multi-pixel scaling may include reading pixel data for one or more data streams from the memory system into one or more scalers, wherein each of the data streams includes a plurality of pixels, scaling the pixels via the one or more scalers, and outputting the scaled pixels from the one or more scalers. Pixel data may be sequential or parallel. The plurality of scalers may operate in parallel, scaling sequential pixel data with independent phase control, or scaling parallel pixel data in substantially equal phase. Pixel data may be transposed, replicated, distributed and aligned prior to reading by the scalers, and may be aligned, merged and transposed after scaling. Scaling may include interpolation or sub-sampling using pixel phase, position, step size and scaler quantities. | 08-23-2012 |
20120300128 | Systems and Methods for Mitigating Visible Envelope Effects - Systems and methods in accordance with embodiments of the present invention are provided to compensate for the “envelope effect” that appears to an end user as a result of the sampling and digital processing of near-Nyquist frequency components of a video information signal. Embodiments of the present invention improve image quality by effectively nullifying gamma correction in areas where the envelope effect exists, enabling the human eye to perceive the displayed signal without the envelope effect. | 11-29-2012 |
20130044261 | VIDEO SOURCE RESOLUTION DETECTION - Embodiments for video content source resolution detection are provided. Embodiments enable systems and methods that measure video content source resolution and that provide image-by-image source scale factor measurements to picture quality (PQ) processing modules. With the source scale factor information, PQ processing modules can be adapted dynamically (on a picture-by-picture basis) according to the source scale factor information for better picture quality enhancement. In addition, embodiments provide source resolution detection that is minimally affected by video coding artifacts and superimposed content (e.g., graphics). | 02-21-2013 |
20130120420 | METHOD AND SYSTEM FOR EFFICIENTLY ORGANIZING DATA IN MEMORY - A method and system for efficiently organizing data in memory is provided. Exemplary aspects of the invention may include storing linear data and block data in more than one DRAM device and accessing the data with one read/write access cycle. Common control signals may be used to control the DRAM devices and the address lines used to address each DRAM device may be independent from one another. The data read from the DRAM devices may be reordered to make the data more suitable for processing by applications. | 05-16-2013 |
20130120448 | SYSTEM AND METHOD FOR PROVIDING GRAPHICS USING GRAPHICAL ENGINE - Systems and methods that provide graphics using a graphical engine are provided. One such system includes at least one graphical pipeline and a graphical engine. The at least one graphical pipeline is coupled to a bus and operable to generate a plurality of graphical layers. The graphical engine is coupled to the bus and operable to receive, over the bus, the plurality of graphical layers. The graphical engine is operable to composite the received plurality of graphical layers into a composite graphical layer, and to store the composite graphical layer in a local memory of the graphical engine. | 05-16-2013 |
20130138842 | MULTI-PASS SYSTEM AND METHOD SUPPORTING MULTIPLE STREAMS OF VIDEO - Systems and methods are disclosed for performing multiple processing of data in a network. In one embodiment, the network comprises a first display pipeline that is formed in real time from a plurality of possible display pipelines and that performs at least a first processing step on received data. A buffer stores the processed data and a second display pipeline that is formed in real time from a plurality of possible display pipelines performs at least a second processing step on stored data. | 05-30-2013 |
20130148018 | Video Source Resolution Detection - Embodiments for video content source resolution detection are provided. Embodiments enable systems and methods that measure video content source resolution and that provide image-by-image source scale factor measurements to picture quality (PQ) processing modules. With the source scale factor information, PQ processing modules can be adapted dynamically (on a picture-by-picture basis) according to the source scale factor information for better picture quality enhancement. In addition, embodiments provide source resolution detection that is minimally affected by video coding artifacts and superimposed content (e.g., graphics). | 06-13-2013 |
20130336411 | REDUCING MOTION COMPENSATION MEMORY BANDWIDTH THROUGH MEMORY UTILIZATION - A system and method for processing video information. Various aspects of the present invention may provide a decoder module that decodes block encoded video information. The system may, for example, comprise a first memory module, communicatively coupled to the decoder module, that stores video processing information utilized by the decoder module for decoding a current video block from a current video frame. The system may also, for example, comprise a second memory module, communicatively coupled to the decoder module, that stores reference video information from a previous video frame utilized by the decoder module for decoding the current video block. In a non-limiting exemplary scenario, the first memory module and the second memory module may be communicatively coupled to the decoder module with independent respective data and/or address buses. | 12-19-2013 |
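The 1-D piecewise-linear (PWL) adjustment functions described in application 20090180028 can be illustrated with a minimal sketch. This is a hypothetical evaluation routine, not code from the patent: the function name, the breakpoint representation, and the sample saturation-gain curve are all illustrative assumptions.

```python
def pwl_eval(x, points):
    """Evaluate a 1-D piecewise-linear function at x.

    points: list of (x, y) breakpoints sorted by x. Inputs outside the
    breakpoint range clamp to the value of the nearest end breakpoint.
    """
    if x <= points[0][0]:
        return points[0][1]
    if x >= points[-1][0]:
        return points[-1][1]
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        if x0 <= x <= x1:
            # Linear interpolation within the enclosing segment.
            t = (x - x0) / (x1 - x0)
            return y0 + t * (y1 - y0)

# Illustrative reprogrammable gain curve: unity gain outside a color
# region, boosted gain near its center, with smooth PWL transitions at
# the region boundaries (the "smooth transitions in boundary areas").
gain_curve = [(0, 1.0), (64, 1.0), (128, 1.5), (192, 1.0), (255, 1.0)]
```

Because the curve is just a breakpoint list, "reprogramming" the adjustment amounts to replacing the list, which matches the abstract's emphasis on reprogrammable PWL functions.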
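The pulldown detector of application 20100013993 compares an array of field-variance indications against a cadence pattern. A hedged sketch of that idea for the common 3:2 cadence follows; the function name, the thresholding scheme, and the phase convention are assumptions for illustration, not the patent's method.

```python
def detect_32_pulldown(variances, threshold):
    """Look for a 3:2 pulldown cadence in a list of variance
    indications, one per pair of fields being compared.

    In 3:2 pulldown, one field in every five is a repeat of an earlier
    field, so an unusually low variance should recur with period 5.
    Returns the cadence phase (0-4) of the repeated field when the
    pattern matches, or None when no 3:2 cadence is detected.
    """
    low = [v < threshold for v in variances]
    for phase in range(5):
        # Pattern match: low variance exactly at this phase, higher
        # variance at every other position in the array.
        if all(is_low == ((i % 5) == phase) for i, is_low in enumerate(low)):
            return phase
    return None
```

A real detector would also examine the histogram-based variance measure the abstract describes and match against multiple patterns (e.g. 2:2 as well as 3:2), but the array-against-pattern comparison is the core step.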
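The two-branch interpolator of application 20100215284 blends an in-phase filtered output with an out-of-phase filtered output according to the M-to-N scaling factor. The sketch below uses deliberately trivial branch "filters" (the input sample itself as the in-phase branch, a two-tap half-sample average as the out-of-phase branch), so the blend reduces to linear interpolation; a real scaler would use longer symmetric polyphase filters and a Farrow structure. All names and the edge-clamping policy are illustrative assumptions.

```python
def scale_1d(x, n):
    """Resample x (length m) to n samples by interpolating between an
    in-phase branch (zero phase shift: x[k]) and an out-of-phase
    branch (half-sample shift: a two-tap average)."""
    m = len(x)
    out = []
    for i in range(n):
        pos = i * m / n              # step size is M/N per output sample
        k = int(pos)
        f = pos - k                  # fractional phase within the sample
        x0 = x[min(k, m - 1)]        # in-phase sample (phase 0)
        x1 = x[min(k + 1, m - 1)]    # next sample, clamped at the edge
        half = (x0 + x1) / 2         # out-of-phase sample (phase 0.5)
        if f <= 0.5:
            # Interpolate between the phase-0 and phase-0.5 outputs.
            out.append(x0 + (f / 0.5) * (half - x0))
        else:
            # Interpolate between the phase-0.5 and phase-1 outputs.
            out.append(half + ((f - 0.5) / 0.5) * (x1 - half))
    return out
```

The same structure applies per component (luma, chroma, alpha) and per dimension; only the branch filters and the phase accumulator precision change in a production scaler.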