Patent application number | Description | Published |
20110013692 | Adaptive Video Transcoding - Embodiments of the invention describe a method for transcoding an input video in a first encoded format to an output video in a second encoded format, wherein the videos include a set of segments and each segment includes frames. First, the method determines a set of downsample-resilient segments and a set of full-resolution segments in the input video. Next, the method downsamples the downsample-resilient segments to produce a set of downsampled segments, and transcodes the input video using the full-resolution segments and the downsampled segments to produce the output video, which includes at least two segments with different resolutions. | 01-20-2011 |
20110090952 | Directional Transforms for Video and Image Coding - A bitstream includes a sequence of frames. Each frame is partitioned into encoded blocks. For each block, a set of paths is determined at a transform angle obtained from a transform index in the bitstream. Transform coefficients are obtained from the bitstream. The transform coefficients include one DC coefficient for each path. An inverse transform is applied to the transform coefficients to produce a decoded video. | 04-21-2011 |
20110090954 | Video Codes with Directional Transforms - An encoded video in the form of a bitstream includes a sequence of frames, and each frame is partitioned into encoded blocks. A context for decoding is selected for each encoded block. The bitstream is entropy decoded based on the context to obtain a transform indicator difference. The transform index, which indicates a transform type and a transform direction, is based on the transform indicator difference and a predicted transform indicator. Transform coefficients are obtained from the bitstream, and inverse transformed according to the transform index to produce a decoded video. | 04-21-2011 |
20120163451 | Method for Coding Videos Using Dictionaries - A video encoded as a bit stream is decoded by maintaining a set of dictionaries generated from decoded prediction residual signals, wherein elements of the set of dictionaries have associated indices. A current macroblock is entropy decoded and inverse quantized to produce decoded coefficients. For the current macroblock, a particular dictionary of the set of dictionaries is selected according to a prediction mode signaled in the bit stream, and particular elements of the particular dictionary are selected according to a copy mode signaled in the bit stream and the associated indices. The particular elements are scaled and combined, using the decoded coefficients, to reconstruct a current decoded macroblock prediction residual signal. Then, the current decoded macroblock prediction residual signal is combined with previously decoded macroblocks to generate an output macroblock of a reconstructed video, wherein the steps are performed in a decoder. | 06-28-2012 |
20120183043 | Method for Training and Utilizing Separable Transforms for Video Coding - A video encoded as a bit stream is decoded using trained sparse orthonormal transforms generated from decoded prediction residual signals, wherein the transforms have associated indices. A current macroblock is entropy decoded and inverse quantized to produce decoded coefficients. For the current macroblock, an L | 07-19-2012 |
20120230396 | Method for Embedding Decoding Information in Quantized Transform Coefficients - A method decodes a picture in a form of a bit-stream. The picture is encoded and represented by vectors of coefficients. Each coefficient is in a quantized form. A specific coefficient is selected in each vector based on a scan order of the vector. Then, a set of modes is inferred based on characteristics of the specific coefficient. Subsequently, the bit-stream is decoded according to the set of modes. | 09-13-2012 |
20120281928 | Method for Coding Pictures Using Hierarchical Transform Units - A bitstream includes coded pictures and split-flags for generating a transform tree. The bitstream also includes a partitioning of coding units (CUs) into Prediction Units (PUs). The transform tree is generated according to the split-flags. Nodes in the transform tree represent transform units (TUs) associated with the CUs. The generation splits each TU only if the corresponding split-flag is set. For each PU that includes multiple TUs, the multiple TUs are merged into a larger TU, and the transform tree is modified according to the splitting and merging. Then, data contained in each PU can be decoded using the TUs associated with the PU according to the transform tree. | 11-08-2012 |
20130003828 | Method for Selecting Transform Types From Mapping Table for Prediction Modes - A method decodes pictures coded in a bitstream, wherein the bitstream includes data for associated TUs, data for generating a transform tree, a partitioning of coding units (CUs) into Prediction Units (PUs), and data for obtaining prediction modes or directions associated with each PU. One or more mapping tables are defined, wherein each row of each table has an associated index and a first set of transform types to be used for applying an inverse transformation to the data in a TU. The first set of transform types is selected according to the index, and then a second set of transform types is applied as the inverse transformation to the data, wherein the second set of transform types is determined according to the first set of transform types and a transform-toggle flag (ttf) to obtain a reconstructed prediction residual. | 01-03-2013 |
20130101232 | Coding Images Using Intra Prediction Modes - A system and a method for decoding at least a portion of an image includes determining a current prediction mode based on a combination of a prediction mode residue and a function of at least one previous prediction mode and decoding the portion of the image using the current prediction mode. | 04-25-2013 |
20130279820 | Method for Coding Pictures Using Hierarchical Transform Units - A bitstream includes coded pictures and split-flags for generating a transform tree. The bitstream also includes a partitioning of coding units (CUs) into Prediction Units (PUs). The transform tree is generated according to the split-flags. Nodes in the transform tree represent transform units (TUs) associated with the CUs. The generation splits each TU only if the corresponding split-flag is set. For each PU that includes multiple TUs, the multiple TUs are merged into a larger TU, and the transform tree is modified according to the splitting and merging. Then, data contained in each PU can be decoded using the TUs associated with the PU according to the transform tree. | 10-24-2013 |
20140169451 | Perceptually Coding Images and Videos - Blocks in pixel images are template matched to select candidate blocks and weights according to a structural similarity and a perceptual distortion of the blocks. The perceptual distortion is a function of a just-noticeable-distortion (JND). A filter outputs a prediction residual between the block and the candidate blocks. The prediction residual is transformed and quantized to produce a quantized prediction residual using the JND. The matching and quantizing are optimized jointly using the perceptual distortion. Then, the quantized prediction residual and the weights are entropy encoded into a bit-stream for later decoding. | 06-19-2014 |
20140192866 | Data Remapping for Predictive Video Coding - A method decodes a picture. The picture is encoded and represented by blocks in a bitstream. For each block, a remap flag is obtained from the bitstream. The block is either a remapped reconstructed block or a non-remapped reconstructed block. Either the non-remapped reconstructed block or an inverse-remapped reconstructed block is output according to the remap flag. The remapped reconstructed block maximizes a similarity with the neighboring blocks, as compared to the similarity of the non-remapped reconstructed block and the neighboring blocks, by applying point operations to the remapped reconstructed block. | 07-10-2014 |
20140307780 | Method for Video Coding Using Blocks Partitioned According to Edge Orientations - A bitstream corresponding to an encoded video is decoded. The encoded video includes a sequence of frames, and each frame is partitioned into encoded blocks. For each encoded block, an edge mode index is decoded based on an edge mode codeword and a prediction mode. The edge mode index indicates a subset of predetermined partitions selected from a partition library according to the prediction mode. The encoded block is partitioned based on the edge mode index to produce two or more block partitions. To each block partition, a coefficient rearrangement, an inverse transform and an inverse quantization are applied to produce a processed block partition. The processed block partitions are then combined into a decoded block for a video. | 10-16-2014 |
20150085920 | Distributed Source Coding using Prediction Modes Obtained from Side Information - In a decoder, a desired image is estimated by first retrieving coding modes from an encoded side information image. For each bitplane in the encoded side information image, syndrome bits or parity bits are decoded to obtain an estimated bitplane of quantized transform coefficients of the desired image. A quantization and a transform are applied to a prediction residual obtained using the coding modes, wherein the decoding uses the quantized transform coefficients of the encoded side information image. The estimated bitplanes of quantized transform coefficients of the desired image are combined to produce combined bitplanes. Then, an inverse quantization, an inverse transform and a prediction based on the coding modes are applied to the combined bitplanes to recover the estimate of the desired image. | 03-26-2015 |
20150085923 | Method for Improving Compression Efficiency of Distributed Source Coding Using Intra-Band Information - In a decoder, a desired image is estimated by first retrieving coding modes from an encoded side information image. For each bitplane in the encoded side information image, syndrome bits or parity bits are decoded to obtain an estimated bitplane of quantized transform coefficients of the desired image. A quantization and a transform are applied to a prediction residual obtained using the coding modes, wherein the decoding uses the quantized transform coefficients of the encoded side information image, and is based on previously decoded bitplanes in a causal neighborhood. The estimated bitplanes of quantized transform coefficients of the desired image are combined to produce combined bitplanes. Then, an inverse quantization, an inverse transform and a prediction based on the coding modes are applied to the combined bitplanes to recover the estimate of the desired image. | 03-26-2015 |
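The hierarchical transform-unit scheme in applications 20120281928 and 20130279820 builds a transform tree by splitting a TU into children only where a split-flag is set. The sketch below is a minimal, hypothetical illustration of that split step as a quadtree driven by a pre-order list of flags; the function names, the flat flag layout, and the minimum TU size are assumptions for illustration (the PU-driven merging step described in the abstracts is omitted).

```python
# Hypothetical sketch of transform-tree generation from split-flags,
# as described in applications 20120281928 / 20130279820. A TU is
# split into four child TUs only when its split-flag is set. The
# pre-order flag layout and min_size are assumptions, not taken
# from the patent text.

def build_transform_tree(size, split_flags, min_size=4):
    """Build a TU quadtree from a pre-order list of split-flags.

    A leaf is represented by its TU size (int); an internal node is
    a list of four child subtrees. Flags are consumed from the front
    of split_flags in pre-order; no flag is read at minimum size.
    """
    if size <= min_size or not split_flags or not split_flags.pop(0):
        return size  # leaf TU: flag not set, or minimum size reached
    half = size // 2
    return [build_transform_tree(half, split_flags, min_size)
            for _ in range(4)]

def leaf_sizes(tree):
    """Flatten the quadtree into the list of leaf TU sizes."""
    if isinstance(tree, int):
        return [tree]
    return [s for child in tree for s in leaf_sizes(child)]
```

For example, a 16x16 TU with flags `[1, 0, 1, 0, 0]` splits the root into four 8x8 TUs and splits only the second of them again, yielding leaf sizes `[8, 4, 4, 4, 4, 8, 8]`.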
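Application 20120230396 infers a set of decoding modes from characteristics of one coefficient selected by scan order, rather than signaling the modes explicitly. The following sketch illustrates that general idea under stated assumptions: the "characteristic" here is the parity of the first nonzero coefficient in scan order, and the function name and parity rule are illustrative inventions, not details from the patent.

```python
# Hypothetical sketch of mode inference from a quantized coefficient,
# in the spirit of application 20120230396. The parity rule (even ->
# mode 0, odd -> mode 1) and the choice of the first nonzero
# coefficient are assumptions for illustration only.

def infer_mode(coeff_vector, scan_order):
    """Select the first nonzero quantized coefficient in scan order
    and infer a binary mode from its parity."""
    for idx in scan_order:
        c = coeff_vector[idx]
        if c != 0:
            return abs(c) % 2  # even -> mode 0, odd -> mode 1
    return 0  # default mode when all coefficients are zero
```

Because the mode is carried by a property the coefficients already have, no extra bits are spent in the bit-stream; the encoder must instead constrain its quantized coefficients so the inferred mode matches the intended one.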