Patent application number | Description | Published |
20110200097 | ADAPTIVE TRANSFORM SIZE SELECTION FOR GEOMETRIC MOTION PARTITIONING - In one example, an apparatus includes a video encoder configured to partition a block of video data into a first geometric partition and a second geometric partition using a geometric motion partition line, wherein the block comprises N×N pixels, divide the block of video data into four equally-sized, non-overlapping (N/2)×(N/2) sub-blocks, and encode at least one of the sub-blocks through which the geometric motion partition line passes using a transform size smaller than (N/2)×(N/2). The video encoder may determine transform sizes for the sub-blocks based on whether the geometric motion partition line passes through the sub-blocks. In one example, a video decoder may inverse transform the sub-blocks, and may determine transform sizes for the sub-blocks based on whether the geometric motion partition line passes through the sub-blocks. | 08-18-2011 |
20110200110 | SMOOTHING OVERLAPPED REGIONS RESULTING FROM GEOMETRIC MOTION PARTITIONING - In one example, an apparatus includes a video encoder configured to partition a block of video data into a first partition and a second partition using a geometric motion partition line, calculate a prediction value of a pixel in a transition region of the block using a filter that applies a value for at least one neighboring pixel from the first partition and a value for at least one neighboring pixel from the second partition, calculate a residual value of the pixel in the transition region of the block based on the prediction value of the pixel in the transition region, and output the residual value of the pixel. In one example, a video decoder may use a similar filter to decode the encoded block after receiving the residual value for the encoded block, using a definition of the geometric motion partition line. | 08-18-2011 |
20110200111 | ENCODING MOTION VECTORS FOR GEOMETRIC MOTION PARTITIONING - In one example, an apparatus includes a video encoder configured to partition a block of video data into a first partition and a second partition using a geometric motion partition line, determine a first motion vector for the first partition and a second motion vector for the second partition, encode the first motion vector based on a first motion predictor selected from motion vectors for blocks neighboring the first partition, encode the second motion vector based on a second motion predictor selected from motion vectors for blocks neighboring the second partition, wherein the blocks neighboring the second partition are determined independently of the blocks neighboring the first partition, and output the encoded first and second motion vectors. A video decoder may similarly decode the motion vectors based on determining the first and second motion predictors for the first and second partitions. | 08-18-2011 |
20110248873 | VARIABLE LENGTH CODES FOR CODING OF VIDEO DATA - A method and system for entropy coding can comprise, in response to detecting a first symbol combination comprising first run information indicating a first number of contiguous zero coefficients is greater than a cut-off-run value, assigning a first codeword to the first symbol combination, wherein the first codeword comprises an escape code from a first-level VLC table; and in response to detecting a second symbol combination comprising second run information indicating a second number of contiguous zero coefficients is less than or equal to the cut-off-run value, assigning a second codeword to the second symbol combination, wherein the second codeword is from the first-level VLC table. The system and method can further comprise collecting coding statistics for a set of candidate symbol combinations and adjusting a mapping between codewords of the first-level VLC table and a subset of the set of candidate symbol combinations based on the coding statistics. | 10-13-2011 |
20110249721 | VARIABLE LENGTH CODING OF CODED BLOCK PATTERN (CBP) IN VIDEO COMPRESSION - This disclosure describes techniques for coding video data. As one example, this disclosure describes a coded block pattern (CBP) for a coding unit (CU) of video data that indicates whether or not each of a luminance component (Y), a first chrominance component (U), and a second chrominance component (V) includes at least one non-zero coefficient. According to another example, this disclosure describes a CBP that indicates whether respective blocks of a CU include at least one non-zero coefficient. The CBP described herein may be mapped to a single variable length code (VLC) code word. The VLC code word may be used by a coder to code the CU of video data. | 10-13-2011 |
20110249745 | BLOCK AND PARTITION SIGNALING TECHNIQUES FOR VIDEO CODING - A video block syntax element indicates whether all of the partitions of a video block are predicted based on a same reference list and no greater than quarter-pixel accuracy is used. If the video block syntax element is set, partition-level signaling of the reference lists is avoided. If the video block syntax element is not set, partition-level signaling of the reference lists occurs. If the video block syntax element is set, partition-level syntax elements may be used for each of the partitions of the video block, wherein the partition-level syntax elements each identify one of the reference lists and motion vector accuracy for a given one of the partitions. | 10-13-2011 |
20110249754 | VARIABLE LENGTH CODING OF CODED BLOCK PATTERN (CBP) IN VIDEO COMPRESSION - In one example, this disclosure describes method of coding video data. The method comprises coding a block of video data as one or more luminance blocks of transform coefficients and one or more chrominance blocks of transform coefficients, and coding a coded block pattern (CBP) for the block of video data. The CBP comprises syntax information that identifies whether non-zero data is included in each of the luminance blocks and each of the chrominance blocks. Coding the CBP includes selecting one or more variable length coding (VLC) tables based on a transform size used in performing one or more transforms on the one or more luminance blocks. | 10-13-2011 |
20110310976 | Joint Coding of Partition Information in Video Coding - In one example, a video decoder is configured to receive a value for a coding unit of video data, wherein the coding unit is partitioned into a plurality of sub-coding units, determine whether the sub-coding units are partitioned into further sub-coding units based on the value, and decode the sub-coding units and the further sub-coding units. In another example, a video encoder is configured to partition a coding unit of video data into a plurality of sub-coding units, determine whether to partition the sub-coding units into further sub-coding units, and encode the coding unit to include a value that indicates whether the sub-coding units are partitioned into the further sub-coding units. | 12-22-2011 |
20120027088 | CODING MOTION PREDICTION DIRECTION IN VIDEO CODING - This disclosure relates to techniques for reducing a cost of coding prediction information in video coding. Video blocks in a generalized P/B (GPB) frame are encoded using up to two motion vectors calculated from reference pictures in two separate reference picture lists that are identical. When one of the reference picture lists is preferred over the other reference picture list, the preferred reference picture list may be used for unidirectional prediction, by default. When a GPB frame is enabled such that the first and second reference picture lists are identical, either of the first and second reference picture lists may be used for unidirectional prediction. The techniques include coding one or more syntax elements indicating that a video block is coded using one of the unidirectional prediction mode with respect to a reference picture in a reference picture list and the bidirectional prediction mode using less than two bits. | 02-02-2012 |
20120027089 | CODING MOTION VECTORS IN VIDEO CODING - This disclosure relates to techniques for reducing a cost of coding prediction information in video coding. Video blocks in a generalized P/B (GPB) frame are encoded using up to two motion vectors calculated from reference pictures in two separate reference picture lists that are identical. Video blocks of a GPB frame may, therefore, be encoded using a bidirectional prediction mode with a first motion vector from a reference picture in a first reference picture list and a second motion vector from the same or substantially similar reference picture in a second reference picture list. The techniques include jointly coding the first and second motion vectors for a video block of a GPB frame. The techniques include coding the first motion vector relative to a first motion predictor generated from a motion vector of a neighboring block, and coding the second motion vector relative to the first motion vector. | 02-02-2012 |
20120082210 | CODING PREDICTION MODES IN VIDEO CODING - A video encoder can maintain, by generating, storing, adjusting, altering, and/or updating, one or more variable length coding (VLC) tables that represent a mapping of prediction modes to codewords. One or more codewords representing a selected prediction mode can be communicated to the decoder for a CU of a frame. The decoder maintains one or more VLC tables that match the VLC tables maintained by the video encoder. Thus, based on the one or more codewords received from the video encoder, the video decoder can determine the prediction mode used to encode a CU. | 04-05-2012 |
20120082222 | VIDEO CODING USING INTRA-PREDICTION - In general, techniques of this disclosure are related to determining a prediction characteristic associated with a coding unit of video data, wherein determining the prediction characteristic includes determining a prediction type that defines a number of prediction units associated with the coding unit. Techniques of this disclosure may also be related to generating a set of available intra-prediction modes for the coding unit based on the prediction characteristic, selecting an intra-prediction mode from the available intra-prediction modes, and applying one of the available intra-prediction modes to code the coding unit. | 04-05-2012 |
20120082223 | INDICATING INTRA-PREDICTION MODE SELECTION FOR VIDEO CODING - For a block of video data, a video encoder can signal to a video decoder a selected intra-prediction mode using a codeword that is mapped to a modified intra-prediction mode index. The video decoder can receive the codeword, determine the modified intra-prediction mode index corresponding to the codeword, determine most probable modes based on a context, map the modified intra-prediction mode index to an intra-prediction mode index by comparing the modified intra-prediction mode index to the mode indexes of the most probable modes, and determine the selected intra-prediction mode used to encode the block of video data based on the intra-prediction mode index. | 04-05-2012 |
20120082224 | INTRA SMOOTHING FILTER FOR VIDEO CODING - This disclosure relates to techniques for reducing the amount of additional data encoded with a block encoded using intra-predictive coding. Particularly, the techniques provide apparatus and methods of applying a smoothing filter to prediction samples used in intra-predictive coding. For example, in fixed mode-dependent intra-predictive coding, a video encoder may determine the type of smoothing filter applied to prediction samples based on block size and intra-prediction mode combination associated with the current block, where the combination is used to look up a filter in a first filter table. In adaptive mode-dependent intra-predictive coding, the encoder uses two filters, one from the first filter table and another from a second filter table, applies both filters, and determines which yields better results. When the second filter table filter yields better results, the encoder encodes a filtering indication. When a filter from the first filter table is used, no filtering indication is encoded. | 04-05-2012 |
20120082230 | VARIABLE LENGTH CODING OF VIDEO BLOCK COEFFICIENTS - This disclosure describes techniques for coding transform coefficients for a block of video data. According to one aspect of this disclosure, a coder (e.g., an encoder or decoder) may map between a code number cn and a level_ID value and a run value based on a structured mapping. According to other aspects of this disclosure, the coder may map between a code number cn and a level_ID value and a run value for the current transform coefficient using a first technique or a second technique based on a coded block type of a block of video data being coded. For example, if the coded block type is a first coded block type, the coder may use a structured mapping. However, if the coded block type is a second coded block type different than the first coded block type, the coder may access one or more mapping tables stored in memory to perform the mapping. | 04-05-2012 |
20120106649 | JOINT CODING OF SYNTAX ELEMENTS FOR VIDEO CODING - In one example, a video decoder is configured to determine whether a component of a transform unit of a coding unit of video data includes at least one non-zero coefficient based on a codeword for the transform unit, determine whether the transform unit is split into sub-transform units based on the codeword, and decode the transform unit based on the determinations. In another example, a video encoder is configured to determine whether a component of a transform unit of a coding unit of video data includes at least one non-zero coefficient, determine whether the transform unit is split into sub-transform units, select a codeword from a variable length code table, wherein the variable length code table provides an indication that the codeword corresponds to the determinations, and provide the codeword for the transform unit. | 05-03-2012 |
20120140822 | VIDEO CODING USING FUNCTION-BASED SCAN ORDER FOR TRANSFORM COEFFICIENTS - Video coding devices and methods use a function-based definition of scan order to scan transform coefficients associated with a block of residual video data. A video coder may define a scan order for coefficients based on a predefined function and one or more parameter values. A video encoder may use a function-based scan order to scan a two-dimensional array of coefficients to produce a one-dimensional array of coefficients for use in producing encoded video data. The video encoder may signal the parameters to a video decoder, or the video decoder may infer one or more of the parameters. The video decoder may use the function-based scan order to scan a one-dimensional array of coefficients to reproduce the two-dimensional array of coefficients for use in producing decoded video data. In each case, the scan order may vary according to the parameter values, which may include block size, orientation, and/or orientation strength. | 06-07-2012 |
20120147947 | CODEWORD ADAPTATION FOR VARIABLE LENGTH CODING - In one example, this disclosure describes a method of codeword adaptation for variable length coding. The method includes applying a first codeword adaptation scheme to groups of codewords in a variable length coding (VLC) table to change mappings of codewords within the groups to events in the VLC table; and applying a second codeword adaptation scheme to individual codewords within the groups of codewords in the VLC table to change mappings of the codewords to the events within the groups in the VLC table. | 06-14-2012 |
20120147970 | CODEWORD ADAPTATION FOR VARIABLE LENGTH CODING - In one example, this disclosure describes a method of codeword adaptation for variable length coding. The method comprises determining if a number of codewords stored in a variable length coding (VLC) table satisfies a threshold; selecting a codeword adaptation scheme from a group of two or more codeword adaptation schemes based on whether the number of codewords satisfies the threshold; and applying the selected adaptation scheme to the codewords stored in the VLC table. | 06-14-2012 |
20120147971 | CODEWORD ADAPTATION FOR VARIABLE LENGTH CODING - In one example, this disclosure describes a method of codeword adaptation for variable length coding. The method comprises applying a first codeword adaptation scheme to a first group of codewords of a variable length coding (VLC) table to change a mapping of codewords to events in the VLC table; and applying a second codeword adaptation scheme to a second group of codewords of the VLC table to change the mapping of the codewords to the events in the VLC table. | 06-14-2012 |
20120163471 | VARIABLE LENGTH CODING OF VIDEO BLOCK COEFFICIENTS - This disclosure describes techniques for coding transform coefficients for a block of video data. According to some aspects of this disclosure, an encoder or decoder may map between a code number cn and last_pos and level_ID syntax elements associated with a block of video data based on a scaling factor S. The scaling factor S may be based on a size of the block of video data being coded. | 06-28-2012 |
20120170662 | VARIABLE LENGTH CODING OF VIDEO BLOCK COEFFICIENTS - This disclosure describes techniques for coding transform coefficients for a block of video data. According to some aspects of this disclosure, a coder (e.g., an encoder or decoder) may map between a code number cn and level_ID and run values associated with a first transform coefficient of the block of video data according to a first technique (e.g., a structured mapping), and map between a code number cn and level_ID and run values associated with a second coefficient of the block using a second technique. According to other aspects of this disclosure, the coder may map between a code number cn and level_ID and run syntax elements using different mathematical relationships, depending on a determined value of the code number cn or the level_ID syntax element. For example, the coder may access a mapping table of a plurality of mapping tables differently, dependent on the determined value. | 07-05-2012 |
20120177118 | INDICATING INTRA-PREDICTION MODE SELECTION FOR VIDEO CODING USING CABAC - For a block of video data, a video encoder can signal to a video decoder, using a context-based adaptive binary arithmetic coding (CABAC) process, a selected intra-prediction mode using a codeword that is mapped to a modified intra-prediction mode index. The video decoder can perform a context-based adaptive binary arithmetic coding (CABAC) process to determine the codeword signaled by the video encoder, determine the modified intra-prediction mode index corresponding to the codeword, determine most probable modes based on a context, map the modified intra-prediction mode index to an intra-prediction mode index by comparing the modified intra-prediction mode index to the mode indexes of the most probable modes, and determine the selected intra-prediction mode used to encode the block of video data based on the intra-prediction mode index. | 07-12-2012 |
20120230421 | TRANSFORMS IN VIDEO CODING - Aspects of this disclosure relate to a method of coding video data. In an example, the method includes determining a first residual quadtree (RQT) depth at which to apply a first transform to luma information associated with a block of video data, wherein the RQT represents a manner in which transforms are applied to luma information and chroma information. The method also includes determining a second RQT depth at which to apply a second transform to the chroma information associated with the block of video data, wherein the second RQT depth is different than the first RQT depth. The method also includes coding the luma information at the first RQT depth and the chroma information at the second RQT depth. | 09-13-2012 |
20120236931 | TRANSFORM COEFFICIENT SCAN - This disclosure describes techniques for coding transform coefficients for a block of video data. According to these techniques, a video encoder may adaptively scan a first plurality of coefficients of a two-dimensional matrix of coefficients, and use a fixed scan technique for a second plurality of coefficients of the two-dimensional matrix, to generate a one-dimensional vector of transform coefficients. Also according to these techniques, a video decoder may adaptively scan a first plurality of coefficients of a one-dimensional vector of coefficients, and use a fixed scan technique for a second plurality of coefficients of the one-dimensional vector, to generate a two-dimensional matrix of transform coefficients. | 09-20-2012 |
20120307888 | RUN-MODE BASED COEFFICIENT CODING FOR VIDEO CODING - A video coding device is configured to code coefficients of residual blocks of video data. When a coefficient of a transform unit of video data has a scan order value that is less than a threshold and when the coefficient is the last significant coefficient in a scan order in the transform unit, the video coding device may execute a function to determine a mapping between data for the coefficient and a codeword index value, and code the data for the coefficient using a codeword associated with the codeword index value. The video coding device may comprise a video encoder or a video decoder, in some examples. | 12-06-2012 |
20120314766 | ENHANCED INTRA-PREDICTION MODE SIGNALING FOR VIDEO CODING USING NEIGHBORING MODE - This disclosure describes techniques for intra-prediction mode signaling for video coding. In one example, a video coder is configured to determine, for a block of video data, a set of most probable intra-prediction modes such that the set of most probable intra-prediction modes has a size that is equal to a predetermined number that is greater than or equal to two. The video coder is also configured to code a value representative of an actual intra-prediction mode for the block based at least in part on the set of most probable intra-prediction modes and code the block using the actual intra-prediction mode. The video coder may further be configured to code the block using the actual intra-prediction mode, e.g., to encode or decode the block. Video encoders and video decoders may implement these techniques. | 12-13-2012 |
20120314767 | BORDER PIXEL PADDING FOR INTRA PREDICTION IN VIDEO CODING - A video coder performs a padding operation that processes a set of border pixels according to an order. The order starts at a bottom-left border pixel and proceeds through the border pixels sequentially to a top-right border pixel. When the padding operation processes an unavailable border pixel, the padding operation predicts a value of the unavailable border pixel based on a value of a border pixel previously processed by the padding operation. The video coder may generate an intra-predicted video block based on the border pixels. | 12-13-2012 |
20120320968 | UNIFIED MERGE MODE AND ADAPTIVE MOTION VECTOR PREDICTION MODE CANDIDATES SELECTION - A unified candidate block set for both adaptive motion vector prediction (AMVP) mode and merge mode for use in inter-prediction is proposed. In general, the same candidate block set is used regardless of which motion vector prediction mode (e.g., merge mode or AMVP mode) is used. In other examples of this disclosure, one candidate block in a set of candidate blocks is designated as an additional candidate block. The additional candidate block is used if one of the other candidate blocks is unavailable. Also, the disclosure proposes a checking pattern where the left candidate block is checked before the below left candidate block. Also, the above candidate block is checked before the right above candidate block. | 12-20-2012 |
20120320969 | UNIFIED MERGE MODE AND ADAPTIVE MOTION VECTOR PREDICTION MODE CANDIDATES SELECTION - A unified candidate block set for both adaptive motion vector prediction (AMVP) mode and merge mode for use in inter-prediction is proposed. In general, the same candidate block set is used regardless of which motion vector prediction mode (e.g., merge mode or AMVP mode) is used. In other examples of this disclosure, one candidate block in a set of candidate blocks is designated as an additional candidate block. The additional candidate block is used if one of the other candidate blocks is unavailable. Also, the disclosure proposes a checking pattern where the left candidate block is checked before the below left candidate block. Also, the above candidate block is checked before the right above candidate block. | 12-20-2012 |
20120328003 | MEMORY EFFICIENT CONTEXT MODELING - In an example, aspects of this disclosure relate to a method of coding video data that includes determining context information for a block of video data, where the block is included within a coded unit of video data, where the block is below a top row of blocks in the coded unit, and where the context information does not include information from an above-neighboring block in the coded unit. That method also includes entropy coding data of the block using the determined context information. | 12-27-2012 |
20120328004 | QUANTIZATION IN VIDEO CODING - In an example, aspects of this disclosure relate to a method of coding video data that includes identifying a plurality of quantization parameter (QP) values associated with a plurality of reference blocks of video data. The method also includes generating a reference QP for the plurality of reference blocks based on the plurality of QPs. The method also includes storing the reference QP, and coding a block of video data based on the stored reference QP. | 12-27-2012 |
20130003821 | SIGNALING SYNTAX ELEMENTS FOR TRANSFORM COEFFICIENTS FOR SUB-SETS OF A LEAF-LEVEL CODING UNIT - This disclosure describes techniques for coding transform coefficients for a block of video data. According to these techniques, a video encoder divides a leaf-level unit of video data into a plurality of transform coefficient sub-sets. The video encoder generates, for a sub-set of the plurality of transform coefficient sub-sets, a syntax element that indicates whether or not the sub-set includes any non-zero coefficients. In some examples, the video encoder may selectively determine whether to generate the syntax element for each sub-set. A decoder may read an entropy encoded bit stream that includes the syntax element, and determine whether to decode the sub-set based on the syntax element. | 01-03-2013 |
20130003824 | APPLYING NON-SQUARE TRANSFORMS TO VIDEO DATA - In one example, a device for coding video data includes a video coder, such as a video encoder or a video decoder, that is configured to code information indicative of whether a transform unit of the video data is square or non-square, and code data of the transform unit based at least in part on whether the transform unit is square or non-square. In this manner, the video coder may utilize non-square transform units. The video coder may be configured to use non-square transform units for certain situations, such as only for chrominance or luminance components or only when a corresponding prediction unit is non-square. The video coder may further be configured to perform an entropy coding process that selects context for coding data of the transform unit based on whether the transform unit is square or non-square. | 01-03-2013 |
20130003859 | TRANSITION BETWEEN RUN AND LEVEL CODING MODES - This disclosure describes techniques for coding transform coefficients for a block of video data. According to some aspects of this disclosure, a video coder (e.g., encoder, decoder) may code a first coefficient of a leaf-level unit of video data using a run encoding mode. The coder may code a second coefficient of the leaf-level unit of video data using a level encoding mode. After coding at least one coefficient using the level coding mode, the coder may use the run coding mode to code a third other coefficient of the leaf-level unit of video data. According to other aspects, an encoder may signal, to a decoder, at least one indication of a transition between level and run coding modes. According to still other aspects, a coder may automatically determine when to transition between the level and run coding modes. | 01-03-2013 |
20130022119 | BUFFERING PREDICTION DATA IN VIDEO CODING - In an example, aspects of this disclosure relate to a method of coding video data that generally includes determining prediction information for a block of video data, where the block is included in a coded unit of video data and positioned below a top row of above-neighboring blocks in the coded unit, and where the prediction information for the block is based on prediction information from one or more other blocks in the coded unit but not based on prediction information from any of the top row of blocks in the coded unit. The method also generally includes coding the block based on the determined prediction information. | 01-24-2013 |
20130070848 | LINE BUFFER REDUCTION FOR SHORT DISTANCE INTRA-PREDICTION - A video coder, such as a video encoder or a video decoder, identifies an entropy coding context in a set of one or more entropy coding contexts. The video coder identifies the entropy coding context without reference to a neighboring coding unit that is above a current coding unit in a current picture. The video coder then entropy codes a short distance intra-prediction (SDIP) syntax element of a coding unit (CU) using the identified entropy coding context. The SDIP syntax element at least partially defines a mode by which the CU is partitioned into a set of one or more transform units. | 03-21-2013 |
20130070854 | MOTION VECTOR DETERMINATION FOR VIDEO CODING - For each prediction unit (PU) belonging to a coding unit (CU), a video coder generates a candidate list. The video coder generates the candidate list such that each candidate in the candidate list that is generated based on motion information of at least one other PU is generated without using motion information of any of the PUs belonging to the CU. After generating the candidate list for a PU, the video coder generates a predictive video block for the PU based on one or more reference blocks indicated by motion information of the PU. The motion information of the PU is determinable based on motion information indicated by a selected candidate in the candidate list for the PU. | 03-21-2013 |
20130070855 | HYBRID MOTION VECTOR CODING MODES FOR VIDEO CODING - In one example, a device for coding video data includes a video coder (such as a video decoder or a video encoder) configured to code motion information for a current block of video data using a hybrid motion information coding mode, wherein to code the motion information, the video coder is configured to code a merge index syntax element of the motion information in a manner substantially conforming to a merge mode, and code at least one additional syntax element of the motion information in a manner substantially conforming to an advanced motion vector prediction (AMVP) mode, and wherein the video coder is configured to code the current block using the motion information. The hybrid mode may comprise a partial merge mode or a partial AMVP mode. | 03-21-2013 |
20130077691 | PARALLELIZATION FRIENDLY MERGE CANDIDATES FOR VIDEO CODING - This disclosure presents methods and systems for coding video in merge mode of a motion vector prediction process. A method of coding video data may include determining a merge candidate set for a current prediction unit of a current coding unit, wherein the merge candidate set is determined without comparing motion information of a merge candidate in the merge candidate set to motion information of any other prediction units, and performing a merge motion vector prediction process for the current prediction unit using the merge candidate set. The method may further comprise excluding merge candidates from the merge candidate set that are within another prediction unit of the current coding unit. | 03-28-2013 |
20130083857 | MULTIPLE ZONE SCANNING ORDER FOR VIDEO CODING - A method for encoding transform coefficients in a video encoding process includes dividing a block of transform coefficients into a plurality of zones, determining a scan order for each of the plurality of zones, and performing a scan on each of the transform coefficients in each of the plurality of zones according to their respective determined scan order. In another example, a method for decoding transform coefficients in a video decoding process includes receiving a one-dimensional array of transform coefficients, determining a scan order for each of a plurality of sections of the one-dimensional array, wherein each section of the one-dimensional array corresponds to one of a plurality of zones defining a block of transform coefficients, and performing a scan on the transform coefficients in each of the sections of the one-dimensional array according to their respective determined scan orders. | 04-04-2013 |
20130089138 | CODING SYNTAX ELEMENTS USING VLC CODEWORDS - This disclosure describes techniques for coding transform coefficients for a block of video data. For example, according to one embodiment, a video encoder determines an lrg1Pos value associated with a transform coefficient based on a noTr1 value and a position k of the transform coefficient in the scan order of the block of video data, using at least one table that defines an lrg1Pos value for more than one potential noTr1 value for the scan order of the block of video data. In one embodiment, a video decoder uses the determined lrg1Pos value associated with the transform coefficient to perform a structured mapping to determine a code number cn based on a determined value for the level_ID syntax element and a determined value for the run syntax element. | 04-11-2013 |
20130089145 | MOST PROBABLE TRANSFORM FOR INTRA PREDICTION CODING - A video coder can be configured to determine an intra-prediction mode for a block of video data, identify a most probable transform based on the intra-prediction mode determined for the block of video data, and code an indication of whether the most probable transform is a transform used to encode the block of video data. The most probable transform can be a non-square transform. | 04-11-2013 |
20130101016 | LOOP FILTERING AROUND SLICE BOUNDARIES OR TILE BOUNDARIES IN VIDEO CODING - The techniques of this disclosure apply to loop filtering across slice or tile boundaries in a video coding process. In one example, a method for performing loop filtering in a video coding process includes determining that pixels corresponding to filter coefficients of a filter mask for a loop filter are across a slice or tile boundary, removing filter coefficients corresponding to the pixels across the slice or tile boundary from the filter mask, renormalizing the filter mask without the removed filter coefficients, and performing loop filtering using the renormalized filter mask. | 04-25-2013 |
20130101018 | ADAPTIVE LOOP FILTERING FOR CHROMA COMPONENTS - This disclosure proposes techniques to allow more flexibility in filtering chroma components in the adaptive loop filter. In one example, a method for adaptive loop filtering includes performing luma adaptive loop filtering for luma components of a block of pixels, and performing chroma adaptive loop filtering for chroma components of the block of pixels, wherein filter coefficients for both the luma adaptive loop filtering and chroma adaptive loop filtering are derived from a block-based mode or a region-based mode. The method may further include determining to perform luma adaptive loop filtering on the block of pixels, and determining to perform chroma adaptive loop filtering on the block of pixels, wherein the determining to perform chroma adaptive loop filtering is performed independently of determining to perform luma adaptive loop filtering. | 04-25-2013 |
20130101024 | DETERMINING BOUNDARY STRENGTH VALUES FOR DEBLOCKING FILTERING FOR VIDEO CODING - A video coder associates a first boundary strength value with an edge in response to determining that a first video block or a second video block is associated with an intra-predicted coding unit (CU), where the edge occurs at a boundary between the first video block and the second video block. The video coder may associate a second or a third boundary strength value with the edge when neither the first video block nor the second video block is associated with an intra-predicted CU. The video coder may apply one or more deblocking filters to samples associated with the edge when the edge is associated with the first boundary strength value or the second boundary strength value. The third boundary strength value indicates that the deblocking filters are turned off for the samples associated with the edge. | 04-25-2013 |
20130101025 | INTRA PULSE CODE MODULATION (IPCM) AND LOSSLESS CODING MODE DEBLOCKING FOR VIDEO CODING - Techniques for coding video data include coding a plurality of blocks of video data, wherein at least one block of the plurality of blocks of video data is coded using a coding mode that is one of an intra pulse code modulation (IPCM) coding mode and a lossless coding mode. In some examples, the lossless coding mode may use prediction. The techniques further include assigning a non-zero quantization parameter (QP) value for the at least one block coded using the coding mode. The techniques also include performing deblocking filtering on one or more of the plurality of blocks of video data based on the coding mode used to code the at least one block and the assigned non-zero QP value for the at least one block. | 04-25-2013 |
20130101031 | DETERMINING QUANTIZATION PARAMETERS FOR DEBLOCKING FILTERING FOR VIDEO CODING - A video coder determines a deblocking quantization parameter (QP) value based on at least one of a first QP value and a second QP value. Subsequently, the video coder applies a deblocking filter that is based on the deblocking QP value to an edge associated with a first video block. The edge occurs at a boundary between the first video block and a second video block. The first video block is associated with a current coding unit (CU) and the second video block is associated with a neighboring CU. The current CU is included in a first quantization group and the neighboring CU is included in a second quantization group. The first QP value is defined for the first quantization group. The second QP value is defined for the second quantization group. | 04-25-2013 |
20130107950 | NON-SQUARE TRANSFORMS IN INTRA-PREDICTION VIDEO CODING | 05-02-2013 |
20130107970 | TRANSFORM UNIT PARTITIONING FOR CHROMA COMPONENTS IN VIDEO CODING | 05-02-2013 |
20130114669 | VLC COEFFICIENT CODING FOR LARGE CHROMA BLOCK - This disclosure describes techniques for coding transform coefficients for a block of video data. According to these techniques, a video coder (a video encoder or video decoder) determines whether a block of video data is a luma block or a chroma block. If the block of video data is a luma block, the video coder adaptively updates a VLC table index value based on a code number cn and a value of a scaling factor. However, if the block of video data is a chroma block, the video coder adaptively updates the VLC table index value based on the code number cn and without using the scaling factor. The video coder uses the updated VLC table index value to select a VLC table of a plurality of VLC tables that are used to encode or decode the block of video data. | 05-09-2013 |
20130114675 | CONTEXT STATE AND PROBABILITY INITIALIZATION FOR CONTEXT ADAPTIVE ENTROPY CODING - In one example, an apparatus for context adaptive entropy coding may include a coder configured to determine one or more initialization parameters for a context adaptive entropy coding process based on one or more initialization parameter index values. The coder may be further configured to determine one or more initial context states for initializing one or more contexts of the context adaptive entropy coding process based on the initialization parameters. The coder may be still further configured to initialize the contexts based on the initial context states. In some examples, the initialization parameters may be included in one or more tables, wherein, to determine the initialization parameters, the coder may be configured to map the initialization parameter index values to the initialization parameters in the tables. Alternatively, the coder may be configured to calculate the initialization parameters using the initialization parameter index values and one or more formulas. | 05-09-2013 |
20130114691 | ADAPTIVE INITIALIZATION FOR CONTEXT ADAPTIVE ENTROPY CODING - In one example, an apparatus for context adaptive entropy coding a video unit comprises a coder configured to code a syntax element, wherein a first value of the syntax element indicates that one or more of a plurality of context states are initialized using an adaptive initialization mode for the video unit, and a second value of the syntax element indicates that each of the plurality of context states is initialized using a default initialization mode for the video unit. In some examples, when the syntax element has the first value, the coder is further configured to code a map that indicates which of the context states are initialized using the adaptive initialization mode, and to further code either an initial state value for those contexts, or information from which the initial state values of those adaptively initialized context may be derived. | 05-09-2013 |
20130114717 | GENERATING ADDITIONAL MERGE CANDIDATES - In generating a candidate list for inter prediction video coding, a video coder can perform pruning operations when adding spatial candidates and temporal candidates to a candidate list while not performing pruning operations when adding an artificially generated candidate to the candidate list. The artificially generated candidate can have motion information that is the same as motion information of a spatial candidate or temporal candidate already in the candidate list. | 05-09-2013 |
20130114730 | CODING SIGNIFICANT COEFFICIENT INFORMATION IN TRANSFORM SKIP MODE - This disclosure describes techniques for coding significant coefficient information for a video block in a transform skip mode. The transform skip mode may provide a choice of a two-dimensional transform mode, a horizontal one-dimensional transform mode, a vertical one-dimensional transform mode, or a no transform mode. In other cases, the transform skip mode may provide a choice between a two-dimensional transform mode and a no transform mode. The techniques include selecting a transform skip mode for a video block, and coding significant coefficient information for the video block using a coding procedure defined based at least in part on the selected transform skip mode. Specifically, the techniques include using different coding procedures to code one or more of a position of a last non-zero coefficient and a significance map for the video block in the transform skip mode. | 05-09-2013 |
20130114734 | CODING SYNTAX ELEMENTS USING VLC CODEWORDS - This disclosure describes techniques for coding transform coefficients for a block of video data. According to these techniques, a video coder (a video encoder or video decoder) stores a first VLC table array selection table in memory, and an indication of at least one difference between the first VLC table array selection table and a second VLC table array selection table. The video coder reconstructs at least one entry of the second VLC table array selection table based on the first VLC table array selection table using the stored indication of the difference between the first VLC table array selection table and the second VLC table array selection table. The video coder uses the reconstructed at least one entry of the second VLC table array selection table to code at least one block of video data. | 05-09-2013 |
20130128971 | TRANSFORMS IN VIDEO CODING - Aspects of this disclosure relate to coding video data. In an example, a method of coding video data includes determining a first residual quadtree (RQT) depth at which to apply one or more first transforms to residual video data based on at least one characteristic of the residual video data. The method also includes determining a second RQT depth at which to apply one or more second transforms to the residual video data based on the at least one characteristic. The method also includes coding the residual video data using the one or more first transforms and the one or more second transforms. | 05-23-2013 |
20130136167 | LARGEST CODING UNIT (LCU) OR PARTITION-BASED SYNTAX FOR ADAPTIVE LOOP FILTER AND SAMPLE ADAPTIVE OFFSET IN VIDEO CODING - This disclosure relates to techniques for performing sample adaptive offset (SAO) processes in a video coding process. A video coder may store sets of SAO information. The SAO information may include data indicative of offset values. The video coder may also store mapping information that maps at least some of the sets of SAO information for one or more sequence partitions of a frame of video data. Additionally, the video coder may perform the SAO processes for one of the partitions of the frame based on the stored SAO information and the stored mapping information. | 05-30-2013 |
20130136175 | NON-SQUARE TRANSFORM UNITS AND PREDICTION UNITS IN VIDEO CODING - This disclosure proposes techniques for transform partitioning in an intra-prediction video coding process. In one example, for a given intra-predicted block, a reduced number of transform unit partition options is allowed, based on certain conditions. In another example, transform units are decoupled from prediction units for intra-predicted blocks. For a given prediction unit, transforms of different sizes and shapes from the prediction unit may be applied. In another example, a reduced number of intra-prediction modes is allowed for a prediction unit having a non-square shape. | 05-30-2013 |
20130163664 | UNIFIED PARTITION MODE TABLE FOR INTRA-MODE CODING - In an example, aspects of this disclosure relate to a method for coding video data that includes predicting a first non-square partition of a current block of video data using a first intra-prediction mode, where the first non-square partition has a first size. The method also includes predicting a second non-square partition of the current block of video data using a second intra-prediction mode, where the second non-square partition has a second size different than the first size. The method also includes coding the current block based on the predicted first and second non-square partitions. | 06-27-2013 |
20130163668 | PERFORMING MOTION VECTOR PREDICTION FOR VIDEO CODING - In general, techniques are described for performing motion vector prediction for video coding. A video coding device comprising a processor may perform the techniques. The processor may be configured to determine a plurality of candidate motion vectors for a current block of the video data so as to perform the motion vector prediction process and scale one or more of the plurality of candidate motion vectors determined for the current block of the video data to generate one or more scaled candidate motion vectors. The processor may then be configured to modify the scaled candidate motion vectors to be within a specified range. | 06-27-2013 |
20130170553 | CODING MOTION VECTOR DIFFERENCE - The techniques described in this disclosure may be generally related to identifying when a motion vector difference (MVD) is skipped for one or both reference picture lists. The techniques may further relate to contexts for signaling MVD values. The techniques may also be related to syntax that indicates when at least one of the MVD values is zero. | 07-04-2013 |
20130170562 | DEBLOCKING DECISION FUNCTIONS FOR VIDEO CODING - In one example, a video coding device is configured to decode four blocks of video data, wherein the four blocks are non-overlapping and share one common point such that four edge segments are formed by the four blocks, for each of the four edge segments, determine whether to deblock the respective edge segment based on a first analysis of at least one line of pixels that is perpendicular to the respective edge segment and that intersects the respective edge segment, for each of the four edge segments that was determined to be deblocked, determine whether to apply a strong filter or a weak filter to the respective edge segment based on a second analysis of the at least one line of pixels for the respective edge, and deblock the four edge segments based on the determinations. | 07-04-2013 |
20130177083 | MOTION VECTOR CANDIDATE INDEX SIGNALING IN VIDEO CODING - A video encoder generates a first and a second candidate list. The first candidate list includes a plurality of motion vector (MV) candidates. The video encoder selects, from the first candidate list, a MV candidate for a first prediction unit (PU) of a coding unit (CU). The second MV candidate list includes each of the MV candidates of the first MV candidate list except the MV candidate selected for the first PU. The video encoder selects, from the second MV candidate list, a MV candidate for a second PU of the CU. A video decoder generates the first and second MV candidate lists in a similar way and generates predictive sample blocks for the first and second PUs based on motion information of the selected MV candidates. | 07-11-2013 |
20130177084 | MOTION VECTOR SCALING IN VIDEO CODING - This disclosure proposes techniques for motion vector scaling. In particular, this disclosure proposes that both an implicit motion vector scaling process (e.g., a POC-based motion vector scaling process) and an explicit motion vector scaling process (e.g., a motion vector scaling process using scaling weights) may be used to perform motion vector scaling. This disclosure also discloses example signaling methods for indicating the type of motion vector scaling used. | 07-11-2013 |
20130188700 | CONTEXT ADAPTIVE ENTROPY CODING WITH A REDUCED INITIALIZATION VALUE SET - Techniques for coding data, such as, e.g., video data, include coding a first syntax element, conforming to a particular type of syntax element, of a first slice of video data, conforming to a first slice type, using an initialization value set. The techniques further include coding a second syntax element, conforming to the same type of syntax element, of a second slice of video data, conforming to a second slice type, using the same initialization value set. In this example, the first slice type may be different from the second slice type. Also in this example, at least one of the first slice type and the second slice type may be a temporally predicted slice type. For example, the at least one of the first and second slice types may be a unidirectional inter-prediction (P) slice type, or a bi-directional inter-prediction (B) slice type. | 07-25-2013 |
20130188701 | SUB-BLOCK LEVEL PARALLEL VIDEO CODING - The techniques of this disclosure are generally related to parallel coding of video units that reside along rows or columns of blocks in largest coding units. For example, the techniques include removing intra-prediction dependencies between two video units in different rows or columns to allow for parallel coding of rows or columns of the video units. | 07-25-2013 |
20130188715 | DEVICE AND METHODS FOR MERGE LIST REORDERING IN VIDEO CODING - A video coding device configured according to some aspects of this disclosure includes a memory configured to store an initial list of motion vector candidates and a temporal motion vector predictor (TMVP). The video coding device also includes a processor in communication with the memory. The processor is configured to obtain a merge candidate list size value (N) and identify motion vector candidates to include in a merge candidate list having a list size equal to the merge candidate list size value. The merge candidate list may be a merge motion vector (MV) candidate list or a motion vector predictor (MVP) candidate list (also known as an AMVP candidate list). The processor generates the merge candidate list such that the merge candidate list includes the TMVP, regardless of the list size. | 07-25-2013 |
20130188716 | TEMPORAL MOTION VECTOR PREDICTOR CANDIDATE - The techniques of this disclosure may be generally related to a temporal motion vector prediction candidate. A video coder may determine a temporal motion vector prediction candidate for a plurality of blocks only once. Each of the plurality of blocks may include different spatial motion vector prediction candidates, but the temporal motion vector prediction candidate for the plurality of blocks may be the same. | 07-25-2013 |
20130188720 | VIDEO CODING USING PARALLEL MOTION ESTIMATION - An example video encoder is configured to receive an indication of merge mode coding of a block within a parallel motion estimation region (PMER), generate a merge mode candidate list comprising one or more spatial neighbor motion vector (MV) candidates and one or more temporal motion vector prediction (TMVP) candidates, wherein motion information of at least one of the spatial neighbor MV candidates is known to be unavailable during coding of the block at an encoder, determine an index value identifying, within the merge mode candidate list, one of the TMVP candidates or the spatial neighbor MV candidates for which motion information is available during coding of the block, and merge mode code the block using the identified MV candidate. | 07-25-2013 |
20130188744 | DEBLOCKING CHROMA DATA FOR VIDEO CODING - A video coding device is configured to obtain an array of sample values. The sample values may be formatted according to a 4:2:0, 4:2:2, or 4:4:4 chroma format. The video coding device determines whether to apply a first filter to rows of chroma sample values associated with defined horizontal edges within the array. The video coding device determines whether to apply a second filter to columns of chroma sample values associated with defined vertical edges. The horizontal and vertical edges may be separated by a number of chroma samples according to a deblocking grid. | 07-25-2013 |
20130195189 | IMPLICIT DERIVATION OF PARALLEL MOTION ESTIMATION RANGE SIZE - A method for decoding video data is described. The method may comprise receiving an indication of a size of a parallel motion estimation (PME) area, performing a motion vector prediction process on coding units having a size smaller than or equal to the PME area using a PME style candidate list construction process and the PME area, deriving an implicit PME area for coding units having a size larger than the PME area, and performing the motion vector prediction process on coding units having the size larger than the PME area using the PME style candidate list construction process and the implicit PME area. | 08-01-2013 |
20130195199 | RESIDUAL QUAD TREE (RQT) CODING FOR VIDEO CODING - A video decoding device receives an array of transform coefficients for a chroma component of video data. The video decoding device receives entropy encoded data representing the value of a split flag associated with the chroma component. The value of the split flag indicates whether the array of transform coefficients is divided into smaller transform blocks. The video decoding device determines a context for the entropy encoded data representing the split flag. The context is based on the value of a split flag associated with another component of video data. The video decoding device entropy decodes the data representing the value of the split flag based on the determined context using context adaptive binary arithmetic coding (CABAC). The luma and chroma components have independent residual quadtree (RQT) structures. | 08-01-2013 |
20130202037 | RESTRICTION OF PREDICTION UNITS IN B SLICES TO UNI-DIRECTIONAL INTER PREDICTION - A computing device determines whether a prediction unit (PU) in a B slice is restricted to uni-directional inter prediction. In addition, the computing device generates a merge candidate list for the PU and determines a selected merge candidate in the merge candidate list. If the PU is restricted to uni-directional inter prediction, the computing device generates a predictive video block for the PU based on no more than one reference block associated with motion information specified by the selected merge candidate. If the PU is not restricted to uni-directional inter prediction, the computing device generates the predictive video block for the PU based on one or more reference blocks associated with the motion information specified by the selected merge candidate. | 08-08-2013 |
20130202038 | RESTRICTION OF PREDICTION UNITS IN B SLICES TO UNI-DIRECTIONAL INTER PREDICTION - A video coding device generates a motion vector (MV) candidate list for a prediction unit (PU) of a coding unit (CU) that is partitioned into four equally-sized PUs. The video coding device converts a bi-directional MV candidate in the MV candidate list into a uni-directional MV candidate. In addition, the video coding device determines a selected MV candidate in the MV candidate list and generates a predictive video block for the PU based at least in part on one or more reference blocks indicated by motion information specified by the selected MV candidate. | 08-08-2013 |
20130215974 | CODING OF LOOP FILTER PARAMETERS USING A CODEBOOK IN VIDEO CODING - Techniques for coding video data include coding sample adaptive offset (SAO) offset values as part of performing a video coding process. In particular, the techniques include determining the SAO offset values according to a SAO process. The techniques further include storing a codebook defining a plurality of codes for coding different variations of SAO offset values. The techniques also include coding the SAO offset values in accordance with the codebook so as to specify the SAO offset values as one of the plurality of codes defined by the codebook. | 08-22-2013 |
20130251030 | INTER LAYER TEXTURE PREDICTION FOR VIDEO CODING - An apparatus for coding video information according to certain aspects is disclosed. Multi-layer video streams including a base layer and an enhancement layer can be coded. Predictors generated for the base layer and the enhancement layer can be combined to form a final predictor of the enhancement layer. Each predictor can be weighted such that those predictors which are more likely to produce high quality results can be factored more heavily in the final predictor. The conditions upon which the respective weights for enhancement layer predictors and base layer predictors are determined may be implicitly derived from the predictors or characteristics thereof. Alternatively, data may be generated explicitly indicating the weights or providing information from which the weights can be determined. | 09-26-2013 |
20130259141 | CHROMA SLICE-LEVEL QP OFFSET AND DEBLOCKING - In one example, an apparatus for processing video data comprises a video coder configured to, for each of the one or more chrominance components, calculate a chrominance quantization parameter for a common edge between two blocks of video data based on a first luminance quantization parameter for the first block of video data, a second luminance quantization parameter for the second block of video data, and a chrominance quantization parameter offset value for the chrominance component. The video coder is further configured to determine a strength for a deblocking filter for the common edge based on the chrominance quantization parameter for the chrominance component, and apply the deblocking filter according to the determined strength to deblock the common edge. | 10-03-2013 |
20130266074 | CODED BLOCK FLAG CODING - A video encoder generates a bitstream that includes a residual quad tree (RQT) for a coding unit (CU). The CU is larger than a maximum-allowable transform unit (TU) size and the RQT includes a hierarchy of nodes. A root node of the RQT corresponds to the CU as a whole and leaf nodes of the RQT correspond to TUs of the CU. The root node is associated with a coded block flag (CBF) for a chroma component. The CBF for the chroma component indicates whether any of the TUs of the CU are associated with a significant coefficient block that is based on samples of the particular chroma component. A video decoder receives the bitstream and determines, based on the CBF, whether coefficient blocks associated with TUs that correspond to the leaf nodes include non-zero coefficients. | 10-10-2013 |
20130272377 | BYPASS BINS FOR REFERENCE INDEX CODING IN VIDEO CODING - In an example, aspects of this disclosure relate to a method for decoding a reference index syntax element in a video decoding process that includes decoding at least one bin of a reference index value with a context coding mode of a context-adaptive binary arithmetic coding (CABAC) process. The method also includes decoding, when the reference index value comprises more bins than the at least one bin coded with the context coding mode, at least another bin of the reference index value with a bypass coding mode of the CABAC process, and binarizing the reference index value. | 10-17-2013 |
20130272381 | SIMPLIFIED NON-SQUARE QUADTREE TRANSFORMS FOR VIDEO CODING - In an example, a method of decoding video data includes determining a prediction partitioning structure for predicting pixel values associated with a block of video data. The method also includes determining a transform partitioning structure for applying one or more transforms to the predicted pixel values. Determining the transform partitioning structure includes, upon determining that the transform partitioning structure comprises splitting a parent transform unit into one or more square transforms, determining the one or more square transforms such that each of the one or more square transforms corresponds to exactly one prediction partition, and, upon determining that the transform partitioning structure comprises splitting the parent transform unit into one or more non-square transforms, determining whether to split the one or more non-square transforms based at least in part on the one or more non-square transforms being non-square. | 10-17-2013 |
20130272402 | INTER-LAYER MODE DERIVATION FOR PREDICTION IN SCALABLE VIDEO CODING - In some embodiments of a video coder, if some prediction information is not available for a first block in a current layer, the video coder uses corresponding information (e.g., intra prediction direction and motion information), if available, from the first block's co-located second block in the base layer as if it were the prediction information for the first block. The corresponding information can then be used in the current layer to determine the prediction information of succeeding blocks in the current layer. | 10-17-2013 |
20130272409 | BANDWIDTH REDUCTION IN VIDEO CODING THROUGH APPLYING THE SAME REFERENCE INDEX - Techniques for encoding and decoding video data are described. A method of coding video may include determining a plurality of motion vector candidates for a block of video data for use in a motion vector prediction process, wherein each of the motion vector candidates points to a respective reference frame index, performing the motion vector prediction process using the motion vector candidates to determine a motion vector for the block of video data, and performing motion compensation for the block of video data using the motion vector and a common reference frame index, wherein the common reference frame index is used regardless of the respective reference frame index associated with the determined motion vector. | 10-17-2013 |
20130272410 | MOTION VECTOR ROUNDING - A video decoder determines, based at least in part on a size of a prediction unit (PU), whether to round either or both a horizontal or a vertical component of a motion vector of the PU from sub-pixel accuracy to integer-pixel accuracy. The video decoder generates, based at least in part on the motion vector, a predictive sample block for the PU and generates, based in part on the predictive sample block for the PU, a reconstructed sample block. | 10-17-2013 |
20130272411 | SCALABLE VIDEO CODING PREDICTION WITH NON-CAUSAL INFORMATION - This disclosure pertains to video coding. Prediction information for a current block in an enhancement layer may be determined based at least in part on base layer information obtained by coding a base block in a base layer beneath the enhancement layer. This base block may occur in a position in the base layer such that it is co-located with a non-causal block in the enhancement layer (e.g., a block that occurs after the current block in the coding order of the enhancement layer). The prediction information determined for the current block may be used to code the current block (e.g., encoding or decoding the current block). | 10-17-2013 |
20130272412 | COMMON MOTION INFORMATION CANDIDATE LIST CONSTRUCTION PROCESS - In one example, an apparatus for coding video data comprises a video coder configured to generate first and second lists of motion information candidates, respectively, for first and second video blocks using a common list construction process, wherein the common list construction process is common to at least a first motion information prediction mode and a second motion information prediction mode. The video coder is further configured to code the first video block using the first motion information prediction mode based on a first motion information candidate selected from the first list, and code the second video block using the second motion information prediction mode based on a second motion information candidate selected from the second list. | 10-17-2013 |
20130272413 | COMMON SPATIAL CANDIDATE BLOCKS FOR PARALLEL MOTION ESTIMATION - In one example, an apparatus for coding video data comprises a video coder configured to, for a parallel motion estimation (PME) region comprising a plurality of blocks of video data within the PME region, identify a common set of spatial candidate blocks outside of and adjacent to the PME region, each of the common set of spatial candidate blocks at a respective, predefined location relative to the PME region and, for each of the blocks within the PME region for which motion information prediction is performed, generate a respective motion information candidate list, wherein, for at least some of the blocks within the PME region for which motion information prediction is performed, generating the motion information candidate list comprises evaluating motion information of at least one of the common set of spatial candidate blocks for inclusion in the motion information candidate list for the block. | 10-17-2013 |
20130272425 | BETA OFFSET CONTROL FOR DEBLOCKING FILTERS IN VIDEO CODING - Techniques are described for providing continuous control of a deblocking filter for a video block using a beta offset parameter. Deblocking filters are defined based on one or more deblocking decisions. Conventionally, a quantization parameter and a beta offset parameter are used to identify a beta parameter (“β”) value that determines threshold values of the deblocking decisions. The value of the beta offset parameter results in a change or increment of the β value. For small increments of the β value, rounding of the threshold values may result in no change and discontinuous control of the deblocking decisions. The techniques include calculating at least one deblocking decision for the deblocking filter according to a threshold value that has been modified based on a multiplier value of the beta offset parameter. The multiplier value applied to the beta offset parameter causes an integer change in the modified threshold value. | 10-17-2013 |
20130287103 | QUANTIZATION PARAMETER (QP) CODING IN VIDEO CODING - A method of coding delta quantization parameter values is described. In one example a video decoder may receive a delta quantization parameter (dQP) value for a current quantization block of video data, wherein the dQP value is received whether or not there are non-zero transform coefficients in the current quantization block. In another example, a video decoder may receive the dQP value for the current quantization block of video data only in the case that the QP Predictor for the current quantization block has a value of zero, and infer the dQP value to be zero in the case that the QP Predictor for the current quantization block has a non-zero value, and there are no non-zero transform coefficients in the current quantization block. | 10-31-2013 |
20130287109 | INTER-LAYER PREDICTION THROUGH TEXTURE SEGMENTATION FOR VIDEO CODING - An apparatus for coding video data according to certain aspects includes a memory and a processor in communication with the memory. The memory stores the video data. The video data may include a base layer and an enhancement layer, the base layer including a base layer block and the enhancement layer including an enhancement layer block. The base layer block may be located at a position in the base layer corresponding to a position of the enhancement layer block in the enhancement layer. The processor determines, based on information associated with the base layer block, a partitioning mode of the enhancement layer block. The partitioning mode may indicate that the enhancement layer block is to be partitioned into a first partition and a second partition. The processor further performs motion compensation for the first partition and the second partition of the enhancement layer block. | 10-31-2013 |
20130294513 | INTER LAYER MERGE LIST CONSTRUCTION FOR VIDEO CODING - A method of decoding video data includes receiving syntax elements extracted from an encoded video bitstream, determining a candidate list for an enhancement layer block, and selectively pruning the candidate list. The syntax elements include information associated with a base layer block of a base layer of the video data. The candidate list is determined based at least in part on motion information associated with the base layer block. The enhancement layer block is in an enhancement layer of the video data. The candidate list includes at least one motion information candidate that includes the motion information associated with the base layer block. The candidate list includes a merge list or an AMVP list. Pruning includes comparing one or more motion information candidates and at least one motion information candidate associated with the base layer block that is in the candidate list. | 11-07-2013 |
20130322538 | REFERENCE INDEX FOR ENHANCEMENT LAYER IN SCALABLE VIDEO CODING - An apparatus for coding video information according to certain aspects includes a memory unit and a processor in communication with the memory unit. The memory unit stores video information of a base, or reference, layer and an enhancement layer. The processor determines whether a base layer reference index is valid for the enhancement layer, and resolves mismatches between base layer and enhancement layer reference indices and reference frame picture order counts. Resolving mismatches may comprise deriving valid reference information from the base layer, using spatial motion information of video data associated with the reference information of the base and/or enhancement layers. | 12-05-2013 |
20130329789 | PREDICTION MODE INFORMATION DOWNSAMPLING IN ENHANCED LAYER CODING - In one embodiment, a video coder for processing video data includes a processor and a memory. The processor is configured to downsample at least prediction mode information of a reference layer block. In addition, the processor is configured to predict at least one of an enhancement layer block or prediction mode information of the enhancement layer block based at least on the prediction mode information of the reference layer block before the processor downsamples the prediction mode information of the reference layer block. The memory is configured to store the prediction mode information of the reference layer block. The prediction mode information of the reference layer block, for example, includes an inter-prediction mode, an intra-prediction mode, or a motion vector of the reference layer block. | 12-12-2013 |
20130329806 | BI-LAYER TEXTURE PREDICTION FOR VIDEO CODING - In one example, an apparatus is configured to code video data. The apparatus comprises a processor configured to determine a base layer reference block for a current block. The base layer reference block may be located in the base layer. The processor is further configured to determine an enhancement layer reference block for the current block. The enhancement layer reference block may comprise a weighted sum of a first reference block located in the enhancement layer and a second reference block located in the enhancement layer. The processor is further configured to determine a reference block from the base layer reference block and the enhancement layer reference block. | 12-12-2013 |
20130336394 | INFERRED BASE LAYER BLOCK FOR TEXTURE_BL MODE IN HEVC BASED SINGLE LOOP SCALABLE VIDEO CODING - An apparatus for coding video data using a single-loop decoding approach may include a memory unit and a processor in communication with the memory unit. In an embodiment, the memory unit stores the video data, the video data including a base layer and an enhancement layer. The base layer includes a base layer block, a non-constrained INTRA mode block, and an INTER mode block. The base layer block includes a sub-block located at least partially within one of the non-constrained INTRA mode block or the INTER mode block. The enhancement layer includes an enhancement layer block located at a position in the enhancement layer corresponding to a position of the base layer block in the base layer. The processor approximates pixel values of the sub-block and determines, based at least in part on the approximated pixel values, pixel values of the enhancement layer block. | 12-19-2013 |
20140092967 | USING BASE LAYER MOTION INFORMATION - Systems, methods, and devices for coding video data are described herein. In some aspects, a memory is configured to store the video data associated with a base layer and an enhancement layer. The base layer may comprise a reference block and base layer motion information associated with the reference block. The enhancement layer may comprise a current block. A processor operationally coupled to the memory is configured to determine a position of the base layer motion information in a candidate list based on a prediction mode in a plurality of prediction modes used at the enhancement layer. The processor is further configured to perform a prediction of the current block based at least in part on the candidate list. | 04-03-2014 |
20140219342 | MODE DECISION SIMPLIFICATION FOR INTRA PREDICTION - In general, techniques are described for reducing the complexity of mode selection when selecting from multiple, different prediction modes. A video coding device comprising a processor may perform the techniques. The processor may compute approximate costs for a pre-defined set of intra-prediction modes identified in a current set. The current set of intra-prediction modes may include fewer modes than a total number of intra-prediction modes. The processor may compare approximate costs computed for one or more most probable intra-prediction modes to a threshold and replace one or more of the intra-prediction modes of the current set with one or more most probable intra-prediction modes. The processor may perform rate distortion analysis with respect to each intra-prediction mode identified in the current set and perform intra-prediction coding with respect to the current block using a mode of the current set. | 08-07-2014 |
20140219349 | INTRA PREDICTION MODE DECISION WITH REDUCED STORAGE - In general, techniques are described for reducing the space required to store rate distortion values when selecting from multiple, different prediction modes. A video coding device comprising a processor may perform the techniques. The processor may determine first and second sets of intra-prediction modes for a current block of video data. The first and second sets of intra-prediction modes may include fewer intra-prediction modes, collectively, than a total number of intra-prediction modes. The processor may compute an approximate cost for each intra-prediction mode included in the first and second sets of intra-prediction modes. The processor may store the approximate cost for each intra-prediction mode identified in the first and second sets of intra-prediction modes to a memory. The processor may perform intra-prediction to encode the current block using a mode identified in at least one of the first or second set. | 08-07-2014 |
20140294078 | BANDWIDTH REDUCTION FOR VIDEO CODING PREDICTION - In one example, an apparatus for coding video data comprises a video coder configured to obtain a motion vector for predicting a video block with a non-4:2:0 chroma format, determine a video block size for the video block, modify the motion vector to generate a modified motion vector for obtaining samples of at least one reference picture with which to predict the video block if the video block size meets a size criterion, and generate a prediction block for the video block using the samples of the at least one reference picture and the modified motion vector. | 10-02-2014 |
20140301460 | INTRA RATE CONTROL FOR VIDEO ENCODING BASED ON SUM OF ABSOLUTE TRANSFORMED DIFFERENCE - This disclosure describes techniques for rate control for intra coded frames. In one example of the disclosure, a rate control parameter may be calculated using a target bit rate and a complexity measure. In one example, the complexity measure is calculated with a sum of absolute transformed differences (SATD) calculation of an intra-coded frame. | 10-09-2014 |
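The sum of absolute transformed differences (SATD) complexity measure referenced in application 20140301460 above is conventionally computed with a Hadamard transform of the residual. The sketch below shows the standard 4x4 Hadamard SATD for one block; the block size and the choice of the Hadamard basis are illustrative assumptions, not details claimed by the application.

```python
import numpy as np

def hadamard_4x4():
    # 4x4 Hadamard (Walsh) matrix, built from the 2x2 base via a Kronecker product.
    h2 = np.array([[1, 1], [1, -1]])
    return np.kron(h2, h2)

def satd_4x4(original, predicted):
    """SATD of one 4x4 block: Hadamard-transform the residual, sum magnitudes."""
    h = hadamard_4x4()
    diff = original.astype(np.int64) - predicted.astype(np.int64)
    transformed = h @ diff @ h.T
    return int(np.abs(transformed).sum())

# A perfect prediction costs zero; a single-pixel error of 4 spreads
# across all 16 transform coefficients, giving an SATD of 64.
a = np.full((4, 4), 128)
b = a.copy()
b[0, 0] += 4
print(satd_4x4(a, a))  # 0
print(satd_4x4(a, b))  # 64
```

In a rate-control loop of the kind the abstract describes, the per-frame complexity would be the sum of such block SATDs over the intra-coded frame, feeding the rate control parameter alongside the target bit rate.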
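The size-dependent motion vector rounding of application 20130272410 above can be sketched as follows. The specific rule used here (round quarter-pel components to integer-pel when either PU dimension falls below a `min_size` threshold) and the quarter-pel precision are assumptions for illustration; the actual decision criteria are defined by the application's claims.

```python
def maybe_round_mv(mv, pu_width, pu_height, min_size=8):
    """Round a quarter-pel motion vector (mvx, mvy) to integer-pel accuracy
    when the PU is small. The size rule (either side < min_size) is a
    hypothetical stand-in for the claimed size-based decision."""
    def round_component(c):
        # Quarter-pel units: add half a sample (2) and clear the two
        # fractional bits, snapping to the nearest integer-pel position.
        return ((c + 2) >> 2) << 2
    if pu_width < min_size or pu_height < min_size:
        return (round_component(mv[0]), round_component(mv[1]))
    return mv

# A small 4x8 PU triggers rounding; a 16x16 PU keeps sub-pel accuracy.
print(maybe_round_mv((5, 7), 4, 8))    # (4, 8)
print(maybe_round_mv((5, 7), 16, 16))  # (5, 7)
```

Restricting small PUs to integer-pel motion avoids the interpolation filtering that sub-pel prediction requires, which is the bandwidth motivation suggested by the abstract.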