Patent application number | Description | Published |
20100046635 | TILING IN VIDEO DECODING AND ENCODING - Implementations are provided that relate, for example, to view tiling in video encoding and decoding. A particular method includes accessing a video picture that includes multiple pictures combined into a single picture, accessing information indicating how the multiple pictures in the accessed video picture are combined, decoding the video picture to provide a decoded representation of at least one of the multiple pictures, and providing the accessed information and the decoded video picture as output. Some other implementations format or process the information that indicates how multiple pictures included in a single video picture are combined into the single video picture, and format or process an encoded representation of the combined multiple pictures. | 02-25-2010 |
20100135391 | METHODS AND APPARATUS FOR MOTION SKIP MODE WITH MULTIPLE INTER-VIEW REFERENCE PICTURES - There are provided methods and apparatus for motion skip mode with multiple inter-view reference pictures. An apparatus includes an encoder for encoding an image block relating to multi-view video content by performing a selection, for the image block, of at least one of an inter-view reference picture list from a set of inter-view reference picture lists, an inter-view reference picture from among a set of inter-view reference pictures, and a disparity vector from among a set of disparity vectors corresponding to the inter-view reference picture. The encoder extracts motion information for the image block based on at least one of the inter-view reference picture list, the inter-view reference picture, and the disparity vector. | 06-03-2010
20100284466 | VIDEO AND DEPTH CODING - Various implementations are described. Several implementations relate to video and depth coding. One method includes selecting a component of video information for a picture. A motion vector is determined for the selected video information or for depth information for the picture. The selected video information is coded based on the determined motion vector. The depth information is coded based on the determined motion vector. An indicator is generated that the selected video information and the depth information are coded based on the determined motion vector. One or more data structures are generated that collectively include the coded video information, the coded depth information, and the generated indicator. | 11-11-2010 |
20110001792 | VIRTUAL REFERENCE VIEW - Various implementations are described. Several implementations relate to a virtual reference view. According to one aspect, coded information is accessed for a first-view image. A reference image is accessed that depicts the first-view image from a virtual-view location different from the first-view. The reference image is based on a synthesized image for a location that is between the first-view and the second-view. Coded information is accessed for a second-view image coded based on the reference image. The second-view image is decoded. According to another aspect, a first-view image is accessed. A virtual image is synthesized based on the first-view image, for a virtual-view location different from the first-view. A second-view image is encoded using a reference image based on the virtual image. The second-view is different from the virtual-view location. The encoding produces an encoded second-view image. | 01-06-2011 |
20110038418 | CODING OF DEPTH SIGNAL - Various implementations are described. Several implementations relate to determining, providing, or using a depth value representative of an entire coding partition. According to a general aspect, a first portion of an image is encoded using a first-portion motion vector that is associated with the first portion and is not associated with other portions of the image. The first portion has a first size. A first-portion depth value is determined that provides depth information for the entire first portion and not for other portions. A second portion of an image is encoded using a second-portion motion vector that is associated with the second portion and is not associated with other portions of the image. The second portion has a second size that is different from the first size. A second-portion depth value is determined that provides depth information for the entire second portion and not for other portions. | 02-17-2011
20110142138 | REFINED DEPTH MAP - Various implementations are described. Several implementations relate to a refined depth map. According to one aspect, depth information for a picture in a set of pictures is accessed. Modified depth information for the picture is accessed. A refinement is determined that characterizes a difference between the depth information and the modified depth information. The refinement, and the depth information, is provided for use in processing one or more pictures in the set of pictures. | 06-16-2011 |
20110148858 | VIEW SYNTHESIS WITH HEURISTIC VIEW MERGING - Several implementations relate to view synthesis with heuristic view merging for 3D Video (3DV) applications. According to one aspect, a first candidate pixel from a first warped reference view and a second candidate pixel from a second warped reference view are assessed based on at least one of a backward synthesis process to assess a quality of the first and second candidate pixels, a hole distribution around the first and second candidate pixels, or an amount of energy around the first and second candidate pixels above a specified frequency. The assessing occurs as part of merging at least the first and second warped reference views into a single synthesized view. Based on the assessing, a result is determined for a given target pixel in the single synthesized view. The result may be determining a value for the given target pixel, or marking the given target pixel as a hole. | 06-23-2011
20110157229 | VIEW SYNTHESIS WITH HEURISTIC VIEW BLENDING - Various implementations are described. Several implementations relate to view synthesis with heuristic view blending for 3D Video (3DV) applications. According to one aspect, at least one reference picture, or a portion thereof, is warped from at least one reference view location to a virtual view location to produce at least one warped reference. A first candidate pixel and a second candidate pixel are identified in the at least one warped reference. The first candidate pixel and the second candidate pixel are candidates for a target pixel location in a virtual picture from the virtual view location. A value for a pixel at the target pixel location is determined based on values of the first and second candidate pixels. | 06-30-2011 |
20110268177 | JOINT DEPTH ESTIMATION - Various implementations are described. Several implementations relate to joint depth estimation for multiple depth maps. In one implementation, a first-view depth indicator for a location in a first view is estimated, and a second-view depth indicator for a corresponding location in a second view is estimated. The estimating of one or more of the first-view depth indicator and the second-view depth indicator is based on a constraint. The constraint provides a relationship between the first-view depth indicator and the second-view depth indicator for corresponding locations. | 11-03-2011 |
20110273529 | CODING OF DEPTH MAPS - Various implementations are described. Several implementations relate to filtering of depth maps. According to a general aspect, a first depth picture is accessed that corresponds to a first video picture. For a given portion of the first depth picture, a co-located video portion of the first video picture is determined. A video motion vector is accessed that indicates motion of the co-located video portion of the first video picture with respect to a second video picture. A second depth picture is accessed that corresponds to the second video picture. A depth portion of the second depth picture is determined, from the given portion of the first depth picture, based on the video motion vector. The given portion of the first depth picture is updated based on the depth portion of the second depth picture. | 11-10-2011 |
20110286530 | Frame packing for video coding - Implementations are provided that relate, for example, to view tiling in video encoding and decoding. A particular implementation accesses a video picture that includes multiple pictures combined into a single picture, and accesses additional information indicating how the multiple pictures in the accessed video picture are combined. The accessed information includes spatial interleaving information and sampling information. Another implementation encodes a video picture that includes multiple pictures combined into a single picture, and generates information indicating how the multiple pictures in the accessed video picture are combined. The generated information includes spatial interleaving information and sampling information. A bitstream is formed that includes the encoded video picture and the generated information. Another implementation provides a data structure for transmitting the generated information. | 11-24-2011 |
20110292043 | Depth Map Coding to Reduce Rendered Distortion - Several implementations relate to depth map coding. In one implementation, a depth coding rate, that results from coding one or more portions of a depth map using a particular coding mode, is determined. The depth map can be used to render video for a different view than that of the depth map. A depth map distortion, that results from coding the one or more portions of the depth map using the particular coding mode, is determined. A value of distortion for the rendered video, based on the depth map distortion and on a particular relationship between the depth map distortion and values of distortion for the rendered video, is determined. It is determined whether to use the particular coding mode to code the one or more portions of the depth map, and the determination is based on the value of distortion for the rendered video and the depth coding rate. | 12-01-2011 |
20110292044 | DEPTH MAP CODING USING VIDEO INFORMATION - Several implementations relate to depth map coding. In one implementation, it is determined that differences between collocated video blocks are small enough to be interchanged. Based on that determination, a depth block corresponding to a first of the video blocks is coded using an indicator that instructs a decoder to use a collocated depth block, corresponding to a second of the video blocks, in place of the depth block. In another implementation, a video signal includes a coding of at least a single indicator that instructs a decoder to decode both a depth block and a corresponding video block using collocated blocks, from other pictures, in place of the depth block and the corresponding video block. In another implementation, the depth block and the corresponding video block are decoded, based on the single indicator, using the collocated blocks in place of the depth block and the corresponding video block. | 12-01-2011 |
20110298895 | 3D VIDEO FORMATS - Several implementations relate to 3D video formats. One or more implementations provide adaptations to MVC and SVC to allow 3D video formats to be used. According to a general aspect, a set of images including video and depth is encoded. The set of images is related according to a particular 3D video format, and are encoded in a manner that exploits redundancy between the set of images. The encoded images are arranged in a bitstream in a particular order, based on the particular 3D video format that relates to the images. The particular order is indicated in the bitstream using signaling information. According to another general aspect, a bitstream is accessed that includes the encoded set of images. The signaling information is also accessed. The set of images is decoded using the signaling information. | 12-08-2011 |
20120044322 | 3D VIDEO CODING FORMATS - Several implementations relate to 3D video (3DV) coding formats. One implementation encodes multiple pictures that describe different three-dimensional (3D) information for a given view at a given time. Syntax elements are generated that indicate, for the encoded multiple pictures, how the encoded picture fits into a structure that supports 3D processing. The structure defines content types for the multiple pictures. A bitstream is generated that includes the encoded multiple pictures and the syntax elements. The inclusion of the syntax elements provides, at a coded-bitstream level, indications of relationships between the encoded multiple pictures in the structure. The syntax elements also enable efficient inter-layer coding of the 3DV content, thereby reducing the bandwidth used to transmit the 3DV content. Corresponding decoding implementations are also provided. Extraction methods are also provided for extracting pictures of interest from a bitstream that contains the encoded multiple pictures and the syntax elements and is characterized by such a 3D structure. | 02-23-2012
20120056981 | INTER-LAYER DEPENDENCY INFORMATION FOR 3DV - Various implementations are directed to providing inter-layer dependency information. In one implementation, syntax elements are generated that indicate an inter-layer dependency structure among three-dimensional video (3DV) layers. Based on the inter-layer dependency structure, an inter-layer reference is identified for a picture from a layer of the 3DV layers. The picture is encoded based, at least in part, on the inter-layer reference. Corresponding decoding implementations are also provided. Additionally, in another implementation, a transmission priority and an indication of network congestion are used to determine whether to transmit data for a particular 3DV layer. The transmission priority is based on an inter-layer dependency structure among multiple 3DV layers. Another implementation is directed to a network abstraction layer unit that can explicitly identify and convey inter-layer references and corresponding dependencies. | 03-08-2012 |
20120140819 | DEPTH MAP CODING - Various implementations relate to depth map coding. In one method, a depth coding rate and depth distortion are determined for a coding mode. Based on the value of depth distortion, a correlation coefficient is determined between at least a portion of a video picture and a translated version of the video picture. The video picture is one or more of a video picture corresponding to the depth being coded, or a rendered video picture for a different view. A video distortion is determined based on the correlation coefficient, and is used to evaluate the coding mode. Another implementation determines a multiplier, to be used in a rate-distortion cost, based on pixel values from one or more of a video picture from a particular view or a rendered video picture for a different view. | 06-07-2012 |
20120200669 | FILTERING AND EDGE ENCODING - Several implementations relate, for example, to depth encoding and/or filtering for 3D video (3DV) coding formats. A sparse dyadic mode for partitioning macroblocks (MBs) along edges in a depth map is provided as well as techniques for trilateral (or bilateral) filtering of depth maps that may include adaptive selection between filters sensitive to changes in video intensity and/or changes in depth. One implementation partitions a depth picture, and then refines the partitions based on a corresponding image picture. Another implementation filters a portion of a depth picture based on values for a range of pixels in the portion. For a given pixel in the portion that is being filtered, the filter weights a value of a particular pixel in the range by a weight that is based on one or more of location distance, depth difference, and image difference. | 08-09-2012 |
20120206440 | Method for Generating Virtual Images of Scenes Using Trellis Structures - An image for a virtual view of a scene is generated based on a set of texture images and a corresponding set of depth images acquired of the scene. A set of candidate depth values associated with each pixel of a selected image is determined. For each candidate depth value, a cost that estimates a synthesis quality of the virtual image is determined. The candidate depth value with a least cost is selected to produce an optimal depth value for the pixel. Then, the virtual image is synthesized based on the optimal depth value of each pixel and the texture images. | 08-16-2012 |
20120249751 | IMAGE PAIR PROCESSING - At least one implementation determines whether two cameras are in parallel or are converging, based on an automated analysis of images from the cameras. One particular implementation determines the disparity of a foreground point and a background point. If the signs of the two disparities are the same, then the particular implementation decides that the cameras are in parallel. Otherwise, the particular implementation decides that the two cameras are converging. More generally, various implementations access a first image and a second image that form a stereo image pair. Multiple features are selected that exist in the first image and in the second image. An indicator of depth is determined for each of the multiple features. It is determined whether the first camera and the second camera were arranged in a parallel arrangement or a converging arrangement based on the values of the determined depth indicators. | 10-04-2012
20120249869 | STATMUX METHOD FOR BROADCASTING - A statistical multiplexing method is provided that comprises accessing a plurality of video sequences, wherein the video sequences are each assigned to a unique channel in a common broadcast system; collecting information from a plurality of the unique channels assigned to encode the corresponding video sequences; applying rho-domain analysis to the video sequences; and determining bitrate allocation for the channels responsive to the information collected and the rho-domain analysis. | 10-04-2012
20130039597 | Comfort Noise and Film Grain Processing for 3 Dimensional Video - Noise, either in the form of comfort noise or film grain, is added to a three dimensional image in accordance with image depth information to reduce human sensitivity to coding artifacts, thereby improving subjective image quality. | 02-14-2013 |
20140184744 | DEPTH CODING - Various implementations address depth coding and related disciplines. In one particular implementation, a segmentation is determined for a particular portion of a video image in a sequence of video images. The segmentation is determined based on reference depth indicators that are associated with at least a portion of one video image in the sequence of video images. Target depth indicators associated with the particular portion of the video image are processed. The processing is based on the determined segmentation in the particular portion of the video image. In another particular implementation, a segmentation is determined for at least a given portion of a video image based on depth indicators associated with the given portion. The segmentation is extended from the given portion into a target portion of the video image based on pixel values in the given portion and on pixel values in the target portion. | 07-03-2014 |
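Several abstracts in the table above (e.g., 20110148858 and 20110157229) rest on the same core operation: warping a reference view to a virtual view using per-pixel depth, with unmapped target pixels marked as holes. The sketch below illustrates that operation for the simplest case of a purely horizontal camera baseline and 8-bit inverse-depth maps, a common convention in 3DV experiments; the function and parameter names are hypothetical and are not taken from any of the patents listed.

```python
import numpy as np

def warp_view(texture, depth, baseline, focal, z_near, z_far):
    """Forward-warp a reference view to a virtual view using per-pixel depth.

    Illustrative sketch only: assumes a purely horizontal camera shift and
    an 8-bit inverse-depth map spanning [z_near, z_far]. Names are
    hypothetical and not drawn from any patent above.
    """
    h, w = depth.shape
    warped = np.full_like(texture, -1)  # -1 marks holes (no source sample)
    # Convert 8-bit depth levels to metric depth (inverse-depth convention).
    z = 1.0 / (depth / 255.0 * (1.0 / z_near - 1.0 / z_far) + 1.0 / z_far)
    # Disparity in pixels for a horizontal baseline.
    disparity = np.round(focal * baseline / z).astype(int)
    for y in range(h):
        for x in range(w):
            xv = x - disparity[y, x]  # target column in the virtual view
            if 0 <= xv < w:
                warped[y, xv] = texture[y, x]
    return warped
```

Pixels that receive no source sample stay marked as holes; the heuristic merging and blending in 20110148858 and 20110157229 concern exactly how such holes and competing candidate pixels from multiple warped references are resolved.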
Patent application number | Description | Published |
20080227951 | Process for preparing high molecular weight polyesters - A process for producing higher molecular weight polyester includes heating a polyester to form a melt, and applying and maintaining a vacuum of between about 5 mm and about 85 mm of mercury to the melt while passing bubbles of gas through the melt until molecular weight has increased. The process may involve esterification of a diacid component and a diol component at elevated temperature. Typically, an excess of diol is employed. After the acid functional groups have essentially reacted, a vacuum of about 5 mm of mercury or less is applied and excess diol is stripped off during transesterification, thereby increasing molecular weight. | 09-18-2008
20090238980 | Variable texture floor covering - A floor covering has an exposed surface with substantially the same gloss level and at least two portions having different tactile surface characteristics. The difference in the tactile surface characteristics between the two portions is at least an average RPc of 4. The floor covering includes a substrate and a high performance coating overlying the substrate. The high performance coating comprises texture particles, which may be organic polymer particles. The floor covering is made by forming a high performance coating including the texture particles on a substrate, at least partially curing the high performance coating, and then while controlling the temperature of the high performance coating below the melting point temperature or softening point temperature of the texture particles and above the temperature at which the texture particles deform under the applied mechanical embossing pressure, subjecting the first and second portions to different mechanical embossing conditions. | 09-24-2009 |
20090274919 | Biobased Resilient Floor Tile - A biobased resilient tile includes at least one base layer, at least one film layer, and a topcoat. The base layer includes a polymeric binder and a filler. The base layer has at least about 20-95% weight of the filler and at least about 5% weight of recycled material. The film layer is supported by the base layer. The film layer is a rigid film selected from the group consisting of polyethyleneterephthalate, glycolated polyethyleneterephthalate, polybutylene terephthalate, polypropylene terephthalate, or a thermoplastic ionomer resin. The film layer includes recycled material. The topcoat is provided on the film layer. The topcoat is a radiation curable biobased coating comprising a biobased component selected from the group consisting of a biobased resin, a biobased polyol acrylate, or a biobased polyol. | 11-05-2009
20090275674 | UV/EB Curable Biobased Coating for Flooring Application - A radiation curable biobased coating, such as a UV/EB curable biobased coating, for flooring applications includes a biobased component comprising renewable and/or biobased materials. The biobased component is selected from the group consisting of a biobased resin, a biobased polyol acrylate, or a biobased polyol. The biobased component is blended with a coating formula. The coating formula includes at least one initiator. The radiation curable biobased coating contains at least about 5% weight of renewable materials or biobased content. | 11-05-2009 |
20100276059 | UVV curable coating compositions and method for coating flooring and other substrates with same - A floor covering includes a wear layer including a resin and a photoinitiator in which the composition of the wear layer is curable by radiation having the strongest wavelength in the UVV range of 400 to 450 nm. The gloss of the wear layer can be controlled by controlling the amount of flatting agent in the composition applied to the surface, the amount of power applied to the surface coated with the composition or the temperature of the surface coated with the composition when the coated surface is subjected to the UVV radiation. | 11-04-2010 |
20120015110 | Ultraviolet curable coating - Ultraviolet curable compositions are disclosed that can be applied to achieve a uniform gloss after curing even under conditions of varying ultraviolet intensity during curing, that do not require continuous agitation to keep flattening agents and other additives suspended prior to application of the composition, and/or which do not exhibit a significant increase in viscosity over time in a roll coater application. | 01-19-2012 |
20130085218 | ENERGY CURED COATING COMPOSITION AND PROCESS OF APPLYING SAME TO SUBSTRATE - Disclosed is an energy curable coating composition for roll coating, a product, and a continuous process of applying the energy curable coating composition to a substrate. The energy curable coating composition has a substantially constant viscosity and includes an energy curable resin having a plurality of texturing particles suspended therein. The substantially constant viscosity of the energy curable coating composition remains below about 1500 centipoise at approximately 15° C. to approximately 40° C., during recirculation, in a coating pan, and prior to application using a roll coating apparatus. The plurality of texturing particles provide a predetermined texture to the cured energy coating composition. The continuous process of applying the energy curable coating to a substrate can operate continuously until the energy curable coating composition is depleted from the container. | 04-04-2013
20130230729 | UV/EB CURABLE BIOBASED COATING FOR FLOORING APPLICATION - A coating composition and a floor product are disclosed. The coating composition has a biobased component that includes urethane acrylate, vinyl ether, or polyester acrylate. The coating composition includes at least about 5% by weight of renewable and/or biobased component. The coating composition is radiation curable, formed by acrylating a biobased polyol acrylate, and reacting the biobased polyol acrylate with polyisocyanate to form a biobased resin. The floor product includes a cellulosic substrate and a biobased coating applied to the cellulosic substrate. | 09-05-2013 |
20130237665 | PVC/POLYESTER BINDER FOR PRODUCTS - A product is provided having a biobased component where the biobased component includes recycle polyester resin and the recycle polyester resin includes polyethylene terephthalate, polybutylene, or polypropylene terephthalate. The biobased component includes a polyester resin where the polyester resin is the co-reaction product of an aliphatic polyester having renewable components and a recycle polyester resin. Included is a composition having a filler and a polymeric binder. The filler includes inorganic biobased filler or recycle thermoset resin based filler. | 09-12-2013 |
20130239852 | Biobased Plasticizer and Surface Covering Employing Same - A biobased plasticizer and a surface covering are disclosed. The biobased plasticizer includes an ester formed as a reaction product of a furan derivative selected from the group consisting of furoic acid, furfural and furfuryl alcohol reacted with a carboxylic acid or an alcohol, or includes an ester formed as a reaction product of a biobased aromatic compound and a biobased aliphatic compound. The surface covering is plasticized with a composition comprising an ester formed as a reaction product of a functionalized aromatic heterocyclic compound reacted with a carboxylic acid or an alcohol. | 09-19-2013 |
20130273321 | VARIABLE TEXTURE FLOOR COVERING - A floor covering has an exposed surface with substantially the same gloss level and at least two portions having different tactile surface characteristics. The difference in the tactile surface characteristics between the two portions is at least an average RPc of 4. The floor covering includes a substrate and a high performance coating overlying the substrate. The high performance coating comprises texture particles, which may be organic polymer particles. The floor covering is made by forming a high performance coating including the texture particles on a substrate, at least partially curing the high performance coating, and then while controlling the temperature of the high performance coating below the melting point temperature or softening point temperature of the texture particles and above the temperature at which the texture particles deform under the applied mechanical embossing pressure, subjecting the first and second portions to different mechanical embossing conditions. | 10-17-2013 |
20140039111 | FLOORING PRODUCT HAVING REGIONS OF DIFFERENT RECYCLE OR RENEWABLE CONTENT - A flooring product comprises a heterogeneous design layer having multiple regions wherein two regions comprise compositions having different recycle content or renewable content. The thickness of the design layer may be greater than the dimensions of the regions, or the regions may extend from the top surface to the bottom surface of the design layer. The heterogeneous layer may be composed of consolidated particles/chips which may contain a polyester binder system with a renewable component. A method for making a heterogeneous layer having a target wt % recycle content or renewable content is also disclosed. | 02-06-2014
20140102335 | BIOBASED PLASTICIZER AND SURFACE COVERING EMPLOYING SAME - Described herein are biobased plasticizer compositions comprising a compound having the structure of Formula I: | 04-17-2014 |
20140106080 | ULTRAVIOLET CURABLE COATING - Ultraviolet curable compositions are disclosed that can be applied to achieve a uniform gloss after curing even under conditions of varying ultraviolet intensity during curing, that do not require continuous agitation to keep flattening agents and other additives suspended prior to application of the composition, and/or which do not exhibit a significant increase in viscosity over time in a roll coater application. | 04-17-2014
20140135434 | POLYESTER BINDER FOR FLOORING PRODUCTS - A flooring product is provided comprising at least one layer including a polymeric binder comprising at least one thermoplastic polyester resin, wherein the polyester resin comprises at least one renewable component. A flooring product is also provided that includes at least one layer comprising filler and at least one thermoplastic, high molecular weight polyester resin. The flooring product may also qualify for at least one point under the LEED System. A composition is also provided that can be melt mixed in low intensity mixers and processed into flooring layers. | 05-15-2014 |
20140295195 | BIOBASED RESILIENT FLOOR TILE - A biobased resilient tile includes at least one base layer, at least one film layer, and a topcoat. The base layer includes a polymeric binder and a filler. The base layer has at least about 20-95% weight of the filler and at least about 5% weight of recycled material. The film layer is supported by the base layer. The film layer is a rigid film selected from the group consisting of polyethyleneterephthalate, glycolated polyethyleneterephthalate, polybutylene terephthalate, polypropylene terephthalate, or a thermoplastic ionomer resin. The film layer includes recycled material. The topcoat is provided on the film layer. The topcoat is a radiation curable biobased coating comprising a biobased component selected from the group consisting of a biobased resin, a biobased polyol acrylate, or a biobased polyol. | 10-02-2014
Patent application number | Description | Published |
20120062756 | Method and System for Processing Multiview Videos for View Synthesis Using Skip and Direct Modes - Multiview videos are acquired by overlapping cameras. Side information is used to synthesize multiview videos. A reference picture list is maintained for current frames of the multiview videos; the reference picture list indexes temporal reference pictures and spatial reference pictures of the acquired multiview videos and the synthesized reference pictures of the synthesized multiview video. Each current frame of the multiview videos is predicted according to reference pictures indexed by the associated reference picture list with a skip mode and a direct mode, whereby the side information is inferred from the synthesized reference picture. Alternatively, depth images corresponding to the multiview videos are included in the input data, and this data is encoded as part of the bitstream depending on a SKIP type. | 03-15-2012
20120206442 | Method for Generating Virtual Images of Scenes Using Trellis Structures - An image for a virtual view of a scene is generated based on a set of texture images and a corresponding set of depth images acquired of the scene. A set of candidate depth values associated with each pixel of a selected image is determined. For each candidate depth value, a cost that estimates a synthesis quality of the virtual image is determined. The candidate depth value with a least cost is selected to produce an optimal depth value for the pixel. Then, the virtual image is synthesized based on the optimal depth value of each pixel and the texture images. | 08-16-2012 |
20120206451 | Method for Enhancing Depth Images of Scenes Using Trellis Structures - An image for a virtual view of a scene is generated based on a set of texture images and a corresponding set of depth images acquired of the scene. A set of candidate depths associated with each pixel of a selected image is determined. For each candidate depth, a cost that estimates a synthesis quality of the virtual image is determined. The candidate depth with the least cost is selected to produce an optimal depth for the pixel. Then, the virtual image is synthesized based on the optimal depth of each pixel and the texture images. The method also applies first and second depth enhancements before and during view synthesis to correct errors or suppress noise due to the estimation or acquisition of the dense depth images and sparse depth features. | 08-16-2012 |
20120269458 | Method for Generating High Resolution Depth Images from Low Resolution Depth Images Using Edge Layers - A method interpolates and filters a depth image with reduced resolution to recover a high resolution depth image using edge information, wherein each depth image includes an array of pixels at locations and wherein each pixel has a depth. The reduced depth image is first up-sampled, interpolating the missing positions by repeating the nearest-neighboring depth value. Next, a moving window is applied to the pixels in the up-sampled depth image. The window covers a set of pixels centred at each pixel. The pixels covered by the window are selected according to their relative offset to the depth edge, and only pixels that are on the same side of the depth edge as the centre pixel are used for the filtering procedure. | 10-25-2012 |
20120314027 | Method and System for Processing Multiview Videos for View Synthesis Using Motion Vector Predictor List - Multiview videos are acquired by overlapping cameras. Side information is used to synthesize multiview videos. A reference picture list is maintained for current frames of the multiview videos; the reference picture list indexes temporal reference pictures and spatial reference pictures of the acquired multiview videos and the synthesized reference pictures of the synthesized multiview video. Each current frame of the multiview videos is predicted according to reference pictures indexed by the associated reference picture list with a skip mode and a direct mode, whereby the side information is inferred from the synthesized reference picture. In addition, the skip and merge modes for single view video coding are modified to support multiview video coding by generating a motion vector prediction list that also considers neighboring blocks associated with synthesized reference pictures. | 12-13-2012 |
20130162773 | COMPRESSION METHODS AND APPARATUS FOR OCCLUSION DATA - Methods and apparatuses for coding occlusion layers, such as occlusion video data and occlusion depth data in 3D video, are disclosed. A decoding method comprising the steps of: extracting an indicator representative of an original format for received occlusion data, the original format selected from one of a sparse occlusion data format and a filled occlusion data format; decoding the received occlusion data to produce decoded occlusion data; and, when the indicator indicates the original format as a filled occlusion data format, converting the decoded occlusion data from a sparse occlusion data format to the filled occlusion data format, the converting further including: replacing non-occlusion area data, which is represented with a defined characteristic, by respective collocated samples from 2D data in the video data frame associated with the occlusion data; outputting the decoded occlusion data and, when present, converted decoded occlusion data. | 06-27-2013 |
20130162774 | COMPRESSION METHODS AND APPARATUS FOR OCCLUSION DATA - Methods and apparatuses for coding occlusion layers, such as occlusion video data and occlusion depth data in 3D video, are disclosed. A decoding method comprising the steps of: extracting an indicator representative of an original format for received occlusion data, the original format selected from one of a sparse occlusion data format and a filled occlusion data format; arranging 2D data, which is associated with said occlusion data, at a location after temporal and inter-view pictures in a reference picture list; identifying at least one of an occlusion area macroblock and a non-occlusion area macroblock for the occlusion data; decoding said occlusion data to produce decoded occlusion data, wherein said decoding includes: for each non-occlusion macroblock, when said indicator indicates the filled occlusion data format, replacing the occlusion data in said non-occlusion macroblock with a corresponding macroblock of associated 2D data to produce decoded occlusion data; and when said indicator indicates the sparse occlusion data format, filling said non-occlusion macroblock with data indicative of a defined characteristic to produce decoded occlusion data; and otherwise for each occlusion macroblock, decoding said occlusion macroblock to produce decoded occlusion data; and outputting the decoded occlusion data. | 06-27-2013 |
20130176394 | COMPRESSION METHODS AND APPARATUS FOR OCCLUSION DATA - Methods and apparatus for coding occlusion layers, such as occlusion video data and occlusion depth data in 3D video, are disclosed. A decoding method comprising the steps of: extracting an indicator representative of an original format for received occlusion data, the original format selected from one of a sparse occlusion data format and a filled occlusion data format; arranging 2D data, which is associated with the occlusion data, at location | 07-11-2013 |
20130194511 | REMOTE CONTROL DEVICE FOR 3D VIDEO SYSTEM - A remote control device is operative to enable and facilitate user control of video systems that are operative to provide one or more three-dimensional (3D) viewing effects. According to an exemplary embodiment, the remote control device includes a user input terminal having an input element operative to receive user inputs to adjust at least one of a volume setting and a channel setting of a video system, and further operative to receive user inputs to adjust a three-dimensional (3D) viewing effect of the video system. A transmitter is operative to transmit control signals to the video system in response to the user inputs. | 08-01-2013 |
20130201177 | Method for Modeling and Estimating Rendering Errors in Virtual Images - A quality of a virtual image for a synthetic viewpoint in a 3D scene is determined. The 3D scene is acquired by texture images, and each texture image is associated with a depth image acquired by a camera arranged at a real viewpoint. A texture noise power is based on the acquired texture images and reconstructed texture images corresponding to a virtual texture image. A depth noise power is based on the depth images and reconstructed depth images corresponding to a virtual depth image. The quality of the virtual image is based on a combination of the texture noise power and the depth noise power, and the virtual image is rendered from the reconstructed texture images and the reconstructed depth images. | 08-08-2013 |
20130202194 | Method for generating high resolution depth images from low resolution depth images using edge information - A method interpolates and filters a depth image with reduced resolution to recover a high resolution depth image using edge information, wherein each depth image includes an array of pixels at locations and wherein each pixel has a depth. The reduced depth image is first up-sampled, interpolating the missing positions by repeating the nearest-neighboring depth value. Next, a moving window is applied to the pixels in the up-sampled depth image. The window covers a set of pixels centred at each pixel. The pixels covered by the window are selected according to their relative position to the edge, and only pixels that are on the same side of the edge as the centre pixel are used for the filtering procedure. A single representative depth from the set of selected pixels in the window is assigned to the pixel to produce a processed depth image. | 08-08-2013 |
20130287289 | Synthetic Reference Picture Generation - A synthetic image block in a synthetic picture is generated for a viewpoint based on a texture image and a depth image. A subset of samples from the texture image are warped to the synthetic image block. Disoccluded samples are marked, and the disoccluded samples in the synthetic image block are filled based on samples in a constrained area. The method and system enables both picture level and block level processing for synthetic reference picture generation. The method can be used for power limited devices, and can also refine the synthetic reference picture quality at a block level to achieve coding gains. | 10-31-2013 |
20140092208 | Method and System for Backward 3D-View Synthesis Prediction using Neighboring Blocks - Videos of a scene are processed for view synthesis. The videos are acquired by corresponding cameras arranged so that a view of each camera overlaps with the view of at least one other camera. For each current block, a motion or disparity vector is obtained from neighboring blocks. A depth block is derived based on a corresponding reference depth image and the motion or disparity vector. A prediction block is generated based on the depth block using backward warping. Then, predictive coding is applied to the current block using the prediction block. | 04-03-2014 |
20140092210 | Method and System for Motion Field Backward Warping Using Neighboring Blocks in Videos - Videos of a scene are processed for view synthesis. The videos are acquired by corresponding cameras arranged so that a view of each camera overlaps with the view of at least one other camera. For each current block, a motion or disparity vector is obtained from neighboring blocks. A depth block is derived based on a corresponding reference depth image and the motion or disparity vector. A prediction block is generated based on the depth block using backward warping of a motion field. Then, predictive coding is applied to the current block using the prediction block. Backward mapping can also be performed in the spatial domain. | 04-03-2014 |
20140147031 | Disparity Estimation for Misaligned Stereo Image Pairs - A disparity vector for a pixel in a right image corresponding to a pixel in a left image in a pair of stereo images is determined. The disparity vector is based on a horizontal disparity and a vertical disparity and the pair of stereo images is unrectified. First, a set of candidate horizontal disparities is determined. For each candidate horizontal disparity, a cost associated with a particular horizontal disparity and corresponding vertical disparities is determined. The vertical disparity associated with a first optimal cost is assigned to each candidate horizontal disparity, so that the candidate horizontal disparity and the vertical disparity yield a candidate disparity vector. Lastly, the candidate disparity vector with a second optimal cost is selected as the disparity vector of the pixel in the right image. | 05-29-2014 |
20140219330 | Method and System for Encoding Collections of Images and Videos - An input segment of an input video is encoded by first extracting and storing, for each segment of previously encoded videos, a set of reference features. A set of input features extracted from the input segment is matched with each set of reference features to produce a set of scores. The reference segments having the largest scores are selected to produce a first reduced set of reference segments. A rate-distortion cost is estimated for each reference segment in the first reduced set. Reference segments in the first reduced set are then selected to produce a second reduced set of reference segments. Finally, the input segment is encoded based on the second reduced set of reference segments. | 08-07-2014 |
20140301479 | TILING IN VIDEO ENCODING AND DECODING - Implementations are provided that relate, for example, to view tiling in video encoding and decoding. A particular method includes accessing a video picture that includes multiple pictures combined into a single picture ( | 10-09-2014 |
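Several of the listed abstracts describe concrete algorithms that are easy to illustrate. The up-sampling and edge-aware filtering of applications 20120269458 and 20130202194 can be sketched as follows. This is a minimal sketch, not the claimed method: the nearest-neighbor repeat and the same-side test via a simple depth-difference threshold (`edge_thresh`) are assumptions standing in for the patents' edge-layer and edge-information logic.

```python
import numpy as np

def upsample_depth(depth_lr, factor):
    """Nearest-neighbor up-sampling: repeat each depth value factor x factor times."""
    return np.repeat(np.repeat(depth_lr, factor, axis=0), factor, axis=1)

def edge_aware_filter(depth, radius=1, edge_thresh=10):
    """For each pixel, take the median over window pixels judged to be on the
    same side of a depth edge, here approximated as |neighbor - centre| < edge_thresh."""
    h, w = depth.shape
    out = depth.copy()
    for y in range(h):
        for x in range(w):
            y0, y1 = max(0, y - radius), min(h, y + radius + 1)
            x0, x1 = max(0, x - radius), min(w, x + radius + 1)
            win = depth[y0:y1, x0:x1].ravel()
            # keep only same-side pixels, then take a single representative depth
            same_side = win[np.abs(win.astype(int) - int(depth[y, x])) < edge_thresh]
            out[y, x] = np.median(same_side)
    return out
```

Because only same-side pixels enter the median, a sharp foreground/background boundary survives the filtering instead of being blurred, which is the point of the edge-layer approach.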
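The per-pixel candidate-depth selection described in applications 20120206442 and 20120206451 (pick the candidate depth whose implied warp gives the lowest synthesis cost) can be sketched in a few lines. The cost used here, an absolute intensity difference between the pixel and its warped position in the other view, and the `focal_baseline` disparity model are hypothetical stand-ins; the patents' trellis-based cost is not reproduced.

```python
import numpy as np

def best_depth(pixel_x, y, left_tex, right_tex, candidates, focal_baseline):
    """Return the candidate depth with the least (hypothetical) synthesis cost."""
    costs = []
    for z in candidates:
        d = int(round(focal_baseline / z))   # disparity implied by depth z
        xr = pixel_x - d                     # warped position in the other view
        if 0 <= xr < right_tex.shape[1]:
            cost = abs(int(left_tex[y, pixel_x]) - int(right_tex[y, xr]))
        else:
            cost = float('inf')              # warped outside the view: reject
        costs.append(cost)
    return candidates[int(np.argmin(costs))]
```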
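The two-stage search of application 20140147031 (for each candidate horizontal disparity, find the vertical disparity with the first optimal cost, then pick the pair with the second optimal cost) can be sketched as below. The absolute-intensity-difference cost and the search ranges are assumptions for illustration only.

```python
import numpy as np

def estimate_disparity(left, right, x, y, max_dx=4, max_dy=1):
    """Return (dx, dy) for the pixel (x, y) of an unrectified stereo pair."""
    h, w = right.shape
    best, best_cost = (0, 0), float('inf')
    for dx in range(max_dx + 1):
        # first optimization: best vertical disparity for this horizontal candidate
        dy_best, dy_cost = 0, float('inf')
        for dy in range(-max_dy, max_dy + 1):
            xr, yr = x - dx, y - dy
            if 0 <= xr < w and 0 <= yr < h:
                c = abs(int(left[y, x]) - int(right[yr, xr]))
                if c < dy_cost:
                    dy_best, dy_cost = dy, c
        # second optimization: best (dx, dy) pair overall
        if dy_cost < best_cost:
            best, best_cost = (dx, dy_best), dy_cost
    return best
```

The vertical search is what distinguishes this from rectified-pair matching: because the pair is unrectified, the match for a pixel may sit one or more rows away, so each horizontal candidate carries its own best vertical offset into the final comparison.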