Patent application number | Description | Published |
20110043524 | METHOD AND SYSTEM FOR CONVERTING A 3D VIDEO WITH TARGETED ADVERTISEMENT INTO A 2D VIDEO FOR DISPLAY - A video receiver receives a compound transport stream (TS) comprising 3D program video streams and spliced advertising streams. The received one or more 3D program video streams are extracted and decoded. Targeted advertising streams are extracted from the received advertising streams according to user criteria. Targeted advertising graphic objects of the extracted or replaced targeted advertising streams are spliced into the decoded 3D program video streams. The decoded 3D program video with the spliced targeted advertising graphic objects is presented in a 2D video. The extracted or replaced targeted advertising streams are processed to generate the targeted advertising graphic objects to be spliced based on focal point of view. The generated targeted advertising graphic objects are located according to associated scene graph information. The decoded 3D program video streams and the spliced targeted advertising graphic objects are converted into a 2D video for display. | 02-24-2011 |
20110058016 | METHOD AND SYSTEM FOR PROCESSING 2D/3D VIDEO - A video processor decompresses stereoscopic left and right reference frames of compressed 3D video. New left and right frames are interpolated. The frames may be stored and/or communicated for display. The left and right frames are combined into a single frame of a single stream or may be sequenced in separate left and right streams. The left and right frames are interpolated based on the combined single stream and/or based on the separate left and right streams. Motion vectors are determined for one of the separate left or right streams. The frames are interpolated utilizing motion compensation. Areas of occlusion are determined in the separate left and right streams. Pixels are interpolated for occluded areas of left or right frames of separate streams from uncovered areas in corresponding opposite side frames. The left and right interpolated and/or reference frames are displayed as 3D and/or 2D video. | 03-10-2011 |
20110063298 | METHOD AND SYSTEM FOR RENDERING 3D GRAPHICS BASED ON 3D DISPLAY CAPABILITIES - A first 3D graphics and/or 3D video processing device generates left and right view 3D graphics frames comprising 3D content which are communicated to a 3D display device for display. The 3D frames are generated based on a display format utilized by the 3D display device. The first 3D device may comprise a set-top-box and/or computer. The left and/or right 3D graphics frames may be generated based on time sequential display and/or polarizing display. Sub-sampling 3D graphics frames may be based on odd and even row display polarization patterns and/or checkerboard polarization patterns. Left and right 3D graphics pixels may be blended with video pixels. Left and/or right 3D graphics frames may be displayed sequentially in time. Left and/or right 3D graphics frames may be sub-sampled in complementary pixel patterns, interleaved in a single frame and displayed utilizing varying polarization orientations for left and right pixels. | 03-17-2011 |
20110063414 | METHOD AND SYSTEM FOR FRAME BUFFER COMPRESSION AND MEMORY RESOURCE REDUCTION FOR 3D VIDEO - A video receiver receives a compressed 3D video comprising a base view video and a residual view video from a video transmitter. The video receiver decodes the received base view video and an enhancement view video of the received compressed 3D video into a left view video and a right view video. Base view pictures are generated selectively based on available memory resource. The residual view video is generated by subtracting base view pictures from corresponding enhancement view pictures. The received base view and residual view videos are buffered for video decoding. Pictures in the buffered residual view video are added to corresponding pictures in the buffered base view video for enhancement view decoding. The left view video and/or the right view video are generated from the resulting decoded base view and enhancement view pictures. A motion vector used for a disparity predicted macroblock is applied to adjacent macroblock pre-fetching. | 03-17-2011 |
20110064220 | METHOD AND SYSTEM FOR PROTECTING 3D VIDEO CONTENT - A video receiver receives a compressed 3D video comprising a base view video and an enhancement view video. The base view video and the enhancement view video are encrypted using the same encryption engine and buffered into corresponding coded data buffers (CDBs), respectively. The buffered base view and enhancement view videos are decrypted using the same decryption engine corresponding to the encryption engine. The decrypted base view and enhancement view videos are decoded for viewing. The video receiver is also operable to encrypt video content of the received compressed 3D video according to corresponding view information and/or coding layer information. The resulting encrypted video content and unencrypted video content of the received compressed 3D video are buffered into corresponding CDBs, respectively. The buffered encrypted video content is decrypted and decoded together with the buffered unencrypted video content of the received compressed 3D video for viewing. | 03-17-2011 |
20110064262 | METHOD AND SYSTEM FOR WATERMARKING 3D CONTENT - A video transmitter identifies regions in pictures in a compressed three-dimensional (3D) video comprising a base view video and an enhancement view video. The identified regions are not referenced by other pictures in the compressed 3D video. The identified regions are watermarked. Pictures such as a high layer picture in the base view video and the enhancement view video are identified for watermarking. The identified regions in the base view and/or enhancement view videos are watermarked and multiplexed into a transport stream for transmission. An intended video receiver extracts the base view video, the enhancement view video and corresponding watermark data from the received transport stream. The corresponding extracted watermark data are synchronized with the extracted base view video and the extracted enhancement view video, respectively, for watermark insertion. The resulting base view and enhancement view videos are decoded into a left view video and a right view video, respectively. | 03-17-2011 |
20110080948 | METHOD AND SYSTEM FOR 3D VIDEO DECODING USING A TIER SYSTEM FRAMEWORK - A video receiver receives a layered and predicted compressed 3D video comprising a base view video and an enhancement view video. A portion of pictures in the received compressed 3D video are selected to be decoded for display at an intended pace. Pictures in the received compressed 3D video are generated based on a tier system framework with tiers ordered hierarchically according to corresponding decodability. Each picture in the base view and enhancement view videos belongs to one of the plurality of tiers. A picture in a particular tier does not depend directly or indirectly on pictures in a higher tier. Each tier comprises one or more pictures with the same coding order. The video receiver decodes the pictures with the same coding order in parallel, and adaptively decodes the selected pictures according to corresponding coding layer information. The selected pictures are determined based on a particular display rate. | 04-07-2011 |
20110081133 | METHOD AND SYSTEM FOR A FAST CHANNEL CHANGE IN 3D VIDEO - A video receiver receives a compressed 3D video comprising a base view video and an enhancement view video. The video receiver determines a random access that occurs at a two-view misaligned base view RAP to start decoding activities on the received compressed 3D video based on a corresponding two-view aligned random access point (RAP). The corresponding two-view aligned RAP is adjacent to the two-view misaligned base view RAP. Pictures in the received compressed 3D video are buffered for the two-view misaligned base view RAP to be decoded starting from the corresponding two-view aligned RAP. One or more pictures in the enhancement view video are interpolated based on the two-view misaligned base view RAP. The video receiver selects a portion of the buffered pictures to be decoded to facilitate a trick mode in personal video recording (PVR) operations for random access at the two-view misaligned RAP. | 04-07-2011 |
20110085023 | Method And System For Communicating 3D Video Via A Wireless Communication Link - A first video processing device, for example, a set-top-box, receives and decodes left and right video streams and generates left and right graphics streams. The left and right video streams and left and right graphics streams are compressed and wirelessly communicated to a second video processing device, for example, a 3D and/or 2D television. The graphics streams are generated by a graphics processor on the first video processing device utilizing stored and/or received graphics information. The second video processing device wirelessly receives and decompresses the video and graphics. Blending of the left video with graphics and/or blending the right video with graphics may be done prior to wireless communication by the first video processing device or after wireless reception by the second video processing device. The second video processing device displays the blended left video and graphics and/or the blended right video and graphics. | 04-14-2011 |
20110096146 | Method and system for response time compensation for 3D video processing - A sequential pattern comprising contiguous black frames inserted between left and right 3D video and/or graphics frames may be displayed on an LCD display. The pattern may comprise two or three contiguous left frames followed by contiguous black frames followed by two or three contiguous right frames followed by contiguous black frames. The left and/or right frames may comprise interpolated frames and/or may be displayed in ascending order. The contiguous black frames are displayed longer than liquid crystal response time. 3D shutter glasses are synchronized with the black frames. A left lens transmits light when left frames followed by contiguous black frames are displayed and a right lens transmits light when right frames followed by contiguous black frames are displayed. A 3D pair of 24 Hz frames or two 3D pairs of 60 Hz frames per pattern are displayed on a 240 Hz display. | 04-28-2011 |
20110096151 | METHOD AND SYSTEM FOR NOISE REDUCTION FOR 3D VIDEO CONTENT - A video processing system receives left and right 3D video and/or graphics frames and generates noise reduced left 3D video, right 3D video and/or graphics frames based on parallax compensated left and right frames. Displacement of imagery and/or pixel structures is determined relative to opposite side left and/or right frames. Parallax vectors are determined for parallax compensated left 3D video, right 3D video and/or graphics frames. A search area for displacement may be bounded by parallax limitations. Left 3D frames may be blended with the parallax compensated right 3D frames. Right 3D frames may be blended with the parallax compensated left 3D frames. The left 3D video, right 3D video and/or graphics frames comprise images that are captured, representative of and/or are displayed at a same time instant or at different time instants. Motion estimation, motion adaptation and/or motion compensation techniques may be utilized with parallax techniques. | 04-28-2011 |
20110115883 | Method And System For Adaptive Viewport For A Mobile Device Based On Viewing Angle - A 2D and/or 3D video processing device comprising a camera and a display captures images of a viewer as the viewer observes displayed 2D and/or 3D video content in a viewport. Face and/or eye tracking of viewer images is utilized to generate a different viewport. Current and different viewports may comprise 2D and/or 3D video content from a single source or from different sources. The sources of 2D and/or 3D content may be scrolled, zoomed and/or navigated through for generating the different viewport. Content for the different viewport may be processed. Images of a viewer's positions, angles and/or movements of face, facial expression, eyes and/or physical gestures are captured by the camera and interpreted by face and/or eye tracking. The different viewport may be generated for navigating through 3D content and/or for rotating a 3D object. The 2D and/or 3D video processing device communicates via wired, wireless and/or optical interfaces. | 05-19-2011 |
20110149019 | METHOD AND SYSTEM FOR ENHANCED 2D VIDEO DISPLAY BASED ON 3D VIDEO INPUT - A video processing device may generate a two dimensional (2D) output video stream from a three dimensional (3D) input video stream that comprises a plurality of view sequences. The plurality of view sequences may comprise sequences of stereoscopic left and right reference fields or frames. A view sequence may initially be selected as a base sequence for the 2D output video stream, and the 2D output video stream may be enhanced using video content and/or information from unselected view sequences. The video content and/or information utilized in enhancing the 2D output video stream may comprise depth information, and/or foreground and/or background information. The enhancement of the 2D output video stream may comprise improving depth, contrast, sharpness, and/or rate upconversion using frame and/or field based interpolation of images in the 2D output video stream. | 06-23-2011 |
20110149020 | METHOD AND SYSTEM FOR VIDEO POST-PROCESSING BASED ON 3D DATA - A media player may read three-dimensional (3D) video data comprising a plurality of view sequences of frames or fields from a media storage device, and may decimate one or more of the view sequences to enable transferring the video data to a display device. The media player may determine operational parameter(s) and/or transfer limitation(s) of a connecting subsystem used to transfer the video data to the display device. The decimation may be performed based on this determination of transfer limitation(s). The decimation may be performed temporally and/or spatially. The plurality of view sequences may comprise sequences of stereoscopic left and right view reference frames or fields. The decimation may be performed such that the removed data for each view sequence may be reconstructed, after reception, based on remaining data in the same view sequence and/or video data of other corresponding view sequences. | 06-23-2011 |
20110149021 | METHOD AND SYSTEM FOR SHARPNESS PROCESSING FOR 3D VIDEO - A video processing device may enhance sharpness of one or more of a plurality of view sequences extracted from a three dimensional (3D) input video stream. The plurality of extracted view sequences may comprise stereoscopic left and right view sequences of reference fields or frames. The sharpness enhancement processing may be performed based on sharpness related video information, which may be derived from other sequences in the plurality of view sequences, user input, embedded control data, and/or preconfigured parameters. The sharpness related video information may enable classifying images in the 3D input video streams into different regions, and may comprise depth related data and/or point-of-focus related data. Sharpness enhancement processing may be performed variably on background and foreground regions, and/or on in-focus or out-of-focus regions. A 3D output video stream for display may be generated from the plurality of view sequences based on the sharpness processing. | 06-23-2011 |
20110149022 | METHOD AND SYSTEM FOR GENERATING 3D OUTPUT VIDEO WITH 3D LOCAL GRAPHICS FROM 3D INPUT VIDEO - A video processing device may extract a plurality of view sequences from a three-dimensional (3D) input video stream and generate a plurality of graphics sequences that correspond to local graphics content. Each of the plurality of graphics sequences may be blended with a corresponding view sequence from the extracted plurality of view sequences to generate a plurality of combined sequences. The local graphics content may comprise on-screen display (OSD) graphics, and may initially be generated as two-dimensional (2D) graphics. The plurality of graphics sequences may be generated from the local graphics content, based on, for example, video information for the input 3D video stream, user input, and/or preconfigured conversion data. After blending the view sequences with the graphics sequences, the video processing device may generate a 3D output video stream. The generated 3D output video stream may then be transformed to a 2D video stream if 3D playback is not available. | 06-23-2011 |
20110149028 | METHOD AND SYSTEM FOR SYNCHRONIZING 3D GLASSES WITH 3D VIDEO DISPLAYS - 3D glasses may communicate with a video device that is used for playback of 3D video content to determine an operating mode used during the 3D video content playback and to synchronize viewing operations via the 3D glasses during the 3D video content playback based on the determined operating mode. Exemplary operating modes include polarization mode or shutter mode. The 3D video content may comprise stereoscopic left and right views. Polarization of the 3D glasses may be synchronized to polarization of the right and left views in polarization mode; whereas shuttering of the 3D glasses may be synchronized to the frequency of alternating rendering of right and left views in shuttering mode. Synchronization of the 3D glasses may be performed prior to start of the 3D video content playback and/or dynamically during the 3D video content playback. The 3D glasses may communicate with the video device via wireless interfaces. | 06-23-2011 |
20110149029 | METHOD AND SYSTEM FOR PULLDOWN PROCESSING FOR 3D VIDEO - A video processing device may perform pulldown when generating an output video stream that corresponds to received input 3D video stream. The pulldown may be performed based on determined native characteristics of the received input 3D video stream and display parameters corresponding to display device used for presenting the generated output video stream. The native characteristics of the received input 3D video stream may comprise film mode, which may be used to determine capture frame rate. The display parameters may comprise scan mode and/or display frame rate. A left view or a right view frame in every group of frames in the input 3D video stream comprising two consecutive left view frames and corresponding two consecutive right view frames may be duplicated when the input 3D video stream comprises a film mode with 24 fps capture frame rate and the display device uses 60 Hz progressive scanning. | 06-23-2011 |
20110149040 | METHOD AND SYSTEM FOR INTERLACING 3D VIDEO - A video processing device may generate and/or capture a plurality of view sequences of video frames, decimate at least some of the plurality of view sequences, and may generate a three-dimensional (3D) video stream comprising the plurality of view sequences based on that decimation. The decimation may be achieved by converting one or more of the plurality of view sequences from progressive to interlaced video. The interlacing may be performed by removing top or bottom fields in each frame of those one or more view sequences during the conversion to interlaced video. The removed fields may be selected based on corresponding conversion to interlaced video of one or more corresponding view sequences. The video processing device may determine bandwidth limitations existing during direct and/or indirect transfer or communication of the generated 3D video stream. The decimation may be performed based on this determination of bandwidth limitations. | 06-23-2011 |
20110150355 | METHOD AND SYSTEM FOR DYNAMIC CONTRAST PROCESSING FOR 3D VIDEO - A video processing device may enhance contrast of one or more of a plurality of view sequences extracted from a three dimensional (3D) input video stream based on contrast information derived from other sequences in the plurality of view sequences. The view sequences that are subjected to contrast enhancement and/or whose contrast information may be utilized during contrast enhancement may be selected based on one or more selection criteria, which may comprise compression bitrate utilized during communication of the input video stream. The video processing device may also perform noise reduction on one or more of the plurality of extracted view sequences during contrast enhancement operations. Noise reduction may be performed using digital noise reduction (DNR). The noise reduction may be performed separately and/or independently on each view sequence in the plurality of extracted view sequences. | 06-23-2011 |
20130235157 | METHOD AND SYSTEM FOR FRAME BUFFER COMPRESSION AND MEMORY RESOURCE REDUCTION FOR 3D VIDEO - A video receiver receives a compressed 3D video comprising a base view video and a residual view video from a video transmitter. The video receiver decodes the received base view video and an enhancement view video of the received compressed 3D video into a left view video and a right view video. Base view pictures are generated selectively based on available memory resource. The residual view video is generated by subtracting base view pictures from corresponding enhancement view pictures. The received base view and residual view videos are buffered for video decoding. Pictures in the buffered residual view video are added to corresponding pictures in the buffered base view video for enhancement view decoding. The left view video and/or the right view video are generated from the resulting decoded base view and enhancement view pictures. A motion vector used for a disparity predicted macroblock is applied to adjacent macroblock pre-fetching. | 09-12-2013 |
20130272566 | Method and System for Watermarking 3D Content - A video transmitter identifies regions in pictures in a compressed three-dimensional (3D) video comprising a base view video and an enhancement view video. The identified regions are not referenced by other pictures in the compressed 3D video. The identified regions are watermarked. Pictures such as a high layer picture in the base view video and the enhancement view video are identified for watermarking. The identified regions in the base view and/or enhancement view videos are watermarked and multiplexed into a transport stream for transmission. An intended video receiver extracts the base view video, the enhancement view video and corresponding watermark data from the received transport stream. The corresponding extracted watermark data are synchronized with the extracted base view video and the extracted enhancement view video, respectively, for watermark insertion. The resulting base view and enhancement view videos are decoded into a left view video and a right view video, respectively. | 10-17-2013 |
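Several of the abstracts above (e.g. application 20110149019) describe pulldown for presenting 24 fps stereoscopic film content on a 60 Hz progressive display by duplicating frames per view. A minimal sketch of the classic 3:2 cadence applied independently to each eye's sequence follows; the function name and the list-of-frames representation are illustrative assumptions, not the claimed method itself:

```python
def pulldown_3_2(frames_24fps):
    """Classic 3:2 (2:3) pulldown: each consecutive pair (A, B) of
    24 fps frames is repeated as (A, A, A, B, B), so 2 input frames
    yield 5 output frames, i.e. 24 fps becomes 60 fps."""
    out = []
    for i in range(0, len(frames_24fps) - 1, 2):
        a, b = frames_24fps[i], frames_24fps[i + 1]
        out.extend([a, a, a, b, b])
    return out

# Hypothetical stereoscopic content: the cadence is applied to each
# view sequence separately, keeping left/right pairs aligned.
left_24 = ["L0", "L1", "L2", "L3"]
right_24 = ["R0", "R1", "R2", "R3"]
left_60 = pulldown_3_2(left_24)    # 4 frames -> 10 frames
right_60 = pulldown_3_2(right_24)  # 4 frames -> 10 frames
```

Because the same cadence is applied to both views, corresponding left and right output frames remain time-aligned, which is what allows a 60 Hz progressive display (or shutter-glass system) to present them as matched stereo pairs.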