Patent application number | Description | Published |
20110194024 | CONTENT ADAPTIVE AND ART DIRECTABLE SCALABLE VIDEO CODING - Systems, methods and articles of manufacture are disclosed for performing scalable video coding. In one embodiment, non-linear functions are used to predict source video data using retargeted video data. Differences may be determined between the predicted video data and the source video data. The retargeted video data, the non-linear functions, and the differences may be jointly encoded into a scalable bitstream. The scalable bitstream may be transmitted and selectively decoded to produce output video for one of a plurality of predefined target platforms. | 08-11-2011 |
20110261050 | Intermediate View Synthesis and Multi-View Data Signal Extraction - An intermediate view synthesis apparatus for synthesizing an intermediate view image from a first image corresponding to a first view and a second image corresponding to a second view different from the first view, the first and second images including depth information, wherein the second image is divided up into a non-boundary portion and a foreground/background boundary region, and wherein the intermediate view synthesis apparatus is configured to project and merge the first image and the second image into the intermediate view to obtain an intermediate view image, treating the foreground/background boundary region as subordinate to the non-boundary portion. A multi-view data signal extraction apparatus for extracting a multi-view data signal from a multi-view representation including a first image corresponding to a first view and a second image corresponding to a second view different from the first view is also described, the first and second images including depth information. | 10-27-2011 |
20140028695 | IMAGE AESTHETIC SIGNATURES - The disclosure provides an approach for determining transducer functions for mapping objective image attribute values to estimated subjective attribute values. The approach includes determining objective attribute values for each of one or more aesthetic attributes for each image in a first set of images. The approach further includes determining, for each aesthetic attribute, a mapping from the objective attribute values to respective estimated subjective attribute values based on the objective attribute values and corresponding experimentally-determined attribute values. Using the determined mappings, aesthetic signatures, which include estimates of subjective image aesthetics across multiple dimensions, may be generated. | 01-30-2014 |
20150063709 | METHODS AND SYSTEMS OF DETECTING OBJECT BOUNDARIES - Methods and systems described herein detect object boundaries in videos. Inconsistencies in image patches over a temporal window are detected, and each pixel of an image frame of a video is assigned an object boundary probability. A window around each pixel is followed across adjacent image frames to distinguish object boundaries from texture edges: the pixel may belong to a texture edge if the window content does not change throughout the adjacent image frames, or the pixel may belong to an object boundary if the window content changes. A probability value indicating the likelihood of the pixel belonging to an object boundary is determined based on the window content change and is assigned to the corresponding pixel. | 03-05-2015 |
20150193950 | SIMULATING COLOR DIFFUSION IN A GRAPHICAL DISPLAY - As described herein, an electronic device with a display screen may simulate the color diffusion that occurs in a physical painting process. For instance, the user may perform one or more actions that simulate a brushstroke on the display screen such as swiping a touch-sensitive area or dragging a cursor across the screen. The electronic device then calculates a geodesic distance between a pixel inside a region defined by the brushstroke and a pixel located outside this region based on the physical distance between the two pixels and a weighting factor that varies depending on whether an image boundary is between the two pixels. Based on the geodesic distance, the electronic device uses a color diffusion relationship that defines the effect of the color of the brushstroke on the pixel and a time delay controlling when the color of the brushstroke reaches the pixel in order to simulate color diffusion. | 07-09-2015 |
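The color-diffusion abstract above (20150193950) combines two pieces: a geodesic distance between pixels that penalizes paths crossing an image boundary, and a time delay that makes the brushstroke color reach distant pixels later. A minimal sketch of that idea follows; the grid representation, `boundary_penalty`, and the linear ramp-in are illustrative assumptions, not the patented formulation.

```python
import heapq

def geodesic_distances(width, height, boundary_edges, seeds, boundary_penalty=10.0):
    """Dijkstra over the pixel grid: each step costs 1, plus a penalty when
    the step crosses an image boundary (hypothetical weighting factor)."""
    INF = float("inf")
    dist = {(x, y): INF for x in range(width) for y in range(height)}
    heap = []
    for s in seeds:  # pixels inside the region defined by the brushstroke
        dist[s] = 0.0
        heapq.heappush(heap, (0.0, s))
    while heap:
        d, (x, y) = heapq.heappop(heap)
        if d > dist[(x, y)]:
            continue  # stale queue entry
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= nx < width and 0 <= ny < height:
                step = 1.0
                if frozenset({(x, y), (nx, ny)}) in boundary_edges:
                    step += boundary_penalty  # crossing a boundary costs more
                if d + step < dist[(nx, ny)]:
                    dist[(nx, ny)] = d + step
                    heapq.heappush(heap, (d + step, (nx, ny)))
    return dist

def diffused_color(base, stroke, dist, t, speed=1.0):
    """Blend the stroke color into a pixel only after the diffusion front
    (t * speed) has covered the pixel's geodesic distance: farther pixels
    change later, simulating the time delay described in the abstract."""
    if t * speed < dist:
        return base  # the color has not arrived yet
    w = min(1.0, (t * speed - dist) / 5.0)  # simple ramp-in, assumed shape
    return tuple((1 - w) * b + w * s for b, s in zip(base, stroke))
```

Because boundary-crossing steps are expensive, color "pools" inside regions delimited by image boundaries before leaking across them, which matches the painting metaphor in the abstract.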
Patent application number | Description | Published |
20110109720 | STEREOSCOPIC EDITING FOR VIDEO PRODUCTION, POST-PRODUCTION AND DISPLAY ADAPTATION - Systems, methods and articles of manufacture are disclosed for stereoscopically editing video content. In one embodiment, image pairs of a sequence may be stereoscopically modified by altering at least one image of the image pair. The at least one image may be altered using at least one mapping function. The at least one image may also be altered based on a saliency of the image pair. The at least one image may also be altered based on disparities between the image pair. Advantageously, stereoscopic properties of video content may be edited more conveniently and efficiently. | 05-12-2011 |
20130342758 | VIDEO RETARGETING USING CONTENT-DEPENDENT SCALING VECTORS - Techniques are disclosed for retargeting images. The techniques include receiving one or more input images, computing a two-dimensional saliency map based on the input images in order to determine one or more visually important features associated with the input images, projecting the saliency map horizontally and vertically to create at least one of a horizontal and vertical saliency profile, and scaling at least one of the horizontal and vertical saliency profiles. The techniques further include creating an output image based on the scaled saliency profiles. Low saliency areas are scaled non-uniformly while high saliency areas are scaled uniformly. Temporal stability is achieved by filtering the horizontal resampling pattern and the vertical resampling pattern over time. Image retargeting is achieved with greater efficiency and lower compute power, resulting in a retargeting architecture that may be implemented in a circuit suitable for mobile applications such as mobile phones and tablet computers. | 12-26-2013 |
20140104380 | EFFICIENT EWA VIDEO RENDERING - Techniques are disclosed for rendering images. The techniques include receiving an input image associated with a source space, the input image comprising a plurality of source pixels, and applying an adaptive transformation to a source pixel, where the adaptive transformation maps the source pixel to a target space associated with an output image comprising a plurality of target pixels. The techniques further include determining a target pixel affected by the source pixel based on the adaptive transformation. The techniques further include writing the transformed source pixel into a location in the output image associated with the target pixel. | 04-17-2014 |
20140146235 | PRACTICAL TEMPORAL CONSISTENCY FOR VIDEO APPLICATIONS - A video sequence having a plurality of frames is received. A feature in a first frame from the plurality of frames and a first position of the feature in the first frame are detected. The position of the feature in a second frame from the plurality of frames is estimated to determine a second position. A displacement vector between the first position and the second position is also computed. A plurality of content characteristics is determined for the first frame and the second frame. The displacement vector is spatially diffused with a spatial filter over a frame from the plurality of frames to generate a spatially diffused displacement vector field. The spatial filter utilizes the plurality of content characteristics. A temporal filter temporally diffuses over a video volume the spatially diffused displacement vector field to generate a spatiotemporal vector field. The temporal filter utilizes the plurality of content characteristics. | 05-29-2014 |
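The retargeting abstract above (20130342758) describes projecting a 2D saliency map into horizontal and vertical profiles and scaling low-saliency areas non-uniformly while keeping high-saliency areas near-uniform. The sketch below shows one plausible way to turn those profiles into per-column and per-row scaling vectors; the blend factor `alpha` and the normalization are assumptions, not the patented weighting.

```python
import numpy as np

def scaling_vectors(saliency, target_w, target_h):
    """Project a 2D saliency map into 1D profiles, then distribute the
    resize so that low-saliency rows/columns absorb more of the size
    change than high-saliency ones (a sketch, not the exact method)."""
    h, w = saliency.shape
    col_profile = saliency.sum(axis=0)  # horizontal saliency profile
    row_profile = saliency.sum(axis=1)  # vertical saliency profile

    def widths(profile, src, dst):
        # Salient columns get close to their saliency-proportional share;
        # blending with the uniform width keeps flat regions from vanishing.
        p = profile / profile.sum()
        uniform = np.full(src, dst / src)
        weighted = p * dst
        alpha = 0.7  # hypothetical blend between uniform and saliency-driven
        out = alpha * weighted + (1 - alpha) * uniform
        return out * (dst / out.sum())  # renormalize to the target size

    return widths(col_profile, w, target_w), widths(row_profile, h, target_h)
```

The output vectors sum to the target dimensions, so resampling each source column/row to its assigned width reproduces the target image size; filtering these vectors over time, as the abstract notes, would give the temporal stability property.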
Patent application number | Description | Published |
20120182397 | COMPUTATIONAL STEREOSCOPIC CAMERA SYSTEM - A closed-loop control system for stereoscopic video capture is provided. At least two motorized lenses are positioned in accordance with specified parameters to capture spatially-disparate images of a scene. Each motorized lens focuses light on a corresponding one of at least two sensors, which generate image streams. One or more processors execute instructions to provide a stream analyzer and a control module. The stream analyzer receives the image streams from the sensors and analyzes the image streams and the specified parameters in real time; the stream analyzer then modifies the image streams and generates metadata. The control module then receives and analyzes the image streams and metadata and transmits updated parameters to a control mechanism that is coupled to the at least two motorized lenses. The control mechanism then modifies operation of the at least two motorized lenses in real time in accordance with the updated parameters. | 07-19-2012 |
20150062351 | Device and Method for Calibrating a Temporal Contrast Sensor with a Frame-Based Camera Sensor - A device and method incorporate features of a temporal contrast sensor with a camera sensor in an imager. The method includes registering the camera sensor with the temporal contrast sensor as a function of a calibration target. The method includes receiving camera sensor data from the camera sensor and temporal contrast sensor data from the temporal contrast sensor. The method includes generating a plurality of images as a function of incorporating the temporal contrast sensor data with the camera sensor data. | 03-05-2015 |
20150070346 | Method and System for Rendering Virtual Views - A method including receiving a first image of a scene captured from a first perspective, the first image including an object and a background; segmenting the first image to extract a first two-dimensional contour of the object; approximating a plurality of three-dimensional locations of a plurality of points on the first contour; generating a three-dimensional billboard of the object based on the three-dimensional locations; and projecting the first image onto the three-dimensional billboard. | 03-12-2015 |
20150135212 | Method and System for Providing and Displaying Optional Overlays - A method including receiving video of an event; generating an overlay for the video; generating an information message containing information enabling a receiver of the video and the overlay to selectively display or hide the overlay; and transmitting the video, the overlay, and the information message. The video is transmitted in a primary stream of a multi-stream transmission including a primary stream and one or more auxiliary streams. The overlay is transmitted in a first one of the auxiliary streams. | 05-14-2015 |
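The overlay abstract above (20150135212) pairs each optional overlay, carried in an auxiliary stream, with an information message that lets the receiver display or hide it. A minimal data-structure sketch follows; all field and function names (`OverlayInfoMessage`, `add_overlay`, `render`) are illustrative assumptions, not terms from the application.

```python
from dataclasses import dataclass, field

@dataclass
class OverlayInfoMessage:
    """Sidecar message telling a receiver that an auxiliary stream carries
    an optional overlay it may selectively display or hide."""
    overlay_stream_id: int  # which auxiliary stream holds the overlay
    default_visible: bool   # initial display state suggested by the sender
    label: str              # human-readable name, e.g. "score bug"

@dataclass
class MultiStreamTransmission:
    """Primary stream carries the event video; overlays ride in auxiliary
    streams alongside their information messages."""
    primary: bytes
    auxiliary: dict = field(default_factory=dict)  # stream id -> overlay data
    messages: list = field(default_factory=list)

    def add_overlay(self, stream_id, overlay, label, visible=True):
        self.auxiliary[stream_id] = overlay
        self.messages.append(OverlayInfoMessage(stream_id, visible, label))

def render(tx, hidden=()):
    """Receiver side: composite only the overlays the viewer has not hidden,
    honoring each message's suggested default visibility."""
    return [m.label for m in tx.messages
            if m.default_visible and m.overlay_stream_id not in hidden]
```

Keeping the overlay out of the primary stream is what makes it optional: a receiver that ignores the auxiliary streams still gets clean event video, while one that parses the information messages can toggle each overlay independently.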