Patent application number | Description | Published |
20080267494 | JOINT BILATERAL UPSAMPLING - A “Joint Bilateral Upsampler” uses a high-resolution input signal to guide the interpolation of a low-resolution solution set (derived from a downsampled version of the input signal) from low- to high-resolution. The resulting high-resolution solution set is then saved or applied to the original input signal to produce a high-resolution output signal. The high-resolution solution set is close to what would be produced directly from the input signal without downsampling. However, since the high-resolution solution set is constructed in part from a downsampled version of the input signal, it is computed using significantly less computational overhead and memory than a solution set computed directly from a high-resolution signal. Consequently, the Joint Bilateral Upsampler is advantageous for use in near real-time operations, in applications where user wait times are important, and in systems where computational costs and available memory are limited. | 10-30-2008 |
20090022421 | GENERATING GIGAPIXEL IMAGES - A gigapixel image is generated from a set of images in raw format depicting different portions of a panoramic scene that has up to a full spherical field of view. Radiometric alignment of the images creates a set of images in radiance format. Geometric alignment of the radiance format images creates a set of true poses for the images in radiance format. A gigapixel image depicting the entire scene is assembled from the set of radiance format images and radiance format true poses for the images. The set of images in raw format is captured using a conventional digital camera, equipped with a telephoto lens, attached to a motorized head. The head is programmed to pan and tilt the camera in prescribed increments to individually capture the images at a plurality of exposures and with a prescribed overlap between images depicting adjacent portions of the scene. | 01-22-2009 |
20090041375 | VIEWING WIDE ANGLE IMAGES USING DYNAMIC TONE MAPPING - A dynamic tone mapping technique is presented that produces a local tone map for a sub-image of a wide-angle, high dynamic range (HDR) image, which is used in rendering the sub-image for display. The technique generally involves first computing a global tone map of the wide-angle, HDR image in advance of rendering the sub-image. The global tone map is then used during rendering to compute a local tone map based on the average luminance and contrast of the pixels of the sub-image. In addition, the sub-image can be tone mapped as part of the rendering of a sequence of sub-images during a viewer-executed panning and/or zooming session. In this case, the local tone maps can be kept from changing too rapidly by adding a hysteresis feature to smooth out the intensity changes between successive sub-images. | 02-12-2009 |
20090232415 | PLATFORM FOR THE PRODUCTION OF SEAMLESS ORTHOGRAPHIC IMAGERY - Systems and methods are provided for the production of seamless, geo-referenced orthographic images that can comprise a composite of two or more underlying images. Illustratively, an exemplary image processing environment comprises an image processing engine and an instruction set comprising at least one instruction to instruct the image processing engine to process data representative of two or more images. The two or more images can comprise data representative of correspondence points between the images and the underlying area (e.g., ground control points). The image processing engine can identify features that the overlapping images have in common (e.g., feature match points) and place and re-project (e.g., distort) each of the images to achieve a selected balance of correct position (e.g., based on ground control points) and seamless overlap (e.g., based on feature match points), which can then be composited into a single image. | 09-17-2009 |
20090263045 | IMAGE BLENDING USING MULTI-SPLINES - Multi-spline image blending technique embodiments are presented which generally employ a separate low-resolution offset field for every image region being blended, rather than a single (piecewise smooth) offset field for all the regions to produce a visually consistent blended image. Each of the individual offset fields is smoothly varying, and so is represented using a low-dimensional spline. A resulting linear system can be rapidly solved because it involves many fewer variables than the number of pixels being blended. | 10-22-2009 |
20090310888 | MULTI-PASS IMAGE RESAMPLING - Multi-pass image resampling technique embodiments are presented that employ a series of one-dimensional filtering, resampling, and shearing stages to achieve good efficiency while maintaining high visual fidelity. In one embodiment, high-quality (multi-tap) image filtering is used inside each one-dimensional resampling stage. Because each stage only uses one-dimensional filtering, the overall computation efficiency is very good and amenable to graphics processing unit (GPU) implementation using pixel shaders. This embodiment also upsamples the image before shearing steps in a direction orthogonal to the shearing to prevent aliasing, and then downsamples the image to its final size with high-quality low-pass filtering. This ensures that none of the stages causes excessive blurring or aliasing. | 12-17-2009 |
20100085383 | RENDERING ANNOTATIONS FOR IMAGES - Techniques are described for rendering annotations associated with an image. A view of an image may be shown on a display, and different portions of the image are displayed and undisplayed in the view according to panning and/or zooming of the image within the view. The image may have annotations. An annotation may have a location in the image and may have associated renderable media. The location of the annotation relative to the view may change according to the panning and/or zooming. A strength of the annotation may be computed, the strength changing based on the panning and/or zooming of the image. The media may be rendered according to the strength. Whether to render the media may be determined by comparing the strength to a threshold. | 04-08-2010 |
20120114037 | COMPRESSING AND DECOMPRESSING MULTIPLE, LAYERED, VIDEO STREAMS EMPLOYING MULTI-DIRECTIONAL SPATIAL ENCODING - A process for compressing and decompressing non-keyframes in sequential sets of contemporaneous video frames making up multiple video streams where the video frames in a set depict substantially the same scene from different viewpoints. Each set of contemporaneous video frames has a plurality of frames designated as keyframes, with the remaining being non-keyframes. In one embodiment, the non-keyframes are compressed using a multi-directional spatial prediction technique. In another embodiment, the non-keyframes of each set of contemporaneous video frames are compressed using a combined chaining and spatial prediction compression technique. The spatial prediction compression technique employed can be a single-direction technique where just one reference frame, and so one chain, is used to predict each non-keyframe, or it can be a multi-directional technique where two or more reference frames, and so chains, are used to predict each non-keyframe. | 05-10-2012 |
20140267587 | PANORAMA PACKET - One or more techniques and/or systems are provided for generating a panorama packet and/or for utilizing a panorama packet. That is, a panorama packet may be generated and/or consumed to provide an interactive panorama view experience of a scene depicted by one or more input images within the panorama packet (e.g., a user may explore the scene through multi-dimensional navigation of a panorama generated from the panorama packet). The panorama packet may comprise a set of input images that depict the scene from various viewpoints. The panorama packet may comprise a camera pose manifold that may define one or more perspectives of the scene that may be used to generate a current view of the scene. The panorama packet may comprise a coarse geometry corresponding to a multi-dimensional representation of a surface of the scene. An interactive panorama of the scene may be generated based upon the panorama packet. | 09-18-2014 |
20140267600 | SYNTH PACKET FOR INTERACTIVE VIEW NAVIGATION OF A SCENE - One or more techniques and/or systems are provided for generating a synth packet and/or for providing an interactive view experience of a scene utilizing the synth packet. In particular, the synth packet comprises a set of input images depicting a scene from various viewpoints, a local graph comprising navigational relationships between input images, a coarse geometry comprising a multi-dimensional representation of a surface of the scene, and/or a camera pose manifold specifying view perspectives of the scene. An interactive view experience of the scene may be provided using the synth packet, such that a user may seamlessly navigate the scene in multi-dimensional space based upon navigational relationship information specified within the local graph. | 09-18-2014 |
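The interpolation guided by a high-resolution signal described in application 20080267494 can be sketched in a few lines of NumPy. This is a minimal illustration of joint bilateral upsampling, not the patented implementation: the function name `joint_bilateral_upsample`, the Gaussian spatial and range kernels, and the parameter defaults are all assumptions chosen for clarity. For each high-resolution pixel, low-resolution solution samples are averaged with weights that combine spatial proximity and similarity of the high-resolution guide values, so the upsampled solution follows edges in the guide.

```python
import numpy as np

def joint_bilateral_upsample(solution_lo, guide_hi,
                             sigma_spatial=1.0, sigma_range=0.1, radius=2):
    """Upsample a low-resolution solution set to the resolution of a
    high-resolution guide image (illustrative sketch).

    For each high-res pixel p, low-res solution samples q in a small window
    are averaged with two weights: a spatial Gaussian on the distance between
    p (mapped into low-res coordinates) and q, and a range Gaussian on the
    difference between the guide value at p and the guide value at q's
    corresponding high-res position.
    """
    H, W = guide_hi.shape
    h, w = solution_lo.shape
    scale_y, scale_x = h / H, w / W
    out = np.zeros((H, W), dtype=np.float64)
    for y in range(H):
        for x in range(W):
            # Position of high-res pixel p in low-res coordinates.
            ly, lx = y * scale_y, x * scale_x
            y0, x0 = int(round(ly)), int(round(lx))
            num = den = 0.0
            for qy in range(max(0, y0 - radius), min(h, y0 + radius + 1)):
                for qx in range(max(0, x0 - radius), min(w, x0 + radius + 1)):
                    # Spatial weight, measured in low-res coordinates.
                    f = np.exp(-((qy - ly) ** 2 + (qx - lx) ** 2)
                               / (2.0 * sigma_spatial ** 2))
                    # Range weight: compare the guide at p against the guide
                    # at the high-res position corresponding to q.
                    gy = min(H - 1, int(round(qy / scale_y)))
                    gx = min(W - 1, int(round(qx / scale_x)))
                    g = np.exp(-((guide_hi[y, x] - guide_hi[gy, gx]) ** 2)
                               / (2.0 * sigma_range ** 2))
                    num += solution_lo[qy, qx] * f * g
                    den += f * g
            out[y, x] = num / den
    return out
```

As the abstract notes, the payoff is that the expensive solution (e.g., a tone map or depth map) is computed only at low resolution; this upsampling step is cheap by comparison and produces a result close to what full-resolution computation would give.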