Patent application number | Description | Published |
--- | --- | --- |
20090244062 | USING PHOTO COLLECTIONS FOR THREE DIMENSIONAL MODELING - A collection of photos and a three-dimensional reconstruction of the photos are used to construct and texture a mesh model. In one embodiment, a first digital image of a first view of a real world scene is analyzed to identify lines in the first view. Among the lines, parallel lines are identified. A three-dimensional vanishing direction in a three-dimensional space is determined based on the parallel lines and an orientation of the digital image in the three-dimensional space. A plane is automatically generated by fitting the plane to the vanishing direction. A rendering of a three-dimensional model with the plane is displayed. Three-dimensional points corresponding to features common to the photos may be used to constrain the plane. The photos may be projected onto the model to provide visual feedback when editing the plane. Furthermore, the photos may be used to texture the model. | 10-01-2009 |
20100238164 | IMAGE STITCHING USING PARTIALLY OVERLAPPING VIEWS OF A SCENE - An “Oblique Image Stitcher” provides a technique for constructing a photorealistic oblique view from a set of input images representing a series of partially overlapping views of a scene. The Oblique Image Stitcher first projects each input image onto a geometric proxy of the scene and renders the images from a desired viewpoint. Once the images have been projected onto the geometric proxy, the rendered images are evaluated to identify optimum seams along which the various images are to be blended. Once the optimum seams are selected, the images are remapped relative to those seams by leaving the mapping unchanged at the seams and interpolating a smooth mapping between the seams. The remapped images are then composited to construct the final mosaiced oblique view of the scene. The result is a mosaic image constructed by warping the input images in a photorealistic manner which agrees at seams between images. | 09-23-2010 |
20120237111 | Performing Structure From Motion For Unordered Images Of A Scene With Multiple Object Instances - A technology is described for performing structure from motion for unordered images of a scene with multiple object instances. An example method can include obtaining a pairwise match graph using interest point detection for obtaining interest points in images of the scene to identify pairwise image matches using the interest points. Multiple metric two-view and three-view partial reconstructions can be estimated by performing independent structure from motion computation on a plurality of match-pairs and match-triplets selected from the pairwise match graph. Pairwise image matches can be classified into correct matches and erroneous matches using expectation maximization to generate geometrically consistent match labeling hypotheses and a scoring function to evaluate the match labeling hypotheses. A structure from motion computation can then be performed on the subset of match pairs which have been inferred as correct. | 09-20-2012 |
20130100128 | USING PHOTO COLLECTIONS FOR THREE DIMENSIONAL MODELING - A collection of photos and a three-dimensional reconstruction of the photos are used to construct and texture a mesh model. In one embodiment, a first digital image of a first view of a real world scene is analyzed to identify lines in the first view. Among the lines, parallel lines are identified. A three-dimensional vanishing direction in a three-dimensional space is determined based on the parallel lines and an orientation of the digital image in the three-dimensional space. A plane is automatically generated by fitting the plane to the vanishing direction. A rendering of a three-dimensional model with the plane is displayed. Three-dimensional points corresponding to features common to the photos may be used to constrain the plane. The photos may be projected onto the model to provide visual feedback when editing the plane. Furthermore, the photos may be used to texture the model. | 04-25-2013 |
20130194304 | COORDINATE-SYSTEM SHARING FOR AUGMENTED REALITY - A method for presenting real and virtual images correctly positioned with respect to each other. The method includes, in a first field of view, receiving a first real image of an object and displaying a first virtual image. The method also includes, in a second field of view oriented independently relative to the first field of view, receiving a second real image of the object and displaying a second virtual image, the first and second virtual images positioned coincidently within a coordinate system. | 08-01-2013 |
20140145914 | HEAD-MOUNTED DISPLAY RESOURCE MANAGEMENT - A system and related methods for resource management in a head-mounted display device are provided. In one example, the head-mounted display device includes a plurality of sensors and a display system for presenting holographic objects. A resource management program is configured to operate a selected sensor in a default power mode to achieve a selected fidelity. The program receives user-related information from one or more of the sensors, and determines whether target information is detected. Where target information is detected, the program adjusts the selected sensor to operate in a reduced power mode that uses less power than the default power mode. | 05-29-2014 |
20140375680 | TRACKING HEAD MOVEMENT WHEN WEARING MOBILE DEVICE - Methods for tracking the head position of an end user of a head-mounted display device (HMD) relative to the HMD are described. In some embodiments, the HMD may determine an initial head tracking vector associated with an initial head position of the end user relative to the HMD, determine one or more head tracking vectors corresponding with one or more subsequent head positions of the end user relative to the HMD, track head movements of the end user over time based on the initial head tracking vector and the one or more head tracking vectors, and adjust positions of virtual objects displayed to the end user based on the head movements. In some embodiments, the resolution and/or number of virtual objects generated and displayed to the end user may be modified based on a degree of head movement of the end user relative to the HMD. | 12-25-2014 |
20140375681 | ACTIVE BINOCULAR ALIGNMENT FOR NEAR EYE DISPLAYS - A system and method are disclosed for detecting angular displacement of a display element relative to a reference position on a head mounted display device for presenting a mixed reality or virtual reality experience. Once the displacement is detected, it may be corrected to maintain the proper binocular disparity of virtual images displayed on the left and right display elements of the head mounted display device. In one example, the detection system uses an optical assembly including collimated LEDs and a camera which together are insensitive to linear displacement. Such a system provides a true measure of angular displacement of one or both display elements on the head mounted display device. | 12-25-2014 |
20140375790 | EYE-TRACKING SYSTEM FOR HEAD-MOUNTED DISPLAY - Embodiments are disclosed for a see-through head-mounted display system. In one embodiment, the see-through head-mounted display system comprises a freeform prism, and a display device configured to emit display light through the freeform prism to an eye of a user. The see-through head-mounted display system may also comprise an imaging device having an entrance pupil positioned at a back focal plane of the freeform prism, the imaging device configured to receive gaze-detection light reflected from the eye and directed through the freeform prism. | 12-25-2014 |
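Two of the abstracts above (20090244062 and 20130100128) hinge on recovering a vanishing direction from parallel lines detected in an image. As a rough illustration of that step only — a minimal textbook sketch, not the patented method, with the function names `line_through` and `vanishing_point` invented here — the common approach represents each detected line in homogeneous coordinates and finds their least-squares intersection (the vanishing point) via SVD:

```python
import numpy as np

def line_through(p, q):
    """Homogeneous line through two image points (cross product)."""
    return np.cross([p[0], p[1], 1.0], [q[0], q[1], 1.0])

def vanishing_point(segments):
    """Least-squares intersection of a family of (nearly) parallel
    image lines, given as (endpoint, endpoint) pairs. The vanishing
    point is the unit null vector of the stacked line equations."""
    L = np.array([line_through(p, q) for p, q in segments])
    # The right singular vector for the smallest singular value
    # minimizes ||L v|| subject to ||v|| = 1.
    _, _, vt = np.linalg.svd(L)
    v = vt[-1]
    return v[:2] / v[2]  # back to inhomogeneous pixel coordinates

# Two line segments whose supporting lines meet at (100, 50):
segs = [((0.0, 0.0), (100.0, 50.0)),
        ((0.0, 100.0), (100.0, 50.0))]
print(vanishing_point(segs))  # ≈ [100., 50.]
```

Back-projecting this image point through the camera's calibration and orientation yields the three-dimensional vanishing direction the abstracts refer to; the plane is then fit so that the direction lies within it.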