Patent application number | Description | Published |
20090110327 | Semi-automatic plane extrusion for 3D modeling - In accordance with one or more aspects, a plane in a 3D coordinate system in which a 3D model is to be generated based on one or more 2D images is identified. A direction of extrusion for the plane is also identified. Additionally, a user identification of a region of interest on a 2D image is received and projected onto the plane. A location in the 3D model of the region of interest is then automatically identified by extruding the plane along the direction of extrusion until the region of interest in the plane matches a corresponding region of at least one of the one or more 2D images. | 04-30-2009 |
20090141966 | INTERACTIVE GEO-POSITIONING OF IMAGERY - An interactive, user-friendly incremental calibration technique that provides immediate feedback to the user when aligning a point on a 3D model to a point on a 2D image. A user can drag-and-drop points on a 3D model to points on a 2D image. As the user drags the correspondences, the application updates current estimates of where the camera would need to be to match the correspondences. The 2D and 3D images can be overlaid on each other and are sufficiently transparent for visual alignment. The user can fade between the 2D/3D views, providing immediate feedback as to the improvements in alignment. The user can begin with a rough estimate of camera orientation and then progress to more granular parameters, such as estimates for focal length, to arrive at the desired alignment. While one parameter is adjustable, the other parameters are fixed, allowing for user adjustment of one parameter at a time. | 06-04-2009 |
20090237510 | VISUALIZING CAMERA FEEDS ON A MAP - Feeds from cameras are better visualized by superimposing images based on the feeds onto a two- or three-dimensional virtual map. For example, a traffic camera feed can be aligned with a roadway included in the map. Multiple videos can be aligned with roadways in the map and can also be aligned in time. | 09-24-2009 |
20100020026 | Touch Interaction with a Curved Display - Touch interaction with a curved display (e.g., a sphere, a hemisphere, a cylinder, etc.) is enabled through various user interface (UI) features. In an example embodiment, a curved display is monitored to detect a touch input. If a touch input is detected based on the act of monitoring, then one or more locations of the touch input are determined. Responsive to the determined one or more locations of the touch input, at least one UI feature is implemented. Example UI features include an orb-like invocation gesture feature, a rotation-based dragging feature, a send-to-dark-side interaction feature, and an object representation and manipulation by proxy representation feature. | 01-28-2010 |
20100023895 | Touch Interaction with a Curved Display - Touch interaction with a curved display (e.g., a sphere, a hemisphere, a cylinder, etc.) is facilitated by preserving a predetermined orientation for objects. In an example embodiment, a curved display is monitored to detect a touch input on an object. If a touch input on an object is detected based on the monitoring, then one or more locations of the touch input are determined. The object may be manipulated responsive to the determined one or more locations of the touch input. While manipulation of the object is permitted, a predetermined orientation is preserved. | 01-28-2010 |
20100080466 | Smart Navigation for 3D Maps - An interest center-point and a start point are created in an image. A potential function is created, where the potential function creates a potential field and guides traversal from the start point to the interest center-point. The potential field is adjusted to include a sum of potential fields directed toward the center-point, where each potential field corresponds to an image. Images are displayed in the potential field at intervals in the traversal from the start point toward the interest center-point. | 04-01-2010 |
20100080489 | Hybrid Interface for Interactively Registering Images to Digital Models - The first image may be displayed adjacent to the second image, where the second image is a three-dimensional image. An element may be selected in the first image and a matching element may be selected in the second image. A selection may be permitted to view a merged view, where the merged view is the first image displayed over the second image by varying the opaqueness of the images. If the merged view is not acceptable, the method may repeat; if the merged view is acceptable, the first view may be registered onto the second view and the merged view may be stored as a merged image. | 04-01-2010 |
20100080551 | Geotagging Photographs Using Annotations - Labels of elements in images may be compared to known elements to determine a region from which an image was created. Using this information, the approximate image position can be found, additional elements may be recognized, labels may be checked for accuracy and additional labels may be added. | 04-01-2010 |
20100085371 | OPTIMAL 2D TEXTURING FROM MULTIPLE IMAGES - One or more images of an object are obtained. These are then warped onto the object. The object may be divided into sites where sites are overlapping circular regions of the object. For each site, a neighborhood graph may be created where each site is a node in the graph and each pair of sites with overlapping regions is connected by an edge. A list of covers of each site may be created where the list contains all the possible labels for that node. Each image that covers part of the site may be reviewed including all possible shifts up to some number of pixels. A cost may be assigned to each cover and costs for each of the covers may be calculated. The cover with the lowest cost may be selected. If the costs are too high, the resolution may be lowered, one or more possible covers may be selected and then the analysis may be performed using the selected covers at a higher resolution. | 04-08-2010 |
20100134484 | THREE DIMENSIONAL JOURNALING ENVIRONMENT - A three-dimensional journaling system is described herein. The three-dimensional journaling system comprises a data repository that includes journal data of a user, wherein the journal data corresponds to at least one location in a geographic region. The system additionally includes a display component that causes at least a portion of the journal data to be displayed on a display screen as a journal entry in a computer-implemented three-dimensional representation of the geographic region at the location that corresponds to the journal data. | 06-03-2010 |
20100235078 | DRIVING DIRECTIONS WITH MAPS AND VIDEOS - The map may have a separate display window that displays illustrations, which may be moving illustrations related to the current spot on the map or to future spots on the map. The illustration may be viewed while traveling or may be viewed in advance. The moving illustration may display segments of the travel path with points of interest and substantial changes at a slow speed and/or low altitude, and may display segments without points of interest and/or few substantial changes at a high speed and/or high altitude. | 09-16-2010 |
20100241946 | ANNOTATING IMAGES WITH INSTRUCTIONS - A method described herein includes the acts of receiving an image captured by a mobile computing device and automatically annotating the image to create an annotated image, wherein annotations on the annotated image provide instructions to a user of the mobile computing device. The method further includes transmitting the annotated image to the mobile computing device. | 09-23-2010 |
20100245344 | ANNOTATING OR EDITING THREE DIMENSIONAL SPACE - In one example, images may be used to create a model of a three-dimensional space, and the three-dimensional space may be annotated and/or edited. When a three-dimensional model of a space has been created, a user may associate various items with points in the three-dimensional space. For example, the user may create a note or a hyperlink, and may associate the note or hyperlink with a specific point in the space. Additionally, a user may experiment with the space by adding images to, or deleting images from, the space. Annotating and editing the space, rather than the underlying images, allows annotations and edits to be associated with the underlying objects depicted in the images, rather than with the images themselves. | 09-30-2010 |
20100250120 | MANAGING STORAGE AND DELIVERY OF NAVIGATION IMAGES - The storage and/or transmission of image bubbles may be managed for effective use of space and/or time. In one example, a street-view application allows a user to navigate through an image at ground level. The application makes use of panoramic images called “bubbles,” which are captured at spatial intervals. The user can navigate through the images by changing position, or by changing the direction of view. Various aspects of how the bubbles are stored or transmitted may be controlled, in order to make effective use of the bandwidth that is available to transmit the bubbles. Examples of these aspects may include: how much of a given bubble is transmitted; the resolution at which the bubble is transmitted; and/or the spatial frequency at which the user moves through the bubbles. | 09-30-2010 |
20100265178 | CAMERA-BASED MULTI-TOUCH MOUSE - Technologies for a camera-based multi-touch input device operable to provide conventional mouse movement data as well as three-dimensional multi-touch data. Such a device is based on an internal camera focused on a mirror or set of mirrors enabling the camera to image the inside of a working surface of the device. The working surface allows light to pass through. An internal light source illuminates the inside of the working surface and reflects off of any objects proximate to the outside of the device. This reflected light is received by the mirror and then directed to the camera. Imaging from the camera can be processed to extract touch points corresponding to the position of one or more objects outside the working surface as well as to detect gestures performed by the objects. Thus the device can provide conventional mouse functionality as well as three-dimensional multi-touch functionality. | 10-21-2010 |
20100313113 | Calibration and Annotation of Video Content - Various embodiments provide techniques for calibrating and annotating video content. In one or more embodiments, an instance of video content can be calibrated with one or more geographical models and/or existing calibrated video content to correlate the instance of video content with one or more geographical locations. According to some embodiments, geographical information can be used to annotate the video content. Geographical information can include identification information for one or more structures, natural features, and/or locations included in the video content. Some embodiments enable a particular instance of video content to be correlated with other instances of video content based on common geographical information and/or common annotation information. Thus, a user can access video content from other users with similar travel experiences and/or interests. A user may also access annotations provided by other users that may be relevant to a particular instance of video content. | 12-09-2010 |
20110187704 | GENERATING AND DISPLAYING TOP-DOWN MAPS OF RECONSTRUCTED 3-D SCENES - Technologies are described herein for generating and displaying top-down maps of reconstructed structures to improve navigation of photographs within a 3-D scene. A 3-D point cloud is computed from a collection of photographs of the scene. A top-down map is generated from the 3-D point cloud by projecting the points in the point cloud into a two-dimensional plane. The points in the projection may be filtered and/or enhanced to enhance the display of the top-down map. Finally, the top-down map is displayed to the user in conjunction with or as an alternative to the photographs from the reconstructed structure or scene. | 08-04-2011 |
20110187716 | USER INTERFACES FOR INTERACTING WITH TOP-DOWN MAPS OF RECONSTRUCTED 3-D SCENES - Technologies are described herein for providing user interfaces through which a user may interact with a top-down map of a reconstructed structure within a 3-D scene. An application provides one or more user interfaces allowing a user to select a camera pose, a reconstruction element, a point, or a group of points on the top-down map. The application then determines at least one representative photograph from the visual reconstruction based on the selection of the user, and then displays a preview of the representative photograph on the top-down map as a thumbnail image. The provided user interfaces may further allow the user to navigate to the representative photograph in the local-navigation display of the visual reconstruction. | 08-04-2011 |
20110187723 | TRANSITIONING BETWEEN TOP-DOWN MAPS AND LOCAL NAVIGATION OF RECONSTRUCTED 3-D SCENES - Technologies are described herein for transitioning between a top-down map of a reconstructed structure within a 3-D scene and an associated local-navigation display. An application transitions between the top-down map and the local-navigation display by animating a view in a display window over a period of time while interpolating camera parameters from values representing a starting camera view to values representing an ending camera view. In one embodiment, the starting camera view is the top-down map view and the ending camera view is the camera view associated with a target photograph. In another embodiment, the starting camera view is the camera view associated with a currently-viewed photograph in the local-navigation display and the ending camera view is the top-down map. | 08-04-2011 |
20110195781 | MULTI-TOUCH MOUSE IN GAMING APPLICATIONS - Keyboards, mice, joysticks, customized gamepads, and other peripherals are continually being developed to enhance a user's experience when playing computer video games. Unfortunately, many of these devices provide users with limited input control because of the complexity of today's gaming applications. For example, many computer video games require a combination of mouse and keyboard to control even the simplest of in-game tasks (e.g., walking into a room and looking around may require several keyboard keystrokes and mouse movements). Accordingly, one or more systems and/or techniques for performing in-game tasks based upon user input within a multi-touch mouse are disclosed herein. User input comprising one or more user interactions detected by spatial sensors within the multi-touch mouse may be received. A wide variety of in-game tasks (e.g., character movements, character actions, character view, etc.) may be performed based upon the user interactions (e.g., a swipe gesture, a mouse position change, etc.). | 08-11-2011 |
20110221664 | VIEW NAVIGATION ON MOBILE DEVICE - Users may view web pages, play games, send emails, take photos, and perform other tasks using mobile devices. Unfortunately, the limited screen size and resolution of mobile devices may restrict users from adequately viewing virtual objects, such as maps, images, email, user interfaces, etc. Accordingly, one or more systems and/or techniques for displaying portions of virtual objects on a mobile device are disclosed herein. A mobile device may be configured with one or more sensors (e.g., a digital camera, an accelerometer, or a magnetometer) configured to detect motion of the mobile device (e.g., a pan, tilt, or forward/backward motion). A portion of a virtual object may be determined based upon the detected motion and displayed on the mobile device. For example, a view of a top portion of an email may be displayed on a cell phone based upon the user panning the cell phone in an upward direction. | 09-15-2011 |
20110294515 | HYBRID MOBILE PHONE GEOPOSITIONING - A hybrid positioning system for continuously and accurately determining a location of a mobile device is provided. Samples of GPS locations from a pool of mobile devices and accompanying cell tower data, WLAN data, or other comparable network signals are used to construct a dynamic map of particular regions. The dynamic map(s) may be sent to and stored on individual mobile devices such that the mobile device can compare its less accurate, but more readily available, data like cell tower signals to recorded ones and estimate its position more accurately and continuously. The position data may be sent to a server for use in location-based services. | 12-01-2011 |
20110298928 | SIMULATED VIDEO WITH EXTRA VIEWPOINTS AND ENHANCED RESOLUTION FOR TRAFFIC CAMERAS - Simulated high resolution, multi-view video based on video input from low resolution, single-direction cameras is provided. Video received from traffic cameras, security cameras, monitoring cameras, and comparable ones is fused with patches from a database of pre-captured images and/or temporally shifted video to create higher quality video, as well as multiple viewpoints for the same camera. | 12-08-2011 |
20110302527 | ADJUSTABLE AND PROGRESSIVE MOBILE DEVICE STREET VIEW - Intuitive and user-friendly user interface (UI) techniques are provided for navigating street view applications on a mobile device enabling users to view different angles and segments of available street level images. Additionally, retrieval and presentation of street view images are managed to mitigate delays in retrieval of desired images from a server over wireless connections through techniques such as textual representations, replacement views, scheduling image requests, and comparable ones. | 12-08-2011 |
20110312374 | MOBILE AND SERVER-SIDE COMPUTATIONAL PHOTOGRAPHY - Automated photographic capture assistance and analysis is effectuated to assist users in capturing sufficient and optimal images of a desired image scene for use in a photographic end product. Photographic capture assistance is implemented on the device that includes a user's camera. Photographic capture assistance can include audio and/or graphic information generated in real time locally on the device that includes the user's camera and informs the user where additional images of the image scene ought to be captured and/or whether or not sufficient captured images currently exist for the image scene. | 12-22-2011 |
20120062748 | VISUALIZING VIDEO WITHIN EXISTING STILL IMAGES - Video from a video camera can be integrated into a still image, with which it shares common elements, to provide greater context and understandability. Pre-processing can derive transformation parameters for transforming and aligning the video to be integrated into the still image in a visually fluid manner. The transformation parameters can then be utilized to transform and align the video in real-time and display it within the still image. Pre-processing can comprise stabilization of video, if the video camera is moveable, and can comprise identification of areas of motion and of static elements. Transformation parameters can be derived by fitting the static elements of the video to portions of one or more existing images. Display of the video in real-time in the still image can include display of the entire transformed and aligned video image, or of only selected sections, to provide for a smoother visual integration. | 03-15-2012 |
20120072302 | Data-Driven Item Value Estimation - Data-driven item value determinations for a user-interested topic are automatically generated and made available to a user for rendering effective, efficient decisional choices on one or more aspects of the user-interested topic. Information on components of the user-interested topic relevant to a user's decisional choices are mined from the internet and collated to generate values that identify optimum user choices. User input is utilized to tailor generated value determinations for specific user preferences, issues and/or concerns. Data-driven item value determinations can be generated for a host of user-interested topics including, but not limited to, eating establishment nutritional choices and shopping mall criteria. | 03-22-2012 |
20140098107 | Transitioning Between Top-Down Maps and Local Navigation of Reconstructed 3-D Scenes - Technologies are described herein for transitioning between a top-down map display of a reconstructed structure within a 3-D scene and an associated local-navigation display. An application transitions between the top-down map display and the local-navigation display by animating a view in a display window over a period of time while interpolating camera parameters from values representing a starting camera view to values representing an ending camera view. In one embodiment, the starting camera view is the top-down map display view and the ending camera view is the camera view associated with a target photograph. In another embodiment, the starting camera view is the camera view associated with a currently-viewed photograph in the local-navigation display and the ending camera view is the top-down map display. | 04-10-2014 |
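Several of the listed abstracts describe concrete geometric procedures. The guided traversal in application 20100080466 (Smart Navigation for 3D Maps), for instance, can be sketched as gradient descent on an attractive potential field centered on the interest center-point. The quadratic potential, step size, and stopping tolerance below are illustrative assumptions, not the patented formulation:

```python
def traverse(start, center, step=0.25, tol=1e-3, max_iters=200):
    """Follow the gradient of an attractive potential U(p) = |p - center|^2
    from the start point toward the interest center-point.
    Returns the sequence of 2-D positions visited (the traversal path).
    NOTE: the quadratic potential and parameters are assumptions for
    illustration only."""
    x, y = start
    cx, cy = center
    path = [(x, y)]
    for _ in range(max_iters):
        # Gradient of U = (x-cx)^2 + (y-cy)^2 is (2*(x-cx), 2*(y-cy)).
        gx, gy = 2.0 * (x - cx), 2.0 * (y - cy)
        x, y = x - step * gx, y - step * gy
        path.append((x, y))
        if (x - cx) ** 2 + (y - cy) ** 2 < tol:
            break  # close enough to the interest center-point
    return path
```

In the full scheme the field would be a sum of such potentials, one per image, and images would be displayed at intervals along the returned path.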
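Applications 20110187723 and 20140098107 both animate between camera views by interpolating camera parameters from a starting view to an ending view over a period of time. A sketch of that interpolation, assuming plain linear interpolation over a flat dictionary of numeric parameters (the parameter names are hypothetical, and a real implementation would interpolate rotations more carefully):

```python
def interpolate_camera(start, end, t):
    """Linearly interpolate every camera parameter between a starting view
    and an ending view; t runs from 0 (start) to 1 (end).
    NOTE: parameter set and linear blending are illustrative assumptions."""
    return {k: start[k] + (end[k] - start[k]) * t for k in start}

def animate(start, end, steps):
    """Yield the sequence of interpolated camera views for an animation
    of the given number of steps, from the starting to the ending view."""
    for i in range(steps + 1):
        yield interpolate_camera(start, end, i / steps)
```

For the top-down-to-local transition, `start` would hold the top-down map view's parameters and `end` those of the target photograph's camera; the reverse transition swaps the two.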