Patent application number | Description | Published |
20080266142 | System and method for stitching of video for routes - A system and method are disclosed for displaying video on a computing device for navigation and other purposes. A map database developer collects video data by traveling along roads in a geographic area and storing the video data along with data indicating the positions at which it was captured. This captured video data is then used in navigation systems and other devices that provide navigation, routing, video games, or other features. An application forms a composite video that shows a turn at an intersection from a first road onto a second road by appending video depicting travel away from the intersection along the second road to video depicting travel into the intersection along the first road. The composite video is then presented to a user on a display. | 10-30-2008 |
20080266324 | Street level video simulation display system and method - A system and method are disclosed for displaying video on a computing device for navigation and other purposes. Video data is collected by traveling along roads in a geographic area and storing the video data along with data indicating the positions at which it was captured. This captured video data is then used in navigation systems and other devices that provide navigation, routing, or other features. A video is presented to a user on the display of a navigation system (or other device). An application associated with the navigation system uses the previously captured video data to create the video shown to the user. The application selects the video data that shows the end user's position from a vantage point and superimposes an indication on the video at the location that corresponds to the end user's position. | 10-30-2008 |
20080288545 | Method and System for Forming a Keyword Database for Referencing Physical Locations - An improved method and system are disclosed for specifying physical locations when using applications running on navigation systems or other computer platforms that provide navigation- or map-related functions. When requesting a navigation- or map-related function from such an application, a user specifies a physical location using a keyword instead of specifying the physical location conventionally, such as by street address. A keyword database relates keywords to physical locations. The application uses the keyword database, or a copy thereof, to find data indicating the physical location associated with the keyword specified by the user. Preferably, physical locations are defined in the keyword database in terms of data in a corresponding geographic database. The application then performs the requested navigation- or map-related function using the data indicating the physical location associated with the keyword. The keyword database is built using input from users. An on-line system is provided that users can access to associate keywords with physical locations. A user accessing the on-line system is presented with a map from which a physical location can be selected. A keyword, which may be selected by the user, is associated with the selected physical location. The keyword is stored in the keyword database along with data indicating the associated physical location. | 11-20-2008 |
20090110239 | System and method for revealing occluded objects in an image dataset - Disclosed are a system and method for identifying objects in an image dataset that occlude other objects and for transforming the image dataset to reveal the occluded objects. In some cases, occluding objects are identified by processing the image dataset to determine the relative positions of visual objects. Occluded objects are then revealed by removing the occluding objects from the image dataset or by otherwise de-emphasizing the occluding objects so that the occluded objects are seen behind them. A visual object may be removed simply because it occludes another object, because of privacy concerns, or because it is transient. When an object is removed or de-emphasized, the objects that were behind it may need to be "cleaned up" so that they render cleanly. To do this, information from multiple images can be processed using interpolation techniques. The image dataset can be further transformed by adding objects to the images. | 04-30-2009 |
20090153549 | System and method for producing multi-angle views of an object-of-interest from images in an image dataset - Disclosed are a system and method for creating multi-angle views of an object-of-interest from images stored in a dataset. A user specifies the location of an object-of-interest. As the user virtually navigates through the locality represented by the image dataset, his current virtual position is determined. Using the user's virtual position and the location of the object-of-interest, images in the image dataset are selected and interpolated or stitched together, if necessary, to present to the user a view from his current virtual position looking toward the object-of-interest. The object-of-interest remains in the view no matter where the user virtually travels. From the same image dataset, another user can select a different object-of-interest and virtually navigate in a similar manner, with his own object-of-interest always in view. The object-of-interest also can be “virtual,” added by computer-animation techniques to the image dataset. For some image datasets, the user can virtually navigate through time as well as through space. | 06-18-2009 |
20130318078 | System and Method for Producing Multi-Angle Views of an Object-of-Interest from Images in an Image Dataset - Disclosed are a system and method for creating multi-angle views of an object-of-interest from images stored in a dataset. A user specifies the location of an object-of-interest. As the user virtually navigates through the locality represented by the image dataset, his current virtual position is determined. Using the user's virtual position and the location of the object-of-interest, images in the image dataset are selected and interpolated or stitched together, if necessary, to present to the user a view from his current virtual position looking toward the object-of-interest. The object-of-interest remains in the view no matter where the user virtually travels. From the same image dataset, another user can select a different object-of-interest and virtually navigate in a similar manner, with his own object-of-interest always in view. The object-of-interest also can be “virtual,” added by computer-animation techniques to the image dataset. For some image datasets, the user can virtually navigate through time as well as through space. | 11-28-2013 |
20140005929 | Dynamic Natural Guidance | 01-02-2014 |
20140244164 | Method and Apparatus for Formulating a Positioning Extent for Map Matching - An approach is provided for formulating a positioning extent for map matching. The positioning extent platform processes a plurality of position data points acquired by at least one positioning system to determine one or more variations in the position data points with respect to one or more thoroughfare segments. The platform then determines one or more positioning extents associated with the thoroughfare segments for the positioning system based, at least in part, on the variations. | 08-28-2014 |
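The composite-turn idea in application 20080266142 can be sketched as follows. The representation of frames as a dictionary keyed by distance along the road, and the hard cut at the intersection, are illustrative assumptions rather than details from the application; a real system would blend or interpolate at the seam.

```python
def composite_turn(road_a_frames, road_b_frames, node_a_m, node_b_m):
    """Build a turn video from two per-road frame stores.

    road_*_frames: {chainage_m: frame} for frames captured along each
    road, keyed by distance from the road's start. Travel into the
    intersection on road A (up to node_a_m) is followed by travel away
    from it on road B (from node_b_m onward).
    """
    approach = [f for c, f in sorted(road_a_frames.items()) if c <= node_a_m]
    departure = [f for c, f in sorted(road_b_frames.items()) if c >= node_b_m]
    return approach + departure

clip = composite_turn({0: "a0", 50: "a1", 100: "a2"},
                      {200: "b0", 250: "b1"},
                      node_a_m=100, node_b_m=200)
print(clip)  # -> ['a0', 'a1', 'a2', 'b0', 'b1']
```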
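The keyword database of application 20080288545 is, at its core, a mapping from user-chosen keywords to physical locations that a navigation function consumes in place of a street address. A minimal sketch, with the normalization rule and the (latitude, longitude) schema as assumptions of this example:

```python
keyword_db = {}  # keyword -> (latitude, longitude); illustrative schema

def associate(keyword, lat, lon):
    """Record a user's keyword for a physical location (the on-line,
    map-driven association step described in the application)."""
    keyword_db[keyword.strip().lower()] = (lat, lon)

def resolve(keyword):
    """Return the location a keyword stands for, or None if unknown."""
    return keyword_db.get(keyword.strip().lower())

associate("grandma", 41.8781, -87.6298)
print(resolve("  Grandma "))  # -> (41.8781, -87.6298)
```

In the application the stored value would preferably reference data in a corresponding geographic database rather than raw coordinates; the tuple here stands in for that reference.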
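The multi-angle viewing in applications 20090153549 and 20130318078 needs two pieces of information at every step of the user's virtual travel: which stored image to show, and which way to face so the object-of-interest stays in view. A simplified sketch, assuming nearest-camera selection and planar small-angle math (the applications also describe interpolating or stitching images, which is omitted here):

```python
import math

def view_toward(images, viewer, target):
    """Pick the stored frame captured nearest the user's virtual
    position and return it with the bearing from its camera to the
    object-of-interest, so a renderer can orient the view toward it.

    images: [{"pos": (lat, lon), "id": ...}]. Bearing uses a local
    planar approximation (cos-latitude scaling on the east-west delta),
    which is adequate at street scale.
    """
    cam = min(images, key=lambda im: math.dist(im["pos"], viewer))
    d_north = target[0] - cam["pos"][0]
    d_east = (target[1] - cam["pos"][1]) * math.cos(math.radians(cam["pos"][0]))
    bearing = math.degrees(math.atan2(d_east, d_north)) % 360.0
    return cam["id"], bearing

images = [{"pos": (41.0, -87.0), "id": "f1"},
          {"pos": (41.001, -87.0), "id": "f2"}]
print(view_toward(images, (41.0002, -87.0), (41.0, -86.999)))  # -> ('f1', 90.0)
```

Called once per step of virtual travel, this keeps the same target in view from changing vantage points, which is the behavior the abstracts describe.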
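The positioning extent of application 20140244164 summarizes how much position fixes scatter about a thoroughfare segment. One way to realize that idea, with the cross-track-offset measure and the mean-plus-k-standard-deviations formula as illustrative choices of this sketch, not the application's formula:

```python
import math
import statistics

M_PER_DEG = 111_320.0  # metres per degree of latitude (approximate)

def cross_track_m(p, a, b):
    """Perpendicular distance in metres from fix p to segment a-b.
    Points are (lat, lon); a local planar approximation is used,
    which is adequate over segment-length scales."""
    kx = M_PER_DEG * math.cos(math.radians(a[0]))  # metres per deg lon
    ax, ay = a[1] * kx, a[0] * M_PER_DEG
    bx, by = b[1] * kx, b[0] * M_PER_DEG
    px, py = p[1] * kx, p[0] * M_PER_DEG
    dx, dy = bx - ax, by - ay
    t = ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)
    t = max(0.0, min(1.0, t))  # clamp to the segment's endpoints
    cx, cy = ax + t * dx, ay + t * dy
    return math.hypot(px - cx, py - cy)

def positioning_extent(fixes, seg_a, seg_b, k=2.0):
    """Summarize the variation of the fixes about the segment as a
    single extent in metres: mean offset plus k standard deviations."""
    offsets = [cross_track_m(p, seg_a, seg_b) for p in fixes]
    return statistics.fmean(offsets) + k * statistics.pstdev(offsets)
```

A map matcher could then treat a fix as plausibly on the segment whenever its cross-track offset falls within the segment's extent, with per-segment extents reflecting how each positioning system actually behaves there.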