Patent application number | Description | Published |
20080219508 | VISION BASED NAVIGATION AND GUIDANCE SYSTEM - A method of controlling an autonomous vehicle with a vision-based navigation and guidance system. The method includes capturing image frames at a predetermined frame rate, detecting image points that define an edge in each image frame, using the image points in each image frame to locate a determined edge, dynamically predicting the location of image points that define an edge in a subsequent image frame, using that prediction to narrow the search area for the edge in the subsequent image frame, determining offset distances between determined edges in sequential image frames, and controlling the vehicle based at least in part on the determined offset distances. | 09-11-2008 |
20080239279 | LADAR-BASED MOTION ESTIMATION FOR NAVIGATION - A method for estimating motion with at least one laser radar is disclosed. The method involves scanning a first range image at a first time, locating a plurality of object features in the first range image, scanning a second range image at a second time, receiving motion data for a time period between the first and second times, and locating the plurality of object features in the second range image based, at least in part, on the motion data to determine an orientation of the object. Based on the orientation of the object, the method estimates motion by comparing the location of the plurality of object features in the first scanned range image to the location of the plurality of object features in the second scanned range image. | 10-02-2008 |
20090076669 | METHODS AND SYSTEMS FOR CONTROLLING MULTI-BODY VEHICLES WITH FUEL SLOSH - Methods and systems for controlling multi-body vehicles with fuel slosh are provided. In a method for stabilizing a vehicle with fuel slosh, a controller for a multi-body assembly is formed by selecting vehicle configuration and velocity as states of a rigid body. Equations of motion are written for the multi-body assembly, and velocity states are determined that are unactuated. A decoupling transformation is defined from the unactuated velocity states. The equations of motion are transformed using the decoupling transformation such that unactuated states are decoupled from actuated states. A slosh state estimator is coupled to the controller such that it is in operative communication with an input to the controller and an output from the controller. | 03-19-2009 |
20090263009 | METHOD AND SYSTEM FOR REAL-TIME VISUAL ODOMETRY - A method for real-time visual odometry comprises capturing a first three-dimensional image of a location at a first time, capturing a second three-dimensional image of the location at a second time that is later than the first time, and extracting one or more features and their descriptors from each of the first and second three-dimensional images. One or more features from the first three-dimensional image are then matched with one or more features from the second three-dimensional image. The method further comprises determining changes in rotation and translation between the first and second three-dimensional images from the first time to the second time using a random sample consensus (RANSAC) process and a unique iterative refinement technique. | 10-22-2009 |
20090279741 | METHOD AND APPARATUS FOR VISION BASED MOTION DETERMINATION - A method for determining motion is provided. The method determines a rotation of an object from a first time to a second time by analyzing a first 2D image obtained at the first time and a second 2D image obtained at the second time. Then, the method determines a translation of the object from the first time to the second time based on the determined rotation, 3D information relating to the first image, and 3D information relating to the second image. | 11-12-2009 |
20090319186 | METHOD AND APPARATUS FOR DETERMINING A NAVIGATIONAL STATE OF A VEHICLE - Methods and apparatus are provided for determining a navigational state of a vehicle, the vehicle having at least one pivotable wheel and a plurality of front wheels. The apparatus comprises a steering angle sensor coupled to the at least one pivotable wheel for determining a steering angle, a plurality of wheel speed sensors each coupled to a different one of the plurality of front wheels for determining an angular velocity of each of the plurality of front wheels, a GPS receiver coupled to the vehicle for receiving GPS positioning data, and a processor coupled to the steering angle sensor, the plurality of wheel speed sensors, and the GPS receiver. The processor is configured to determine a yaw rate for the vehicle based on the positioning data, the steering angle, and the longitudinal angular velocity of each of the plurality of front wheels, and to integrate the yaw rate to determine a heading for the vehicle. | 12-24-2009 |
20100092033 | METHOD FOR TARGET GEO-REFERENCING USING VIDEO ANALYTICS - A method to geo-reference a target between subsystems of a targeting system is provided. The method includes receiving a target image formed at a sender subsystem location, generating target descriptors for a first selected portion of the target image, sending target location information and the target descriptors from a sender subsystem of the targeting system to a receiver subsystem of the targeting system, pointing an optical axis of a camera of the receiver subsystem at the target based on the target location information received from the sender subsystem, forming a target image at a receiver subsystem location when the optical axis is pointed at the target, and identifying a second selected portion of the target image formed at the receiver subsystem location that is correlated to the first selected portion of the target image formed at the sender subsystem location. | 04-15-2010 |
20100092071 | SYSTEM AND METHODS FOR NAVIGATION USING CORRESPONDING LINE FEATURES - A method for navigating identifies line features in a first three-dimensional (3-D) image and a second 3-D image as a navigation platform traverses an area and compares the line features in the first 3-D image that correspond to the line features in the second 3-D image. When the line features compared in the first and the second 3-D images are within a prescribed tolerance threshold, the method uses a conditional set of geometrical criteria to determine whether the line features in the first 3-D image match the corresponding line features in the second 3-D image. | 04-15-2010 |
20100274481 | SYSTEM AND METHOD FOR COLLABORATIVE NAVIGATION - A system and method for collaborative navigation is provided. The system comprises a first mobile unit, at least one inertial measurement unit on the first mobile unit, and at least one environment sensor on the first mobile unit. A navigator module in the first mobile unit is configured to receive inertial data from the inertial measurement unit. An object characterization module is configured to receive sensor data from the environment sensor and a navigation solution from the navigator module. A common object geo-locator module is configured to receive a first set of descriptors from the object characterization module and a second set of descriptors from another mobile unit. A data association module is configured to receive common descriptors from the common object geo-locator module. The first mobile unit is configured to operatively communicate with one or more additional mobile units that are configured for collaborative navigation with the first mobile unit. | 10-28-2010 |
20110102545 | UNCERTAINTY ESTIMATION OF PLANAR FEATURES - In one embodiment, a method comprises generating three-dimensional (3D) imaging data for an environment using an imaging sensor, extracting an extracted plane from the 3D imaging data, and estimating an uncertainty of an attribute associated with the extracted plane. The method further comprises generating a navigation solution using the attribute associated with the extracted plane and the estimate of the uncertainty of the attribute associated with the extracted plane. | 05-05-2011 |
20110118969 | COGNITIVE AND/OR PHYSIOLOGICAL BASED NAVIGATION - A method of navigation comprises receiving, from at least one sensor, physiological sensor data indicative of at least one physiological attribute of a person and generating navigation-related information derived from at least some of the physiological sensor data. | 05-19-2011 |
20110274343 | SYSTEM AND METHOD FOR EXTRACTION OF FEATURES FROM A 3-D POINT CLOUD - A method of extracting a feature from a point cloud comprises receiving a three-dimensional (3-D) point cloud representing objects in a scene, the 3-D point cloud containing a plurality of data points; generating a plurality of hypothetical features based on data points in the 3-D point cloud, wherein the data points corresponding to each hypothetical feature are inlier data points for the respective hypothetical feature; and selecting the hypothetical feature having the most inlier data points as representative of an object in the scene. | 11-10-2011 |
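Several of the applications above (e.g. 20090263009 and 20090279741) estimate rotation and translation between matched feature sets using a RANSAC process. A minimal sketch of that general idea in Python/NumPy, assuming two already-matched sets of 3-D feature points; the function names, sample size, and thresholds are illustrative, not taken from the patents:

```python
import numpy as np

def estimate_rigid_transform(src, dst):
    """Least-squares R, t with dst ~ R @ src + t (Kabsch/Umeyama method)."""
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    U, _, Vt = np.linalg.svd(dst_c.T @ src_c)
    d = np.sign(np.linalg.det(U @ Vt))      # guard against reflections
    R = U @ np.diag([1.0, 1.0, d]) @ Vt
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t

def ransac_motion(src, dst, iters=200, tol=0.05, seed=0):
    """RANSAC over minimal 3-point samples, then refine on the best inlier set."""
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(src), dtype=bool)
    for _ in range(iters):
        idx = rng.choice(len(src), size=3, replace=False)
        R, t = estimate_rigid_transform(src[idx], dst[idx])
        err = np.linalg.norm((src @ R.T + t) - dst, axis=1)
        inliers = err < tol
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return estimate_rigid_transform(src[best_inliers], dst[best_inliers])
```

The final refit on all inliers plays the role of the refinement step mentioned in 20090263009, though the patent's "unique iterative refinement technique" is not specified in the abstract.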
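The hypothesize-and-select scheme of 20110274343 (generate hypothetical features from point-cloud samples, keep the one with the most inliers) can be sketched for the special case where the feature is a plane, as in 20110102545. This is a generic RANSAC plane fit, assuming an N x 3 NumPy point cloud; all names and tolerances are illustrative:

```python
import numpy as np

def fit_plane(points):
    """Fit plane n . x = d by SVD; returns unit normal n and offset d."""
    centroid = points.mean(axis=0)
    _, _, Vt = np.linalg.svd(points - centroid)
    n = Vt[-1]                              # direction of least variance
    return n, n @ centroid

def extract_plane(cloud, iters=300, tol=0.02, seed=0):
    """Hypothesize planes from random 3-point samples; select the hypothesis
    with the most inlier points (within tol of the plane), then refit."""
    rng = np.random.default_rng(seed)
    best_inliers = None
    best_count = 0
    for _ in range(iters):
        sample = cloud[rng.choice(len(cloud), size=3, replace=False)]
        n, d = fit_plane(sample)
        inliers = np.abs(cloud @ n - d) < tol
        if inliers.sum() > best_count:
            best_count = inliers.sum()
            best_inliers = inliers
    return fit_plane(cloud[best_inliers]), best_inliers
```

Extending this to other feature types amounts to swapping the model fit and the point-to-feature distance while keeping the same sample/score/select loop.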