Patent application number | Description | Published |
--- | --- | --- |
20110170780 | SCALE SPACE NORMALIZATION TECHNIQUE FOR IMPROVED FEATURE DETECTION IN UNIFORM AND NON-UNIFORM ILLUMINATION CHANGES - A normalization process is implemented on a difference of scale space to completely or substantially reduce the effect that illumination changes have on feature/keypoint detection in an image. An image may be processed by progressively blurring the image using a smoothening function to generate a smoothened scale space for the image. A difference of scale space may be generated by taking the difference between two different smoothened versions of the image. A normalized difference of scale space image may be generated by dividing the difference of scale space image by a third smoothened version of the image, where the third smoothened version is as smooth as or smoother than the smoother of the two different smoothened versions of the image. The normalized difference of scale space image may then be used to detect one or more features/keypoints for the image. | 07-14-2011 |
20110255781 | EFFICIENT DESCRIPTOR EXTRACTION OVER MULTIPLE LEVELS OF AN IMAGE SCALE SPACE - A local feature descriptor for a point in an image is generated over multiple levels of an image scale space. The image is gradually smoothened to obtain a plurality of scale spaces. A point may be identified as a point of interest within a first scale space from the plurality of scale spaces. A plurality of image derivatives is obtained for each of the plurality of scale spaces. A plurality of orientation maps is obtained (from the plurality of image derivatives) for each scale space in the plurality of scale spaces. Each of the plurality of orientation maps is then smoothened (e.g., convolved) to obtain a corresponding plurality of smoothed orientation maps. Finally, a local feature descriptor for the point may be generated by sparsely sampling a plurality of smoothed orientation maps corresponding to two or more scale spaces from the plurality of scale spaces. | 10-20-2011 |
20110299770 | PERFORMANCE OF IMAGE RECOGNITION ALGORITHMS BY PRUNING FEATURES, IMAGE SCALING, AND SPATIALLY CONSTRAINED FEATURE MATCHING - A method for feature matching in image recognition is provided. First, image scaling may be based on a feature distribution across scale spaces for an image to estimate image size/resolution, where peak(s) in the keypoint distribution at different scales are used to track a dominant image scale and roughly track object sizes. Second, instead of using all detected features in an image for feature matching, keypoints may be pruned based on cluster density and/or the scale level in which the keypoints are detected. Keypoints falling within high-density clusters may be preferred over features falling within lower-density clusters for purposes of feature matching. Third, inlier-to-outlier keypoint ratios are increased by spatially constraining keypoints into clusters in order to reduce or avoid geometric consistency checking for the image. | 12-08-2011 |
20110299782 | FAST SUBSPACE PROJECTION OF DESCRIPTOR PATCHES FOR IMAGE RECOGNITION - A method for generating a feature descriptor is provided. A set of pre-generated sparse projection vectors is obtained. A scale space for an image is also obtained, where the scale space has a plurality of scale levels. A descriptor for a keypoint in the scale space is then generated based on a combination of the sparse projection vectors and sparsely sampled pixel information for a plurality of pixels across the plurality of scale levels. | 12-08-2011 |
20120027290 | OBJECT RECOGNITION USING INCREMENTAL FEATURE EXTRACTION - In one example, an apparatus includes a processor configured to extract a first set of one or more keypoints from a first set of blurred images of a first octave of a received image, calculate a first set of one or more descriptors for the first set of keypoints, receive a confidence value for a result produced by querying a feature descriptor database with the first set of descriptors, wherein the result comprises information describing an identity of an object in the received image, and extract a second set of one or more keypoints from a second set of blurred images of a second octave of the received image when the confidence value does not exceed a confidence threshold. In this manner, the processor may perform incremental feature descriptor extraction, which may improve computational efficiency of object recognition in digital images. | 02-02-2012 |
20120263388 | ROBUST FEATURE MATCHING FOR VISUAL SEARCH - Techniques are disclosed for performing robust feature matching for visual search. An apparatus comprising an interface and a feature matching unit may implement these techniques. The interface receives a query feature descriptor. The feature matching unit then computes distances between the query feature descriptor and reference feature descriptors and determines a first group of the computed distances and a second group of the computed distances in accordance with a clustering algorithm, where this second group of computed distances comprises two or more of the computed distances. The feature matching unit then determines whether the query feature descriptor matches the reference feature descriptor associated with the smallest of the computed distances based on the determined first group and second group of the computed distances. | 10-18-2012 |
20120330967 | Descriptor storage and searches of k-dimensional trees - Various arrangements for using a k-dimensional tree for a search are presented. A plurality of descriptors may be stored. Each of the plurality of descriptors stored is linked with a first number of stored dimensions. The search may be performed using the k-dimensional tree for one or more query descriptors that at least approximately match one or more of the plurality of descriptors linked with the first number of stored dimensions. The k-dimensional tree may be built using the plurality of descriptors wherein each of the plurality of descriptors is linked with a second number of dimensions when the k-dimensional tree is built. The second number of dimensions may be a greater number of dimensions than the first number of stored dimensions. | 12-27-2012 |
20130039566 | CODING OF FEATURE LOCATION INFORMATION - Methods and devices for coding of feature locations are disclosed. In one embodiment, a method of coding feature location information of an image includes generating a hexagonal grid, where the hexagonal grid includes a plurality of hexagonal cells, quantizing feature locations of an image using the hexagonal grid, generating a histogram to record occurrences of feature locations in each hexagonal cell, and encoding the histogram in accordance with the occurrences of feature locations in each hexagonal cell. The method of encoding the histogram includes applying context information of neighboring hexagonal cells to encode information of a subsequent hexagonal cell to be encoded in the histogram, where the context information includes context information from first order neighbors and context information from second order neighbors of the subsequent hexagonal cell to be encoded. | 02-14-2013 |
20130046793 | FAST MATCHING OF IMAGE FEATURES USING MULTI-DIMENSIONAL TREE DATA STRUCTURES - A method for generating a descriptor tree data structure is provided. A plurality of descriptors are obtained for one or more images, each descriptor defined within a multi-dimensional descriptor space. The plurality of descriptors are partitioned into nodes of a tree data structure, where the number of nodes in such partitioning is a function of the number of descriptors in the plurality of descriptors. The nodes having more than two descriptors may be sub-partitioned into sub-nodes of the tree data structure until two or fewer descriptors remain per sub-node, where such sub-partitioning is a function of the number of descriptors remaining in each such node and/or a dimensionality of such descriptors. | 02-21-2013 |
20130187905 | METHODS AND SYSTEMS FOR CAPTURING AND MOVING 3D MODELS AND TRUE-SCALE METADATA OF REAL WORLD OBJECTS - In some embodiments, methods and systems are provided for assisting a user in visualizing how a modified real-world setting would appear. An imaging device may capture a plurality of images of one or more objects or settings. A three-dimensional model of each object or setting may be created based on the images. These models may then be used to create a realistic image of a modified setting. For example, an image may display a setting (e.g., a living room) with an additional object (e.g., a couch) in the setting. The image may be realistic, in that it may accurately represent dimensions of the object relative to dimensions in the setting. Because three-dimensional models were created for both the setting and object, a user may be able to manipulate the image to, e.g., re-position and/or re-orient the object within the setting and view the setting from different perspectives. | 07-25-2013 |
20130197793 | CALIBRATED HARDWARE SENSORS FOR ESTIMATING REAL-WORLD DISTANCES - In some embodiments, methods and systems are provided for assisting a user in determining a real-world distance. Hardware-based sensors (e.g., present in a mobile electronic device) may allow for a fast low-power determination of distances. In one embodiment, one or more telemetry-related sensors may be incorporated into a device. For example, data detected by a frequently-calibrated integrated accelerometer may be used to determine a tilt of the device. A device height may be estimated based on empirical data or based on a time difference between a signal (e.g., a sonar signal) emitted towards the ground and a corresponding detected signal. A triangulation technique may use the estimated tilt and height to estimate other real-world distances (e.g., from the device to an endpoint or between endpoints). | 08-01-2013 |
20130201210 | VIRTUAL RULER - In some embodiments, first information indicative of an image of a scene is accessed. One or more reference features are detected, the reference features being associated with a reference object in the image. A transformation between an image space and a real-world space is determined based on the first information. Second information indicative of input from a user is accessed, the second information identifying an image-space distance in the image space corresponding to a real-world distance of interest in the real-world space. The real-world distance of interest is then estimated based on the second information and the determined transformation. | 08-08-2013 |
20130250123 | MULTISPECTRAL IMAGING SYSTEM - Systems and methods for multispectral imaging are disclosed. The multispectral imaging system can include a near infrared (NIR) imaging sensor and a visible imaging sensor. The disclosed systems and methods can be implemented to improve alignment between the NIR and visible images. Once the NIR and visible images are aligned, various types of multispectral processing techniques can be performed on the aligned images. | 09-26-2013 |
20130293532 | SEGMENTATION OF 3D POINT CLOUDS FOR DENSE 3D MODELING - Techniques for segmentation of three-dimensional (3D) point clouds are described herein. An example of a method for user-assisted segmentation of a 3D point cloud described herein includes obtaining a 3D point cloud of a scene containing a target object; receiving a seed input indicative of a location of the target object within the scene; and generating a segmented point cloud corresponding to the target object by pruning the 3D point cloud based on the seed input. | 11-07-2013 |
20140037189 | Fast 3-D point cloud generation on mobile devices - A system, apparatus and method for determining a 3-D point cloud is presented. First, a processor detects feature points in the first 2-D image, feature points in the second 2-D image, and so on. This set of feature points is first matched across images using an efficient transitive matching scheme. These matches are pruned to remove outliers by a first pass using projection models, such as a planar homography model computed on a grid placed on the images, and a second pass using an epipolar line constraint, to produce a set of matches across the images. This set of matches can be used to triangulate and form a 3-D point cloud of the 3-D object. The processor may recreate the 3-D object as a 3-D model from the 3-D point cloud. | 02-06-2014 |
20150066427 | CALIBRATED HARDWARE SENSORS FOR ESTIMATING REAL-WORLD DISTANCES - In some embodiments, methods and systems are provided for assisting a user in determining a real-world distance. Hardware-based sensors (e.g., present in a mobile electronic device) may allow for a fast low-power determination of distances. In one embodiment, one or more telemetry-related sensors may be incorporated into a device. For example, data detected by a frequently-calibrated integrated accelerometer may be used to determine a tilt of the device. A device height may be estimated based on empirical data or based on a time difference between a signal (e.g., a sonar signal) emitted towards the ground and a corresponding detected signal. A triangulation technique may use the estimated tilt and height to estimate other real-world distances (e.g., from the device to an endpoint or between endpoints). | 03-05-2015 |
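The illumination-normalized difference of scale space described in application 20110170780 can be sketched in a few lines. This is an illustrative sketch only, not the claimed implementation; the `sigma` values and the `eps` stabilizer are assumptions, not taken from the filing:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def normalized_difference_of_scale_space(image, sigma1=1.0, sigma2=2.0,
                                         sigma3=4.0, eps=1e-6):
    """Difference of two Gaussian-smoothened versions of `image`, divided by a
    third version at least as smooth as the smoother of the two, to suppress
    the effect of uniform and non-uniform illumination changes."""
    img = image.astype(np.float64)
    g1 = gaussian_filter(img, sigma1)  # finer-scale smoothened image
    g2 = gaussian_filter(img, sigma2)  # coarser-scale smoothened image
    g3 = gaussian_filter(img, sigma3)  # smoother than both g1 and g2
    return (g1 - g2) / (g3 + eps)      # eps guards against division by zero
```

Because a multiplicative illumination change scales numerator and denominator alike, it approximately cancels in the normalized response.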
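The two-group distance clustering of application 20120263388 can be illustrated with a simple stand-in: split the sorted distances at their largest gap and accept the nearest reference only when the near group is clearly separated from the far group. The gap-based split and the `separation` factor are assumptions chosen for this sketch, not the patented clustering algorithm:

```python
import numpy as np

def robust_match(query, references, separation=1.5):
    """Return the index of the matched reference descriptor, or None if the
    two distance groups are not separated enough to call a confident match."""
    d = np.linalg.norm(references - query, axis=1)  # distance to each reference
    order = np.argsort(d)
    ds = d[order]
    if ds.size < 2:
        return int(order[0]) if ds.size else None
    gaps = np.diff(ds)
    split = int(np.argmax(gaps)) + 1   # boundary between near and far groups
    near, far = ds[:split], ds[split:]
    if far.size and near.mean() * separation < far.mean():
        return int(order[0])           # smallest distance, confidently matched
    return None                        # ambiguous distances: reject the match
```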
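The k-dimensional-tree descriptor searches of applications 20120330967 and 20130046793 can be approximated with an off-the-shelf structure. The sketch below uses SciPy's `cKDTree` and, as a rough analogue of storing fewer dimensions than the descriptors' full dimensionality, indexes only a prefix of each descriptor; it does not reproduce the patented storage or partitioning schemes, and `stored_dims` is an assumed parameter:

```python
import numpy as np
from scipy.spatial import cKDTree

def build_and_query(descriptors, queries, stored_dims=8):
    """Build a k-d tree over a truncated prefix of each descriptor and find,
    for each query, the approximately nearest stored descriptor."""
    tree = cKDTree(descriptors[:, :stored_dims])      # index only stored dims
    dist, idx = tree.query(queries[:, :stored_dims])  # nearest-neighbor search
    return idx, dist
```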
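The hexagonal-grid quantization and histogram of application 20130039566 can be sketched with standard axial hex coordinates and cube rounding (pointy-top cells). The coordinate convention and `cell_size` are assumptions for illustration; the context-based entropy coding of the histogram is omitted:

```python
import numpy as np

def hex_quantize(points, cell_size=1.0):
    """Quantize 2-D feature locations to hexagonal cells and return a
    histogram of occurrences per cell, keyed by axial coordinates (q, r)."""
    counts = {}
    for x, y in points:
        # pixel coordinates -> fractional axial coordinates
        q = (np.sqrt(3) / 3 * x - y / 3) / cell_size
        r = (2 / 3 * y) / cell_size
        # cube rounding: round each cube coordinate, then fix the one with
        # the largest rounding error so the coordinates still sum to zero
        cx, cz = q, r
        cy = -cx - cz
        rx, ry, rz = round(cx), round(cy), round(cz)
        dx, dy, dz = abs(rx - cx), abs(ry - cy), abs(rz - cz)
        if dx > dy and dx > dz:
            rx = -ry - rz
        elif dy > dz:
            ry = -rx - rz
        else:
            rz = -rx - ry
        cell = (int(rx), int(rz))
        counts[cell] = counts.get(cell, 0) + 1
    return counts
```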
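The tilt-and-height triangulation of applications 20130197793 and 20150066427 reduces, in the simplest flat-ground case, to elementary trigonometry: with the device at height `h` and the sight line tilted by angle `t` from vertical, the ground distance is `h * tan(t)`. This is only the geometric core of the described technique, under an assumed flat-ground model:

```python
import math

def ground_distance(height_m, tilt_rad):
    """Distance along flat ground from the point below the device to where a
    sight line tilted `tilt_rad` from vertical meets the ground."""
    return height_m * math.tan(tilt_rad)

def distance_between_endpoints(height_m, tilt_a, tilt_b):
    """Distance between two ground endpoints sighted at two different tilts
    from the same device height."""
    return abs(ground_distance(height_m, tilt_b)
               - ground_distance(height_m, tilt_a))
```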