3-D or stereo imaging analysis

Subclass of:

382 - Image analysis

382100000 - APPLICATIONS

Patent class list (only non-empty classes are listed)

Deeper subclasses:

Entries
Document | Title | Date
20110002531Object Recognition with 3D Models - An “active learning” method trains a compact classifier for view-based object recognition. The method actively generates its own training data. Specifically, the generation of synthetic training images is controlled within an iterative training process. Valuable and/or informative object views are found in a low-dimensional rendering space and then added iteratively to the training set. In each iteration, new views are generated. A sparse training set is iteratively generated by searching for local minima of a classifier's output in a low-dimensional space of rendering parameters. An initial training set is generated. The classifier is trained using the training set. Local minima are found of the classifier's output in the low-dimensional rendering space. Images are rendered at the local minima. The newly-rendered images are added to the training set. The procedure is repeated so that the classifier is retrained using the modified training set.01-06-2011
20110176720Digital Image Transitions - Among other things, methods, systems and computer program products are disclosed for displaying a sequence of multiple images to provide an appearance of a three-dimensional (3D) effect. A data processing device or system can identify multiple images to be displayed. The data processing device or system can divide a two-dimensional (2D) display area into multiple display portions. The data processing device or system can display a sequence of the identified images on the display portions so as to provide an appearance of a three-dimensional (3D) effect.07-21-2011
20090116731Method and system for detection of concha and intertragal notch point in 3D undetailed ear impressions - A method and system for detecting the concha and intertragal notch in an undetailed 3D ear impression is disclosed. The concha is detected by searching vertical scan lines in a region surrounding the aperture using a two-pass method. The intertragal notch is detected based on a bottom contour of the 3D undetailed ear impression and a local coordinate system defined for the 3D undetailed ear impression.05-07-2009
201300285072D to 3D IMAGE CONVERSION APPARATUS AND METHOD THEREOF - A 2D to 3D image conversion apparatus includes a data queue, a conversion unit and an offset calculation unit. The data queue receives and temporarily stores an input data value corresponding to a current pixel. The conversion unit outputs a current offset table corresponding to a current depth parameter of the current pixel. The current offset table includes (m+1) reference offsets corresponding to the current pixel and neighboring m pixels. The offset calculation unit selects one of the reference offsets corresponding to the current pixel in the current offset table and multiple previous offset tables as a data offset corresponding to the current pixel. The data queue selects and outputs an output data value corresponding to the current pixel according to an integer part of the data offset and the input data value.01-31-2013
20080232679Apparatus and Method for 3-Dimensional Scanning of an Object - A 3-dimensional scanner capable of acquiring the shape, color, and reflectance of an object as a complete 3-dimensional object. The scanner utilizes a fixed camera, telecentric lens, and a light source rotatable around an object to acquire images of the object under varying controlled illumination conditions. Image data are processed using photometric stereo and structured light analysis methods to determine the object shape and the data combined using a minimization algorithm. Scans of adjacent object sides are registered together to construct a 3-dimensional surface model.09-25-2008
20120163705IMAGE FILE PROCESSING APPARATUS WHICH GENERATES AN IMAGE FILE TO INCLUDE STEREO IMAGE DATA AND COLLATERAL DATA RELATED TO THE STEREO IMAGE DATA, AND INFORMATION RELATED TO AN IMAGE SIZE OF THE STEREO IMAGE DATA, AND CORRESPONDING IMAGE FILE PROCESSING METHOD - Stereo image data is generated based on a plurality of monocular images of a same subject with a predetermined parallax, a collateral data generating section generates collateral data related to the stereo image data, and a stereo image size information generating unit generates information related to an image size of the stereo image data. An image file generating unit generates an image file in conversion to a predetermined file format upon synthesizing the stereo image data and the collateral data, and further adds the information related to the image size to the collateral data at inner and outer areas thereof.06-28-2012
20120163703STEREO MATCHING SYSTEM USING DYNAMIC PROGRAMMING AND METHOD THEREOF - Disclosed is a stereo matching system and method using a dynamic programming scheme. The stereo matching system and method using a dynamic programming scheme according to the present invention may perform Viterbi-type stereo matching using at least two different penalty-of-disparity-discontinuity (PD) values and synthesize the performed stereo matching results, thereby outputting a disparity map.06-28-2012
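The scanline form of this idea can be illustrated with a small dynamic-programming (Viterbi-style) sketch. The absolute-difference matching cost, the penalty values, and the single left-to-right accumulation below are assumptions chosen for illustration rather than the patent's exact formulation; the two penalties p1 and p2 play the role of the two PD values.

```python
import numpy as np

def scanline_dp_disparity(left, right, max_disp=32, p1=8.0, p2=32.0):
    """Per-row disparity estimation with two disparity-discontinuity
    penalties: p1 for a +/-1 change, p2 for any larger jump."""
    left = np.asarray(left, dtype=float)
    right = np.asarray(right, dtype=float)
    h, w = left.shape
    disparity = np.zeros((h, w), dtype=np.int32)
    for y in range(h):
        # Pixel-wise matching cost for every candidate disparity.
        cost = np.full((w, max_disp), 255.0)
        for d in range(max_disp):
            cost[d:, d] = np.abs(left[y, d:] - right[y, :w - d])
        # Accumulate costs left to right, penalizing disparity changes.
        acc = cost.copy()
        for x in range(1, w):
            prev = acc[x - 1]
            best_prev = prev.min()
            for d in range(max_disp):
                step = min(prev[d - 1] if d > 0 else np.inf,
                           prev[d + 1] if d + 1 < max_disp else np.inf) + p1
                acc[x, d] = cost[x, d] + min(prev[d], step, best_prev + p2)
        # Winner-takes-all on each pixel's accumulated cost.
        disparity[y] = np.argmin(acc, axis=1)
    return disparity
```

A full system would typically aggregate several scan directions and post-filter the result, but the two-penalty structure above is the core of the dynamic-programming step.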
20100061622METHOD FOR ALIGNING OBJECTS - A computer-implemented method for aligning objects receives a reference object and a to-be-moved object and determines feature elements of the reference object. A first coordinate system is constructed according to a plurality of feature elements of the reference object. A second coordinate system is constructed according to a plurality of feature elements of the to-be-moved object. A third coordinate system is constructed according to the first coordinate system and the second coordinate system. An operation matrix is computed according to the three coordinate systems. The two objects are aligned using the operation matrix.03-11-2010
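As a rough illustration of the coordinate-system construction, the sketch below assumes each object's feature elements are three non-collinear 3D points, builds an orthonormal frame for each object, and forms the operation matrix as the product that carries the moving frame onto the reference frame. The frame construction and the use of exactly three points are assumptions, not the patent's stated procedure.

```python
import numpy as np

def frame_from_points(p0, p1, p2):
    """4x4 coordinate system (origin p0, orthonormal axes) built from three
    non-collinear 3D feature points."""
    p0, p1, p2 = (np.asarray(p, dtype=float) for p in (p0, p1, p2))
    x = p1 - p0
    x /= np.linalg.norm(x)
    z = np.cross(x, p2 - p0)
    z /= np.linalg.norm(z)
    y = np.cross(z, x)
    frame = np.eye(4)
    frame[:3, 0], frame[:3, 1], frame[:3, 2], frame[:3, 3] = x, y, z, p0
    return frame

def operation_matrix(ref_points, mov_points):
    """Matrix carrying points of the to-be-moved object onto the reference
    object: go from world into the moving frame, then out through the
    reference frame."""
    ref_frame = frame_from_points(*ref_points)
    mov_frame = frame_from_points(*mov_points)
    return ref_frame @ np.linalg.inv(mov_frame)
```

Homogeneous points of the to-be-moved object would then be aligned by `(operation_matrix(ref_pts, mov_pts) @ pts_h.T).T`.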
20090196491Method for automated 3d imaging - A method for automated construction of 3D images is disclosed, in which a range measurement device is used to initiate and control the processing of 2D images in order to produce a 3D image. The range measurement device may be integrated with an image sensor, for example the range sensor from a digital camera, or may be a separate device. Data indicating the distance to a specific feature obtained from the range sensor may be used to control and automate the construction of the 3D image.08-06-2009
20080260238Method and System for Determining Objects Poses from Range Images - A method and system determines a pose of an object by comparing an input range image acquired of a scene including the input object to each of a set of reference range images of a reference object, such that each reference range image has an associated different pose, and the reference object is similar to the input object. Then, the pose associated with the reference range image which best matches the input range image is selected as the pose of the input object.10-23-2008
20100092072Automated generation of 3D models from 2D computer-aided design (CAD) drawings - The process and method for generating a 3D model from a set of 2D drawings is described herein. Traditionally, many structural components (objects) are communicated through a series of 2D drawings, wherein each drawing describes the components that are visible in a user-selected view direction. No machine-readable information in the drawings define a relationship between the drawings developed from various view directions or the objects' locations in 3D space. Considerable human effort and intervention is required to place objects defined in the 2D drawings into 3D space. With the ability to provide information in each drawing defining a relationship with the other drawings as well as its place in 3D space, the objects defined in 2D drawings can self-assemble in 3D space, thereby reducing a substantial amount of required human effort.04-15-2010
20110194756IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, AND PROGRAM - An image processor includes a main image statistical information generator that detects a parallax of each predetermined unit of a 3D main image from main image data and generates parallax statistical information, a sub-image statistical information generator unit that detects a parallax of each predetermined unit of a 3D sub-image from sub-image data and generates parallax statistical information, a parallax controller that computes, using the statistical information, a correction amount used for correcting at least one of the main image and sub-image parallaxes so that a positional distance between the main image and the sub-image in a depth direction is within a predetermined range, a converter that converts at least one of the main image data and sub-image data so that at least one of the parallaxes of the images is corrected by the correction amount, and a superimposing unit that superimposes the sub-image data on the main image data.08-11-2011
20090123061Depth image generating method and apparatus - A method of and apparatus for generating a depth image are provided. The method of generating a depth image includes: emitting light to an object for a first predetermined time period; detecting first light information of the object for the first predetermined time period from the time when the light is emitted; detecting second light information of the object for the first predetermined time period a second predetermined time period after the time when the light is emitted; and by using the detected first and second light information, generating a depth image of the object. In this way, the method can generate a depth image more quickly.05-14-2009
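The two gated detections can be turned into per-pixel depth with a ratio-based relation that is commonly used in gated time-of-flight sensing; this specific formula and the variable names are assumptions for illustration, not the patent's own equations.

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def gated_tof_depth(s1, s2, gate_time):
    """Ratio-based depth from two gated exposures (an assumed, commonly used
    relation): the later gate collects a fraction of the returning light
    proportional to the round-trip delay, so
    depth = (c * T / 2) * s2 / (s1 + s2)."""
    s1 = np.asarray(s1, dtype=float)
    s2 = np.asarray(s2, dtype=float)
    total = s1 + s2
    ratio = np.divide(s2, total, out=np.zeros_like(total), where=total > 0)
    return 0.5 * C * gate_time * ratio
```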
20080298674Stereoscopic Panoramic imaging system - An imaging system for producing stereoscopic panoramic images using multiple coplanar pairs of image capture devices with overlapping fields of view held in a rigid structural frame for long term calibration maintenance. Pixels are dynamically adjusted within the imaging system for position, color, brightness, aspect ratio, lens imperfections, imaging chip variations and any other imaging system shortcomings that are identified during calibration processes. Correction of pixel information is implemented in various combinations of hardware and software. Corrected image data is then available for storage or display or for separate data processing actions such as object distance or volume calculations.12-04-2008
20120177283FORMING 3D MODELS USING TWO IMAGES - A method for determining a three-dimensional model from two images comprising: receiving first and second images captured from first and second viewpoints, respectively, each image including a two-dimensional image together with a corresponding range map; identifying a set of corresponding features in the first and second two-dimensional images; removing any extraneous corresponding features in the set of corresponding features responsive to the first and second range maps to produce a refined set of corresponding features; determining a geometrical transform for transforming three-dimensional coordinates for the first image to be consistent three-dimensional coordinates for the second image responsive to three-dimensional coordinates for the refined set of corresponding features, the three-dimensional coordinates comprising two-dimensional pixel coordinates from the corresponding two-dimensional image together with a range coordinate from the corresponding range map; and determining a three-dimensional model responsive to the first image, the second image and the geometrical transform.07-12-2012
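Once the refined correspondences carry 3D coordinates (pixel position plus range), the geometrical transform can be estimated with a standard least-squares rigid fit. The sketch below uses the SVD-based Kabsch/Procrustes solution together with a simple pairwise-distance consistency test for removing extraneous matches; both choices are assumptions, since the abstract does not say which estimator or outlier test the method uses.

```python
import numpy as np

def filter_by_range_consistency(src, dst, tol=0.05):
    """Drop corresponding features whose pairwise 3D distances disagree
    between the two views by more than tol (a simple outlier test)."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    d_src = np.linalg.norm(src[:, None] - src[None, :], axis=2)
    d_dst = np.linalg.norm(dst[:, None] - dst[None, :], axis=2)
    keep = np.median(np.abs(d_src - d_dst), axis=1) < tol
    return src[keep], dst[keep]

def rigid_transform_3d(src, dst):
    """Least-squares rigid transform (R, t) mapping src onto dst
    (Nx3 arrays of corresponding 3D points), via SVD."""
    src = np.asarray(src, dtype=float)
    dst = np.asarray(dst, dtype=float)
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    u, _, vt = np.linalg.svd(src_c.T @ dst_c)
    d = np.sign(np.linalg.det(vt.T @ u.T))
    r = vt.T @ np.diag([1.0, 1.0, d]) @ u.T
    t = dst.mean(axis=0) - r @ src.mean(axis=0)
    return r, t
```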
20100158355Fast Object Detection For Augmented Reality Systems - A detection method is based on a statistical analysis of the appearance of model patches from all possible viewpoints in the scene, and incorporates 3D geometry during both matching and pose estimation processes. By analyzing the computed probability distribution of the visibility of each patch from different viewpoints, a reliability measure for each patch is estimated. That reliability measure is useful for developing industrial augmented reality applications. Using the method, the pose of complex objects can be estimated efficiently given a single test image.06-24-2010
20100158353METHOD FOR RESTORATION OF BUILDING STRUCTURE USING INFINITY HOMOGRAPHIES CALCULATED BASED ON PARALLELOGRAMS - A method for restoration of building structure using infinity homographies calculated based on parallelograms includes: calculating, using two or more parallelograms, an infinity homography between those cameras which refer to an arbitrary camera; restoring cameras and the building structure on an affine space using the computed infinity homography and homologous points between images; and transforming the restored result onto the metric space using constraints on orthogonality of vectors joining the restored three-dimensional points, the ratio of lengths of the vectors and intrinsic camera parameters. As a result, intrinsic camera parameters, camera positions on the metric space and the structure of the building are restored. All the restoration is possible even when intrinsic camera parameters corresponding to all the images are not constant.06-24-2010
20130077854MEASUREMENT APPARATUS AND CONTROL METHOD - A measurement apparatus which measures the relative position and orientation of an image-capturing apparatus capturing images of one or more measurement objects with respect to the measurement object, acquires a captured image using the image-capturing apparatus. The respective geometric features present in a 3D model of the measurement object are projected onto the captured image based on the position and orientation of the image-capturing apparatus, thereby obtaining projection geometric features. Projection geometric features are selected from the resultant projection geometric features based on distances between the projection geometric features in the captured image. The relative position and orientation of the image-capturing apparatus with respect to the measurement object is then calculated using the selected projection geometric features and image geometric features corresponding thereto detected in the captured image.03-28-2013
20090304266CORRESPONDING POINT SEARCHING METHOD AND THREE-DIMENSIONAL POSITION MEASURING METHOD - A plurality of images (I, J) of an object (M) when viewed from different viewpoints are taken in. One of the images is set as a standard image (I), and the other image is set as a reference image (J). One-dimensional pixel data strings with a predetermined width (W) are cut out from the standard image (I) and the reference image (J) along epipolar lines (EP ...).12-10-2009
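A minimal version of this search step is shown below, assuming the images are rectified so the epipolar line is a single row, and assuming a sum-of-squared-differences score over the one-dimensional strip of width W; the patent may use a different similarity measure.

```python
import numpy as np

def match_along_scanline(std_row, ref_row, x, width=11):
    """Find, on the reference scanline, the position whose 1D window best
    matches the window centered at x on the standard scanline (SSD score)."""
    half = width // 2
    template = np.asarray(std_row[x - half:x + half + 1], dtype=float)
    best_x, best_score = None, np.inf
    for cx in range(half, len(ref_row) - half):
        candidate = np.asarray(ref_row[cx - half:cx + half + 1], dtype=float)
        score = np.sum((template - candidate) ** 2)
        if score < best_score:
            best_score, best_x = score, cx
    return best_x
```

Sub-pixel refinement (for example, fitting a parabola to the scores around the best match) is typically added on top of such a search before triangulating the 3D position.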
20130077852METHOD AND APPARATUS FOR GENERATING FINAL DEPTH INFORMATION RELATED MAP THAT IS RECONSTRUCTED FROM COARSE DEPTH INFORMATION RELATED MAP THROUGH GUIDED INTERPOLATION - A method for generating a final depth information related map includes the following steps: receiving a coarse depth information related map, wherein a resolution of the coarse depth information related map is smaller than a resolution of the final depth information related map; and outputting the final depth information related map reconstructed from the coarse depth information related map by receiving an input data and performing a guided interpolation operation upon the coarse depth information related map according to the input data.03-28-2013
20130034296PATTERN DISCRIMINATING APPARATUS - A pattern discriminating apparatus includes a setting unit configured to set at least one area in a three-dimensional space in a three-dimensional image data, a feature value calculating unit configured to calculate a pixel feature value from one pixel to another of the three-dimensional image data, a matrix calculating unit configured to (1) obtain at least one point on a three-dimensional coordinate in the area which is displaced in position from a focused point on the three-dimensional coordinate in the area by a specific mapping, and (2) calculate a co-occurrence matrix which expresses the frequency of occurrence of a combination of the pixel feature value of the focused point in the area and the pixel feature values of the mapped respective points, and a discriminating unit configured to discriminate whether or not an object to be detected is imaged in the area on the basis of the combination of the specific mapping and the co-occurrence matrix and a learning sample of the object to be detected which is learned in advance.02-07-2013
20130077853Image Scaling - The present invention relates to an apparatus, method for adjusting depth characteristics of a three-dimensional image for correcting for errors in perceived depth when scaling the three-dimensional image, the method comprising: receiving three-dimensional image information comprising a stereoscopic image including a first image and a second image, the stereoscopic image having depth characteristics associated with an offset of the first and second images; determining a scaling factor indicative of a scaling for converting the stereoscopic image from an original target size to a new size; determining at least one shifting factor for varying the depth characteristics, the at least one shifting factor indicative of a relative shift to be applied between the first and the second images, wherein the at least one shifting factor is determined in accordance with the scaling factor and at least one depth parameter derived from the depth characteristics; and performing the relative shift between the first and second images in accordance with the shifting factor for adjusting the offset of the first and second images.03-28-2013
20130039566CODING OF FEATURE LOCATION INFORMATION - Methods and devices for coding of feature locations are disclosed. In one embodiment, a method of coding feature location information of an image includes generating a hexagonal grid, where the hexagonal grid includes a plurality of hexagonal cells, quantizing feature locations of an image using the hexagonal grid, generating a histogram to record occurrences of feature locations in each hexagonal cell, and encoding the histogram in accordance with the occurrences of feature locations in each hexagonal cell. The method of encoding the histogram includes applying context information of neighboring hexagonal cells to encode information of a subsequent hexagonal cell to be encoded in the histogram, where the context information includes context information from first order neighbors and context information from second order neighbors of the subsequent hexagonal cell to be encoded.02-14-2013
20130039569METHOD AND APPARATUS OF COMPILING IMAGE DATABASE FOR THREE-DIMENSIONAL OBJECT RECOGNITION - A method of compiling an image database for a three-dimensional object recognition including the steps of: when a plurality of images each showing an object from different viewpoint are inputted, extracting local features from each of the images, and expressing the local features using feature vectors; forming sets of the feature vectors, each set representing a same part of the object from a series of the viewpoints, and generating subspaces, each subspace representing a characteristic of each set; and storing each subspace to the image database with an identifier of the object to perform a recognition process that is realized by the steps of: when at least one image of an object is given as a query, extracting query feature vectors; determining the subspace most similar to each query feature vector; and executing a counting process to the identifiers to retrieve an object most similar to the query.02-14-2013
20130039567METHOD AND APPARATUS TO GENERATE A VOLUME-PANORAMA IMAGE - A method and apparatus to generate a volume-panorama image are provided. A method of generating a volume-panorama image includes receiving conversion relationships between volume images, one of the received conversion relationships being between a first volume image of the volume images and a second volume image of the volume images, the second volume image including an area that is common to an area of the first volume image, generating an optimized conversion relationship from the one of the received conversion relationships based on the received conversion relationships, and generating the volume-panorama image based on the generated optimized conversion relationship.02-14-2013
20130039568IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, AND RECORDING MEDIUM - An image processing apparatus includes: a separation unit that separates each of the left image and the right image into a background area and a foreground area, regarding a first three-dimensional image data acquired by an acquisition unit; a background image generation unit that generates data of a background image by executing image processing on at least one of the background area of the left image and the background area of the right image separated by the separation unit; and a three-dimensional image data generation unit that generates second three-dimensional image data composed of two images having a parallax between a left image and a right image, by combining data of the background image generated by the background image generation unit and data of foreground images regarding the foreground area separated from each of the left image and the right image by the separation unit.02-14-2013
20130034297METHOD AND DEVICE FOR CALCULATING A DEPTH MAP FROM A SINGLE IMAGE - A method for calculating a depth map from an original matrix image, comprising the steps of: 02-07-2013
20100098323Method and Apparatus for Determining 3D Shapes of Objects - An apparatus and method determine a 3D shape of an object in a scene. The object is illuminated to cast multiple silhouettes on a diffusing screen coplanar and in close proximity to a mask. A single image acquired of the diffusing screen is partitioned into subviews according to the silhouettes. A visual hull of the object is then constructed according to isosurfaces of the binary images to approximate the 3D shape of the object.04-22-2010
20100322507SYSTEM AND METHOD FOR DETECTING DROWSY FACIAL EXPRESSIONS OF VEHICLE DRIVERS UNDER CHANGING ILLUMINATION CONDITIONS - The present invention includes a method of detecting drowsy facial expressions of vehicle drivers under changing illumination conditions. The method includes capturing an image of a person's face using an image sensor, detecting a face region of the image using a pattern classification algorithm, and performing, using an active appearance model algorithm, local pattern matching to identify a plurality of landmark points on the face region of the image. The method also includes generating a 3D face model with facial muscles of the face region, determining photometric flows from the 3D face model using an extract photometric flow module, determining geometric flows from the 3D face model using a compute geometric flow module, determining a noise component generated by varying illuminations by comparing the geometric flows to the photometric flows, and removing the noise component by subtracting two photometric flows.12-23-2010
20100104175Integrated image processor - A system is disclosed. An input interface is configured to receive pixel data from two or more images. A pixel handling processor disposed on the substrate is configured to convert the pixel data into depth and intensity pixel data. In some embodiments, a foreground detector processor disposed on the substrate is configured to classify pixels as background or not background. In some embodiments, a projection generator disposed on the substrate is configured to generate a projection in space of the depth and intensity pixel data.04-29-2010
20100104174Markup Language for Interactive Geographic Information System - Data-driven guarded evaluation of conditional-data associated with data objects is used to control activation and processing of the data objects in an interactive geographic information system. Methods of evaluating conditional-data to control activation of the data objects are disclosed herein. Data structures to specify conditional data are also disclosed herein.04-29-2010
20090154792Linear Feature Detection Method and Apparatus - A method of extracting linear features from an image, the method including the steps of: (a) applying a non-maximum suppression filter to the image for different angles of response to produce a series of filtered image responses; (b) combining the filtered image responses into a combined image having extracted linear features.06-18-2009
20090154794Method and apparatus for reconstructing 3D shape model of object by using multi-view image information - A method for reconstructing a 3D shape model of an object by using multi-view image information, includes: inputting multi-view images obtained by photographing the object from multiple viewpoints in a voxel space, and extracting silhouette information and color information of the multi-view images; reconstructing visual hulls by silhouette intersection using the silhouette information; and approximating polygons of cross-sections of the visual hulls to a natural geometric shape of the object by using the color information. Further, the method includes expressing a 3D geometric shape of the object by connecting the approximated polygons to create a mesh structure; extracting color textures of a surface of the object by projecting meshes of the mesh structure to the multi-view image; and creating a 3D shape model by modeling natural shape information and surface color information of the object.06-18-2009
20090154793DIGITAL PHOTOGRAMMETRIC METHOD AND APPARATUS USING INTEGRATED MODELING OF DIFFERENT TYPES OF SENSORS - Disclosed is a digital photogrammetric method and apparatus using the integrated modeling of different types of sensors. A unified triangulation method is provided for an overlapping area between an aerial image and a satellite image that are captured by a frame camera and a line camera equipped with different types of sensors. Ground control lines or ground control surfaces are used as ground control features used for the triangulation. A few ground control points may be used together with the ground control surface in order to further improve the three-dimensional position. The ground control line and the ground control surface may be extracted from LiDAR data. In addition, triangulation may be performed by bundle adjustment in the units of blocks each having several aerial images and satellite images. When an orthophoto is needed, it is possible to generate the orthophoto by appropriately using elevation models with various accuracies that are created by a LiDAR system, according to desired accuracy.06-18-2009
20120183205METHOD FOR DISPLACEMENT MEASUREMENT, DEVICE FOR DISPLACEMENT MEASUREMENT, AND PROGRAM FOR DISPLACEMENT MEASUREMENT - Measurement of 3D displacement based on successively captured images of an object becomes difficult to be performed due to a load imposed on an operator along with an increase of the number of target portions defined on the object and that of time steps for displacement measurement. A device for displacement measurement executes stereo measurement relative to a stereo image to generate 3D shape information and orthographically projected image of an object for each time, and tracks the 2D image of the target portion through pattern matching between orthographically projected images at successive times to obtain a 2D displacement vector. The device for displacement measurement converts the start point and the end point of the 2D displacement vector into 3D coordinates, using the 3D shape information, to obtain a 3D displacement vector.07-19-2012
201201832043D MODELING AND RENDERING FROM 2D IMAGES - A method of converting an image from one form to another form by a conversion apparatus having a memory and a processor, the method including the steps of receiving a captured image, extracting at least one image dimension attribute from the image, calculating at least one dimension attribute of the image based on the image dimension attribute, modifying the image based on the calculated dimension attribute and the extracted dimension attribute, and displaying the modified image on a display unit.07-19-2012
20120183203APPARATUS AND METHOD FOR EXTRACTING FEATURE OF DEPTH IMAGE - Provided is a feature extraction method and apparatus to extract a feature of a three-dimensional (3D) depth image. The feature extraction apparatus may generate a plurality of level sets using a depth image, and may extract a feature for each level depth image.07-19-2012
20120183202Methods and Systems for 2D to 3D Conversion from a Portrait Image - A method for converting a 2D image into a 3D image includes receiving the 2D image; determining whether the received 2D image is a portrait, wherein the portrait can be a face portrait or a non-face portrait; if the received 2D image is determined to be a portrait, creating a disparity between a left eye image and a right eye image based on a local gradient and a spatial location; generating the 3D image based on the created disparity; and outputting the generated 3D image.07-19-2012
20120183201METHOD AND SYSTEM FOR RECONSTRUCTING A STEREOSCOPIC IMAGE STREAM FROM QUINCUNX SAMPLED FRAMES - A method for reconstructing a stereoscopic image stream from a plurality of compressed frames is provided. Each compressed frame consists of a merged image formed by juxtaposing a sampled image frame of a left image and a sampled image frame of a right image. Each sampled image frame has half a number of original pixels disposed at intersections of a plurality of horizontal lines and a plurality of vertical lines in a staggered quincunx pattern in which original pixels surround missing pixels. Each missing pixel is reconstructed according to at least 5 horizontal pixel pairs and 3 vertical pixel pairs in a compressed frame.07-19-2012
20130044941METHOD FOR LOCATING ARTEFACTS IN A MATERIAL - A method for locating artefacts, such as particles or voids, in a material includes the steps of defining a path through a volume of the material, sensing the presence and type of any artefacts along the path and determining for each sensed artefact, the respective distance along the path. Analysis of the quantity of sensed artefacts and their respective position along the path enables the determination of measures for the artefact density, artefact size and artefact distribution in the material.02-21-2013
20130044940SYSTEM AND METHOD FOR SECTIONING A MICROSCOPY IMAGE FOR PARALLEL PROCESSING - A computer-implemented system and method of processing a microscopy image are provided. A microscopy image is received, and a configuration for an image section that includes a portion of the microscopy image is determined. Multiple image sections are respectively assigned to multiple processing units, and the processing units respectively process the image sections in parallel. One or more objects are determined to be respectively present in the image sections, and the objects present in the image sections are measured to obtain object data associated with the objects.02-21-2013
20130044939Method and system for modifying binocular images - The present invention relates to a method for modifying binocular images, for example, to manipulate the attention of viewers. The binocular images may be for 2D or 3D scenes. The method modifies a left image destined for a left eye and a right image destined for a right eye, by modifying a portion of the left image by adjusting a visual characteristic of the portion in a first direction by a first defined value and modifying a corresponding portion of the right image by adjusting the visual characteristic of the corresponding portion in the opposite of the first direction by a second defined value. A system with an image modification means for modifying binocular images, an apparatus for displaying modified binocular images, a signal and medium for carrying/storing modified binocular images, and a computer program for modifying binocular images are also disclosed.02-21-2013
20130083995STEREOSCOPIC MEASUREMENT SYSTEM AND METHOD - A stereoscopic measurement system captures stereo images and determines measurement information for user-designated points within stereo images. The system comprises an image capture device for capturing stereo images of an object. A processing system communicates with the capture device to receive stereo images. The processing system displays the stereo images and allows a user to select one or more points within the stereo image. The processing system processes the designated points within the stereo images to determine measurement information for the designated points.04-04-2013
20130083993IMAGE PROCESSING DEVICE, IMAGE PROCESSING METHOD, AND PROGRAM - An image processing device includes: an image acquisition section acquiring base and reference images in which a same object is drawn at horizontal positions different from each other; and a disparity detection section detecting a candidate pixel as a candidate of a pixel corresponding to a base pixel constituting the base image, from a reference pixel group including a first reference pixel constituting the reference image, and a second reference pixel, whose vertical position is different from that of the first reference pixel, based on the base pixel and the reference pixel group, associating a horizontal disparity candidate indicating a distance from a horizontal position of the base pixel to a horizontal position of the candidate pixel, with a vertical disparity candidate indicating a distance from a vertical position of the base pixel to a vertical position of the candidate pixel, and storing the associated candidates in a storage section.04-04-2013
20130083992METHOD AND SYSTEM OF TWO-DIMENSIONAL TO STEREOSCOPIC CONVERSION - In one embodiment, a method of two-dimensional to stereoscopic image conversion, the method comprising detecting a face in a two-dimensional image; determining a body region based on the detected face; providing a color model from a portion of the determined body region, a portion of the detected face, or a combination of both portions; calculating a similarity value of at least one image pixel of the two-dimensional image based on the provided color model; and assigning a depth value of the image pixel based on the calculated similarity value to generate a stereoscopic image.04-04-2013
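The color-model and depth-assignment steps might look like the following sketch, which assumes a single Gaussian color model fitted to pixels sampled from the detected face/body region and a linear mapping from similarity to depth (pixels more similar to the model, i.e. more likely part of the person, are placed nearer). Both the Gaussian model and the linear mapping are assumptions, not details given in the abstract.

```python
import numpy as np

def fit_color_model(samples):
    """Gaussian color model (mean, covariance) from an Nx3 array of RGB
    samples taken from the detected face/body region."""
    samples = np.asarray(samples, dtype=float)
    return samples.mean(axis=0), np.cov(samples, rowvar=False)

def similarity_to_depth(image, mean, cov, near=255.0, far=0.0):
    """Per-pixel similarity to the color model (Mahalanobis-based), mapped
    linearly to a depth value between far and near."""
    h, w, _ = image.shape
    diff = image.reshape(-1, 3).astype(float) - mean
    inv_cov = np.linalg.inv(cov + 1e-6 * np.eye(3))
    d2 = np.einsum('ij,jk,ik->i', diff, inv_cov, diff)
    similarity = np.exp(-0.5 * d2)      # 1 = matches the model, 0 = far away
    depth = far + (near - far) * similarity
    return depth.reshape(h, w)
```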
20100142801Stereo Movie Editing - The stereo movie editing technique described herein combines knowledge of both multi-view stereo algorithms and human depth perception. The technique creates a digital editor, specifically for stereographic cinema. The technique employs an interface that allows intuitive manipulation of the different parameters in a stereo movie setup, such as camera locations and screen position. Using the technique it is possible to reduce or enhance well-known stereo movie effects such as cardboarding and miniaturization. The technique also provides new editing techniques such as directing the user's attention and easier transitions between scenes.06-10-2010
20100329543METHOD AND SYSTEM FOR RECTIFYING IMAGES - The present invention relates to a method and a system for rectifying images. An original stereo image pair is obtained, and the epipolar lines corresponding to the original stereo image pair are parallelized to obtain a first transformed stereo image pair. Epipolar lines corresponding to the first transformed stereo image pair are collinearized to obtain a second transformed stereo image pair. The present invention parallelizes and collinearizes the epipolar lines corresponding to the stereo image pair after the images are rectified.12-30-2010
20130089254APPARATUS AND METHOD FOR CORRECTING STEREOSCOPIC IMAGE USING MATCHING INFORMATION - An apparatus for correcting a stereoscopic image using matching information includes: a matching information visualizer receiving input of original stereoscopic images and intuitive matching information and visualizing a pair of stereoscopic images based on the intuitive matching information; a correction information processor obtaining a statistical camera parameter based on the intuitive matching information and correcting the received stereoscopic image using the statistical camera parameter; and an error allowable controller providing allowable error information to the correction information processor in consideration of an error allowable degree according to a selected time from the received intuitive matching information and preset human factor guide information, to extract a correlation between stereoscopic images using a stereoscopic image and provided information, thereby helping an erroneously photographed image to be photographed correctly, or correcting the image so that it is interpreted correctly, which minimizes visual fatigue.04-11-2013
20130051659STEREOSCOPIC IMAGE PROCESSING DEVICE AND STEREOSCOPIC IMAGE PROCESSING METHOD - A stereoscopic image processing device that converts a two-dimensional (2D) image into a three-dimensional (3D) image includes: a detector detecting a value indicating a variation degree of an image feature quantity within a current frame to be processed of the 2D image; a normalizer (a) normalizing the image feature quantity to approximate the value detected by the detector to a threshold of the variation degree and outputting the normalized image feature quantity when the value is smaller than the threshold of the variation degree; and (b) not normalizing the image feature quantity and outputting the image feature quantity when the value is larger than or equal to the threshold of the variation degree; and a depth information generator generating depth information for converting the 2D image into the 3D image, based on the image feature quantity output by the normalizer.02-28-2013
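A small sketch of the gate-and-normalize behavior follows, assuming the "variation degree" is the standard deviation of the feature quantity within the frame and that normalization rescales the feature about its mean so its variation reaches the threshold; the abstract does not specify either choice.

```python
import numpy as np

def gate_and_normalize(feature, threshold):
    """If the frame's feature variation (std. dev. here, an assumption) is
    below the threshold, stretch the feature about its mean so its variation
    reaches the threshold; otherwise pass it through unchanged."""
    feature = np.asarray(feature, dtype=float)
    variation = feature.std()
    if variation >= threshold or variation == 0:
        return feature
    gain = threshold / variation
    return feature.mean() + (feature - feature.mean()) * gain
```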
20130051658METHOD OF SEPARATING OBJECT IN THREE DIMENSION POINT CLOUD - A method of separating an object in a three dimension point cloud including acquiring a three dimension point cloud image on an object using an image acquirer, eliminating an outlier from the three dimension point cloud image using a controller, eliminating a plane surface area from the three dimension point cloud image, of which the outlier has been eliminated using the controller, and clustering points of an individual object from the three dimension point cloud image, of which the plane surface area has been eliminated using the controller.02-28-2013
20130051657METHOD AND APPARATUS FOR DETERMINING A SIMILARITY OR DISSIMILARITY MEASURE - A solution for determining a similarity or dissimilarity measure for a selected pixel of a first image relative to another selected pixel in a second image is described. The first image and the second image form a stereoscopic image pair or part of a multi-view image group. In a first step, a first support window containing the selected pixel in the first image is determined. Then a second support window containing the selected pixel in the second image is determined. Subsequently one or more statistical properties of the selected pixel in the first image are calculated to define a probability distribution for the selected pixel in the first image. Finally, pixel similarity or dissimilarity between the first support window and the second support window is aggregated using only those pixels belonging to the probability distribution for the selected pixel in the first image with a probability above a defined minimum.02-28-2013
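A rough sketch of the final aggregation step, assuming the selected pixel's probability distribution is a Gaussian on intensity with spread estimated from its own support window, and assuming an absolute-difference dissimilarity; only window pixels whose probability exceeds the minimum contribute. These statistical choices are assumptions made for illustration.

```python
import numpy as np

def aggregated_dissimilarity(img1, img2, p1, p2, radius=4, min_prob=0.1):
    """Mean absolute difference between two support windows, restricted to
    pixels that fit the selected pixel's (assumed Gaussian) intensity
    distribution with probability above min_prob."""
    (y1, x1), (y2, x2) = p1, p2
    w1 = img1[y1 - radius:y1 + radius + 1, x1 - radius:x1 + radius + 1].astype(float)
    w2 = img2[y2 - radius:y2 + radius + 1, x2 - radius:x2 + radius + 1].astype(float)
    center = float(img1[y1, x1])
    sigma = max(w1.std(), 1e-3)
    prob = np.exp(-0.5 * ((w1 - center) / sigma) ** 2)
    mask = prob > min_prob
    return float(np.abs(w1[mask] - w2[mask]).sum() / max(mask.sum(), 1))
```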
20130051660IMAGE PROCESSOR, IMAGE DISPLAY APPARATUS, AND IMAGE TAKING APPARATUS - Disclosed is an image processor generating a three-dimensional image easily three-dimensionally viewed by, and hardly causing fatigue of, an observer, and easily adjusting a three-dimensional effect of an arbitrary portion in the three-dimensional image. The disparity correction portion 02-28-2013
20100266198Apparatus, method, and medium of converting 2D image to 3D image based on visual attention - A method, apparatus, and medium of converting a two-dimensional (2D) image to a three-dimensional (3D) image based on visual attention are provided. A visual attention map including visual attention information, which is information about a significance of an object in a 2D image, may be generated. Parallax information including information about a left eye image and a right eye image of the 2D image may be generated based on the visual attention map. A 3D image may be generated using the parallax information.10-21-2010
20090304265SYSTEMS AND METHODS FOR MODELING THREE-DIMENSIONAL OBJECTS FROM TWO-DIMENSIONAL IMAGES - In one embodiment, a system and method for modeling a three-dimensional object includes capturing two-dimensional images of the object from multiple different viewpoints to obtain multiple views of the object, estimating slices of the object that lie in parallel planes that cut through the object, and computing a surface of the object from the estimated slices.12-10-2009
20090304264FREE VIEW GENERATION IN RAY-SPACE - The claimed subject matter relates to an architecture that can facilitate more efficient free view generation in Ray-Space by way of a Radon transform. The architecture can render virtual views based upon original image data by employing Ray-Space interpolation techniques. In particular, the architecture can apply the Radon transform to a feature epipolar plane image (FEPI) to extract more suitable slope or direction candidates. In addition, the architecture can facilitate improved block-based matching techniques in order to determine an optimal linear interpretation direction.12-10-2009
20090304263Method for classifying an object using a stereo camera - A method is provided for classifying an object using a stereo camera, the stereo camera generating a first and a second image using a first and a second video sensor respectively. In order to classify the object, the first and the second image are compared with one another in predefined areas surrounding corresponding pixel coordinates, the pixel coordinates for at least one model, at least one position and at least one distance from the stereo camera being made available.12-10-2009
20100272348TRANSPROJECTION OF GEOMETRY DATA - Systems and methods for transprojection of geometry data acquired by a coordinate measuring machine (CMM). The CMM acquires geometry data corresponding to 3D coordinate measurements collected by a measuring probe that are transformed into scaled 2D data that is transprojected upon various digital object image views captured by a camera. The transprojection process can utilize stored image and coordinate information or perform live transprojection viewing capabilities in both still image and video modes.10-28-2010
20100098324RECOGNITION PROCESSING METHOD AND IMAGE PROCESSING DEVICE USING THE SAME - A recognition processing method and an image processing device complete recognition of an object within a predetermined time while maintaining the recognition accuracy. The device extracts combinations of three points defining a triangle whose side lengths satisfy predetermined criterion values from feature points of the model of a recognition object, registers the extracted combinations as model triangles, and similarly extracts combinations of three points defining a triangle whose side lengths satisfy predetermined criterion values from feature points of the recognition object. The combinations are used as comparison object triangles and associated with the respective model triangles. The device calculates a transformation parameter representing the correspondence relation between each comparison object triangle and the corresponding model triangle using the coordinates of the corresponding points (A and A′, B and B′, and C and C′), and determines the goodness of fit of the transformation parameters based on the relation between the feature points of the model and those of the recognition object. The object is recognized by specifying the transformation parameters representing the correspondence relation between the feature points of the model and those of the recognition object according to the goodness of fit determined for each association.04-22-2010
20100098325System for optically detecting position and/or orientation of objects comprising at least two coplanar sensors - The electro-optical system for determining position and orientation of a mobile part comprises a fixed projector having a centre of projection (O) and a mobile part. The projector is rigidly linked with a virtual image plane, and the mobile part is rigidly linked with two linear sensors defining a first and a second direction vector. The fixed part projects patterns (not shown) onto the image plane and onto the sensors, forming at least two secant networks of at least three segments that are parallel to one another.04-22-2010
20100098326EMBEDDING AND DECODING THREE-DIMENSIONAL WATERMARKS INTO STEREOSCOPIC IMAGES - The disclosed inventions relate to methods and systems for encoding at least one watermark into a stereoscopic conjugate pair of images. An example method comprises the step of encoding the at least one watermark by shifting selected pixels of said pair of images in one or more directions. The one or more directions include a horizontal direction. In the disclosed embodiments, ancillary information is not required to support decoding of encoded watermarks in addition to the transmitted left and right images.04-22-2010
20120219208IMAGE PROCESSING APPARATUS AND IMAGE PROCESSING METHOD - An image processing apparatus includes an image input unit; a parallax acquisition unit configured to acquire a per-pixel or per-region parallax between two-viewpoint images; a main subject detection unit configured to detect a main subject on the two-viewpoint images; a parallax acquisition unit configured to acquire a parallax of the main subject; a setting unit configured to set a conversion factor of the parallax; a correction unit configured to correct the conversion factor of the parallax per pixel, per region, or per image; a multi-viewpoint image generation unit configured to convert at least one image of the two-viewpoint images in accordance with the corrected conversion factor of the parallax; an image adjustment unit configured to shift the two-viewpoint images or multi-viewpoint images to obtain a parallax appropriate for stereoscopic view; and a stereoscopically-displayed image generation unit configured to generate a stereoscopically-displayed image.08-30-2012
20130071013VIDEO PROCESSING DEVICE, VIDEO PROCESSING METHOD, PROGRAM - A feature point extraction unit 03-21-2013
20130071012IMAGE PROVIDING DEVICE, IMAGE PROVIDING METHOD, AND IMAGE PROVIDING PROGRAM FOR PROVIDING PAST-EXPERIENCE IMAGES - An image providing device provides a user with realistic and natural past-experience simulation through stereoscopic photographs. Specifically, feature-point extractors 03-21-2013
20130071009DEPTH RANGE ADJUSTMENT FOR THREE-DIMENSIONAL IMAGES - A system is provided for generating a three dimensional image. The system may include a processor configured to generate a disparity map from a stereo image, adjust the disparity map to compress or expand a number of depth levels within the disparity map to generate an adjusted disparity map, and render a stereo view of the image based on the adjusted disparity map.03-21-2013
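One simple way to realize the compression or expansion of depth levels is a linear requantization of the disparity map before the stereo view is re-rendered; the linear remap and rounding below are assumptions about how the adjustment could be done, not the patent's stated algorithm.

```python
import numpy as np

def adjust_depth_range(disparity, num_levels, out_min=None, out_max=None):
    """Requantize a disparity map to num_levels depth levels, optionally
    remapping its range to [out_min, out_max] (compression or expansion)."""
    disparity = np.asarray(disparity, dtype=float)
    d_min, d_max = disparity.min(), disparity.max()
    out_min = d_min if out_min is None else out_min
    out_max = d_max if out_max is None else out_max
    normalized = (disparity - d_min) / max(d_max - d_min, 1e-9)
    levels = np.round(normalized * (num_levels - 1)) / (num_levels - 1)
    return out_min + levels * (out_max - out_min)
```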
20130071011RECONSTRUCTION OF SHAPES OF OBJECTS FROM IMAGES - The present disclosure describes a system and method for transforming a two-dimensional image of an object into a three-dimensional representation, or model, that recreates the three-dimensional contour of the object. In one example, three pairs of symmetric points establish an initial relationship between the original image and a virtual image, then additional pairs of symmetric points in the original image are reconstructed. In each pair, a visible point and an occluded point are mapped into 3-space with a single free variable characterizing the mapping for all pairs. A value for the free variable is then selected to maximize compactness of the model, where compactness is defined as a function of the model's volume and its surface area. “Noise” correction derives from enforcing symmetry and selecting best-fitting polyhedra for the model. Alternative embodiments extend this to additional polyhedra, add image segmentation, use perspective, and generalize to asymmetric polyhedra and non-polyhedral objects.03-21-2013
20130071008IMAGE CONVERSION SYSTEM USING EDGE INFORMATION - In accordance with at least some embodiments of the present disclosure, a process for converting a two-dimensional (2D) image based on edge information is described. The process may include partitioning the 2D image to generate a plurality of blocks, segmenting the plurality of blocks into a group of regions based on edges determined in the plurality of blocks, assigning depth values to the plurality of blocks based on a depth gradient hypothesis associated with the group of regions, wherein pixels in each of the plurality of blocks are associated with a same depth value, and generating the depth map based on the depth values of the plurality of blocks.03-21-2013
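A toy version of the block-level depth assignment is sketched below, assuming the depth-gradient hypothesis is a bottom-to-top ramp (lower blocks nearer) with a small per-region offset; the block size and the region labels from the edge-based segmentation are taken as given, and the exact hypothesis used by the method is not stated in the abstract.

```python
import numpy as np

def block_depth_from_gradient(height, width, block, region_labels):
    """Assign one depth value per block from a bottom-up depth gradient,
    offset per region: every pixel in a block shares its block's depth."""
    rows, cols = height // block, width // block
    depth = np.zeros((height, width), dtype=float)
    for r in range(rows):
        # Gradient hypothesis: lower blocks (larger r) get larger depth values.
        ramp = (r + 1) / rows
        for c in range(cols):
            value = ramp + 0.1 * region_labels[r, c]   # small per-region offset
            depth[r * block:(r + 1) * block, c * block:(c + 1) * block] = value
    return depth / depth.max()
```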
20130071010METHOD AND SYSTEM FOR FAST THREE-DIMENSIONAL IMAGING USING DEFOCUSING AND FEATURE RECOGNITION - Described is a method and system for fast three-dimensional imaging using defocusing and feature recognition is disclosed. The method comprises acts of capturing a plurality of defocused images of an object on a sensor, identifying segments of interest in each of the plurality of images using a feature recognition algorithm, and matching the segments with three-dimensional coordinates according to the positions of the images of the segments on the sensor to produce a three-dimensional position of each segment of interest. The disclosed imaging method is “aware” in that it uses a priori knowledge of a small number of object features to reduce computation time as compared with “dumb” methods known in the art which exhaustively calculate positions of a large number of marker points.03-21-2013
20090092311METHOD AND APPARATUS FOR RECEIVING MULTIVIEW CAMERA PARAMETERS FOR STEREOSCOPIC IMAGE, AND METHOD AND APPARATUS FOR TRANSMITTING MULTIVIEW CAMERA PARAMETERS FOR STEREOSCOPIC IMAGE - Provided is a method of receiving multiview camera parameters for a stereoscopic image. The method includes: extracting multiview camera parameter information for a predetermined data section from a received stereoscopic image data stream; extracting matrix information including at least one of translation matrix information and rotation matrix information for the predetermined data section from the multiview camera parameter information; and restoring coordinate systems of multiview cameras by using the extracted matrix information.04-09-2009
20130058565GESTURE RECOGNITION SYSTEM USING DEPTH PERCEPTIVE SENSORS - Acquired three-dimensional positional information is used to identify user created gesture(s), which gesture(s) are classified to determine appropriate input(s) to an associated electronic device or devices. Preferably, at at least one instance of a time interval, the posture of a portion of a user is recognized based on at least one factor such as shape, position, orientation, or velocity. Posture over each of the instance(s) is recognized as a combined gesture. Because acquired information is three-dimensional, two gestures may occur simultaneously.03-07-2013
20130058564METHOD AND APPARATUS FOR RECOVERING A COMPONENT OF A DISTORTION FIELD AND FOR DETERMINING A DISPARITY FIELD - A method and an apparatus for recovering a component of a distortion field of an image of a set of multi-view images are described. Also described are a method and an apparatus for determining a disparity field of an image of a set of multi-view images, which makes use of such method.03-07-2013
20130058563INTERMEDIATE IMAGE GENERATION METHOD, INTERMEDIATE IMAGE FILE, INTERMEDIATE IMAGE GENERATION DEVICE, STEREOSCOPIC IMAGE GENERATION METHOD, STEREOSCOPIC IMAGE GENERATION DEVICE, AUTOSTEREOSCOPIC IMAGE DISPLAY DEVICE, AND STEREOSCOPIC IMAGE GENERATION SYSTEM - By generating in advance intermediate images which have the same resolution as a stereoscopic image that is the final output image, and which integrate pixels for respective viewpoints, generation of a stereoscopic image is possible only by converting the pixel arrangement without using a high-speed and specialised computer or the like. Furthermore, using intermediate images in which images for respective viewpoints are arranged in a shape of tiles, a completely new stereoscopic image generation system can be realised in which a simple and low-cost stereoscopic image generation device generates stereoscopic images from intermediate images output or transmitted in a more standard format by a standard image output device such as a Blu-ray player, an STB, or an image distribution server.03-07-2013
20130058562SYSTEM AND METHOD OF CORRECTING A DEPTH MAP FOR 3D IMAGE - A system and method of correcting a depth map for 3D image is disclosed. A spatial spectral transform unit extracts pixels of object boundaries according to an input image, wherein the spatial spectral transform unit adopts Hilbert-Huang transform (HHT). A correction unit corrects an input depth map corresponding to the input image according to the pixels of object boundaries, thereby resulting in an output depth map.03-07-2013
20130058561PHOTOGRAPHIC SYSTEM - A photographic system for generating photos is provided. The photographic system comprises a photo composition unit, and a photo synthesizer. The photo composition unit is capable of determining an extracted view from a three dimensional (3D) scene. The photo synthesizer, coupled to the photo composition unit, is capable of synthesizing an output photo according to the extracted view.03-07-2013
20110058732METHOD AND APPARATUS FOR STORING 3D INFORMATION WITH RASTER IMAGERY - The present invention meets the above-stated needs by providing a method and apparatus that allows for X parallax information to be stored within an image pixel information. Consequently, only one image need be stored, whether it's a mosaic of a number of images, a single image or a partial image for proper reconstruction. To accomplish this, the present invention stores an X parallax value between the stereoscopic images with the typical pixel information by, e.g., increasing the pixel depth.03-10-2011
20110013828Stereoscopic format converter - A device and method for converting one stereoscopic format into another. A software-enabled matrix is used to set forth predefined relationships between one type of format as an input image and another type of format as an output image. The matrix can then be used as a look-up table that defines a correspondence between input pixels and output pixels for the desired format conversion.01-20-2011
20130064443APPARATUS AND METHOD FOR DETERMINING A CONFIDENCE VALUE OF A DISPARITY ESTIMATE - A method and an apparatus for determining a confidence value of a disparity estimate for a pixel or a group of pixels of a selected image of at least two stereo images are described, the confidence value being a measure for an improved reliability value of the disparity estimate for the pixel or the group of pixels. First an initial reliability value of the disparity estimate for the pixel or the group of pixels is determined, wherein the reliability is one of at least reliable and unreliable. Then a distance of the pixel or the group of pixels to a nearest pixel or group of pixels with an unreliable disparity estimate is determined. Finally, the confidence value of the disparity estimate for the pixel or the group of pixels is obtained from the determined distance.03-14-2013
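If the initial reliability is available as a binary map, the distance to the nearest unreliable pixel can be computed with a Euclidean distance transform and then mapped to a confidence value. The distance computation follows the abstract; the saturating mapping from distance to confidence is an assumption.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def disparity_confidence(reliable, scale=5.0):
    """Confidence in [0, 1) per pixel: distance to the nearest pixel with an
    unreliable disparity estimate, squashed by an (assumed) saturating map."""
    reliable = np.asarray(reliable, dtype=bool)
    # distance_transform_edt measures the distance to the nearest zero
    # element, i.e. to the nearest unreliable pixel.
    distance = distance_transform_edt(reliable)
    return distance / (distance + scale)
```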
20130163855AUTOMATED DETECTION AND CORRECTION OF STEREOSCOPIC EDGE VIOLATIONS - Pixel-based and region-based methods, computer program products, and systems for detecting, flagging, highlighting on a display, and automatically fixing edge violations in stereoscopic images and video. The highlighting and display methods involve signed, clamped subtraction of one image of a stereo image pair from the other image, with the subtraction preferably isolated to a region of interest near the lateral edges. Various embodiments include limiting the detection, flagging, and highlighting of edge violations to objects causing a degree of perceptual discomfort greater than a user-set or preset threshold, or to objects having a certain size and/or proximity and/or degree of cut-off by a lateral edge of the left or right eye images of a stereo image pair. Methods of removing violations include automatic or semi-automatic cropping of the offending object, and depth shifting of the offending object onto the screen plane.06-27-2013
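The pixel-based detection described here can be sketched with a clamped subtraction evaluated only in bands along the lateral edges. The clamped subtraction and the restriction to the edge regions follow the abstract; the band width, the mean-energy statistic, and the threshold are assumptions chosen for illustration.

```python
import numpy as np

def detect_edge_violations(left, right, band=48, threshold=8.0):
    """Flag possible stereo window (edge) violations: clamped subtraction of
    one eye's image from the other, evaluated only in bands along the lateral
    edges; large residual energy suggests an object cut off by the frame edge
    while appearing in front of the screen plane."""
    left = np.asarray(left, dtype=float)
    right = np.asarray(right, dtype=float)
    diff = np.clip(left - right, 0, None)          # signed, clamped subtraction
    left_band = diff[:, :band].mean()
    right_band = diff[:, -band:].mean()
    return {"left_edge": left_band > threshold,
            "right_edge": right_band > threshold,
            "left_energy": left_band,
            "right_energy": right_band}
```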
20130163854IMAGE PROCESSING METHOD AND ASSOCIATED APPARATUS - An image processing method includes: receiving a plurality of images, the images being captured under different view points; and performing image alignment for the plurality of images by warping the plurality of images, where the plurality of images are warped according to a set of parameters, and the set of parameters are obtained by finding a solution constrained to predetermined ranges of physical camera parameters. In particular, the step of performing the image alignment further includes: automatically performing the image alignment to reproduce a three-dimensional (3D) visual effect, where the plurality of images is captured by utilizing a camera module, and the camera module is not calibrated with regard to the view points. For example, the 3D visual effect can be a multi-angle view (MAV) visual effect. In another example, the 3D visual effect can be a 3D panorama visual effect. An associated apparatus is also provided.06-27-2013
20090232389IMAGE PROCESSING METHOD AND APPARATUS, IMAGE REPRODUCING METHOD AND APPARATUS, AND RECORDING MEDIUM - Provided are an image processing method and apparatus, and an image reproducing method and apparatus. The image processing method includes receiving three-dimensional (3D) image data; generating additional information about the 3D image data; and inserting the additional information in a blanking interval of the 3D image data.09-17-2009
20090232387MULTI PARALLAX EXPLOITATION FOR OMNI-DIRECTIONAL IMAGING ELECTRONIC EYE - Techniques and systems are disclosed for electronic target recognition. In particular, techniques and systems are disclosed for performing electronic surveillance and target recognition using a multiple parallax exploitation (MPEX) electronic eye platform. Among other things, a MPEX system can include an imaging unit that includes multiple image capture devices spaced from one another to form an array to provide overlapping fields-of-view and to capture multiple overlapping stereo images of a scene. The MPEX system can also include a processing unit connected to the imaging unit to receive and process data representing the captured multiple overlapping stereo images from the imaging unit to characterize one or more objects of interest in the scene.09-17-2009
20090232388REGISTRATION OF 3D POINT CLOUD DATA BY CREATION OF FILTERED DENSITY IMAGES09-17-2009
20090010530INFORMATION PROCESSING SYSTEM - An information processing system for performing processes on first image and second image captured from different viewpoints, comprising: a first specifying part for specifying a first corresponding point on the second image, corresponding to a designation point designated on the first image, by searching on a line along a first basis direction corresponding to a predetermined direction and passing through a position corresponding to the designation point in the second image; a second specifying part for specifying a second corresponding point on the second image, corresponding to the designation point, by searching on a line passing through the first corresponding point in the second image and along a second basis direction almost perpendicular to the first basis direction; and a third specifying part for specifying a third corresponding point on the second image, corresponding to the designation point, by searching on a line passing through the second corresponding point in the second image and along the first basis direction.01-08-2009
20080317333METHOD AND SYSTEM FOR CORRECTION OF FLUOROSCOPE IMAGE DISTORTION - Certain embodiments of the present invention provide for a system and method for modeling S-distortion in an image intensifier. In an embodiment, the method may include identifying a reference coordinate on an input screen of the image intensifier. The method also includes computing a set of charged particle velocity vectors. The method also includes computing a set of magnetic field vectors. The method also includes computing the force exerted on the charged particle in an image intensifier. Certain embodiments of the present invention include an iterative method for calibrating an image acquisition system with an analytic S-distortion model. In an embodiment, the method may include comparing the difference between the measured fiducial shadow positions and the model fiducial positions with a threshold value. If the difference is less than the threshold value, the optical distortion parameters are used for linearizing the set of acquired images.12-25-2008
20120114225IMAGE PROCESSING APPARATUS AND METHOD OF GENERATING A MULTI-VIEW IMAGE - An image processing apparatus may detect an occlusion boundary between objects within an input depth image by applying an edge detection algorithm to the input depth image. The image processing apparatus may classify the occlusion boundary into a foreground region boundary and a background region boundary using a depth gradient vector direction of the occlusion boundary, and may extract an occlusion region of the input depth image using the foreground region boundary.05-10-2012
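A simplified reading of 20120114225 above: detect depth edges, then decide for each edge pixel whether it lies on the near (foreground) or far (background) side of the discontinuity. The sketch compares each boundary pixel against a local mean depth rather than explicitly following the depth gradient direction as the abstract describes, so it is only an approximation; the Sobel operator, window size, and threshold are assumptions.

import numpy as np
from scipy.ndimage import sobel, uniform_filter

def classify_occlusion_boundary(depth, edge_thresh=5.0):
    """Return boolean maps of foreground-side and background-side boundary pixels."""
    d = depth.astype(np.float64)
    gx, gy = sobel(d, axis=1), sobel(d, axis=0)
    boundary = np.hypot(gx, gy) > edge_thresh          # depth discontinuities
    local_mean = uniform_filter(d, size=5)
    foreground = boundary & (d < local_mean)           # nearer than surroundings
    background = boundary & (d >= local_mean)          # farther than surroundings
    return foreground, background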
20120114224SYSTEM AND METHOD OF IMAGE PROCESSING - A method of image processing comprising receiving a plurality of interpolated images, interpolated from two adjacent camera positions having different image planes, and applying a transformation to each interpolated image to a respective one of a plurality of intermediate image planes, wherein each intermediate image plane is oriented intermediate to the image planes of the two adjacent camera positions depending on a viewing angle of that interpolated image relative to the adjacent camera positions. Also provided are an integrated circuit or processor, an apparatus for capturing images, and an apparatus for displaying images.05-10-2012
20120114223Method and apparatus for orienting image representative data - A method for processing a three-dimensional image file captured directly from a live subject, the file including the cranium of the subject, comprises: providing a vertex point cloud for the three-dimensional image file; determining a median point for the vertex point cloud; determining a point on the cranium; and utilizing the median point and the cranium point to define a z-axis for the three-dimensional image file.05-10-2012
20090252404Model uncertainty visualization for active learning - An active learning system and method are disclosed for generating a visual representation of a set of unlabeled elements to be labeled according to class. The representation shows the unlabeled elements as data points in a space and each class as a class point in the space. The position of each of the data points in the space reflects the uncertainty of a model regarding the classification of the respective element. The color of each data point also reflects the uncertainty of the model regarding the classification of the element and may be a mixture of the colors used for the class points.10-08-2009
20110019906METHOD FOR THE THREE-DIMENSIONAL SYNTHETIC RECONSTRUCTION OF OBJECTS EXPOSED TO AN ELECTROMAGNETIC AND/OR ELASTIC WAVE - A method for synthetic reconstruction of objects includes: extracting criteria from a knowledge base; extracting, from sensed signals filtered by the criteria, weak signals; extracting, from the weak signals, weak signals of interest; removing noise from and amplifying the weak signals of interest and obtaining useful weak signals; identifying useful direct information, from useful weak signals filtered by the criteria and supplying optimum criteria; reconstructing, using the useful direct information, information of interest; reconstructing, using the information of interest, useful information and supplying optimum criteria; reconstructing, based on the useful information, three-dimensional information, supplying a recognition state file and supplying the optimum criteria; and updating the criteria with the optimum criteria.01-27-2011
20130163856APPARATUS AND METHOD FOR ENHANCING STEREOSCOPIC IMAGE, RECORDED MEDIUM THEREOF - An apparatus for enhancing a stereoscopic image may include: a color relationship extraction unit which extracts color relationships between a plurality of first coordinates in a 3-dimensional color space for a first image and second coordinates in a 3-dimensional color space for a second image corresponding to the plurality of first coordinates; a color relationship correction unit, which corrects a color relationship for any one first coordinate from among the plurality of first coordinates based on a color relationship of at least one first coordinate existing within a particular distance from the any one first coordinate; and a color value transformation unit, which transforms a color value of the first image by using the corrected color relationship of the any one first coordinate. The invention provides the advantage of accurately correcting color imbalance between the left image and right image forming a stereoscopic image.06-27-2013
20130163857MULTIPLE CENTROID CONDENSATION OF PROBABILITY DISTRIBUTION CLOUDS - Systems and methods are disclosed for identifying objects captured by a depth camera by condensing classified image data into centroids of probability that captured objects are correctly identified entities. Output exemplars are processed to detect spatially localized clusters of non-zero probability pixels. For each cluster, a centroid is generated, generally resulting in multiple centroids for each differentiated object. Each centroid may be assigned a confidence value, indicating the likelihood that it corresponds to a true object, based on the size and shape of the cluster, as well as the probabilities of its constituent pixels.06-27-2013
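The centroid condensation in 20130163857 above can be approximated as: threshold the per-pixel probability map, label connected clusters, and report a probability-weighted centroid plus a crude confidence per cluster. The threshold, the size-based confidence term, and the function names below are assumptions for illustration only.

import numpy as np
from scipy.ndimage import label, center_of_mass

def probability_centroids(prob, thresh=0.1):
    """Return (row, col, confidence) triples, one per detected cluster."""
    labels, n = label(prob > thresh)
    results = []
    for idx in range(1, n + 1):
        cy, cx = center_of_mass(prob, labels, idx)     # probability-weighted centroid
        members = labels == idx
        size = np.count_nonzero(members)
        confidence = prob[members].mean() * min(size / 50.0, 1.0)
        results.append((cy, cx, confidence))
    return results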
20110280473ROTATION ESTIMATION DEVICE, ROTATION ESTIMATION METHOD, AND RECORD MEDIUM - A rotation estimation device includes an attitude determination section that accepts a plurality of three-dimensional images captured by an image capturing device at a plurality of timings, detects a plane region that is present in common with the plurality of images, and obtains a relative attitude of the image capturing device to the plane region in the image based on the image for each of the plurality of images; and a rotation state estimation section that obtains a rotational state of the image capturing device based on the relative attitude of the image capturing device, the relative attitude being obtained for each of the images.11-17-2011
20110081071METHOD AND APPARATUS FOR REDUCTION OF METAL ARTIFACTS IN CT IMAGES - A method and apparatus include acquisition of a view dataset based on x-rays received by a detector corresponding to an energy level, reconstruction of an initial image using the view dataset, the initial image comprising a plurality of metal voxels at respective metal voxel locations, and generation of a metal mask corresponding to the plurality of metal voxels within the initial image. The method and apparatus also include forward projection of the metal mask onto the view dataset to identify metal dexels in the view dataset, performance of a weighted interpolation based on the identified metal dexels to generate a completed view dataset, reconstruction of a final image using the completed view dataset, the final image comprising a plurality of image voxels corresponding to the metal voxel locations, and replacement of a portion of the plurality of image voxels corresponding to the metal voxel locations with smoothed metal values.04-07-2011
20110142329METHOD AND DEVICE FOR CONVERTING IMAGE - A method and a device for converting an image are disclosed. According to an embodiment of the present invention, the method for converting a two-dimensional image to a three-dimensional image by an image conversion device can include: receiving and setting overall depth information for an original image; classifying the original image into partial objects and setting three-dimensional information for each of the partial objects; generating a first image by moving the original image by use of the three-dimensional information; receiving and setting a zero point for the original image; generating a second image by moving the original image by use of the zero point; and generating a three-dimensional image by combining the first image and the second image. Accordingly, a still image can be converted to a three-dimensional image.06-16-2011
20100220921STEREO IMAGE SEGMENTATION - Real-time segmentation of foreground from background layers in binocular video sequences may be provided by a segmentation process which may be based on one or more factors including likelihoods for stereo-matching, color, and optionally contrast, which may be fused to infer foreground and/or background layers accurately and efficiently. In one example, the stereo image may be segmented into foreground, background, and/or occluded regions using stereo disparities. The stereo-match likelihood may be fused with a contrast sensitive color model that is initialized or learned from training data. Segmentation may then be solved by an optimization algorithm such as dynamic programming or graph cut. In a second example, the stereo-match likelihood may be marginalized over foreground and background hypotheses, and fused with a contrast-sensitive color model that is initialized or learned from training data. Segmentation may then be solved by an optimization algorithm such as a binary graph cut.09-02-2010
20090116729THREE-DIMENSIONAL POSITION DETECTING DEVICE AND METHOD FOR USING THE SAME - A three-dimensional position detecting device includes an electromagnetic radiation source, a first sensing module having first sensing elements, and a second sensing module having second sensing elements. The first and the second sensing elements receive different radiation energies from different spatial direction angles generated by the electromagnetic radiation source relative to the first and the second sensing elements, so values of two spatial direction angles of the electromagnetic radiation source relative to the first and the second sensing modules are obtained according to magnitude relationship of the radiation energies received by the first and the second sensing modules. According to matrix operation of two spatial distances from the electromagnetic radiation source to the first and the second sensing modules and the two spatial direction angles, a spatial coordinate position of the electromagnetic radiation source relative to the first and the second sensing modules is obtained.05-07-2009
20090041336STEREO MATCHING SYSTEM AND STEREO MATCHING METHOD USING THE SAME - A stereo matching system and a stereo matching method using the same. Here, a Sum of Edge Differences (SED) method as a disparity estimation method utilizing edge information is added to a disparity estimation method utilizing a local method to perform stereo matching. As such, it is possible to correct false matching in a non-texture region generated when stereo matching is performed using only a local method, thereby enabling good stereo matching.02-12-2009
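One way to picture the combination described in 20090041336 above is a per-pixel matching cost that adds a Sum of Edge Differences term to an ordinary window-based (SAD) cost. The window size, the weight lam, and the use of a horizontal Sobel response as the edge measure are assumptions; the caller must also keep the window inside the image bounds.

import numpy as np
from scipy.ndimage import sobel

def combined_cost(left, right, y, x, d, win=3, lam=0.5):
    """SAD plus lam * SED matching cost for pixel (y, x) at disparity d.
    The caller must keep the windows inside both images."""
    h = win // 2
    patch_l = left[y - h:y + h + 1, x - h:x + h + 1].astype(np.float64)
    patch_r = right[y - h:y + h + 1, x - d - h:x - d + h + 1].astype(np.float64)
    sad = np.abs(patch_l - patch_r).sum()
    sed = np.abs(sobel(patch_l, axis=1) - sobel(patch_r, axis=1)).sum()
    return sad + lam * sed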
20120002866METHOD AND APPARATUS FOR REDUCING THE MEMORY REQUIREMENT FOR DETERMINING DISPARITY VALUES FOR AT LEAST TWO STEREOSCOPICALLY RECORDED IMAGES - A method and an apparatus reduce the temporary random access memory required when determining disparity values for at least two stereoscopically recorded images with known epipolar geometry, in which a disparity is determined for each pixel of an image. Path-dependent dissimilarity costs are calculated on the basis of a disparity-dependent cost function, and compared, in two runs for a number of paths which open in the pixel. The disparity-dependent cost function evaluates a pixel-based dissimilarity measure between the pixel and the corresponding pixel, according to the respective disparity, in a second image. The path-dependent dissimilarity costs for a first predetermined set of disparities are calculated in a first run for a number of first paths and in a second run for a number of remaining paths, and the corresponding path-dependent dissimilarity costs of the first paths and of the remaining paths are accumulated for a second predetermined set of disparities.01-05-2012
20110299763APPARATUS AND METHOD OF INFORMATION EXTRACTION FROM ELECTROMAGNETIC ENERGY BASED UPON MULTI-CHARACTERISTIC SPATIAL GEOMETRY PROCESSING - An apparatus for information extraction from electromagnetic energy via multi-characteristic spatial geometry processing to determine three-dimensional aspects. Structure receives the electromagnetic energy, which has a plurality of spatial phase characteristics. Structure separates the plurality of spatial phase characteristics of the received electromagnetic energy. Structure identifies spatially segregated portions of each of the plurality of spatial phase characteristics, with each spatially segregated portion corresponding in a point to point relationship to a spatially segregated portion for each of the other of the plurality of spatial phase characteristics in a group. Structure quantifies each segregated portion to provide a spatial phase metric of each segregated portion for providing a data map of the spatial phase metric of each separated spatial phase characteristic of the plurality of spatial phase characteristics. Structure processes the spatial phase metrics to determine surface contour information for each segregated portion of the data map.12-08-2011
20110299762Process Of Correcting An Image Provided On A Support Which Is Subsequently Submitted To A Deformation Process - The invention relates to a method for adapting a visual representation which subsequently is subjected to a deformation, like in packaging. To be able to take into account the deformations on the visual representation the method comprises the steps of: providing a pattern on a support, wherein the pattern comprises a distribution of codes, which are arranged such that each code is unique, deforming the support with the pattern, taking at least two images of the deformed support under different points of view, and determining a 3D surface model based on the matching of at least one code of the pattern in the at least two images.12-08-2011
20110299761Image Processing Apparatus, Image Processing Method, and Program - An image processing apparatus includes a projective transformation unit that performs projective transformation on left and right images captured from different points of view, a projective transformation parameter generating unit that generates a projective transformation parameter used by the projective transformation unit by receiving feature point information regarding the left and right images, a stereo matching unit that performs stereo matching using left and right projective transformation images subjected to projective transformation, and a matching error minimization control unit that computes image rotation angle information regarding the left and right projective transformation images and correspondence information of an error evaluation value of the stereo matching. The matching error minimization control unit computes the image rotation angle at which the error evaluation value is minimized, and the projective transformation parameter generating unit computes the projective transformation parameter that reflects the image rotation angle at which the error evaluation value is minimized.12-08-2011
20110286660Spatially Registering User Photographs - Photographs of an object may be oriented with respect to both the geographic location and orientation of the object by registering a 3D model derived from a plurality of photographs of the objects with a 2D image of the object having a known location and orientation. For example, a 3D point cloud of an object created from photographs of the object using a Photosynth™ tool may be aligned with a satellite photograph of the object, where the satellite photograph has location and orientation information. A tool providing scaling and rotation of the 3D model with respect to the 2D image may be used or an automatic alignment may be performed using a function based on object edges filtered at particular angles. Once aligned, data may be recorded that registers camera locations for the plurality of photographs with geographic coordinates of the object, either absolute latitude/longitude or relative to the object.11-24-2011
20110286661METHOD AND APPARATUS FOR TEMPORALLY INTERPOLATING THREE-DIMENSIONAL DEPTH IMAGE - A method and apparatus for temporally interpolating a three-dimensional (3D) depth image are provided to generate an intermediate depth image in a desired time. The apparatus may interpolate depth images generated by a depth camera, using a temporal interpolation procedure, may generate an intermediate depth image in a new time using the interpolated depth images, and may combine the generated intermediate depth image with color images, to generate a high-precision 3D image.11-24-2011
20110293170IMAGE PROCESSING APPARATUS AND METHOD - The format of an input image is determined appropriately, and an appropriate output image adapted to a format that can be displayed on a display section is displayed.12-01-2011
20120014590MULTI-RESOLUTION, MULTI-WINDOW DISPARITY ESTIMATION IN 3D VIDEO PROCESSING - A disparity value between corresponding pixels in a stereo pair of images, where the stereo pair of images includes a first view and a second view of a common scene, can be determined based on identifying a lowest aggregated matching cost for a plurality of support regions surrounding the pixel under evaluation. In response to the number of support regions having a same disparity value being greater than a threshold number, a disparity value indicator for the pixel under evaluation can be set to the same disparity value.01-19-2012
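A loose sketch of the multi-window voting described in 20120014590 above: each shifted support region picks the disparity minimising its aggregated cost, and the pixel's disparity is accepted only when more than a threshold number of regions agree. The cost-volume layout, the crude two-window aggregation via np.roll, and the use of the median as the consensus candidate are all assumptions made for the example.

import numpy as np

def multi_window_disparity(cost, regions, vote_thresh):
    """cost: (D, H, W) matching-cost volume; regions: (dy, dx) window offsets."""
    votes = []
    for dy, dx in regions:
        shifted = np.roll(cost, shift=(dy, dx), axis=(1, 2))
        votes.append(np.argmin(cost + shifted, axis=0))       # best disparity per region
    votes = np.stack(votes)                                   # (R, H, W)
    candidate = np.round(np.median(votes, axis=0)).astype(int)
    agreement = (votes == candidate).sum(axis=0)
    return np.where(agreement > vote_thresh, candidate, -1)   # -1 marks "undecided"

# Example call with five offset regions and a 3-of-5 agreement requirement:
# disp = multi_window_disparity(cost_volume, [(0, 0), (-2, 0), (2, 0), (0, -2), (0, 2)], 3)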
20110007962Overlay Information Over Video - In accordance with a particular embodiment of the invention, a method for geotagging an image includes receiving an image of a real-world scene. Location information may be received corresponding to the image. The location information may identify the location of the real-world scene. The image may be synchronized with the location information corresponding to the image such that a two-dimensional point on the image corresponds to a three-dimensional location in the real world at the real-world scene. A geotag may be received. The geotag may tag the image at the two-dimensional point and provide additional information concerning the real-world scene. The geotag and the three-dimensional location in the real world at the real-world scene may be stored in a geotag database.01-13-2011
20090196492Method, medium, and system generating depth map of video image - A method, medium, and system generating a depth map of a video image are provided. The depth map generating method extracts the ground of a video image other than an object from the video image, classifies the video image as a long shot image or a non-long shot image based on a distribution value of the extracted ground, calculates a depth value gradually varied along a predetermined direction of the extracted ground when the video image corresponds to the long shot image and calculates a depth value based on the object when the video image corresponds to the non-long shot image. Accordingly, a sense of space and perspective can be effectively given to even a long shot image in which the ground occupies a large part of the image and a stereoscopic image recognizable by a viewer can be generated even if rapid object change is made between scenes in a video image.08-06-2009
20080310708Method for Improving Image Viewing Properties of an Image - An image processing method for improving image viewing properties of a digital image is disclosed. The method comprises converting a value of a property of at least one pixel of the image into a display value of the at least one pixel of the image by means of a parameterized function, wherein the parameterized function is location dependent with reference to the location of said at least one pixel of the image, thus creating locally optimized image viewing properties of the image.12-18-2008
20080310707VIRTUAL REALITY ENHANCEMENT USING REAL WORLD DATA - Techniques for enhancing virtual reality using transformed real world data are disclosed. In some aspects, a composite reality engine receives a transmission of the real world data that is captured by embedded sensors situated in the real world. The real world data is transformed and integrated with virtual reality data to create a composite reality environment generated by a composite reality engine. In other aspects, the composite reality environment enables activation of embedded actuators to modify the real world from the virtual reality environment. In still further aspects, techniques for sharing sensors and actuators in the real world are disclosed.12-18-2008
20100232682METHOD FOR DERIVING PARAMETER FOR THREE-DIMENSIONAL MEASUREMENT PROCESSING AND THREE-DIMENSIONAL VISUAL SENSOR - In the present invention, a parameter expressing a measurement condition of three-dimensional measurement is easily set to a value necessary to output a proper recognition result. The three-dimensional measurement is performed on stereo images of real models WM.09-16-2010
20100034457MODELING OF HUMANOID FORMS FROM DEPTH MAPS - A computer-implemented method includes receiving a depth map.02-11-2010
20100027874Stereo image matching method and system using image multiple lines - Disclosed is a stereo image matching method for re-creating 3-dimensional spatial information from a pair of 2-dimensional images. The conventional stereo image matching method generates much noise from a disparity value in the vertical direction, but the present invention uses disparity information of adjacent image lines as a constraint condition to eliminate the noise in the vertical direction, and compresses the disparity by using a differential coding method, thereby increasing the compression rate.02-04-2010
20100177955BIDIRECTIONAL SIMILARITY OF SIGNALS - A method for measuring bi-directional similarity between a first signal of a first size and a second signal of a second size includes matching at least some patches of the first signal with patches of the second signal for data completeness, matching at least some patches of the second signal with patches of the first signal for data coherence, calculating the bi-directional similarity measure as a function of the matched patches for coherence and the matched patches for completeness and indicating the similarity between the first signal and the second signal. Another method generates a second signal from a first signal where the second signal is different than the first signal by at least one parameter. The method includes attempting to maximize a bi-directional similarity measure between the second signal and the first signal.07-15-2010
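The bidirectional measure in 20100177955 above can be illustrated on 1-D signals: a completeness term matches every patch of the first signal to its nearest patch in the second, and a coherence term does the reverse. The sketch returns a dissimilarity (lower means more similar); the patch length and plain L2 patch distance are assumptions, and real use would operate on image patches with an efficient nearest-neighbour search.

import numpy as np

def _patches(sig, k):
    sig = np.asarray(sig, dtype=float)
    return np.stack([sig[i:i + k] for i in range(len(sig) - k + 1)])

def _nearest_dist(a, b):
    # For every patch in a, the squared distance to its nearest patch in b.
    d = ((a[:, None, :] - b[None, :, :]) ** 2).sum(axis=2)
    return d.min(axis=1)

def bidirectional_dissimilarity(s1, s2, k=5):
    p1, p2 = _patches(s1, k), _patches(s2, k)
    completeness = _nearest_dist(p1, p2).mean()   # is everything in s1 represented in s2?
    coherence = _nearest_dist(p2, p1).mean()      # does everything in s2 come from s1?
    return completeness + coherence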
20130022261SYSTEMS AND METHODS FOR EVALUATING IMAGES - Systems and methods for evaluating images segment a computational image into sub-images based on spectral information in the computational image, generate respective morphological signatures for the sub-images, generate respective spectral signatures for the sub-images, and generate a resulting image signature based on the morphological signatures and the spectral signatures.01-24-2013
20100080448METHOD AND GRAPHICAL USER INTERFACE FOR MODIFYING DEPTH MAPS - The invention relates to a method and a graphical user interface for modifying a depth map for a digital monoscopic color image. The method includes interactively selecting a region of the depth map based on color of a target region in the color image, and modifying depth values in the thereby selected region of the depth map using a depth modification rule. The color-based pixel selection rules for the depth map and the depth modification rule selected based on one color image from a video sequence may be saved and applied to automatically modify depth maps of other color images from the same sequence.04-01-2010
20100128974Stereo matching processing apparatus, stereo matching processing method and computer-readable recording medium - To improve stereo matching speed and accuracy. An image data input unit 05-27-2010
201000085663D model reconstruction acquisition based on images of incremental or decremental liquid level - A 3D model reconstruction acquisition includes the steps of preparing a transparent container and at least one image capture device, wherein an object is placed in the transparent container and a liquid is received in the transparent container; raising or lowering the liquid level so that it passes across the surface of the object while continuously capturing a series of images; computing a liquid-level equation for each of the images by using curves of the images between the object and the incremental or decremental liquid level confined by the transparent container; computing 3D coordinates of the curves in accordance with the liquid-level equation of each image; and collecting 3D coordinates of all of the curves to create a 3D model of the object. In addition, the acquisition can be performed in an environment containing water and can thus be applied to various environments.01-14-2010
20080279448Device and Method for Automactically Determining the Individual Three-Dimensional Shape of Particles - A method for automated determination of an individual three-dimensional shape of particles includes: a) dosing, alignment, and automated delivery of the particles; b) observation of the aligned particles and image acquisition, and c) evaluation of the images. A device for automated determination of the individual three-dimensional shape of particles includes: a) a mechanism for dosing, alignment, and automated delivery of the particles; b) at least two cameras for observation of the aligned particles, and c) a mechanism for evaluation of the images. The device can be used for automated determination of individual three-dimensional shape of particles.11-13-2008
20080279449Universal stereoscopic file format - Stereoscopic images may be represented in four coordinates where a first image is represented in three coordinates and a second image is represented of one coordinate. The brightness contrast is the property largely used in stereoscopic perception. The brightness and color of the first image is represented in three coordinates while the brightness of the second image is represented in the one coordinate. Color perception is dominated by the first image. A universal file format with four channels allows the stereoscopic images to be displayed as anaglyphs or as two full color images or as non-stereoscopic images. The anaglyphs may be rendered in three primary colors or four primary colors providing wide compatibility with traditional and specialized display apparatus. The universal file format facilitates methods to capture, display, convert, and communicate stereoscopic images.11-13-2008
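A hypothetical packing of the four-channel idea in 20080279449 above: three channels carry the full colour of the first image and a fourth carries only the brightness of the second. The channel order, the luminance weights, and the anaglyph read-out are assumptions, not the file format actually defined by the application.

import numpy as np

def pack_stereo(left_rgb, right_rgb):
    """Return an (H, W, 4) array: left R, G, B plus right-eye luminance."""
    right_luma = (0.299 * right_rgb[..., 0]
                  + 0.587 * right_rgb[..., 1]
                  + 0.114 * right_rgb[..., 2])
    return np.dstack([left_rgb, right_luma]).astype(np.uint8)

def to_anaglyph(packed):
    """One possible read-out: red from the right-eye luminance channel,
    green and blue from the left-eye colour channels."""
    out = packed[..., :3].copy()
    out[..., 0] = packed[..., 3]
    return out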
20080267490SYSTEM AND METHOD TO IMPROVE VISIBILITY OF AN OBJECT IN AN IMAGED SUBJECT - A system to track movement of an object travelling through an imaged subject is provided. The system includes an imaging system to acquire a fluoroscopic image and operable to create a three-dimensional model of a region of interest of the imaged subject. A controller includes computer-readable program instructions representative of the steps of calculating a probability that an acquired image data is of the object on a per pixel basis in the fluoroscopic image, calculating a value of a blending coefficient per pixel of the fluoroscopic image dependent on the probability, adjusting the fluoroscopic image including multiplying the value of the blending coefficient with one of a greyscale value, a contrast value, and an intensity value for each pixel of the fluoroscopic image. The adjusted fluoroscopic image is combined with the three-dimensional model to create an output image illustrative of the object in spatial relation to the three-dimensional model.10-30-2008
20100086199METHOD AND APPARATUS FOR GENERATING STEREOSCOPIC IMAGE FROM TWO-DIMENSIONAL IMAGE BY USING MESH MAP - Provided are a method and apparatus for generating a stereoscopic image from a two-dimensional (2D) image by using a mesh map and a computer readable recording medium having recorded thereon a computer program for executing the method. Also provided are a method and apparatus for generating a stereoscopic image by reading a 2D image, displaying the 2D image and a mesh map by overlapping the 2D image and the mesh map, and editing mesh shapes and depth information (depth values) of the mesh map by a user, and a computer readable recording medium having recorded thereon a computer program for executing the method. The method of generating a stereoscopic image includes receiving a 2D image; displaying the 2D image and a mesh map by overlapping the 2D image and the mesh map; editing mesh shapes and depth information (depth values) of the mesh map by a user in accordance with shapes of a displayed image; calculating relative depth information of pixels included in the 2D image in accordance with the mesh shapes and the depth information of the edited mesh map; and generating a stereoscopic image file by using the calculated relative depth information of the 2D image. The present invention may be used in a system for generating a stereoscopic image from a 2D image including a general still image or moving picture.04-08-2010
200903108513D CONTENT AGGREGATION BUILT INTO DEVICES - The claimed subject matter provides a system and/or a method that facilitates capturing a portion of 2-dimensional (2D) data for implementation within a 3-dimensional (3D) virtual environment. A device can capture one or more 2D images, wherein each 2D image is representative of a corporeal object from a perspective dictated by an orientation of the device. The device can comprise a content aggregator that can construct a 3D image from two or more 2D images collected by the device, in which the construction is based at least in part upon aligning each corresponding perspective associated with each 2D image.12-17-2009
20090310853MEASUREMENTS USING A SINGLE IMAGE - A method used in broadcasts of events is disclosed for identifying the coordinates of an object in world space from a video frame, where the object is not on the geometric model of the environment. Once the world coordinates of the object are identified, a graphic may be added to a video replay showing the object. The method may also be expanded in a further embodiment to identify a trajectory of an object over time moving through world space from video images of the start and end of the trajectory, where the object is not on the geometric model of the environment. Once the trajectory of the object in world space is identified, a graphic may be added to a video replay showing the trajectory.12-17-2009
20090310852Method for Constructing Three-Dimensional Model and Apparatus Thereof - Disclosed are a method and an apparatus for constructing an accurate three-dimensional model. The apparatus includes a plurality of light sources, an image-capturing element and an image-processing unit. The present invention is used to integrate the two-dimensional images from different views of an object into a highly accurate three-dimensional model. Compared with conventional apparatuses, the apparatus of the present invention poses no safety problems, is relatively easy to manipulate, and is capable of quick image reconstruction.12-17-2009
20120033872APPARATUS AND METHOD FOR GENERATING EXTRAPOLATED VIEW BASED ON IMAGE RESIZING - A view extrapolation apparatus and a view extrapolation method to generate images at a plurality of virtual points using a relatively small number of input images are disclosed. The view extrapolation apparatus and the view extrapolation method output a view at a reference point, the view at the reference point being formed of frames according to time, resize the frames of the view at the reference point to generate a resized frame, and generate an extrapolated view at a virtual point using the resized frame.02-09-2012
20090208095SITE MODELING USING IMAGE DATA FUSION - Site modeling using image data fusion. Geometric shapes are generated to represent portions of one or more structures based on digital height data and a two-dimensional segmentation of portions of the one or more structures is generated based on three-dimensional line segments and digital height data. A labeled segmentation of the one or more structures is generated based on the geometric shapes and the two-dimensional segmentation. A three-dimensional model of the one or more structures is generated based on the labeled segmentation.08-20-2009
20120269424STEREOSCOPIC IMAGE GENERATION METHOD AND STEREOSCOPIC IMAGE GENERATION SYSTEM - A stereoscopic image generation method and a stereoscopic image generation system that can generate, from an original image, a stereoscopic image that allows the viewer to perceive a natural stereoscopic effect are provided. The method includes a characteristic information acquisition step of acquiring characteristic information for each of the pixels, a depth information generation step of generating depth information for each of the pixels on the basis of the characteristic information, and a stereoscopic image generation step of generating a stereoscopic image on the basis of the pieces of depth information.10-25-2012
20090067705Method and Apparatus to Facilitate Processing a Stereoscopic Image Using First and Second Images to Facilitate Computing a Depth/Disparity Image - The processing of a stereoscopic image using first and second images to facilitate computing a corresponding depth/disparity image can be facilitated.03-12-2009
20090169095SYSTEM AND METHOD FOR GENERATING STRUCTURED LIGHT FOR 3-DIMENSIONAL IMAGE RENDERING - A system and method for illuminating an object in preparation for three-dimensional rendering includes a projection device configured to project at least three two-dimensional structured light patterns onto a 3-dimensional object. At least two cameras detect light reflected by the object in response to the at least three structured light patterns. Each structured light pattern varies in intensity in a first dimension and is constant in a second dimension. A single line along the first dimension of a given structured light pattern is created from a superposition of three or more component triangular waveforms. Each component triangular waveform has an amplitude, a periodicity (frequency), and a phase shift which is implemented as a pixel shift. Each component triangular waveform may be subject to one or more waveshaping operations prior to being summed with the remaining component triangular waveforms. The summed waveform itself may also be subject to waveshaping operations.07-02-2009
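The waveform construction in 20090169095 above can be sketched by summing three triangular components per scanline, each with its own amplitude, period, and pixel (phase) shift, then repeating the line so the pattern is constant in the second dimension. The specific amplitudes, periods, and shifts below are arbitrary, and the waveshaping operations mentioned in the abstract are omitted.

import numpy as np

def triangle(x, period):
    """Triangular wave in [0, 1] with the given period, in pixels."""
    return 2.0 * np.abs(x / period - np.floor(x / period + 0.5))

def structured_light_line(width=1024,
                          components=((1.0, 256, 0), (0.5, 64, 16), (0.25, 16, 4))):
    x = np.arange(width, dtype=float)
    line = np.zeros(width)
    for amplitude, period, pixel_shift in components:   # amplitude, period, phase shift
        line += amplitude * triangle(x - pixel_shift, period)
    return line / line.max()                             # normalise to [0, 1] intensity

# The full 2-D pattern is constant along the second dimension:
pattern = np.tile(structured_light_line(), (768, 1))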
201000983283D imaging system - The present invention provides a system (method and apparatus) for creating photorealistic 3D models of environments and/or objects from a plurality of stereo images obtained from a mobile stereo camera and optional monocular cameras. The cameras may be handheld, mounted on a mobile platform, manipulator or a positioning device. The system automatically detects and tracks features in image sequences and self-references the stereo camera in 6 degrees of freedom by matching the features to a database to track the camera motion, while building the database simultaneously. A motion estimate may be also provided from external sensors and fused with the motion computed from the images. Individual stereo pairs are processed to compute dense 3D data representing the scene and are transformed, using the estimated camera motion, into a common reference and fused together. The resulting 3D data is represented as point clouds, surfaces, or volumes. The present invention also provides a system (method and apparatus) for enhancing 3D models of environments or objects by registering information from additional sensors to improve model fidelity or to augment it with supplementary information by using a light pattern projector. The present invention also provides a system (method and apparatus) for generating photo-realistic 3D models of underground environments such as tunnels, mines, voids and caves, including automatic registration of the 3D models with pre-existing underground maps.04-22-2010
20100080447Methods and Apparatus for Dot Marker Matching - A method for a computer system includes receiving a first camera image of a 3D object having sensor markers, captured from a first location, at a first instance, receiving a second camera image of the 3D object from a second location, at a different instance, determining points from the first camera image representing sensor markers of the 3D object, determining points from the second camera image representing sensor markers of the 3D object, determining approximate correspondence between points from the first camera image and points from the second camera image, determining approximate 3D locations some sensor markers of the 3D object, and rendering an image including the 3D object in response to the approximate 3D locations.04-01-2010
20090046924Stereo-image processing apparatus - A stereo-image processing apparatus includes a stereo-image taking means configured to take a plurality of images from different viewpoints, a parallax detecting means configured to detect a parallax of a subject on the basis of the images taken by the stereo-image taking means, an object detecting means configured to detect objects on the basis of the parallax detected by the parallax detecting means and a parallax offset value, and a parallax-offset-value correcting means configured to correct the parallax offset value on the basis of a change in a parallax corresponding to an object whose size in real space does not change with time, of the objects detected by the object detecting means, and a change in an apparent size of the object.02-19-2009
20120106833METHOD FOR OBTAINING A POSITION MATCH OF 3D DATA SETS IN A DENTAL CAD/CAM SYSTEM - A method for designing tooth surfaces of a digital dental prosthetic item using a first 3D model of a preparation site and/or of a dental prosthetic item and a second 3D model. The second model comprises regions matching some regions on the first 3D model and regions differing from other regions of the first 3D model. The non-matching regions contain surface information. At least three pairs of corresponding points are selected on the matching region on the first 3D model and second 3D model. The positional correlation of the second 3D model with reference to the first 3D model is determined based on the at least three pairs, and portions of the non-matching regions of the first and second 3D models are used for designing the tooth surface of the prosthetic item, taking into consideration the positional correlation of these models relative to each other.05-03-2012
20120106832METHOD AND APPARATUS FOR CT IMAGE RECONSTRUCTION - A method and apparatus for CT image reconstruction may include selecting, by a unit, projection data of the same height on a curve having a curvature approximate to that of the scanning circular orbit; implementing, by a unit, weighting processing on the selected projection data; filtering, by a unit, the weighted projection data along a horizontal direction; and implementing, by a unit, three-dimensional back projection on the filtered projection data along the ray direction. The method and apparatus can effectively eliminate cone beam artifacts under a large cone angle.05-03-2012
20120106831STEREO VISION BASED DICE RECOGNITION SYSTEM AND STEREO VISION BASED DICE RECOGNITION METHOD FOR UNCONTROLLED ENVIRONMENTS - A dice recognition system and a dice recognition method for uncontrolled environments are provided. In the present invention, the number of dots on a dice is automatically recognized in an uncontrolled environment by using multiple cameras. The present dice recognition system is different in at least two aspects from any existing automatic dice recognition system which uses a single camera for recognizing dice in an enclosed environment. Firstly, an existing automatic dice recognition system uses a single camera to obtain planar images for dice recognition, while the present dice recognition system uses multiple cameras to obtain images from different viewpoints for dice recognition. Secondly, the present dice recognition system is designed for uncontrolled environments, and can be applied to an open-table game in a general gambling place for dice recognition without changing the original dice, dice cup, and other related objects.05-03-2012
20090263009METHOD AND SYSTEM FOR REAL-TIME VISUAL ODOMETRY - A method for real-time visual odometry comprises capturing a first three-dimensional image of a location at a first time, capturing a second three-dimensional image of the location at a second time that is later than the first time, and extracting one or more features and their descriptors from each of the first and second three-dimensional images. One or more features from the first three-dimensional image are then matched with one or more features from the second three-dimensional image. The method further comprises determining changes in rotation and translation between the first and second three-dimensional images from the first time to the second time using a random sample consensus (RANSAC) process and a unique iterative refinement technique.10-22-2009
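For the rotation/translation step in 20090263009 above, a generic stand-in is RANSAC wrapped around a closed-form (Kabsch/SVD) rigid fit between matched 3-D points; the application's particular iterative refinement technique is not detailed in the abstract, so the final refit on the inlier set below is only a placeholder. The iteration count and inlier tolerance are arbitrary.

import numpy as np

def rigid_fit(a, b):
    """Least-squares R, t with b ~= a @ R.T + t (a, b are N x 3 arrays)."""
    ca, cb = a.mean(axis=0), b.mean(axis=0)
    u, _, vt = np.linalg.svd((a - ca).T @ (b - cb))
    r = vt.T @ u.T
    if np.linalg.det(r) < 0:        # guard against a reflection
        vt[-1] *= -1
        r = vt.T @ u.T
    return r, cb - r @ ca

def ransac_rigid(a, b, iters=200, tol=0.05, seed=0):
    rng = np.random.default_rng(seed)
    best, best_count = None, 0
    for _ in range(iters):
        idx = rng.choice(len(a), size=3, replace=False)
        r, t = rigid_fit(a[idx], b[idx])
        inliers = np.linalg.norm(a @ r.T + t - b, axis=1) < tol
        if inliers.sum() > best_count:
            best_count = inliers.sum()
            best = rigid_fit(a[inliers], b[inliers])   # placeholder refinement on inliers
    return best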
20090263007Stereoscopic image recording device and program - If horizontal viewpoint quantity information Nx and vertical viewpoint quantity information Ny are predetermined quantities, a value of the aspect ratio of the output image data to be finally output satisfies a predetermined condition, and a 3D identification mark is contained in the output image data, then an image recording device adds a first extension as general-purpose 3D image data which can also be used in a conventional device and records it. Accordingly, when 3D image data is output (displayed and printed) in a conventional device, image data which can be used as 3D image data (which can be viewed as a stereoscopic image) can be output as general-purpose image data, while image data which cannot be used as 3D image data in the conventional device is not output as image data. This prevents confusion among general users.10-22-2009
20110170767THREE-DIMENSIONAL (3D) IMAGING METHOD - A method for constructing a digital image.07-14-2011
20090274362Road Image Analyzing Apparatus and Road Image Analyzing Method - In a road image analyzing apparatus capable of clearly and rapidly distinguishing a road marking from a guardrail and capable of obtaining precise position information, a pre-processing unit defines sub-areas in main image data obtained by an image pickup unit, and an edge extracting unit extracts an edge component in each of the sub-areas. A linear line extracting unit analyzes the extracted edge component to extract a linear component, and a linear component analyzing unit extracts a continuous component from the linear component by using the linear component. A matching process unit performs a matching process between a vertex of the continuous component and auxiliary image data to obtain three-dimensional position information of each continuous component. An identifying unit identifies whether the continuous component is a road marking or a guardrail on the basis of height information of each continuous component included in the three-dimensional position information.11-05-2009
20090290787STEREOSCOPIC MEASUREMENT SYSTEM AND METHOD - A stereoscopic measurement system captures stereo images and determines measurement information for user-designated points within stereo images. The system comprises an image capture device for capturing stereo images of an object. A processing system communicates with the capture device to receive stereo images. The processing system displays the stereo images and allows a user to select one or more points within the stereo image. The processing system processes the designated points within the stereo images to determine measurement information for the designated points.11-26-2009
20090290786STEREOSCOPIC MEASUREMENT SYSTEM AND METHOD - A stereoscopic measurement system captures stereo images and determines measurement information for user-designated points within stereo images. The system comprises an image capture device for capturing stereo images of an object. A processing system communicates with the capture device to receive stereo images. The processing system displays the stereo images and allows a user to select one or more points within the stereo image. The processing system processes the designated points within the stereo images to determine measurement information for the designated points.11-26-2009
20110200249SURFACE DETECTION IN IMAGES BASED ON SPATIAL DATA - A system and method are provided for detecting surfaces in image data based on spatial data. The method includes obtaining an empirical probability density function (PDF) for the spatial data, where the spatial data includes a plurality of three-dimensional points.08-18-2011
20110200248Method and system for aligning three-dimensional surfaces - A method for associating a three-dimensional surface representing a real object and a three-dimensional reference surface, said reference surface being represented by a set of reference points, the method comprising: obtaining a set of real points representing the real surface, determining the normal vector of each point of said obtained set of real points, selecting, among the set of real points, control points according to the determined normal vector by converting the set of real points to a bi-dimensional space of normal vectors, generating sets of points having similar normal vector among the points of the set of real points and selecting, for each set of points with similar normal vector, one point that is a control point of the real surface, determining correspondence points close to the set of reference points that are determined to correspond to the control points of the real surface, and determining the motion that minimizes the distances between the control points of the real surface and the correspondence points.08-18-2011
20090297020Method and system for determining poses of semi-specular objects - A camera acquires a set of coded images and a set of flash images of a semi-specular object. The coded images are acquired while scanning the object with a laser beam pattern, and the flash images are acquired while illuminating the object with a set of light sources at different locations near the camera, there being one flash image for each light source. 3D coordinates of points on the surface of the object are determined from the set of coded images, and 2D silhouettes of the object are determined from shadows cast in the set of flash images. Surface normals are obtained for the 3D points from photometric stereo on the set of flash images. The 3D coordinates, 2D silhouettes and surface normals are compared with a known 3D model of the object to determine the pose of the object.12-03-2009
20090041338PHOTOGRAPHING FIELD ANGLE CALCULATION APPARATUS - A memory of a photographing field angle calculation apparatus has stored therein the position of each point captured by an imaging system as three coordinate values in horizontal, vertical, and depth directions when a photography space is photographed by the imaging system. A usage pixel value extraction means selects a plurality of points located in end portions in the horizontal direction of a range captured by the imaging system from those stored in the memory based on the vertical coordinate value, and extracts the horizontal and depth coordinate values of each of the selected points. A field angle calculation means calculates a horizontal photographing field angle when the photography space is photographed using the extracted horizontal and depth coordinate values.02-12-2009
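The field-angle computation implied by 20090041338 above reduces to an arctangent on the extracted horizontal (x) and depth (z) coordinates of points at the left and right extremes of the captured range. The sample coordinates below are made up; the point-selection step described in the abstract is assumed to have already happened.

import math

def horizontal_field_angle(left_pt, right_pt):
    """Each point is (x, z) in camera coordinates; returns the subtended angle in degrees."""
    ang_left = math.atan2(left_pt[0], left_pt[1])
    ang_right = math.atan2(right_pt[0], right_pt[1])
    return math.degrees(ang_right - ang_left)

# A point 1 m left of the optical axis and one 1 m right of it, both 2 m away:
print(horizontal_field_angle((-1.0, 2.0), (1.0, 2.0)))   # about 53.1 degrees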
20080212872METHOD OF SETTING UP MULTI-DIMENSIONAL DDA VARIABLES - An apparatus and a computer program product render a multi-dimensional digital image using raytracing in a multi-dimensional space. A multi-dimensional digital differential analyzer (DDA) is included. Variables of said multi-dimensional digital differential analyzer (DDA) are set up using multiplications only. The digital image is rendered based upon the variables of the multi-dimensional digital differential analyzer (DDA). Each axis of the multi-dimensional space includes a numerator which holds the progress within a cell along that axis and a denominator which describes a size condition causing said DDA to step to a next cell.09-04-2008
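One way to read the numerator/denominator bookkeeping in 20080212872 above is a grid DDA whose per-axis state is set up, and later compared, using multiplications only (cross-multiplying instead of dividing by the direction components). The sketch below is an interpretation, not the claimed method; axes with a zero direction component are not handled.

def dda_setup(origin, direction, cell_size=1.0):
    """Per-axis (numerator, denominator, step) triples for a grid traversal."""
    axes = []
    for o, d in zip(origin, direction):
        step = 1 if d >= 0 else -1
        cell = int(o // cell_size)
        boundary = (cell + (1 if step > 0 else 0)) * cell_size
        numerator = abs(boundary - o)       # progress left inside the current cell
        denominator = abs(d) * cell_size    # scaled direction magnitude for comparisons
        axes.append((numerator, denominator, step))
    return axes

def next_axis_to_step(axes):
    """Pick the axis whose cell boundary is reached first; the comparison
    n_i / d_i < n_j / d_j is done by cross-multiplying, so no division is used."""
    best = 0
    for i in range(1, len(axes)):
        n_i, d_i, _ = axes[i]
        n_b, d_b, _ = axes[best]
        if n_i * d_b < n_b * d_i:
            best = i
    return best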
20080212870COMBINED BEACON AND SCENE NAVIGATION SYSTEM - A controller and navigation system to implement beacon-based navigation and scene-based navigation is described. The navigation system may generate position data for the controller to compensate for a misalignment of the controller relative to the coordinate system of the navigation system. The navigation system may also distinguish between a beacon light source and a non-beacon light source.09-04-2008
20080212871DETERMINING A THREE-DIMENSIONAL MODEL OF A RIM OF AN ANATOMICAL STRUCTURE - Determining a three-dimensional model of a rim of an anatomical structure using two-dimensional images of the rim. The images are taken from different directions and each image can provide a different two-dimensional contour of the rim. Corresponding pairs of points are identified in the images and are used with a transformation matrix to calculate the three-dimensional model. The model may then be used to assist physicians in implantation procedures.09-04-2008
20110206274POSITION AND ORIENTATION ESTIMATION APPARATUS AND POSITION AND ORIENTATION ESTIMATION METHOD - A position and orientation estimation apparatus inputs an image capturing an object, inputs a distance image including three-dimensional coordinate data representing the object, extracts an image feature from the captured image, determines whether the image feature represents a shape of the object based on three-dimensional coordinate data at a position on the distance image corresponding to the image feature, correlates the image feature representing the shape of the object with a part of a three-dimensional model representing the shape of the object, and estimates the position and orientation of the object based on a correlation result.08-25-2011
20110206273Intelligent Part Identification for Use with Scene Characterization or Motion Capture - A variety of methods, systems, devices and arrangements are implemented for use with motion capture. One such method is implemented for identifying salient points from three-dimensional image data. The method involves the execution of instructions on a computer system to generate a three-dimensional surface mesh from the three-dimensional image data. Lengths of possible paths from a plurality of points on the three-dimensional surface mesh to a common reference point are categorized. The categorized lengths of possible paths are used to identify a subset of the plurality of points as salient points.08-25-2011
20120294510DEPTH RECONSTRUCTION USING PLURAL DEPTH CAPTURE UNITS - A depth construction module is described that receives depth images provided by two or more depth capture units. Each depth capture unit generates its depth image using a structured light technique, that is, by projecting a pattern onto an object and receiving a captured image in response thereto. The depth construction module then identifies at least one deficient portion in at least one depth image that has been received, which may be attributed to overlapping projected patterns that impinge the object. The depth construction module then uses a multi-view reconstruction technique, such as a plane sweeping technique, to supply depth information for the deficient portion. In another mode, a multi-view reconstruction technique can be used to produce an entire depth scene based on captured images received from the depth capture units, that is, without first identifying deficient portions in the depth images.11-22-2012
20100128972Stereo matching processing system, stereo matching processing method and recording medium - To correctly associate coinciding positions between a plurality of images.05-27-2010
20100128971Image processing apparatus, image processing method and computer-readable recording medium - A pair of images subjected to image processing is divided. Next, based on mutually-corresponding divided images, mutually-corresponding matching images are respectively set. When a corresponding point of a characteristic point in one matching image is not extracted from the other matching image, adjoining divided images are joined together, and based on the joined divided image, a new matching image is set.05-27-2010
20100142802APPARATUS FOR CALCULATING 3D SPATIAL COORDINATES OF DIGITAL IMAGES AND METHOD THEREOF - Provided is a digital photographing apparatus including: an image acquiring unit that acquires images by photographing a subject; a sensor information acquiring unit that acquires positional information, directional information, and posture information of the digital photographing apparatus at the time of photographing a subject; a device information acquiring unit that acquires device information of the digital photographing apparatus at the time of photographing a subject; and a spatial coordinates calculator that calculates 3D spatial coordinates for photographed images using the acquired positional information, directional information, posture information, and device information.06-10-2010
20090003688System and method for creating images - The invention provides a method of replicating the primary human field of view in an image. The method comprises receiving at least three digital images of a scene, the digital images comprising a centre image facing a scene directly, a centre left image obtained by rotating an image capture device a predefined angle to the left of centre and a centre right image obtained by rotating the image capture device a predefined angle to the right of centre; manipulating the centre image, the centre left image, and the centre right image on a data processing device; obtaining a composite image from the manipulated centre image, centre left image and centre right image conformed to the first virtual model; manipulating the composite image on the data processing device; obtaining a distortion adjusted image from the composite image conformed to the second virtual model; creating a physical image of the distortion adjusted image; and physically manipulating the physical image to form a physical image having a planar centre portion and curved left and right portions extending toward a viewpoint.01-01-2009
20090003686ENHANCED OBJECT RECONSTRUCTION - Processing images includes projecting an infra-red pattern onto a three-dimensional object and producing a first image, a second image, and a third image of the three-dimensional object while the pattern is projected on the three-dimensional object. The first image and the second image include the three-dimensional object and the pattern. The first image and the second image are produced by capturing at a first camera and a second camera, respectively, light filtered through an infra-red filter. The third image includes the three-dimensional object but not the pattern. Processing the images also includes establishing a first-pair correspondence between a portion of pixels in the first image and a portion of pixels in the second image. Processing the images further includes constructing, based on the first-pair correspondence and the third image, a two-dimensional image that depicts a three-dimensional construction of the three-dimensional object.01-01-2009
201001355733-D Optical Microscope - A 3-D optical microscope, a method of turning a conventional optical microscope into a 3-D optical microscope, and a method of creating a 3-D image on an optical microscope are described. The 3-D optical microscope includes a processor, at least one objective lens, an optical sensor capable of acquiring an image of a sample, a mechanism for adjusting the focus position of the sample relative to the objective lens, and a mechanism for illuminating the sample and for projecting a pattern onto and removing the pattern from the focal plane of the objective lens. The 3-D image creation method includes taking two sets of images, one with and another without the presence of the projected pattern, and using a software algorithm to analyze the two image sets to generate a 3-D image of the sample. The 3-D image creation method enables reliable and accurate 3-D imaging on almost any sample regardless of its image contrast.06-03-2010
20080240550IMAGE PROCESSING APPARATUS AND IMAGE PROCESSING METHOD - By applying identification processing to each index included in a captured image, a set of an identifier, image coordinates, and an image number is acquired for each index, and the acquired set is registered in a data saving unit. The data saving unit manages, for each identifier, the number of times it has previously been identified. Every time a set is registered, a display unit displays the number of times managed in association with the identifier in that set. An index position and orientation calculation unit calculates the positions and orientations of indices corresponding to a set group using the set group registered in a memory.10-02-2008
20080240548Isosurfacial three-dimensional imaging system and method - An isosurfacial three-dimensional imaging system and method uses scanning electron microscopy for surface imaging of an assumed opaque object, providing a series of tilt images for generating a sinogram of the object and a voxel data set for generating a three-dimensional image of the object, whose exterior surfaces may be partly obscured, so as to provide exterior three-dimensional surface imaging of objects, including hidden surfaces normally obscured from stereographic view.10-02-2008
20100296727METHODS AND DEVICES FOR READING MICROARRAYS - In one embodiment of the invention, a method to image a probe array is described that includes focusing on a plurality of fiducials on a surface of an array. The method utilizes obtaining the best z position of the fiducials and using a surface fitting algorithm to produce a surface fit profile. One or more surface non-flatness parameters can be adjusted to improve the flatness image of the array surface to be imaged.11-25-2010
20090169096IMAGE PROCESSING METHODS AND APPARATUS - We describe methods of characterising a set of images to determine their respective illumination, for example for recovering the 3D shape of an illuminated object. The method comprises: inputting a first set of images of the object captured from different positions; determining frontier point data from the images, this defining a plurality of frontier points on the object and, for each said frontier point, a direction of a normal to the surface of the object at the frontier point, and determining data defining the image capture positions; inputting a second set of images of said object, having substantially the same viewpoint and different illumination conditions; and characterising the second set of images using said frontier point data to determine data comprising object reflectance parameter data (β) and, for each image of said second set, illumination data (L) comprising data defining an illumination direction and illumination intensity for the image.07-02-2009
20080285842Optoelectronic multiplane sensor and method for monitoring objects - An optoelectronic sensor and method for detecting an object in a three-dimensional monitored region uses a plurality of video sensors. Each sensor has a multiplicity of light-receiving elements that are configured to take a pixel picture of the monitored space, and a control unit identifies an object in the monitored space from video data of the pixel picture. Each video sensor has at least one pixel line that is formed by light-receiving elements. The video sensors are spaced from each other so that each sensor monitors an associated plane of the monitored space.11-20-2008
20080292179SYSTEM AND METHOD FOR EVALUATING THE NEEDS OF A PERSON AND MANUFACTURING A CUSTOM ORTHOTIC DEVICE - A system for providing a custom orthotic can include a scanner, an imager for providing a digital three-dimensional image based on the scan, a gait and pressure measuring device, and a data inputting system for inputting information regarding the customer. An analysis device can be provided to make modifications to the three-dimensional image based on the customer information, and the modified three-dimensional image can be forwarded electronically to a manufacturer for production.11-27-2008
20090141967IMAGE PROCESSING APPARATUS AND IMAGE PROCESSING METHOD - A disparity function setting unit configured to set a plurality of disparity relationships expressing disparities as functions of an image position; a data term calculating unit configured to calculate the similarity of corresponding areas between images specified by the preset disparity functions; a smoothing term calculating unit configured to calculate the consistency between the disparity functions and the pixels located in the vicinity; and a disparity function selecting unit configured to select the disparity function for each point of the image from the plurality of preset disparity functions are provided.06-04-2009
20120045117METHOD AND DEVICE FOR TRAINING, METHOD AND DEVICE FOR ESTIMATING POSTURE VISUAL ANGLE OF OBJECT IN IMAGE - Method and device for estimating the posture orientation of the object in image are described. An image feature of the image is obtained. For each orientation class, 3-D object posture information corresponding to the image feature is obtained based on a mapping model corresponding to the orientation class, for mapping the image feature to the 3-D object posture information. A joint probability of a joint feature including the image feature and the corresponding 3-D object posture information for each orientation class is calculated according to a joint probability distribution model based on single probability distribution models for the orientation classes. A conditional probability of the image feature in condition of the corresponding 3-D object posture information is calculated based on the joint probability for each orientation class. The orientation class corresponding to the maximum of the conditional probabilities is estimated as the posture orientation of the object in the image.02-23-2012
20080279446SYSTEM AND TECHNIQUE FOR RETRIEVING DEPTH INFORMATION ABOUT A SURFACE BY PROJECTING A COMPOSITE IMAGE OF MODULATED LIGHT PATTERNS - A technique, associated system and program code, for retrieving depth information about at least one surface of an object. Core features include: projecting a composite image comprising a plurality of modulated structured light patterns, at the object; capturing an image reflected from the surface; and recovering pattern information from the reflected image, for each of the modulated structured light patterns. Pattern information is preferably recovered for each modulated structured light pattern used to create the composite, by performing a demodulation of the reflected image. Reconstruction of the surface can be accomplished by using depth information from the recovered patterns to produce a depth map/mapping thereof. Each signal waveform used for the modulation of a respective structured light pattern, is distinct from each of the other signal waveforms used for the modulation of other structured light patterns of a composite image; these signal waveforms may be selected from suitable types in any combination of distinct signal waveforms, provided the waveforms used are uncorrelated with respect to each other. The depth map/mapping to be utilized in a host of applications, for example: displaying a 3-D view of the object; virtual reality user-interaction interface with a computerized device; face—or other animal feature or inanimate object—recognition and comparison techniques for security or identification purposes; and 3-D video teleconferencing/telecollaboration.11-13-2008
20090190827Environment recognition system - An environment recognition system includes image taking means for taking a pair of images of an object in a surrounding environment with a pair of cameras and outputting the pair of images, stereo matching means for conducting stereo matching on a plurality of pairs of images that are taken by different image taking methods or that are formed by subjecting the pair of taken images to different image processing methods and forming distance images respectively for the pairs of images, selection means for dividing the distance images into a plurality of sections, calculating representative parallaxes respectively for the sections, and selecting any of the representative parallaxes of the corresponding section as a representative parallax of the section, and detection means for detecting the object in the image on the basis of the representative parallaxes of the sections.07-30-2009
20090129665Image processing system, 3-dimensional shape estimation system, object position/posture estimation system and image generation system - An object of the present invention is to process an image without needing to determine in advance the initial value of a parameter representing an illumination condition and without requiring a user to manually input the illumination parameter. An image processing system includes a generalized illumination basis model generation means ….05-21-2009
20090161946IMAGE PROCESSING APPARATUS - An image processing apparatus comprises an inputting section for inputting a plurality of continuous images which were photographed by a photographing section progressively moving relative to a photographed object; an extracting section for extracting characteristic points from images input by the inputting section; a tracking section for tracking the points corresponding to the characteristic points in the plurality of continuous images; an embedding section for embedding tracking data, which includes data of extracted and tracked points by the extracting section and the tracking section, into each image; and an outputting section for outputting the plurality of continuous images sequentially in which the tracking data was embedded by the embedding section.06-25-2009
20090161944TARGET DETECTING, EDITING AND REBUILDING METHOD AND SYSTEM BY 3D IMAGE - A method and system for target detecting, editing and rebuilding by 3D image is provided, which comprises an inputting and picking unit, a training and detecting unit, a displaying and editing unit and a rebuilding unit. The inputting and picking unit receives a digital image and a LiDAR data and picks up a first parameter to form a 3D image. The training and detecting unit selects a target, picks up a second parameter therefrom, calculates the second parameter to generate a threshold and detects the target areas in the 3D image according to the threshold. The displaying and editing unit sets a quick selecting tool according to the threshold and edits the detecting result. The rebuilding unit sets a buffer area surrounding the target, picks up a third parameter therefrom and calculates the original shape of the target by the Surface Fitting method according to the third parameter.06-25-2009
20090161945Geometric parameter measurement of an imaging device - Disclosed is a method of determining at least one three-dimensional (3D) geometric parameter of an imaging device. A two-dimensional (2D) target image is provided having a plurality of alignment patterns. The target image is imaged with an imaging device to form a captured image. At least one pattern of the captured image is compared with a corresponding pattern of the target image. From the comparison, the geometric parameter of the imaging device is then determined. The alignment patterns include at least one of (i) one or more patterns comprising a 2D scale and rotation invariant basis function, (ii) one or more patterns comprising a 1D scale invariant basis function, and (iii) one or more patterns having a plurality of grey levels and comprising a plurality of superimposed sinusoidal patterns, the plurality of sinusoidal patterns having a plurality of predetermined discrete orientations. Also disclosed is a two-dimensional test chart for use in testing an imaging device, the test chart comprising a plurality of alignment patterns, at least one of said alignment patterns including one of those patterns mentioned above.06-25-2009
20110268350COLOR IMAGE PROCESSING METHOD, COLOR IMAGE PROCESSING DEVICE, AND RECORDING MEDIUM - The invention provides a color image processing method and device capable of improving the texture of a specific object in a color image taken by a color imaging device by controlling the quantity of a specular component in the specific object. A color image processing device (…).11-03-2011
20090185741Apparatus and method for automatic airborne LiDAR data processing and mapping using data obtained thereby - Apparatus for processing of a LiDAR point cloud of a ground scan, comprises: a point cloud input for receiving said LiDAR point cloud, a ground filter for filtering out points that belong to the ground from said point cloud, thereby to generate an elevation map showing features extending from the ground, an automatic feature search and recognition unit associated with said three dimensional graphical engine for searching said elevation map of said three-dimensional model to identify features therein and to replace points associated with said feature with a virtual object representing said feature, thereby to provide objects within said data; and a three-dimensional graphical renderer supporting three-dimensional graphics, to generate a three-dimensional rendering of said ground scan.07-23-2009
20090129666METHOD AND DEVICE FOR THREE-DIMENSIONAL RECONSTRUCTION OF A SCENE - Passive methods for three-dimensional reconstruction of a scene by means of image data are generally based on the determination of spatial correspondences between a number of images of the scene recorded from various directions and distances. A method and a device are disclosed which provide a high reliability in the solution of the correspondence problem in conjunction with a low computational outlay. Image areas for determining the correspondences are determined within a plurality of images forming at least two image sequences. In preferred embodiments, a parameterized function h(u,v,t) is matched to each of the image areas in a space R(uvgt) defined by pixel position (u, v), image value g and time t. The parameters of the parameterized functions are used to form a similarity measure between the image areas.05-21-2009
20080317332System and Method for Determining Geometries of Scenes - A method and an apparatus determine the geometry of a scene by projecting one or more output images into the scene, in which a time to project the output image is t ….12-25-2008
20080317331Recognizing Hand Poses and/or Object Classes - There is a need to provide simple, accurate, fast and computationally inexpensive methods of object and hand pose recognition for many applications. For example, to enable a user to make use of his or her hands to drive an application either displayed on a tablet screen or projected onto a table top. There is also a need to be able to discriminate accurately between events when a user's hand or digit touches such a display from events when a user's hand or digit hovers just above that display. A random decision forest is trained to enable recognition of hand poses and objects and optionally also whether those hand poses are touching or not touching a display surface. The random decision forest uses image features such as appearance, shape and optionally stereo image features. In some cases, the training process is cost aware. The resulting recognition system is operable in real-time.12-25-2008
20120070072IMAGE PROCESSING DEVICE, IMAGE PROCESSING METHOD AND COMPUTER READABLE PRODUCT - According to one embodiment, an image processing device includes a readiness determining unit configured to determine whether or not a state of a face image included in an image at one time out of images obtained at a plurality of different times is a ready state that satisfies a condition for performing three-dimensionality determination, the three-dimensionality determination being a determination of whether the object is three-dimensional or not; an initiation determining unit configured to determine whether or not a state of a face image included in an image at a different time from the image at the one time is an initiation state changed from the ready state; and a first three-dimensionality determining unit configured to perform the three-dimensionality determination on the face images included in the images when it is determined that the state is the initiation state.03-22-2012
20120070071SYSTEMS AND METHODS FOR AUTOMATED WATER DETECTION USING VISIBLE SENSORS - Systems and methods are disclosed that include automated machine vision that can utilize images of scenes captured by a 3D imaging system configured to image light within the visible light spectrum to detect water. One embodiment includes autonomously detecting water bodies within a scene including capturing at least one 3D image of a scene using a sensor system configured to detect visible light and to measure distance from points within the scene to the sensor system, and detecting water within the scene using a processor configured to detect regions within each of the at least one 3D images that possess at least one characteristic indicative of the presence of water.03-22-2012
20120070069IMAGE PROCESSING APPARATUS - According to one embodiment, an image processing apparatus includes a difference calculation unit, an intensity calculation unit, and an enhancing unit. The difference calculation unit calculates, for each partial area of an input image, a difference between a depth value of a subject and a reference value representing a depth as a reference. The intensity calculation unit calculates for each partial area an intensity, which has a local maximum value when the difference is 0 and has a greater value as the absolute value of the difference is smaller. The enhancing unit enhances each partial area according to the intensity to generate an output image.03-22-2012
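As a rough illustration of the intensity described above (largest when the depth difference is zero, smaller as its absolute value grows), the sketch below uses a Gaussian weight and applies it to a simple unsharp-mask enhancement. The Gaussian form, the sigma values and the enhancement step itself are choices made for this sketch; the abstract does not specify them.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def enhancement_intensity(depth, reference, sigma=0.1):
        # Local maximum (1.0) when depth equals the reference; decays as |difference| grows.
        diff = depth - reference
        return np.exp(-(diff ** 2) / (2.0 * sigma ** 2))

    def enhance(image, depth, reference, sigma_d=0.1, blur=2.0):
        img = image.astype(np.float32)
        weight = enhancement_intensity(depth, reference, sigma_d)
        detail = img - gaussian_filter(img, blur)      # high-frequency detail per pixel
        return img + weight * detail                   # stronger enhancement near the reference depth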
20120070068FOUR DIMENSIONAL RECONSTRUCTION AND CHARACTERIZATION SYSTEM - A method and apparatus for performing a four-dimensional image reconstruction. The apparatus can be configured to receive a first input to slice a stack of two-dimensional images that depicts an object of interest into one or more planes to form one or more virtual images and receive a second input to segment the virtual images. One or more seed points can be generated on the virtual images based on the second input and automatically ordered using a magnetic linking method. Contours corresponding to the object of interest can be generated using a live-wire algorithm, and a first three-dimensional construction of the object of interest can be performed based on the contours. The contours can be converted into seed points for a subsequent set of images, and a second three-dimensional construction of the object of interest corresponding to the subsequent set of images can be performed.03-22-2012
20090141966INTERACTIVE GEO-POSITIONING OF IMAGERY - An interactive user-friendly incremental calibration technique that provides immediate feedback to the user when aligning a point on a 3D model to a point on a 2D image. A user can drag-and-drop points on a 3D model to points on a 2D image. As the user drags the correspondences, the application updates current estimates of where the camera would need to be to match the correspondences. The 2D and 3D images can be overlaid on each other and are sufficiently transparent for visual alignment. The user can fade between the 2D/3D views, providing immediate feedback as to the improvements in alignment. The user can begin with a rough estimate of camera orientation and then progress to more granular parameters such as estimates for focal length, etc., to arrive at the desired alignment. While one parameter is adjustable, other parameters are fixed, allowing for user adjustment of one parameter at a time.06-04-2009
20090180682SYSTEM AND METHOD FOR MEASURING IMAGE QUALITY - The present invention provides an improved system and method for measuring quality of both single and stereo video images. The embodiments of the present invention include frequency content measure for a single image or region-of-interest thereof and disparity measure for stereo images or region-of-interest thereof.07-16-2009
20090141968CORONARY RECONSTRUCTION FROM ROTATIONAL X-RAY PROJECTION SEQUENCE - A method for three-dimensional reconstruction of a branched object from a rotational sequence of images of the branched object includes segmenting the branched object from each image of the sequence, extracting centerlines of the branched object, performing symbolic reconstruction via a stereo correspondence matching between the centerlines from different views of the sequence of images using a graph cut-based optimization, and creating a three-dimensional tomographic reconstruction of the branched object compensated for motion of the branched object between the images of the sequence.06-04-2009
20110222758RADIOGRAPHIC IMAGE CAPTURING SYSTEM AND METHOD OF DISPLAYING RADIOGRAPHIC IMAGES - A radiographic image capturing system includes an image reconstructor for processing a plurality of radiographic images of a subject in order to reconstruct a radiographic tomographic image of the subject, and a monitor for displaying at least the radiographic tomographic image. The radiographic image capturing system also includes a region-of-interest setter for setting a region of interest of the subject on the radiographic images or the radiographic tomographic image, a radiographic image extractor for extracting, from among the radiographic images, two radiographic images for viewing the region of interest by way of stereographic vision, and a first stereographic vision display controller or a second stereographic vision display controller for controlling the monitor to display the extracted two radiographic images for stereographic vision.09-15-2011
20110222757Systems and methods for 2D image and spatial data capture for 3D stereo imaging - Systems and methods for 2D image and spatial data capture for 3D stereo imaging are disclosed. The system utilizes a cinematography camera and at least one reference or “witness” camera spaced apart from the cinematography camera at a distance much greater than the interocular separation to capture 2D images over an overlapping volume associated with a scene having one or more objects. The captured image data is post-processed to create a depth map, and a point cloud is created from the depth map. The robustness of the depth map and the point cloud allows for dual virtual cameras to be placed substantially arbitrarily in the resulting virtual 3D space, which greatly simplifies the addition of computer-generated graphics, animation and other special effects in cinematographic post-processing.09-15-2011
20110222756Method for Handling Pixel Occlusions in Stereo Images Using Iterative Support and Decision Processes - In stereo images that include occluded pixels and visible pixels, occlusions are handled by first determining, for the occluded pixels, initial disparity values and support for the initial disparity values using an initial support function, an occlusion map and disparities of the visible pixels neighboring the occluded pixels in the stereo images. Then, for the occluded pixels, final disparity values and support for the final disparity values are determined using the initial disparity values, a final support function and a normalization function in an iterative support-and-decision process.09-15-2011
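A much-simplified sketch of the iterative support-and-decision idea is given below: each occluded pixel accumulates support for candidate disparities from nearby non-occluded (or already filled) pixels, a decision step keeps the best-supported value, and the process repeats. The Gaussian spatial support, the vote-based decision and the fixed iteration count are assumptions, not the patent's actual support and decision functions.

    import numpy as np

    def fill_occlusions(disparity, occluded, radius=3, iterations=3, sigma=2.0):
        """Fill occluded pixels; assumes non-negative disparities and a boolean occlusion map."""
        disp = disparity.astype(np.float32).copy()
        known = ~occluded.astype(bool)
        h, w = disp.shape
        ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
        support = np.exp(-(xs ** 2 + ys ** 2) / (2 * sigma ** 2))   # spatial support weights
        for _ in range(iterations):
            new_disp, new_known = disp.copy(), known.copy()
            for y, x in zip(*np.nonzero(~known)):
                y0, y1 = max(0, y - radius), min(h, y + radius + 1)
                x0, x1 = max(0, x - radius), min(w, x + radius + 1)
                nk = known[y0:y1, x0:x1]
                if not nk.any():
                    continue
                nd = disp[y0:y1, x0:x1][nk]
                nw = support[y0 - y + radius:y1 - y + radius,
                             x0 - x + radius:x1 - x + radius][nk]
                # Decision step: support-weighted vote over integer disparity candidates.
                votes = np.bincount(np.maximum(np.round(nd).astype(int), 0), weights=nw)
                new_disp[y, x] = votes.argmax()
                new_known[y, x] = True
            disp, known = new_disp, new_known
        return disp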
201200636723D GEOMETRIC MODELING AND MOTION CAPTURE USING BOTH SINGLE AND DUAL IMAGING - A method and apparatus for obtaining an image to determine a three dimensional shape of a stationary or moving object using a bi dimensional coded light pattern having a plurality of distinct identifiable feature types. The coded light pattern is projected on the object such that each of the identifiable feature types appears at most once on predefined sections of distinguishable epipolar lines. An image of the object is captured and the reflected feature types are extracted along with their location on known epipolar lines in the captured image. Displacements of the reflected feature types along their epipolar lines from reference coordinates thereupon determine corresponding three dimensional coordinates in space and thus a 3D mapping or model of the shape of the object at any point in time.03-15-2012
20120063671IMAGE PROCESSING DEVICE, IMAGE PROCESSING METHOD AND COMPUTER PROGRAM PRODUCT - According to one embodiment, an image processing device includes an obtaining unit configured to obtain a plurality of images captured in time series; a first calculating unit configured to calculate a first change vector indicating a change between the images in an angle representing a posture of a subject included in each of the images; a second calculating unit configured to calculate a second change vector indicating a change in coordinates of a feature point of the subject; a third calculating unit configured to calculate an intervector angle between the first change vector and the second change vector; and a determining unit configured to determine that the subject is three-dimensional when the intervector angle is smaller than a predetermined first threshold.03-15-2012
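The final comparison in the abstract reduces to an angle test between the two change vectors; a minimal sketch is below, with an arbitrary 10-degree placeholder standing in for the first threshold.

    import numpy as np

    def is_three_dimensional(angle_change_vec, coord_change_vec, threshold_deg=10.0):
        a = np.asarray(angle_change_vec, dtype=float)
        b = np.asarray(coord_change_vec, dtype=float)
        cos_angle = a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)
        intervector_angle = np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0)))
        return intervector_angle < threshold_deg   # small angle: posture and feature motion agree, so 3D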
20120063670MOBILE TERMINAL AND 3D IMAGE COMPOSING METHOD THEREOF - A mobile terminal and a method for composing 3D images thereof are disclosed. The method for composing 3D images of a mobile terminal includes: selecting a background image as a reference from an image buffer; adjusting a convergence point of the selected background image; extracting an object image to be composed to the background image; displaying guidance information indicating a position at which the object image can be composed to the background image; and composing the object image to the background image according to the guidance information. Thus, when 3D images, each having a different convergence, are composed, the convergence point of a background image is adjusted and guidance information indicating a position at which an object image is to be composed is provided, thereby conveniently and accurately composing the 3D images.03-15-2012
20120063668Spatial accuracy assessment of digital mapping imagery - The present invention defines a quantitative measure for expressing the spatial (geometric) accuracy of a single optical geo-referenced image. Further, a quality control (QC) method for assessing that measure is developed. The assessment is done on individual images (not stereo models), namely, an image of interest is compared with automatically selected image from a geo-referenced image database of known spatial accuracy. The selection is based on the developed selection criterion entitled “generalized proximity criterion” (GPC). The assessment is done by computation of spatial dissimilarity between N pairs of line-of-sight rays emanating from conjugate pixels on the two images. This innovation is sought to be employed in any optical system (stills, video, push-broom, etc), but its primary application is aimed at validating photogrammetric triangulation blocks that are based on small (<10 MPixels) and medium (<50 MPixels) collection systems of narrow and dynamic field of view together with certifying the respective collection systems.03-15-2012
20090080767METHOD FOR DETERMINING A DEPTH MAP FROM IMAGES, DEVICE FOR DETERMINING A DEPTH MAP - Window based matching is used for determining a depth map from images obtained from different orientations. A set of fixed matching windows is used for points of the image for which the depth is to be determined. The set of matching windows covers a footprint of pixels around the point of the image, and the average number of matching windows that a pixel of the footprint (FP) belongs to is less than one plus the number of pixels in the footprint divided by 15 (…).03-26-2009
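A toy version of matching with a set of fixed windows covering a footprint around the point is sketched below: the cost of a candidate disparity is the best SAD over the window set, which tolerates depth discontinuities better than a single centred window. The five-window layout and the SAD cost are assumptions of the sketch, not the claimed window arrangement.

    import numpy as np

    def multiwindow_disparity(left, right, y, x, max_disp=64, half=3):
        """Disparity at (y, x) using a set of fixed, offset matching windows."""
        offsets = [(0, 0), (-half, -half), (-half, half), (half, -half), (half, half)]
        L, R = left.astype(np.float32), right.astype(np.float32)
        rows, cols = L.shape
        best_disp, best_cost = 0, np.inf
        for d in range(max_disp):
            cost = np.inf
            for oy, ox in offsets:              # each fixed window in the set
                cy, cx = y + oy, x + ox
                if (cy - half < 0 or cy + half >= rows
                        or cx + half >= cols or cx - d - half < 0):
                    continue
                lw = L[cy - half:cy + half + 1, cx - half:cx + half + 1]
                rw = R[cy - half:cy + half + 1, cx - d - half:cx - d + half + 1]
                cost = min(cost, np.abs(lw - rw).sum())   # keep the best window for this disparity
            if cost < best_cost:
                best_cost, best_disp = cost, d
        return best_disp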
20090080765SYSTEM AND METHOD TO GENERATE A SELECTED VISUALIZATION OF A RADIOLOGICAL IMAGE OF AN IMAGED SUBJECT - A system to illustrate image data of an imaged subject is provided. The system comprises an imaging system, an input device, an output device, and a controller in communication with the imaging system, the input device, and the output device. The controller includes a processor to perform program instructions representative of the steps of generating a three-dimensional reconstructed volume from the plurality of two-dimensional radiography images, navigating through the three-dimensional reconstructed volume, the navigating step including receiving an instruction from an input device that identifies a location of a portion of the three-dimensional reconstructed volume, calculating and generating a two-dimensional display of the portion of the three-dimensional reconstructed volume identified in the navigation step, and reporting the additional view or at least one parameter to calculate and generate the additional view.03-26-2009
20130216124Spatial Reconstruction of Plenoptic Images - The spatial resolution of captured plenoptic images is enhanced. In one aspect, the plenoptic imaging process is modeled by a pupil image function (PIF), and a PIF inversion process is applied to the captured plenoptic image to produce a better resolution estimate of the object.08-22-2013
20090080766Method and apparatus for the Three-Dimensional Digitization of objects - This invention relates to a method and an apparatus for the three-dimensional digitization of objects with a 3D sensor, which comprises a projector and one or more cameras, in which a pattern is projected onto the object by means of the projector, and the pattern is detected with the one or more cameras. In accordance with the invention, the method and the apparatus are characterized in that at least three reference marks and/or a reference raster are projected onto the object with the 3D sensor and are detected with two or more external, calibrated digital cameras.03-26-2009
20110142328Method for using image depth information - In a first exemplary embodiment of the present invention, an automated, computerized method is provided for determining illumination information in an image. According to a feature of the present invention, the method comprises the steps of identifying depth information in the image, identifying spatio-spectral information for the image, as a function of the depth information and utilizing the spatio-spectral information to identify illumination flux in the image.06-16-2011
20110229014ANALYSIS OF STEREOSCOPIC IMAGES - A method of identifying the left-eye and the right-eye images of a stereoscopic pair, comprising the steps of comparing the images to locate an occluded region visible in only one of the images; detecting image edges; and identifying a right-eye image where image edges are aligned with a left hand edge of an occluded region and identifying a left-eye image where more image edges are aligned with a right hand edge of an occluded region.09-22-2011
20110229013METHOD AND SYSTEM FOR MEASURING OBJECT - A method and system for measuring three-dimensional coordinates of an object are provided. The method includes: capturing images from a calibration point of known three-dimensional coordinates by two image-capturing devices disposed in a non-parallel manner, so as for a processing module connected to the image-capturing devices to calculate a beam confluence collinear function of the image-capturing devices; calibrating the image-capturing devices to calculate intrinsic parameters and extrinsic parameters of the image-capturing devices and calculate the beam confluence collinear function corresponding to the image-capturing devices; and capturing images from a target object by the image-capturing devices so as for the processing module to calculate three-dimensional coordinates of the object according to the beam confluence collinear function. In so doing, the method and system enable the three-dimensional coordinates and bearings of a target object to be calculated quickly, precisely, and conveniently. Hence, the method and system are applicable to various operating environments.09-22-2011
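Once both cameras are calibrated, the last step of the abstract amounts to intersecting the two viewing rays. The sketch below uses standard linear (DLT) triangulation with 3x4 projection matrices standing in for the patent's beam confluence collinear function; that substitution is an assumption of the sketch.

    import numpy as np

    def triangulate(P1, P2, uv1, uv2):
        """P1, P2: 3x4 projection matrices; uv1, uv2: (u, v) pixel coordinates of the same point."""
        u1, v1 = uv1
        u2, v2 = uv2
        A = np.stack([
            u1 * P1[2] - P1[0],
            v1 * P1[2] - P1[1],
            u2 * P2[2] - P2[0],
            v2 * P2[2] - P2[1],
        ])
        _, _, Vt = np.linalg.svd(A)       # least-squares solution of A X = 0
        X = Vt[-1]
        return X[:3] / X[3]               # homogeneous to Euclidean 3D coordinates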
20110229015METHOD AND APPARATUS FOR DETERMINING THE SURFACE PROFILE OF AN OBJECT - The present invention relates to a method, apparatus, computer code and algorithm for determining the surface profile of an object. The invention involves capturing three or four images of the object at different planes of which some of the images can be taken outside the depth of field of the optical system and some inside the depth of the field of the optical system. The invention may have particular application in instances of surface analysis and security applications under ambient lighting conditions.09-22-2011
20130121558Point Selection in Bundle Adjustment - In an embodiment, a method comprises receiving a set of three dimensional (3D) ….05-16-2013
20130121559MOBILE DEVICE WITH THREE DIMENSIONAL AUGMENTED REALITY - A method for determining an augmented reality scene by a mobile device includes estimating 3D geometry and lighting conditions of the sensed scene based on stereoscopic images captured by a pair of imaging devices. The device accesses intrinsic calibration parameters of a pair of imaging devices of the device independent of a sensed scene of the augmented reality scene. The device determines two dimensional disparity information of a pair of images from the device independent of a sensed scene of the augmented reality scene. The device estimates extrinsic parameters of a sensed scene by the pair of imaging devices, including at least one of rotation and translation. The device calculates a three dimensional image based upon a depth of different parts of the sensed scene based upon a stereo matching technique. The device incorporates a three dimensional virtual object in the three dimensional image to determine the augmented reality scene.05-16-2013
20130121560IMAGE PROCESSING DEVICE, METHOD OF PROCESSING IMAGE, AND IMAGE DISPLAY APPARATUS - According to an embodiment, an image processing device includes: a first acquiring unit, a second acquiring unit, a first setting unit, a second setting unit, a first calculating unit, and a second calculating unit. The first acquiring unit acquires a plurality of captured images by imaging a target object from a plurality of positions. The second acquiring unit acquires a provisional three-dimensional position and a provisional size. The first setting unit sets at least one search candidate point near the provisional three-dimensional position. The second setting unit sets a search window for each projection position where the search candidate point is projected, the search window having a size. The first calculating unit calculates an evaluation value that represents whether or not the target object is included inside the search window. The second calculating unit calculates a three-dimensional position of the target object based on the evaluation value.05-16-2013
20130121561Method, System and Computer Program Product for Detecting an Object in Response to Depth Information - First information is about respective depths of pixel coordinates within an image. Second information is about respective depths of the pixel coordinates within a ground plane. In response to comparing the first information against the second information, respective markings are generated to identify whether any one or more of the pixel coordinates within the image has significant protrusion from the ground plane. In response to a particular depth of a representative pixel coordinate within the image, a window of pixel coordinates is identified that is formed by different pixel coordinates and the representative pixel coordinate. In response to the respective markings, respective probabilities are computed for the pixel coordinates, so that the respective probability for the representative pixel coordinate is computed in response to the respective markings of all pixel coordinates within the window. In response to the respective probabilities, at least one object is detected within the image.05-16-2013
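A compact sketch of the marking and windowed-probability steps follows: pixels whose measured depth is significantly nearer than the expected ground-plane depth are marked as protruding, and each pixel's probability is the fraction of marked pixels inside a window around it. The margin, window size and box-filter implementation are placeholders chosen for this sketch.

    import numpy as np

    def protrusion_probability(depth, ground_depth, margin=0.2, win=15):
        """depth, ground_depth: same-sized arrays of depths per pixel coordinate."""
        marked = (ground_depth - depth) > margin        # significantly closer than the ground plane
        k = win // 2
        padded = np.pad(marked.astype(np.float32), k, mode="constant")
        S = np.zeros((padded.shape[0] + 1, padded.shape[1] + 1))
        S[1:, 1:] = padded.cumsum(axis=0).cumsum(axis=1)
        counts = S[win:, win:] - S[:-win, win:] - S[win:, :-win] + S[:-win, :-win]
        return counts / float(win * win)                # per-pixel probability from the window of markings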
20130121562Method, System and Computer Program Product for Identifying Locations of Detected Objects - First and second objects are detected within an image. The first object includes first pixel columns, and the second object includes second pixel columns. A rightmost one of the first pixel columns is adjacent to a leftmost one of the second pixel columns. A first equation is fitted to respective depths of the first pixel columns, and a first depth is computed of the rightmost one of the first pixel columns in response to the first equation. A second equation is fitted to respective depths of the second pixel columns, and a second depth is computed of the leftmost one of the second pixel columns in response to the second equation. The first and second objects are merged in response to the first and second depths being sufficiently similar to one another, and in response to the first and second equations being sufficiently similar to one another.05-16-2013
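The merge test can be pictured as below: fit a line to the per-column depths of each detected object, evaluate both fits at the shared boundary, and merge when the boundary depths and slopes agree. The linear model and the tolerance values are assumptions made for this sketch.

    import numpy as np

    def should_merge(depths_first, depths_second, depth_tol=0.5, slope_tol=0.05):
        """depths_first/depths_second: per-column depths of two adjacent detected objects."""
        n1, n2 = len(depths_first), len(depths_second)
        x1 = np.arange(n1)                          # columns of the first object
        x2 = np.arange(n1, n1 + n2)                 # columns of the adjacent second object
        a1, b1 = np.polyfit(x1, depths_first, 1)    # first equation: slope, intercept
        a2, b2 = np.polyfit(x2, depths_second, 1)   # second equation
        boundary = n1 - 0.5                         # between the rightmost and leftmost columns
        d1 = a1 * boundary + b1                     # first depth at the boundary
        d2 = a2 * boundary + b2                     # second depth at the boundary
        return abs(d1 - d2) < depth_tol and abs(a1 - a2) < slope_tol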
20130121563PRIORITIZED COMPRESSION FOR VIDEO - In one embodiment, a method of prioritized compression for 3D video wireless display is provided, the method comprising: inputting video data; abstracting the scene depth of the video data; estimating the foreground and background for each image of the video data; applying different kinds of compression to the foreground and background in each image; and outputting the processed video data. Thus, image quality is not affected by data loss during wireless transmission.05-16-2013
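One plausible realisation of the prioritised compression, assuming a per-frame foreground mask is already available, is to encode the foreground at a high JPEG quality and the background at a low one before transmission; OpenCV and the specific quality values are choices of this sketch, not of the embodiment.

    import cv2
    import numpy as np

    def compress_prioritized(frame, fg_mask, q_fg=90, q_bg=30):
        """frame: HxWx3 image; fg_mask: HxW boolean foreground mask."""
        fg = np.where(fg_mask[..., None], frame, 0)     # foreground kept, background zeroed
        bg = np.where(fg_mask[..., None], 0, frame)     # background kept, foreground zeroed
        ok_fg, fg_buf = cv2.imencode(".jpg", fg, [int(cv2.IMWRITE_JPEG_QUALITY), q_fg])
        ok_bg, bg_buf = cv2.imencode(".jpg", bg, [int(cv2.IMWRITE_JPEG_QUALITY), q_bg])
        assert ok_fg and ok_bg
        return fg_buf.tobytes(), bg_buf.tobytes()       # two streams, compressed differently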
20130121564POINT CLOUD DATA PROCESSING DEVICE, POINT CLOUD DATA PROCESSING SYSTEM, POINT CLOUD DATA PROCESSING METHOD, AND POINT CLOUD DATA PROCESSING PROGRAM - A point cloud data processing device is equipped with a non-plane area removing unit 101, a plane labeling unit 102, a contour calculating unit 103, and a point cloud data remeasurement request processing unit 106. The non-plane area removing unit 101 removes point cloud data relating to non-plane areas from point cloud data in which a two-dimensional image of an object is linked with data of three-dimensional coordinates of plural points that form the two-dimensional image. The plane labeling unit 102 adds labels for identifying planes with respect to the point cloud data in which the data of the non-plane areas are removed. The contour calculating unit 103 calculates a contour of the object by using local flat planes based on a local area that is connected with the labeled plane. The point cloud data remeasurement request processing unit 106 requests remeasurement of the point cloud data.05-16-2013
20090245624IMAGE MATCHING SYSTEM USING THREE-DIMENSIONAL OBJECT MODEL, IMAGE MATCHING METHOD, AND IMAGE MATCHING PROGRAM - Even when only a small number of reference images are available for each object, it is possible to rapidly search for a reference image stored in a database using an input image of an object imaged with a different pose and under a different illumination condition. A reference image matching result storage section (…).10-01-2009
20090214105Identity Document and Method for the Manufacture Thereof - Identity document comprising a data medium with data. These data comprise an image of a face. This image consists of two component images that are observed at different angles. By simultaneously viewing the two images, the person studying the identity document can obtain further information about the face. This is possible because the two images are applied at a relatively small angle of 5° to 20°.08-27-2009
20090226079IDENTIFICATION OF OBJECTS IN A 3D VIDEO USING NON/OVER REFLECTIVE CLOTHING - A method includes generating a depth map from at least one image, detecting objects in the depth map, and identifying anomalies in the objects from the depth map. Another method includes identifying at least one anomaly in an object in a depth map, and using the anomaly to identify future occurrences of the object. A system includes a three dimensional (3D) imaging system to generate a depth map from at least one image, an object detector to detect objects within the depth map, and an anomaly detector to detect anomalies in the detected objects, wherein the anomalies are logical gaps and/or logical protrusions in the depth map.09-10-2009
20090220145Target and three-dimensional-shape measurement device using the same - A target is set on a to-be-measured object and used for acquiring a reference value of point-cloud data. The target includes a small circle surrounded by a frame and containing the center of the target, a large circle surrounded by the frame and disposed concentrically with the small circle so as to surround the small circle, a low-luminance reflective region located between the frame and the large circle and having the lowest reflectivity, a high-luminance reflective region located between the large circle and the small circle and having the highest reflectivity, and an intermediate-luminance reflective region located inside the small circle and having an intermediate reflectivity which is higher than the reflectivity of the low-luminance reflective region and which is lower than the reflectivity of the high-luminance reflective region.09-03-2009
20090220143Method for measuring a shape anomaly on an aircraft structural panel and system therefor - The disclosed embodiments concern a method for measuring a shape anomaly on an aircraft structural panel, including the following operations: projecting a target pattern at the site of the anomaly on the panel; producing at least two images of the projected pattern; and processing the two images by stereocorrelation to obtain measurements of the anomaly. The disclosed embodiments also concern a system for implementing the method, including: a projection device for projecting a target pattern at the site of the anomaly on the panel; at least two imaging devices, each producing an image of the target pattern; and means for processing the target pattern images.09-03-2009
20090245623Systems and Methods for Gemstone Identification and Analysis - Items of jewelry having gemstones embedded therein are imaged and analyzed to determine the weights associated with the gemstones and, separately, with the precious metal in which the gemstones are encased, without having to remove the gemstones from the jewelry.10-01-2009
20100054579THREE-DIMENSIONAL SURFACE GENERATION METHOD - The present invention provides a three-dimensional surface generation method that directly and efficiently generates a three-dimensional surface of the object surface from multiple images capturing a target object.03-04-2010
20120141016VIRTUAL VIEWPOINT IMAGE SYNTHESIZING METHOD AND VIRTUAL VIEWPOINT IMAGE SYNTHESIZING SYSTEM - Provided is a virtual viewpoint image synthesizing method in which a virtual viewpoint image viewed from a virtual viewpoint is synthesized based on image information obtained from a plurality of viewpoints. The virtual viewpoint image is synthesized through a reference images obtaining step, a depth maps generating step, an up-sampling step, a virtual viewpoint information obtaining step, and a virtual viewpoint image synthesizing step.06-07-2012
20120195494PSEUDO 3D IMAGE GENERATION DEVICE, IMAGE ENCODING DEVICE, IMAGE ENCODING METHOD, IMAGE TRANSMISSION METHOD, IMAGE DECODING DEVICE, AND IMAGE DECODING METHOD - A pseudo 3D image generation device includes frame memories that store a plurality of basic depth models used for estimating depth data based on a non-3D image signal and generating a pseudo 3D image signal; a depth model combination unit that combines the plurality of basic depth models for generating a composite depth model based on a control signal indicating composite percentages for combining the plurality of basic depth models; an addition unit that generates depth estimation data from the non-3D image signal and the composite depth models; and a texture shift unit that shifts the texture of the non-3D image for generating the pseudo 3D image signal.08-02-2012
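The two key operations (blending basic depth models by their composite percentages, then shifting the texture in proportion to the estimated depth to obtain the second view) can be sketched as follows; the blending weights, shift scaling and hole handling are placeholders, not the device's actual processing.

    import numpy as np

    def composite_depth(models, percentages):
        w = np.asarray(percentages, dtype=np.float32)
        w = w / w.sum()
        return np.tensordot(w, np.stack(models), axes=1)      # weighted blend of the basic depth models

    def texture_shift(image, depth, max_shift=8):
        h, w = depth.shape
        shifted = np.zeros_like(image)
        shift = np.round(depth / (depth.max() + 1e-6) * max_shift).astype(int)
        for y in range(h):
            xs = np.clip(np.arange(w) + shift[y], 0, w - 1)   # per-pixel horizontal displacement
            shifted[y, xs] = image[y]                         # holes left by the forward mapping are ignored here
        return shifted                                        # pseudo second-eye view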
20120195493STEREO MATCHING METHOD BASED ON IMAGE INTENSITY QUANTIZATION - A stereo matching method based on image intensity quantization is disclosed. The method includes several steps. First, an image pair of an object is provided to a computer for image intensity quantization of the image pair, yielding a quantization result. Then, according to the quantization result, a first extracted image pair is generated and used to obtain a first disparity map. A second extracted image pair is generated similarly to obtain a second disparity map. Next, the two disparity maps are compared with each other to obtain image error data. When an error contained in the image error data is smaller than or equal to an error threshold value, the computer outputs the second disparity map. Moreover, the accuracy of the disparity maps is improved by iterative processing. Therefore, the amount of information to be processed is minimized and the efficiency of data access/transmission is improved.08-02-2012
20090116733Systems and Methods for Creating and Viewing Three Dimensional Virtual Slides - Systems and methods for creating and viewing three dimensional virtual slides are provided. One or more microscope slides are positioned in an image acquisition device that scans the specimens on the slides and makes two dimensional images at a medium or high resolution. These two dimensional images are provided to an image viewing workstation where they are viewed by an operator who pans and zooms the two dimensional image and selects an area of interest for scanning at multiple depth levels (Z-planes). The image acquisition device receives a set of parameters for the multiple depth level scan, including a location and a depth. The image acquisition device then scans the specimen at the location in a series of Z-plane images, where each Z-plane image corresponds to a depth level portion of the specimen within the depth parameter.05-07-2009
20090116732METHODS AND SYSTEMS FOR CONVERTING 2D MOTION PICTURES FOR STEREOSCOPIC 3D EXHIBITION - The present invention discloses methods of digitally converting 2D motion pictures or any other 2D image sequences to stereoscopic 3D image data for 3D exhibition. In one embodiment, various types of image data cues can be collected from 2D source images by various methods and then used for producing two distinct stereoscopic 3D views. Embodiments of the disclosed methods can be implemented within a highly efficient system comprising both software and computing hardware. The architectural model of some embodiments of the system is equally applicable to a wide range of conversion, re-mastering and visual enhancement applications for motion pictures and other image sequences, including converting a 2D motion picture or a 2D image sequence to 3D, re-mastering a motion picture or a video sequence to a different frame rate, enhancing the quality of a motion picture or other image sequences, or other conversions that facilitate further improvement in visual image quality within a projector to produce the enhanced images.05-07-2009
20100166294SYSTEM AND METHOD FOR THREE-DIMENSIONAL ALIGNMENT OF OBJECTS USING MACHINE VISION - This invention provides a system and method for determining the three-dimensional alignment of a modeled object or scene. After calibration, a 3D (stereo) sensor system views the object to derive a runtime 3D representation of the scene containing the object. Rectified images from each stereo head are preprocessed to enhance their edge features. A stereo matching process is then performed on at least two (a pair) of the rectified preprocessed images at a time by locating a predetermined feature on a first image and then locating the same feature in the other image. 3D points are computed for each pair of cameras to derive a 3D point cloud. The 3D point cloud is generated by transforming the 3D points of each camera pair into the world 3D space from the world calibration. The amount of 3D data from the point cloud is reduced by extracting higher-level geometric shapes (HLGS), such as line segments. Found HLGS from runtime are corresponded to HLGS on the model to produce candidate 3D poses. A coarse scoring process prunes the number of poses. The remaining candidate poses are then subjected to a further more-refined scoring process. These surviving candidate poses are then verified by, for example, fitting found 3D or 2D points of the candidate poses to a larger set of corresponding three-dimensional or two-dimensional model points, whereby the closest match is the best refined three-dimensional pose.07-01-2010
20100002934Three-Dimensional Motion Capture - In one general aspect, a method is described. The method includes generating a positional relationship between one or more support structures having at least one motion capture mark and at least one virtual structure corresponding to geometry of an object to be tracked and positioning the support structures on the object to be tracked. The support structures have sufficient rigidity that, if there are multiple marks, the marks on each support structure maintain substantially fixed distances from each other in response to movement by the object. The method also includes determining an effective quantity of ray traces between one or more camera views and one or more marks on the support structures, and estimating an orientation of the virtual structure by aligning the determined effective quantity of ray traces with a known configuration of marks on the support structures.01-07-2010
20080317334Method and Microscopy Device for the Deflectometric Detection of Local Gradients and the Three-Dimensional Shape of an Object - The invention relates to a method and an apparatus for high-resolution deflectometric determination of the local slope and of the three-dimensional shape of an object (…).12-25-2008
20100150431Method of Change Detection for Building Models - Lidar point clouds and multi-spectral aerial images are integrated for change detection of building models. This reduces errors caused by ground areas and vegetation areas. Manifold change types are detected with low cost, high accuracy, and high efficiency.06-17-2010
20120195492Method and apparatus for generating a dense depth map using an adaptive joint bilateral filter - A method and apparatus for generating a dense depth map. In one embodiment, the method includes applying a joint bilateral filter to a first depth map to generate a second depth map, where at least one filter weight of the joint bilateral filter is adapted based upon content of an image represented by the first depth map, and the second depth map has a higher resolution than the first depth map.08-02-2012
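For context, a plain (non-adaptive) joint bilateral upsampling pass is sketched below: the low-resolution depth is filtered with spatial weights and with range weights taken from the high-resolution image, so image edges steer the upsampled depth. The kernel radius and sigmas are placeholders, and the content-adaptive weighting of the abstract is not reproduced.

    import numpy as np

    def joint_bilateral_upsample(depth_lo, image_hi, radius=3, sigma_s=2.0, sigma_r=10.0):
        h, w = image_hi.shape[:2]
        guide = image_hi.astype(np.float32)
        if guide.ndim == 3:
            guide = guide.mean(axis=2)                  # luminance as the range guide
        sy = depth_lo.shape[0] / float(h)
        sx = depth_lo.shape[1] / float(w)
        out = np.zeros((h, w), dtype=np.float32)
        for y in range(h):
            for x in range(w):
                num = den = 0.0
                for dy in range(-radius, radius + 1):
                    for dx in range(-radius, radius + 1):
                        yy, xx = y + dy, x + dx
                        if yy < 0 or yy >= h or xx < 0 or xx >= w:
                            continue
                        ly = min(int(yy * sy), depth_lo.shape[0] - 1)
                        lx = min(int(xx * sx), depth_lo.shape[1] - 1)
                        ws = np.exp(-(dy * dy + dx * dx) / (2 * sigma_s ** 2))
                        wr = np.exp(-((guide[y, x] - guide[yy, xx]) ** 2) / (2 * sigma_r ** 2))
                        num += ws * wr * depth_lo[ly, lx]
                        den += ws * wr
                out[y, x] = num / den if den > 0 else 0.0
        return out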
20120141014COLOR BALANCING FOR PARTIALLY OVERLAPPING IMAGES - When photographs are to be combined into a single image, haze correction and/or color balancing may be performed. The photographs may be analyzed and left-clipped in order to darken the photographs and to increase the density of pixels in the low-luminosity region, thereby decreasing the perception of haze. When the photographs are combined into one continuous image, tie points are selected that lie in regions where the photographs overlap. The tie points may be selected based on visual similarity of the photographs in the region around the tie point, using a variety of algorithms. Functions are then chosen to generate saturation and luminosity values that minimize, at the tie points, the cost of using the generated values as opposed to the actual saturation and luminosity values. These functions are then used to generate saturation and luminosity values for the full image.06-07-2012
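Reducing the idea to per-photo luminosity gains gives the toy least-squares fit below: the fitted gains make overlapping photos agree at their tie points while a small regularisation keeps each gain close to 1 (the original values). The single-gain model and the regularisation weight are simplifications of the cost described above, not the method's actual functions.

    import numpy as np

    def fit_gains(n_photos, tie_points, lam=0.1):
        """tie_points: list of (photo_a, luminosity_a, photo_b, luminosity_b) tuples."""
        rows, rhs = [], []
        for a, la, b, lb in tie_points:
            r = np.zeros(n_photos)
            r[a], r[b] = la, -lb                # gain_a * la should match gain_b * lb at the tie point
            rows.append(r)
            rhs.append(0.0)
        for i in range(n_photos):               # regularisation: keep each gain near 1
            r = np.zeros(n_photos)
            r[i] = lam
            rows.append(r)
            rhs.append(lam)
        gains, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
        return gains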
20100158354METHOD OF CREATING ANIMATABLE DIGITAL CLONE FROM MULTI-VIEW IMAGES - The present invention relates to a method of creating an animatable digital clone that includes receiving input multi-view images of an actor captured by at least two cameras and reconstructing a three-dimensional appearance therefrom, selectively accepting shape information based on a probability of photo-consistency in the input multi-view images obtained from the reconstruction, and transferring a mesh topology of a reference human body model onto a shape of the actor obtained from the reconstruction. The method further includes generating an initial human body model of the actor via transfer of the mesh topology utilizing sectional shape information of the actor's joints, and generating a genuine human body model of the actor by learning genuine behavioral characteristics of the actor by applying the initial human body model to multi-view posture learning images where performance of a predefined motion by the actor is recorded.06-24-2010
20100158351COMBINED EXCHANGE OF IMAGE AND RELATED DATA - A method of combined exchange of image data and further data being related to the image data, the image data being represented by a first two-dimensional matrix of image data elements and the further data being represented by a second two-dimensional matrix of further data elements is disclosed. The method comprises combining the first two-dimensional matrix and the second two-dimensional matrix into a combined two-dimensional matrix of data elements.06-24-2010
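As a trivial illustration, assuming the image is a single-channel matrix and the further data is a same-sized per-pixel matrix (for example a depth map), the two can be placed side by side in one combined matrix and split again on reception; the side-by-side layout is one possible choice, not the one prescribed by the abstract.

    import numpy as np

    def combine(image, further):
        return np.hstack([image, further])          # one combined two-dimensional matrix

    def split(combined):
        w = combined.shape[1] // 2
        return combined[:, :w], combined[:, w:]     # recover image data and further data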
20100158352APPARATUS AND METHOD FOR REAL-TIME CAMERA TRACKING - A camera tracking apparatus for calculating in real time feature information and camera motion information based on an input image includes a global camera tracking unit for computing a global feature map having feature information on entire feature points; a local camera tracking unit for computing in real time a local feature map having feature information on a part of the entire feature points; a global feature map update unit for receiving the computed feature information from the global and local camera tracking units to update the global feature map; and a local feature selection unit for receiving the updated feature information from the global feature map update unit to select in real time the feature points contained in the local feature map. The local camera tracking unit computes the local feature map for each frame, while the global camera tracking unit computes the global feature map over frames.06-24-2010
20090116730THREE-DIMENSIONAL DIRECTION DETECTING DEVICE AND METHOD FOR USING THE SAME - A three-dimensional direction detecting device, including: an electromagnetic radiation source and a sensing module. The electromagnetic radiation source is used to generate electromagnetic radiations. The sensing module has a plurality of sensing elements for receiving different radiation energies generated by the electromagnetic radiations from different spatial angles. Therefore, the sensing elements respectively receive the different radiation energies from different spatial direction angles generated by the electromagnetic radiation source relative to the sensing elements, so that the value of a spatial direction angle of the electromagnetic radiation source relative to the sensing module is obtained according to the magnitude relationship of the radiation energies received by the sensing module.05-07-2009
20090116728Method and System for Locating and Picking Objects Using Active Illumination - A method and system determines a 3D pose of an object in a scene. Depth edges are determined from a set of images acquired of a scene including multiple objects while varying illumination in the scene. The depth edges are linked to form contours. The images are segmented into regions according to the contours. An occlusion graph is constructed using the regions. The occlusion graph includes a source node representing an unoccluded region of an unoccluded object in scene. The contour associated with the unoccluded region is compared with a set of silhouettes of the objects, in which each silhouette has a known pose. The known pose of a best matching silhouette is selected as the pose of the unoccluded object.05-07-2009
20100189342SYSTEM, METHOD, AND APPARATUS FOR GENERATING A THREE-DIMENSIONAL REPRESENTATION FROM ONE OR MORE TWO-DIMENSIONAL IMAGES - In a system and method for generating a 3-dimensional representation of a portion of an organism, training data are collected, wherein the training data include a first set of training data and a second set of training data. At least one statistical model having a set of parameters is built using the training data. The at least one statistical model is compared to a 2-dimensional image of the portion of the organism. At least one parameter of the set of parameters of the statistical model is modified based on the comparison of the at least one statistical model to the 2-dimensional image of the portion of the organism. The modified set of parameters representing the portion of the organism is passed through the statistical model.07-29-2010
20100189341INTRA-ORAL MEASUREMENT DEVICE AND INTRA-ORAL MEASUREMENT SYSTEM - The present invention aims to provide an intra-oral measurement device and an intra-oral measurement system capable of measuring an inside of an oral cavity at high accuracy without increasing a size of the device, and includes a light projecting unit for irradiating a measuring object including at least a tooth within an oral cavity with light, a lens system unit for collecting light reflected by the measuring object, a focal position varying mechanism for changing a focal position of the light collected by the lens system unit, and an imaging unit for imaging light passed through the lens system unit.07-29-2010
20100189343METHOD AND APPARATUS FOR STORING 3D INFORMATION WITH RASTER IMAGERY - The present invention meets the above-stated needs by providing a method and apparatus that allows X parallax information to be stored within the pixel information of an image. Consequently, only one image need be stored, whether it is a mosaic of a number of images, a single image or a partial image for proper reconstruction. To accomplish this, the present invention stores an X parallax value between the stereoscopic images with the typical pixel information by, e.g., increasing the pixel depth.07-29-2010
20110235899STEREOSCOPIC IMAGE PROCESSING DEVICE, METHOD, RECORDING MEDIUM AND STEREOSCOPIC IMAGING APPARATUS - An apparatus (09-29-2011
20100239158FINE STEREOSCOPIC IMAGE MATCHING AND DEDICATED INSTRUMENT HAVING A LOW STEREOSCOPIC COEFFICIENT - The invention relates to a method and system for the acquisition and correlation matching of points belonging to a stereoscopic pair of images, whereby the pair is formed by a first image and a second image representing a scene. According to the invention, the two images of the pair are acquired with a single acquisition instrument.09-23-2010
20130216123Design and Optimization of Plenoptic Imaging Systems - The spatial resolution of captured plenoptic images is enhanced. In one aspect, the plenoptic imaging process is modeled by a pupil image function (PIF), and a PIF inversion process is applied to the captured plenoptic image to produce a better resolution estimate of the object.08-22-2013
20130216125Resolution-Enhanced Plenoptic Imaging System - The spatial resolution of captured plenoptic images is enhanced. In one aspect, the plenoptic imaging process is modeled by a pupil image function (PIF), and a PIF inversion process is applied to the captured plenoptic image to produce a better resolution estimate of the object.08-22-2013
20110058733METHOD OF COMPILING THREE-DIMENSIONAL OBJECT IDENTIFYING IMAGE DATABASE, PROCESSING APPARATUS AND PROCESSING PROGRAM - Provided are a method of generating a low-capacity model capable of identifying an object with high accuracy, and creating an image database using the model, a processing program for executing the method, and a processing apparatus that executes the process. The method for compiling an image database that is used for three-dimensional object recognition includes a step of extracting vectors as local descriptors from a plurality of images, each showing a three-dimensional object as seen from different viewpoints, a model creating step of evaluating the degree of contribution of each local descriptor to identification of the three-dimensional object, and creating a three-dimensional object model systematized to ensure approximate nearest neighbor search using the individual vectors which satisfy criteria, and a registration step of adding an object identifier to the created object model and registering the object model into an image database. In the model creating step, the local descriptor to be used in the model is selected based on the contributions of the individual vectors which are evaluated in such a way that when a vector extracted from one image of one three-dimensional object is an approximate nearest neighbor to another vector relating to an image of the three-dimensional object seen from a different viewpoint, the vector has a positive contribution, whereas when the vector is an approximate nearest neighbor to another vector relating to a different three-dimensional object, the vector has a negative contribution. The processing program is designed to execute the method, and the processing apparatus executes the process.03-10-2011
20120141015VANISHING POINT ESTIMATION SYSTEM AND METHODS - System and methods for estimating a vanishing point within an image, comprising: programming executable on a processor for computing line segment estimates of one or more lines in said image, wherein one or more of the lines comprise multiple line segments. The one or more lines having multiple line segments are represented as a single least-mean-square-error (LMSE) fitted line, and the one or more lines are intersected to locate a vanishing point in a density space.06-07-2012
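As a rough illustration of the pipeline in entry 20120141015, the sketch below fits each group of collinear segment points with a single least-mean-square-error line and then intersects the fitted lines in a least-squares sense to obtain a vanishing point. The total-least-squares fit and the helper names are assumptions made for the sketch, not the patent's own formulation.

import numpy as np

def fit_line_lmse(points):
    # Fit a line a*x + b*y + c = 0 (with a^2 + b^2 = 1) to 2D points by
    # total least squares; the normal is the direction of least variance.
    pts = np.asarray(points, dtype=np.float64)
    centroid = pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts - centroid)
    a, b = vt[-1]
    c = -vt[-1] @ centroid
    return a, b, c

def estimate_vanishing_point(lines):
    # Least-squares intersection of lines given as (a, b, c) tuples:
    # minimizes the sum of squared point-to-line distances.
    A = np.array([[a, b] for a, b, _ in lines])
    d = np.array([-c for _, _, c in lines])
    vp, *_ = np.linalg.lstsq(A, d, rcond=None)
    return vp   # (x, y) of the estimated vanishing point

Segments judged to belong to the same physical line would be pooled into one point set before calling fit_line_lmse, so each pooled line contributes a single fitted line to the intersection step.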
20100128973Stereo image processing apparatus, stereo image processing method and computer-readable recording medium - A stereo image processing apparatus 05-27-2010
20100040280Enhanced ghost compensation for stereoscopic imagery - A method and apparatus for reduction of ghost images in stereoscopic images. This disclosure provides a ghost compensation apparatus and methods that detect affected regions where ghosting may occur in a stereoscopic image, yet where conventional ghost compensation techniques are ineffective because there is insufficient luminance overhead to conduct a conventional ghost compensation process. Luminance values are modified in such regions prior to applying a ghost compensation process.02-18-2010
20090110266STEREOSCOPIC IMAGE PROCESSING DEVICE AND METHOD, STEREOSCOPIC IMAGE PROCESSING PROGRAM, AND RECORDING MEDIUM HAVING THE PROGRAM RECORDED THEREIN - The present invention is directed to a stereo image processing apparatus adapted for generating stereo images which permit, at a glance, discrimination of a suitable observation method. This stereo image processing apparatus includes an image input unit.04-30-2009
20100195898METHOD AND APPARATUS FOR IMPROVING QUALITY OF DEPTH IMAGE - A method and apparatus for enhancing quality of a depth image are provided. A method for enhancing quality of a depth image includes: receiving a multi-view image including a left image, a right image, and a center image; receiving a current depth image frame and a previous depth image frame of the current depth image frame; setting an intensity difference value corresponding to a specific disparity value of the current depth image frame by using the current depth image frame and the previous depth image frame; setting a disparity value range including the specific disparity value; and setting an intensity difference value corresponding to the disparity value range of the current depth image frame by using the multi-viewpoint image.08-05-2010
201101105813D OBJECT RECOGNITION SYSTEM AND METHOD - Disclosed herein is a three-dimensional (3D) object recognition system and method. The 3D object recognition system includes a storage unit for storing an extended randomized forest in which a plurality of randomized trees is included and each of the randomized trees includes a plurality of leaf nodes, training means for extracting a plurality of keypoints from a training target object image, and calculating and storing an object recognition posterior probability distribution and training target object-based keypoint matching posterior probability distributions, and matching means for extracting a plurality of keypoints from a matching target object image, matching the extracted keypoints to a plurality of leaf nodes, recognizing an object using the object recognition posterior probability distributions, and matching the keypoints to keypoints of the recognized object using training target object-based keypoint matching posterior probability distributions stored at the matched leaf nodes.05-12-2011
20110026807ADJUSTING PERSPECTIVE AND DISPARITY IN STEREOSCOPIC IMAGE PAIRS - A system and method for adjusting perspective and disparity in a stereoscopic image pair using range information includes receiving the stereoscopic image pair representing a scene; identifying range information associated with the stereoscopic image pair and including distances of pixels in the scene from a reference location; generating a cluster map based at least upon an analysis of the range information and the stereoscopic images, the cluster map grouping pixels of the stereoscopic images by their distances from a viewpoint; identifying objects and background in the stereoscopic images based at least upon an analysis of the cluster map and the stereoscopic images; generating a new stereoscopic image pair at least by adjusting perspective and disparity of the object and the background in the stereoscopic image pair, the adjusting occurring based at least upon an analysis of the range information; and storing the new generated stereoscopic image pair in a processor-accessible memory system.02-03-2011
20110026808APPARATUS, METHOD AND COMPUTER-READABLE MEDIUM GENERATING DEPTH MAP - Disclosed are an apparatus, a method and a computer-readable medium automatically generating a depth map corresponding to each two-dimensional (2D) image in a video. The apparatus includes an image acquiring unit to acquire a plurality of 2D images that are temporally consecutive in an input video, a saliency map generator to generate at least one saliency map corresponding to a current 2D image among the plurality of 2D images based on a Human Visual Perception (HVP) model, a saliency-based depth map generator, a three-dimensional (3D) structure matching unit to calculate matching scores between the current 2D image and a plurality of 3D typical structures that are stored in advance, and to determine a 3D typical structure having a highest matching score among the plurality of 3D typical structures to be a 3D structure of the current 2D image, a matching-based depth map generator; a combined depth map generator to combine the saliency-based depth map and the matching-based depth map and to generate a combined depth map, and a spatial and temporal smoothing unit to spatially and temporally smooth the combined depth map.02-03-2011
20090324058Use of geographic coordinates to identify objects in images - A method and device are disclosed. In one embodiment the method includes determining the location of a camera when the camera captures an image. The method continues by determining the viewable subject area of the image. Additionally, the method determines the location of one or more objects at the time the image is taken. Finally, upon making these determinations, the method concludes by identifying each of the one or more objects as being in the image when the location of each of the one or more objects is calculated to have been within the viewable subject area of the image at the time the image was taken.12-31-2009
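Entry 20090324058 decides whether a geo-located object was inside the viewable subject area when the image was taken. A hedged sketch of one way to model that area, as a flat-earth wedge defined by camera heading, field of view and maximum range, is shown below; all parameter names and the equirectangular approximation are illustrative assumptions.

import math

def object_in_view(cam_lat, cam_lon, heading_deg, fov_deg, max_range_m,
                   obj_lat, obj_lon):
    # Rough check that an object lies inside the camera's viewing wedge.
    # Heading is degrees clockwise from north; local flat-earth approximation.
    dlat = math.radians(obj_lat - cam_lat)
    dlon = math.radians(obj_lon - cam_lon)
    north = dlat * 6371000.0
    east = dlon * 6371000.0 * math.cos(math.radians(cam_lat))

    dist = math.hypot(north, east)
    if dist > max_range_m:
        return False
    bearing = math.degrees(math.atan2(east, north)) % 360.0
    # Smallest signed angle between the object's bearing and the heading.
    diff = (bearing - heading_deg + 180.0) % 360.0 - 180.0
    return abs(diff) <= fov_deg / 2.0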
20080298672SYSTEM AND METHOD FOR LOCATING A THREE-DIMENSIONAL OBJECT USING MACHINE VISION - This invention provides a system and method for determining position of a viewed object in three dimensions by employing 2D machine vision processes on each of a plurality of planar faces of the object, and thereby refining the location of the object. First a rough pose estimate of the object is derived. This rough pose estimate can be based upon predetermined pose data, or can be derived by acquiring a plurality of planar face poses of the object (using, for example, multiple cameras) and correlating the corners of the trained image pattern, which have known coordinates relative to the origin, to the acquired patterns. Once the rough pose is achieved, this is refined by defining the pose as a quaternion (a, b, c and d) for rotation and three variables (x, y, z) for translation and employing an iterative weighted least-squares error calculation to minimize the error between the edgelets of the trained model image and the acquired runtime edgelets. The overall, refined/optimized pose estimate incorporates data from each of the cameras' acquired images. Thereby, the estimate minimizes the total error between the edgelets of each camera's/view's trained model image and the associated camera's/view's acquired runtime edgelets. A final transformation of trained features relative to the runtime features is derived from the iterative error computation.12-04-2008
20130129190Model-Based Stereo Matching - Model-based stereo matching from a stereo pair of images of a given object, such as a human face, may result in a high quality depth map. Integrated modeling may combine coarse stereo matching of an object with details from a known 3D model of a different object to create a smooth, high quality depth map that captures the characteristics of the object. A semi-automated process may align the features of the object and the 3D model. A fusion technique may employ a stereo matching confidence measure to assist in combining the stereo results and the roughly aligned 3D model. A normal map and a light direction may be computed. In one embodiment, the normal values and light direction may be used to iteratively perform the fusion technique. A shape-from-shading technique may be employed to refine the normals implied by the fusion output depth map and to bring out fine details. The normals may be used to re-light the object from different light positions.05-23-2013
20130129192RANGE MAP DETERMINATION FOR A VIDEO FRAME - A method for determining a range map for a particular video frame from a digital video comprising: determining a set of extrinsic parameters and one or more intrinsic parameters for each video frame. A set of candidate video frames are defined and an image similarity score for each candidate video frame providing an indication of the visual similarity. The image similarity scores are compared to a predefined threshold to determine a subset of the candidate video frames. A position difference score is determined for each video frame in the determined subset responsive to the extrinsic parameters, and the video frame having the largest position difference score is selected. The range map is determined responsive to disparity values representing a displacement between corresponding image pixels in the particular video frame and the selected video frame.05-23-2013
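A small sketch of the frame-selection logic described in entry 20130129192: candidate frames are first filtered by an image similarity threshold, and the survivor with the largest camera-position difference is then paired with the particular frame for disparity estimation. The dictionary keys and the use of camera-centre distance as the position difference score are assumptions made for illustration.

import numpy as np

def select_partner_frame(target_center, candidates, similarity_threshold):
    # candidates: list of dicts with keys 'center' (camera position from the
    # extrinsics) and 'similarity' (image similarity score to the target
    # frame). Returns the candidate used for stereo, or None.
    passing = [c for c in candidates if c['similarity'] >= similarity_threshold]
    if not passing:
        return None
    # Position difference score: distance between camera centres.
    scores = [np.linalg.norm(np.asarray(c['center']) - np.asarray(target_center))
              for c in passing]
    return passing[int(np.argmax(scores))]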
20130129193FORMING A STEREOSCOPIC IMAGE USING RANGE MAP - A method for forming a stereoscopic image from a main image of a scene captured from a main image viewpoint including one or more foreground objects, together with a main image range map and a background image. A first-eye image is determined corresponding to a first-eye viewpoint and a second-eye image is determined corresponding to a second-eye viewpoint. At least one of the first-eye image and the second-eye image is determined by warping the main image to the associated viewpoint, wherein the warped main image includes one or more holes corresponding to scene content that was occluded in the main image; warping the background image to the associated viewpoint; and determining pixel values to fill the one or more holes in the warped main image using pixel values at corresponding pixel locations in the warped background image; and forming a stereoscopic image including the first-eye image and the second-eye image.05-23-2013
20130129194METHODS AND SYSTEMS OF MERGING DEPTH DATA FROM A PLURALITY OF DISPARITY MAPS - A method of merging a plurality of disparity maps. The method comprises calculating a plurality of disparity maps each from images captured by another of a plurality of pairs of image sensors having stereoscopic fields of view (SFOVs) with at least one overlapping portion, the SFOVs covering a scene with a plurality of objects, identifying at least one of the plurality of objects in the at least one overlapping portion, the at least one object being mapped in each disparity map, calculating accuracy of disparity values depicting the object in each disparity map, merging depth data from the plurality of disparity maps according to the accuracy so as to provide a combined depth map wherein disparity values of the object are calculated according to one of the plurality of disparity maps, and outputting the depth data.05-23-2013
20090110267Automated texture mapping system for 3D models - A camera pose may be determined automatically and is used to map texture onto a 3D model based on an aerial image. In one embodiment, an aerial image of an area is first determined. A 3D model of the area is also determined, but does not have texture mapped on it. To map texture from the aerial image onto the 3D model, a camera pose is determined automatically. Features of the aerial image and 3D model may be analyzed to find corresponding features in the aerial image and the 3D model. In one example, a coarse camera pose estimation is determined that is then refined into a fine camera pose estimation. The fine camera pose estimation may be determined based on the analysis of the features. When the fine camera pose is determined, it is used to map texture onto the 3D model based on the aerial image.04-30-2009
20100296726HIGH-RESOLUTION OPTICAL DETECTION OF THE THREE-DIMENSIONAL SHAPE OF BODIES - In a cost-efficient method and arrangement for 3D digitization of bodies and body parts, which produces dense and exact spatial coordinates despite imprecise optics and mechanics, the body to be digitized is placed on a photogrammetrically marked surface, a photogrammetrically marked band is fitted to the body or body part to be digitized, and a triangulation arrangement comprised of a camera and a light pattern projector is moved on a path around the body. By a photogrammetric evaluation of the photogrammetric marks of the surface and the band situated in the image field of the camera, and of the light traces of the light projector on the marked surface and the marked band, all unknown internal and external parameters of the triangulation arrangement are determined, and the absolute spatial coordinates of the body or body part are established from the light traces on the non-marked body with high point density and high precision without any separate calibration methods.11-25-2010
20100296725DEVICE AND METHOD FOR OBTAINING A THREE-DIMENSIONAL TOPOGRAPHY - In a device for obtaining a three-dimensional topography of a measured object, a center axis of an illumination system is situated at an angle with respect to a recording direction of a 2D camera, and the illumination system generates a focal plane on a predetermined area of the measured object, the predetermined area being smaller than a recording area of the 2D camera. The measured object is movable relative to the 2D camera and relative to the illumination system with the aid of a movement device. The 2D camera records multiple images of the measured object from various positions which are occupied due to the movement of the movement device.11-25-2010
20100303338Digital Video Content Fingerprinting Based on Scale Invariant Interest Region Detection with an Array of Anisotropic Filters - Video sequence processing is described with various filtering rules applied to extract dominant features for content based video sequence identification. Active regions are determined in video frames of a video sequence. Video frames are selected in response to temporal statistical characteristics of the determined active regions. A two pass analysis is used to detect a set of initial interest points and interest regions in the selected video frames to reduce the effective area of images that are refined by complex filters that provide accurate region characterizations resistant to image distortion for identification of the video frames in the video sequence. Extracted features and descriptors are robust with respect to image scaling, aspect ratio change, rotation, camera viewpoint change, illumination and contrast change, video compression/decompression artifacts and noise. Compact, representative signatures are generated for video sequences to provide effective query video matching and retrieval in a large video database.12-02-2010
20100303339System and Method for Initiating Actions and Providing Feedback by Pointing at Object of Interest - A system and method are described for compiling feedback into command statements that relate to applications or services associated with spatial objects or features, pointing at such a spatial object or feature in order to identify the object of interest, and executing the command statements on a system server and attaching feedback information to the representation of this object or feature in a database of the system server.12-02-2010
20100310153ENHANCED IMAGE IDENTIFICATION - A method for deriving a representation of an image is described. The method involves processing signals corresponding to the image. A three dimensional representation of the image is derived. The three dimensional representation of the image is used to derive the representation of the image. In one embodiment, each line of the image is defined by a first parameter (d) and a second parameter (θ), and a position on each line is defined by a third parameter (t), and the three dimensional representation is parameterised by the first, second and third parameters. A set of values is extracted from the three dimensional representation at a value of the first parameter, and a functional is applied along lines, or parts of lines, of the extracted set of values, the lines extending along values of the second or third parameter.12-09-2010
20130136336IMAGE PROCESSING APPARATUS AND CONTROLLING METHOD FOR IMAGE PROCESSING APPARATUS - According to one embodiment, an image processing apparatus includes, a composition estimation module configured to estimate a composition from a two-dimensional image, an inmost color determination module configured to determine an inmost color based on the estimated composition and the two-dimensional image, a first depth generator configured to generate a first depth for each of multiple regions in the two-dimensional image based on the inmost color, and an image processor configured to convert the two-dimensional image into a three-dimensional image using the first depth.05-30-2013
20130136339SYSTEM FOR REAL-TIME STEREO MATCHING - A system for real-time stereo matching is provided, which provides improved stereo matching speed and rate by gradually optimizing a disparity range used in the stereo matching based on the stereo matching result of the previous frame image and thus reducing unnecessary matching computations.05-30-2013
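One plausible reading of entry 20130136339 is that the per-pixel disparity search interval is re-centred on the previous frame's result so that far fewer candidate disparities need to be tested. A minimal sketch, assuming integer disparities and a fixed margin (both assumptions), is given below.

import numpy as np

def narrowed_disparity_range(prev_disparity, margin=4, d_min=0, d_max=64):
    # Per-pixel search interval centred on the previous frame's disparity.
    # Pixels then only test disparities inside [low, high] instead of the
    # full [d_min, d_max].
    low = np.clip(prev_disparity - margin, d_min, d_max).astype(np.int32)
    high = np.clip(prev_disparity + margin, d_min, d_max).astype(np.int32)
    return low, high

A full matcher would evaluate its cost function only for disparities between low and high at each pixel, falling back to the full range wherever the previous estimate is judged unreliable.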
20130136340ARITHMETIC PROCESSING DEVICE - An image processor sets a first predetermined number of first blocks at first intervals in a second image, calculates a first evaluated value, selects one of the first blocks, and calculates a first parallax between the selected first block and the matching target block. An image processor sets a second predetermined number of second blocks at second intervals in a second image, calculates a second evaluated value, selects one of the second blocks, and calculates a second parallax between the selected second block and the matching target block. A controller determines, based on the first evaluated value and the second evaluated value and based on the first parallax and the second parallax, whether or not to employ one of the first parallax and the second parallax.05-30-2013
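The two block sets in entry 20130136340 can be pictured as two sampled searches along the epipolar line, one coarse and one fine, each returning a parallax and an evaluated (matching-cost) value; the controller then keeps the better of the two. The sketch below uses a sum-of-absolute-differences cost and specific step/count values purely as assumptions.

import numpy as np

def sad(a, b):
    # Sum of absolute differences between two equally sized blocks.
    return float(np.abs(a.astype(np.float64) - b.astype(np.float64)).sum())

def best_parallax(target, right, x0, y0, block, step, count):
    # Scan `count` candidate blocks spaced `step` pixels apart in the second
    # image and return (parallax, evaluated_value) for the best one.
    w = right.shape[1]
    best = (None, np.inf)
    for i in range(count):
        x = x0 - i * step                     # shift along the epipolar line
        if x < 0 or x + block > w:
            continue
        cost = sad(target, right[y0:y0 + block, x:x + block])
        if cost < best[1]:
            best = (i * step, cost)
    return best

def two_scale_parallax(left, right, x0, y0, block=8):
    target = left[y0:y0 + block, x0:x0 + block]
    p1, e1 = best_parallax(target, right, x0, y0, block, step=2, count=16)  # coarse set
    p2, e2 = best_parallax(target, right, x0, y0, block, step=1, count=8)   # fine set
    # Keep whichever parallax has the better evaluated value.
    return p1 if e1 <= e2 else p2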
20130136341ELECTRONIC APPARATUS AND THREE-DIMENSIONAL MODEL GENERATION SUPPORT METHOD - According to one embodiment, an electronic apparatus includes a 3D model generator, a capture position estimation module and a notification controller. The 3D model generator generates 3D model data of a 3D model by using images in which a target object of the 3D model is captured. The capture position estimation module estimates a capture position of a last captured image of the images. The notification controller notifies a user of a position at which the object is to be next captured, based on the generated 3D model data and the estimated capture position. The 3D model generator updates the 3D model data by further using a newly captured image of the object.05-30-2013
20110123096THREE-DIMENSIONAL IMAGE ANALYSIS SYSTEM, PROCESS DEVICE, AND METHOD THEREOF - A three-dimensional image analysis system, a process device for use in the three-dimensional image analysis system, and a method thereof are provided. The three-dimensional image analysis system is configured to generate a plurality of three-dimensional data of a three-dimensional image. The process device defines a plurality of horizontal scan lines and a plurality of vertical scan lines according to the three dimensional data, determines a preliminary edge information of the three-dimensional image according to the horizontal scan lines and the vertical scan lines, divides the three dimensional data into a plurality of groups, compares the groups to determine a plane information of the three-dimensional image, and determines an edge information of the three-dimensional image according to the preliminary edge information and the plane information. The method is adapted for the process device.05-26-2011
20100303341METHOD AND DEVICE FOR THREE-DIMENSIONAL SURFACE DETECTION WITH A DYNAMIC REFERENCE FRAME - The surface shape of a three-dimensional object is acquired with an optical sensor. The sensor, which has a projection device and a camera, is configured to generate three-dimensional data from a single exposure, and the sensor is moved relative to the three-dimensional object, or vice versa. A pattern is projected onto the three-dimensional object and a sequence of overlapping images of the projected pattern is recorded with the camera. A sequence of 3D data sets is determined from the recorded images and a registration is effected between subsequently obtained 3D data sets. This enables the sensor to be moved freely about the object, or vice versa, without tracking their relative position, and to determine a surface shape of the three-dimensional object on the fly.12-02-2010
20100303336Method for ascertaining the axis of rotation of a vehicle wheel - A method for ascertaining the axis of rotation of a vehicle wheel in which a light pattern is projected at least onto the wheel during the rotation of the wheel and the light pattern reflected from the wheel is detected by a calibrated imaging sensor system and analyzed in an analyzer device. Accurate and robust measurement of the axis of rotation and, optionally, of the axis and wheel geometry, in particular when the vehicle is passing by, is achieved in that a 3D point cloud with respect to the wheel is determined in the analysis and a parametric surface model of the wheel is adapted thereto; normal vectors of the wheel are calculated for different rotational positions of the wheel for obtaining the axes of rotation; and the axis of rotation vector is calculated as the axis of rotation from the spatial movement of the normal vector of the wheel.12-02-2010
20100303340STEREO-IMAGE REGISTRATION AND CHANGE DETECTION SYSTEM AND METHOD - A system and method for registering stereoscopic images comprising: obtaining at least two sets of stereoscopic images, each one of the at least two sets including at least two images that are taken from different angles, determining at least two groups of images, each one of the groups including at least two images that are respective images of at least two of the sets or are derived therefrom. For each one of the groups, calculating a respective optimal entities list and stereo-matching at least two images, each one being or derived from different one of the at least two groups and same or different sets, using at least four optimal entities from each one of the optimal entities list, thereby giving rise to at least one pair of registered stereoscopic images.12-02-2010
20110110582METHOD AND SYSTEM FOR DETERMINING THE POSITION OF A FLUID DISCHARGE IN AN UNDERWATER ENVIRONMENT - The present invention relates to a method for determining the position of a fluid discharge in an underwater environment comprising the phases which consist in collecting (05-12-2011
20110110580GEOSPATIAL MODELING SYSTEM FOR CLASSIFYING BUILDING AND VEGETATION IN A DSM AND RELATED METHODS - A geospatial modeling system may include a geospatial model database configured to store a digital surface model (DSM) of a geographical area, and to store image data of the geographical area. The image data may have a spectral range indicative of a difference between buildings and vegetation. The geospatial modeling system may also include a processor cooperating with the geospatial model database to separate bare earth data from remaining building and vegetation data in the DSM to define a building and vegetation DSM. The processor may also register the image data with the building and vegetation DSM, and classify each point of the building and vegetation DSM as either building or vegetation based upon the spectral range of the image data.05-12-2011
20110110583SYSTEM AND METHOD FOR DEPTH EXTRACTION OF IMAGES WITH MOTION COMPENSATION - A system and method for spatiotemporal depth extraction of images are provided. The system and method provide for acquiring a sequence of images from a scene, the sequence including a plurality of successive frames of images, estimating the disparity of at least one point in a first image with at least one corresponding point in a second image for at least one frame, estimating motion of the at least one point in the first image, estimating the disparity of the at least one next successive frame based on the estimated disparity of at least one previous frame in a forward direction of the sequence, wherein the estimate disparity is compensated with the estimated motion, and minimizing the estimated disparity of each of the plurality of successive frames based on the estimated disparity of at least one previous frame in a backward direction of the sequence.05-12-2011
20110129144IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD AND PROGRAM - An image processing apparatus includes: a depth information extraction means for extracting depth information from an input 3D image; a luminance extraction means for extracting luminance components of the 3D image; a contrast extraction means for extracting contrast components of the 3D image based on the luminance components; a storage means for storing a performance function indicating relation between the contrast components and depth amounts subjectively perceived, which is determined based on visual sense characteristics of human beings; and a contrast adjustment means for calculating present depth amounts of the inputted 3D image from the contrast components based on the performance function with respect to at least one of a near side region and a deep side region of the inputted 3D image which are determined from the depth information and adjusting contrast components of the inputted 3D image based on the calculated present depth amounts and a set depth adjustment amount.06-02-2011
20090067707Apparatus and method for matching 2D color image and depth image - Provided are an apparatus and method for matching a 2D color image and a depth image to obtain 3D information. The method includes matching resolution of the 2D color image and resolution of a light intensity image, wherein the 2D color image and the light intensity image are separately obtained, detecting at least one edge from the matched 2D color image and the matched light intensity image, and matching overlapping pixels of the matched 2D color image and a depth image, which corresponds to the matched light intensity image, with each other, the two being overlapped by the same amount as the matched 2D color image and the matched light intensity image are overlapped when their detected edges are maximally overlapped with each other. Accordingly, the 2D color image and the depth image can be accurately matched so that reliable 3D image information can be quickly obtained.03-12-2009
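A toy version of the edge-overlap criterion in entry 20090067707: compute binary edge maps for the color image and the light intensity image and search small integer shifts for the one that maximizes the number of coincident edge pixels. The gradient-magnitude edge detector and the exhaustive search window are stand-ins chosen for the sketch.

import numpy as np

def edges(img, thresh=30.0):
    # Simple gradient-magnitude edge map (stand-in for any edge detector).
    gy, gx = np.gradient(img.astype(np.float64))
    return np.hypot(gx, gy) > thresh

def best_edge_alignment(color, intensity, search=10):
    # Find the (dy, dx) shift of the intensity image that maximizes the
    # count of overlapping edge pixels with the color image.
    e_c = edges(color)
    e_i = edges(intensity)
    best, best_shift = -1, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            # np.roll wraps at the borders, which is acceptable for a sketch.
            shifted = np.roll(np.roll(e_i, dy, axis=0), dx, axis=1)
            overlap = int(np.logical_and(e_c, shifted).sum())
            if overlap > best:
                best, best_shift = overlap, (dy, dx)
    return best_shift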
20090067706System and Method for Multiframe Surface Measurement of the Shape of Objects - A system and method are provided for the multiframe surface measurement of the shape of material objects. The system and method include capturing a plurality of images of portions of the surface of the object being measured and merging the captured images together in a common reference system. The shape and/or texture of a complex-shaped object can be measured using a 3D scanner by capturing multiple images from different perspectives and subsequently merging the images in a common coordinate system to align the merged images together. Alignment is achieved by capturing images of both a portion of the surface of the object and also of a reference object having known characteristics (e.g., shape and/or texture). This allows the position and orientation of the object scanner to be determined in the coordinate system of the reference object.03-12-2009
20110013827METHOD FOR OBTAINING A POSITION MATCH OF 3D DATA SETS IN A DENTAL CAD/CAM SYSTEM - Disclosed is a method for designing tooth surfaces of a digital dental prosthetic item existing as a 3D data set using a first 3D model of a preparation site and/or of a dental prosthetic item and a second 3D model, which second model comprises regions which match some regions on the first 3D model and regions which differ from other regions of the first 3D model, the non-matching regions containing some of the surface information required for the dental prosthetic item, wherein at least three pairs (P01-20-2011
20110033104MESH COLLISION AVOIDANCE - The invention relates to a system (02-10-2011
20110019905THREE-DIMENSIONAL AUTHENTICATION OF MICROPARTICLE MARK - A system, method, and apparatus for authenticating microparticle marks or marks including other three-dimensional objects. The authentication utilizes two or more sets of information captured or acquired for the mark in response to illumination of the mark by electromagnetic energy such as in the visible frequency range. These sets of information are then used to verify that the mark includes three-dimensional objects such as microparticles. The two or more sets of information about the mark preferably vary from each other in time, space/directionality, color, frequency or any combinations thereof, and can be captured or acquired as part of one, two, or more images of the microparticle mark.01-27-2011
20110116707METHOD FOR GROUPING 3D MODELS TO CLASSIFY CONSTITUTION - Provided is a three-dimensional model classification method of classifying constitutions. The method includes correcting color values of a frontal image and one or more profile images to allow a color value of a reference color table in the images to equal a predetermined reference color value, through obtaining the frontal image and one or more profile images of a subject including the reference color table by a camera, the reference color table including one or more sub color regions, generating a three-dimensional geometric model of the subject by extracting feature point information from the frontal image and the profile image, matching the corresponding feature point information to extract spatial depth information, after removing the reference color table region from the frontal image and the profile image, and classifying a group of the three-dimensional geometric model of the subject by selecting a reference three-dimensional geometric model having a smallest sum of spatial displacements from the three-dimensional geometric model of the subject from a plurality of reference three-dimensional geometric models stored in the database and setting the group which the selected reference three-dimensional geometric model represents as the group where the three-dimensional geometric model of the subject belongs.05-19-2011
20110044530IMAGE CLASSIFICATION USING RANGE INFORMATION - A method of identifying an image classification for an input digital image comprising receiving an input digital image for a captured scene; receiving a range map which represents range information associated with the input digital image, wherein the range information represents distances between the captured scene and a known reference location; identifying the image classification using both the range map and the input digital image; and storing the image classification in association with the input digital image in a processor-accessible memory system.02-24-2011
20110044532Functional-Based Knowledge Analysis In A 2D and 3D Visual Environment - A method of creating a visual display based on a plurality of data sources is provided. An exemplary embodiment of the method comprises extracting a set of extracted data from the plurality of data sources and processing at least a portion of the extracted data with a set of knowledge agents according to specific criteria to create at least one data assemblage. The exemplary method also comprises providing an integrated two-dimensional/three-dimensional (2D/3D) visual display in which at least one 2D element of the at least one data assemblage is integrated into a 3D visual representation using a mapping identifier and a criteria identifier.02-24-2011
20110044531SYSTEM AND METHOD FOR DEPTH MAP EXTRACTION USING REGION-BASED FILTERING - A system and method for extracting depth information from at least two images employing region-based filtering for reducing artifacts are provided. The present disclosure provides a post-processing algorithm or function for reducing the artifacts generated by scanline Dynamic Programming (DP) or other similar methods. The system and method provide for acquiring a first image and a second image from a scene, estimating the disparity of at least one point in the first image with at least one corresponding point in the second image to generate a disparity map, segmenting at least one of the first or second images into at least one region, and filtering the disparity map based on the segmented regions. Furthermore, anisotropic filters are employed, which have a greater smoothing effect along the vertical direction than along the horizontal direction and therefore reduce stripe artifacts without significantly blurring the depth boundaries.02-24-2011
20090214107IMAGE PROCESSING APPARATUS, METHOD, AND PROGRAM - Corresponding points corresponding to each other between each of a plurality of images photographed from different positions are searched for. When a plurality of corresponding points is searched out in a second image of the plurality of images for one target pixel in a first image of the plurality of images, at least partial subject shape around the target pixel is calculated based on distance values of a plurality of pixels around the target pixel, then a target distance value, which is a distance value of the target pixel, is calculated with respect to each of the plurality of corresponding points based on the target pixel and each of the plurality of corresponding points, and a valid corresponding point is determined from the plurality of corresponding points having a smallest difference from the subject shape.08-27-2009
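Entry 20090214107 resolves ambiguous matches by comparing each candidate's distance value with the shape implied by the surrounding pixels. A minimal sketch follows, using the neighbours' median distance as the local-shape summary; that summary and the function names are assumptions, as the patent describes a more general local-shape estimate.

import numpy as np

def pick_valid_candidate(candidate_distances, neighbor_distances):
    # candidate_distances: distances obtained (e.g. by triangulation) for each
    #                      candidate correspondence of one target pixel.
    # neighbor_distances : already-estimated distances of surrounding pixels.
    # Returns the index of the candidate closest to the local surface shape.
    local = float(np.median(neighbor_distances))
    cands = np.asarray(candidate_distances, dtype=np.float64)
    return int(np.argmin(np.abs(cands - local)))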
20090214106PHOTOGRAMMETRIC TARGET AND RELATED METHOD - A multi-target photogrammetric target assembly and related method of evaluating curvilinear surface character. The target assembly includes a first photogrammetric target disposed at a first support and a second photogrammetric target disposed at a second support. The first support and the second support are operatively connected such that the first target is in predefined lateral spaced relation to the second target. The method includes providing a structure having a curvilinear surface and affixing one or more multi-target photogrammetric target assemblies to the curvilinear surface. The position of the targets is measured by one or more imaging devices to define surface contour characteristics.08-27-2009
20090324059METHOD FOR DETERMINING A DEPTH MAP FROM IMAGES, DEVICE FOR DETERMINING A DEPTH MAP - Window based matching is used to determine a depth map from images obtained from different orientations. A set of matching windows is used for points of the image for which the depth is to be determined. A provisional depth map is generated wherein to each point more than one candidate disparity value is attributed. The provisional depth map is filtered by a surface filtering that uses at least the z-component of the norm of a sum of unit vectors pointing from the candidate disparity values for neighboring points to a point of interest.12-31-2009
20120243774METHOD FOR RECONSTRUCTION OF URBAN SCENES - An urban scenes reconstruction method includes: acquiring digital data of a three-dimensional subject, the digital data comprising a 2D photograph and a 3D scan; fusing the 3D scan and the 2D photograph to create a depth-augmented photograph; decomposing the depth-augmented photograph into a plurality of constant-depth layers; detecting repetition patterns of each constant-depth layer; and using the repetitions to enhance the 3D scan to generate a polygon-level 3D reconstruction.09-27-2012
20120243776IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, AND PROGRAM - An image processing apparatus includes a noise removal unit that corrects a geometric mismatch of optical noise of a left eye image and a right eye image by performing a noise removal process for removing the separately generated optical noise on the left eye image and the right eye image which are captured and obtained by a two-lens type stereoscopic image capturing camera.09-27-2012
20100226563MODEL IMAGE ACQUISITION SUPPORT APPARATUS, MODEL IMAGE ACQUISITION SUPPORT METHOD, AND MODEL IMAGE ACQUISITION SUPPORT PROGRAM - The present invention provides a model image acquisition support apparatus, a model image acquisition support method, and a model image acquisition support program that can easily and swiftly obtain an optimum model image for an image processing apparatus that performs matching processing based on a model image set in advance with respect to a measurement image that is obtained by imaging an object. A plurality of model image candidates, serving as candidates for the model image, are extracted from a reference image obtained by imaging an object which can be a model. Matching processing with the plurality of extracted model images is executed on measurement images actually obtained by a visual sensor, so that trial results are obtained. An evaluation result is generated by evaluating each of the trial results of the matching processing with each model image candidate. An optimum model image is determined based on the evaluation result.09-09-2010
20100220920METHOD, APPARATUS AND SYSTEM FOR PROCESSING DEPTH-RELATED INFORMATION - The invention relates to a method, apparatus and system for processing first depth-related information associated with an image sequence. The method of processing comprises mapping first depth-related information of respective images of a shot of the image sequence on corresponding second depth-related information using a first estimate of a characteristic of the distribution of first depth-related information associated with at least one image from the shot, the mapping adapting the first depth-related information by enhancing the dynamic range of a range of interest of first depth-related information defined at least in part by the first estimate, and the amount of variation in the mapping for respective images in temporal proximity in the shot being limited.09-02-2010
20110211749System And Method For Processing Video Using Depth Sensor Information - A method for processing video using depth sensor information, comprising the steps of: dividing the image area into a number of bins roughly equal to the depth sensor resolution, with each bin corresponding to a number of adjacent image pixels; adding each depth measurement to the bin representing the portion of the image area to which the depth measurement corresponds; averaging the value of the depth measurement for each bin to determine a single average value for each bin; and applying a threshold to each bin of the registered depth map to produce a threshold image.09-01-2011
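The binning, averaging and thresholding steps of entry 20110211749 reduce to a small amount of bookkeeping; the sketch below assumes the depth measurements are already registered to image pixel coordinates and that the threshold marks "near" bins, both of which are illustrative choices rather than details from the patent.

import numpy as np

def binned_threshold_map(points, image_shape, bins_shape, threshold):
    # points: iterable of (row, col, depth) measurements in image pixel
    # coordinates. Each measurement is added to the bin covering its pixel;
    # bins are then averaged and compared against a threshold.
    H, W = image_shape
    bh, bw = bins_shape
    sums = np.zeros((bh, bw), dtype=np.float64)
    counts = np.zeros((bh, bw), dtype=np.int64)
    for r, c, depth in points:
        br = min(int(r * bh / H), bh - 1)
        bc = min(int(c * bw / W), bw - 1)
        sums[br, bc] += depth
        counts[br, bc] += 1
    avg = np.divide(sums, counts, out=np.zeros_like(sums), where=counts > 0)
    # Threshold image: True where a bin has data and its average depth is
    # below the threshold (i.e. something is closer than the threshold).
    return (counts > 0) & (avg < threshold)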
20110249887IMAGE SYNTHESIS APPARATUS, IMAGE SYNTHESIS METHOD AND PROGRAM - An image synthesis apparatus includes: an image selection section adapted to select two or more three-dimensional images to be synthesized from among a plurality of three-dimensional images; an order determination section adapted to determine, based on parallax amounts of the selected three-dimensional images, a synthesis order representative of an order in which the selected three-dimensional images are to be synthesized; an image synthesis section adapted to synthesize the selected three-dimensional images in accordance with the synthesis order; and a control section adapted to control the image selection section, the order determination section and the image synthesis section in response to an operation of a user.10-13-2011
20110249886IMAGE CONVERTING DEVICE AND THREE-DIMENSIONAL IMAGE DISPLAY DEVICE INCLUDING THE SAME - An image converting device includes: a downscaling unit which downscales a two-dimensional image to generate at least one downscaling image, a feature map generating unit which extracts feature information from the downscaling image to generate a feature map, wherein the feature map includes a plurality of objects, an object segmentation unit which divides the plurality of objects, an object order determining unit which determines a depth order of the plurality of objects, and adds a first weight value to an object having the shallowest depth among the plurality of objects, and a visual attention calculating unit which generates a low-level attention map based on visual attention of the feature map.10-13-2011
20110085727SYSTEM AND METHOD FOR MARKING A STEREOSCOPIC FILM - A system and method for marking a stereoscopic film with colors are provided. The system and method provide for marking a left image with a mark and a right image with a mark having complementary colors, wherein upon viewing, the marks are not visible under certain conditions. The system and method provide for acquiring a stereoscopic image, the stereoscopic image including a first image and a second image, applying a first mark to the first image in a predetermined location, the first mark having a first color, and applying a second mark to the second image in substantially the same predetermined location as in the first image, the second mark having a second color that is different than the first color of the first mark, wherein when viewed in three-dimensional mode, the first mark and the second mark combine into a single mark of one color.04-14-2011
20100014750POSITION MEASURING SYSTEM, POSITION MEASURING METHOD AND COMPUTER READABLE MEDIUM - A position measuring system includes: an image capturing unit that captures reference points provided on an object, the reference points composed of at least four first reference points provided respectively at vertices of a polygon or at vertices and a barycenter of a polygon and at least one second reference point provided so as to have a specific positional relationship with respect to the first reference points; an identification unit that identifies images of the first reference points and the second reference point captured by the image capturing unit, on the basis of positional relationships between the images of the first reference points and the second reference point; and a calculation unit that calculates a three-dimensional position and three-axial angles of the object on the basis of positional relationships of the images of the first reference points identified by the identification unit.01-21-2010
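Once the reference-point images in entry 20100014750 have been identified, recovering the three-dimensional position and three-axis angles is a classical perspective-n-point problem. The sketch below leans on OpenCV's solvePnP and a standard Euler-angle extraction as stand-ins; the patent's own calculation is not reproduced here, and the intrinsic parameters are assumed to be known.

import numpy as np
import cv2  # OpenCV's PnP solver stands in for the patent's own calculation

def pose_from_reference_points(object_pts, image_pts, fx, fy, cx, cy):
    # object_pts: Nx3 known 3D coordinates of the reference marks (N >= 4).
    # image_pts : Nx2 pixel positions of the same marks, already identified.
    # Returns (three-axis angles in degrees, translation vector).
    K = np.array([[fx, 0, cx], [0, fy, cy], [0, 0, 1]], dtype=np.float64)
    ok, rvec, tvec = cv2.solvePnP(np.asarray(object_pts, dtype=np.float64),
                                  np.asarray(image_pts, dtype=np.float64),
                                  K, None)
    if not ok:
        raise RuntimeError("PnP failed")
    R, _ = cv2.Rodrigues(rvec)
    # Extract x-y-z Euler angles from the rotation matrix.
    sy = np.hypot(R[0, 0], R[1, 0])
    angles = np.degrees([np.arctan2(R[2, 1], R[2, 2]),
                         np.arctan2(-R[2, 0], sy),
                         np.arctan2(R[1, 0], R[0, 0])])
    return angles, tvec.ravel()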
20110091095IMAGE RECONSTRUCTION METHOD - An image reconstruction method includes: fetching at least two images; calculating a relative displacement between adjacent images by utilizing a phase correlation algorithm; calculating an absolute displacement between any one of those images and the first image of those images; computing a common area of those images by utilizing the relative displacement and the absolute displacement, then deleting the remainder portions of the images excluding the common area; determining the rotation centers of those images; and reconstructing three-dimensional data of those images. In the present invention, the phase correlation algorithm can be utilized to process numerous noise signals so as to achieve higher precision in the image reconstruction.04-21-2011
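The phase correlation step in entry 20110091095 can be written in a few lines: the normalized cross-power spectrum of two overlapping images has an inverse FFT that peaks at their relative displacement. A sketch for integer translations follows; the wrap-around handling is a common convention rather than anything taken from the patent.

import numpy as np

def phase_correlation_shift(img_a, img_b):
    # Return the (dy, dx) translation that maps img_b onto img_a.
    A = np.fft.fft2(img_a.astype(np.float64))
    B = np.fft.fft2(img_b.astype(np.float64))
    cross = A * np.conj(B)
    cross /= np.maximum(np.abs(cross), 1e-12)   # keep only the phase
    corr = np.fft.ifft2(cross).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Wrap shifts larger than half the image size to negative values.
    if dy > img_a.shape[0] // 2:
        dy -= img_a.shape[0]
    if dx > img_a.shape[1] // 2:
        dx -= img_a.shape[1]
    return dy, dx

The absolute displacement of the k-th image is then simply the running sum of the relative displacements back to the first image.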
20090041337IMAGE PROCESSING APPARATUS AND METHOD - Three-dimensional position information of each of feature points in a left and a right image is calculated based on a disparity between the left and right images; a lane marker existing on a road surface is detected from each of the left and right images; based on three-dimensional position information of a lane marker in a neighboring road surface area, by extending the lane marker to a distant area, a lateral direction position, and a depth direction position, of the extended lane marker in the distant area are estimated; an edge segment of a certain length or more is detected from feature points in the distant area in each of a plurality of images; three-dimensional position information of the edge segment is calculated; and, based on the three-dimensional position information of the edge segment, and on the extended lane marker information, a road incline in the distant area is estimated.02-12-2009
20090052767Modelling - A method of modelling an object.02-26-2009
20100054580IMAGE GENERATION DEVICE, IMAGE GENERATION METHOD, AND IMAGE GENERATION PROGRAM - The image generation device includes distance calculation means for calculating a distance between a space model and an imaging device arrangement object model, which is a model, such as a vehicle, on which a camera is mounted, according to viewpoint conversion image data generated by viewpoint conversion means, captured image data representing a captured image, a space model, or mapped space data. When displaying an image viewed from an arbitrary virtual viewpoint in the 3D space, the image display format is changed according to the distance calculated by the distance calculation means. When displaying a monitoring object such as a vicinity of a vehicle, a shop, a house or a city as an image viewed from an arbitrary virtual viewpoint in the 3D space, it is possible to display the monitoring object in such a manner that the relationship between the vehicle and the image of the monitoring object can be understood intuitively.03-04-2010
20100061623Position measuring apparatus - A position measuring apparatus including a first irradiating part that irradiates a first beam to an object, a second irradiating part that irradiates a second beam to the object, a capturing part that captures images of the object, a processing part that generates a first difference image and a second difference image by processing the images captured by the capturing part, an extracting part that extracts a contour and a feature point of the object from the first difference image, a calculating part that calculates three-dimensional coordinates of a reflection point located on the object based on the second difference image, and a determining part that determines a position of the object by matching the contour, the feature point, and the three-dimensional coordinates with respect to predetermined modeled data of the object.03-11-2010
20110176722SYSTEM AND METHOD OF PROCESSING STEREO IMAGES - The present invention is a system and a method for processing stereo images utilizing a real time, robust, and accurate stereo matching system and method based on a coarse-to-fine architecture. At each image pyramid level, non-centered windows for matching and adaptive upsampling of coarse-level disparities are performed to generate estimated disparity maps using the ACTF approach. In order to minimize propagation of disparity errors from coarser to finer levels, the present invention performs an iterative optimization, at each level, that minimizes a cost function to generate smooth disparity maps with crisp occlusion boundaries.07-21-2011
20110249888Method and Apparatus for Measuring an Audiovisual Parameter - There is provided a method of measuring 3D depth of a stereoscopic image, comprising providing left and right eye input images, applying an edge extraction filter to each of the left and right eye input images, and determining 3D depth of the stereoscopic image using the edge extracted left and right eye images. There is also provided an apparatus for carrying out the method of measuring 3D depth of a stereoscopic image.10-13-2011
20110249889STEREOSCOPIC IMAGE PAIR ALIGNMENT APPARATUS, SYSTEMS AND METHODS - Apparatus, systems, and methods disclosed herein operate to produce an image alignment shift vector used to shift left and right image portions of a stereoscopic image with respect to each other in order to reduce or eliminate undesirable horizontal and vertical disparity components. Vertical and horizontal projections of luminance value aggregations from selected left and right image pixel blocks are correlated to derive vertical and horizontal components of a disparity vector corresponding to each left/right pixel block pair. Disparity vectors corresponding to multiple image blocks are algebraically combined to yield the image alignment shift vector. The left and/or right images are then shifted in proportion to the magnitude of the image alignment shift vector at an angle corresponding to that of the image alignment shift vector.10-13-2011
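Entry 20110249889 builds its disparity vectors from 1-D projections of block luminance. The sketch below sums each block along rows and columns, finds the best integer shift between left and right profiles with a normalized SSD search, and averages the per-block vectors into an image alignment shift vector; the SSD cost and the plain average are assumptions standing in for whatever correlation and algebraic combination the patent actually uses.

import numpy as np

def projection_shift(profile_a, profile_b, max_shift):
    # Integer shift of profile_b that best matches profile_a (1-D SSD search,
    # normalized by the overlap length).
    best, best_s = np.inf, 0
    n = len(profile_a)
    for s in range(-max_shift, max_shift + 1):
        lo, hi = max(0, s), min(n, n + s)
        d = profile_a[lo:hi] - profile_b[lo - s:hi - s]
        cost = float(np.dot(d, d)) / max(hi - lo, 1)
        if cost < best:
            best, best_s = cost, s
    return best_s

def block_disparity(left_block, right_block, max_shift=16):
    # Vertical and horizontal disparity of one block pair from its projections.
    v = projection_shift(left_block.sum(axis=1), right_block.sum(axis=1), max_shift)
    h = projection_shift(left_block.sum(axis=0), right_block.sum(axis=0), max_shift)
    return v, h

def alignment_shift_vector(block_pairs):
    # Combine per-block disparity vectors into one image alignment shift.
    vecs = np.array([block_disparity(l, r) for l, r in block_pairs], dtype=np.float64)
    return vecs.mean(axis=0)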
20110081072IMAGE PROCESSING DEVICE, IMAGE PROCESSING METHOD, AND PROGRAM - Provided are an image processing device, an image processing method, and a program which are capable of high-density restoration and which are also robust in image processing.04-07-2011
20110069880THREE-DIMENSIONAL PHOTOGRAPHIC SYSTEM AND A METHOD FOR CREATING AND PUBLISHING 3D DIGITAL IMAGES OF AN OBJECT - The present invention provides a three-dimensional photographic system, which is applicable to taking pictures of an object from a great variety of angles, particularly for taking series of pictures which are later combined to form a three-dimensional digital image of the object …03-24-2011
20110069879Apparatus and method to extract three-dimensional (3D) facial expression - Provided is a method and apparatus of extracting a 3D facial expression of a user. When a facial image of the user is received, the 3D facial expression extracting method and apparatus may generate 3D expression information by tracking an expression of the user from the facial image using at least one of shape-based tracking and texture-based tracking, may generate a 3D expression model based on the 3D expression information, and reconstruct the 3D expression model to have a natural facial expression by adding muscle control points to the 3D expression model.03-24-2011
20120201449IMAGE PROCESSING APPARATUS AND CONTROL METHOD THEREOF - An image processing apparatus and a control method thereof are provided. The image processing apparatus includes: a depth map estimating unit which estimates a depth map of a stereoscopic image; a region setup unit which sets up a region in the stereoscopic image; and a 3D effect adjusting unit which determines a difference in a depth level between the setup region and a surrounding region other than the setup region based on the estimated depth map, and adjusts a 3D effect of the stereoscopic image based on the determined difference in the depth level.08-09-2012
20110150322THREE-DIMENSIONAL MULTILAYER SKIN TEXTURE RECOGNITION SYSTEM AND METHOD - A three-dimensional multilayer skin texture recognition system and method based on hyperspectral imaging. A three-dimensional facial model associated with an object may be acquired from a three-dimensional image capturing device. A face reconstruction approach may be implemented to reconstruct and rewarp the three-dimensional facial model to a frontal face image. A hyperspectral imager may be employed to extract a micro structure skin signature associated with the skin surface. The micro structure skin signature may be characterized utilizing a weighted subtraction of reflectance at different wavelengths that captures different layers under the skin surface via a multilayer skin texture recognition module. The volumetric skin data associated with the face skin can be classified via a volumetric pattern.06-23-2011
20110150321METHOD AND APPARATUS FOR EDITING DEPTH IMAGE - Provided is a method of editing a depth image, comprising: receiving a selection on a depth image frame to be edited and a color image corresponding to the depth image frame; receiving a selection on an interest object in the color image; extracting boundary information of the interest object; and correcting a depth value of the depth image frame using the boundary information of the interest object.06-23-2011
20110150320Method and System for Localizing in Urban Environments From Omni-Direction Skyline Images - A location and orientation in an environment is determined by acquiring a set of one or more real omni-directional images of an unknown skyline in the environment from an unknown location and an unknown orientation in the environment by an omni-directional camera. A set of virtual omni-directional images is synthesized from a 3D model of the environment, wherein each virtual omni-directional image is associated with a known skyline, a known location and a known orientation. Each real omni-directional image is compared with the set of virtual omni-directional images to determine a best matching virtual omni-directional image with the associated known location and known orientation that correspond to the unknown location and orientation.06-23-2011
20110026809Fast multi-view three-dimensional image synthesis apparatus and method - A fast multi-view three-dimensional image synthesis apparatus includes: a disparity map generation module for generating a left image disparity map by using left and right image pixel data; intermediate-view generation modules for generating intermediate-view pixel data from different view points by using the left and right image pixel data and the left image disparity map; and a multi-view three-dimensional image generation module for generating multi-view three-dimensional image pixel data by using the left image pixel data, the right image pixel data and intermediate-view pixel data. Each of the intermediate-view generation modules includes: a right image disparity map generation unit for generating a rough right image disparity map; an occluded region compensation unit for generating a right image disparity map by removing occluded regions from the rough right image disparity map; and an intermediate-view generation unit for generating the intermediate-view pixel data from the different view points.02-03-2011
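The intermediate-view generation described above warps the left image according to a fraction of the left-image disparity map. Below is a minimal sketch of that warping step under assumed conventions (sign of disparity, forward warping, no occlusion compensation); the patent additionally uses a right-image disparity map to fill occluded regions, which is omitted here.

```python
import numpy as np

def synthesize_intermediate_view(left, disparity_l, alpha=0.5):
    """Forward-warp the left image by alpha * disparity to obtain an
    intermediate view (alpha = 0 -> left view, 1 -> right view).
    Holes and occlusions are left at zero in this sketch.
    """
    h, w = left.shape[:2]
    view = np.zeros_like(left)
    xs = np.arange(w)
    for y in range(h):
        # Target column for every source pixel in this row.
        tx = np.round(xs - alpha * disparity_l[y]).astype(int)
        valid = (tx >= 0) & (tx < w)
        view[y, tx[valid]] = left[y, xs[valid]]
    return view
```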
20100310155IMAGE ENCODING METHOD FOR STEREOSCOPIC RENDERING - An image encoding method that allows stereoscopic rendering basically comprises the following steps. In a primary encoding step (VDE) …12-09-2010
20100310154METHOD FOR MATCHING AN OBJECT MODEL TO A THREE-DIMENSIONAL POINT CLOUD - The invention relates to a method for matching an object model to a three-dimensional point cloud, wherein the point cloud is generated from two images by means of a stereo method and a clustering method is applied to the point cloud in order to identify points belonging to respectively one cluster, wherein model matching is subsequently carried out, with at least one object model being superposed on at least one cluster and an optimum position of the object model with respect to the cluster being determined, and wherein a correction of false assignments of points is carried out by means of the matched object model. A classifier, trained by means of at least one exemplary object, is used to generate an attention map from at least one of the images. A number and/or a location probability of at least one object, which is similar to the exemplary object, is determined in the image using the attention map, and the attention map is taken into account in the clustering method and/or in the model matching.12-09-2010
20120033873METHOD AND DEVICE FOR DETERMINING A SHAPE MATCH IN THREE DIMENSIONS - Provided are a method and a device for determining a shape match in three dimensions, which can utilize information relating to three-dimensional shapes effectively. Camera control means …02-09-2012
20110211751METHOD AND APPARATUS FOR DETERMINING MISALIGNMENT - A method for determining misalignment between a first image and a second image, the first and second images being viewable stereoscopically, is described, the method comprising: determining a feature position within the first image and a corresponding feature position within the second image; defining, within the first image and the second image, the optical axis of the cameras capturing said respective images; and calculating the misalignment between at least one of scale, roll or vertical translation of the feature position within the first image and the corresponding feature position within the second image, the misalignment being determined in dependence upon the location of the feature position of the first image and the corresponding feature position of the second image relative to the defined optical axis of the respective images. A corresponding apparatus is also described.09-01-2011
20100086200SYSTEMS AND METHODS FOR MULTI-PERSPECTIVE SCENE ANALYSIS - Systems and methods for using visual attention modeling techniques to evaluate a scene from multiple perspectives.04-08-2010
20110158503Reversible Three-Dimensional Image Segmentation - Aspects of the subject matter described herein relate to reversible image segmentation. In aspects, candidate pairs for merging three dimensional objects are determined. The cost of merging candidate pairs is computed using a cost function. A candidate pair that has the minimum cost is selected for merging. This may be repeated until all objects have been merged, until a selected number of merging has occurred, or until some other criterion is met. In conjunction with merging objects, data is maintained that allows the merging to be reversed.06-30-2011
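The reversible segmentation entry above repeatedly merges the candidate pair with minimum cost while keeping data that allows the merging to be reversed. A minimal sketch of that loop is given below; the object representation (sets of voxel ids), the placeholder `cost_fn`, and the stopping rule are assumptions.

```python
import heapq

def reversible_merge(objects, candidate_pairs, cost_fn, max_merges=None):
    """Greedy minimum-cost merging with an undo log (a sketch).

    objects: dict id -> set of element ids; candidate_pairs: iterable
    of (a, b) ids; cost_fn(obj_a, obj_b) -> float is a placeholder for
    the patent's cost function. Returns (objects, merge_log); replaying
    merge_log in reverse restores the original segmentation.
    """
    heap = [(cost_fn(objects[a], objects[b]), a, b) for a, b in candidate_pairs]
    heapq.heapify(heap)
    merge_log = []
    merges = 0
    while heap and (max_merges is None or merges < max_merges):
        cost, a, b = heapq.heappop(heap)
        if a not in objects or b not in objects:
            continue  # one of the pair was already merged away
        merge_log.append((a, b, objects[a], objects[b]))  # data kept for reversal
        objects[a] = objects[a] | objects[b]
        del objects[b]
        merges += 1
        # A full version would re-score candidate pairs involving the
        # newly merged object; that bookkeeping is omitted here.
    return objects, merge_log
```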
20110211750METHOD AND APPARATUS FOR DETERMINING MISALIGNMENT - An apparatus for determining misalignment between a first image and a second image, the first and second images being viewable stereoscopically, the apparatus comprising: …09-01-2011
20090022393Method for reconstructing a three-dimensional surface of an object - Method for determining a disparity value of a disparity of each of a plurality of points on an object, the method including the procedures of detecting by a single image detector, a first image of the object through a first aperture, and a second image of the object through a second aperture, correcting the distortion of the first image, and the distortion of the second image, by applying an image distortion correction model to the first image and to the second image, respectively, thereby producing a first distortion-corrected image and a second distortion-corrected image, respectively, for each of a plurality of pixels in at least a portion of the first distortion-corrected image representing a selected one of the points, identifying a matching pixel in the second distortion-corrected image, and determining the disparity value according to the coordinates of each of the pixels and of the respective matching pixel.01-22-2009
20080247638Three-Dimensional Object Imaging Device - A three-dimensional object imaging device comprises a compound-eye imaging unit and an image reconstructing unit for reconstructing an image of a three-dimensional object based on multiple unit images captured by the imaging unit. Based on the unit images obtained by the imaging unit, the image reconstructing unit calculates a distance (hereafter “pixel distance”) between the object and the imaging unit for each pixel forming the unit images, and rearranges the unit images pixel-by-pixel on a plane at the pixel distance to create a reconstructed image. Preferably, the image reconstructing unit sums a high-frequency component reconstructed image created from the multiple unit images with a lower noise low-frequency component unit image selected from low-frequency component unit images created from the multiple unit images so as to form a reconstructed image of the three-dimensional object. This makes it possible to obtain a reconstructed image with high definition easily by a simple process.10-09-2008
20090263008Method For Recognizing Dice Dots - A method for recognizing dice dots comprises the steps of: projecting at least one dice with a plurality of different-angle light sources; capturing a plurality of images of the dice according to the projecting times of the light sources on the dice; and recognizing dice dots based on the images through calculation methods. When the recognized results obtained through the calculation methods are judged to be the same by the recognizing module, the dice dots are confirmed and accepted. If the recognized results obtained through the calculation methods are different, the dice is rolled anew.10-22-2009
20080279447Computational Solution Of A Building Of Three Dimensional Virtual Models From Aerial Photographs - A method for processing in a computer a plurality of digital images stored in the computer. The digital images are from respective photographs …11-13-2008
20080292180POSITION AND ORIENTATION MEASUREMENT APPARATUS AND CONTROL METHOD THEREOF - A position and orientation measurement apparatus for measuring the position and orientation of an image capturing apparatus, which captures an image of a measurement object, relative to the measurement object, extracts configuration planes of the measurement object based on three-dimensional model data of the measurement object, and extracts measurement line segments to be used in detection of edges of a captured image from line segments which form the configuration planes. The position and orientation measurement apparatus projects the extracted measurement line segments onto the captured image based on an estimated position and orientation of the image capturing apparatus, selects visible measurement line segments which are not hidden by the extracted configuration planes, and calculates the position and orientation of the image capturing apparatus relative to the measurement object based on the visible measurement line segments and corresponding edges of the captured image.11-27-2008
20110255776METHODS AND SYSTEMS FOR ENABLING DEPTH AND DIRECTION DETECTION WHEN INTERFACING WITH A COMPUTER PROGRAM - One or more images can be captured with a depth camera having a capture location in a coordinate space. First and second objects in the one or more images can be identified and assigned corresponding first and second object locations in the coordinate space. A relative position can be identified in the coordinate space between the first object location and the second object location when viewed from the capture location by computing an azimuth angle and an altitude angle between the first object location and the object location in relation to the capture location. The relative position includes a dimension of depth with respect to the coordinate space. The dimension of depth is determined from analysis of the one or more images. A state of a computer program is changed based on the relative position.10-20-2011
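The depth-camera interfacing entry above computes an azimuth angle and an altitude angle between two object locations in relation to the capture location. A minimal sketch of one such angle convention is given below; the coordinate convention and function name are assumptions, and the patent's exact formulation may differ.

```python
import math

def relative_azimuth_altitude(capture, p1, p2):
    """Angles describing where p2 lies relative to p1 when both are
    viewed from the capture location (a sketch of one convention).

    capture, p1, p2: (x, y, z) tuples in the camera coordinate space.
    Returns (azimuth difference, altitude difference) in radians.
    """
    # Directions from the capture location to each object.
    d1 = [a - c for a, c in zip(p1, capture)]
    d2 = [a - c for a, c in zip(p2, capture)]

    def az_alt(v):
        x, y, z = v
        azimuth = math.atan2(x, z)                  # angle in the ground plane
        altitude = math.atan2(y, math.hypot(x, z))  # elevation above that plane
        return azimuth, altitude

    az1, alt1 = az_alt(d1)
    az2, alt2 = az_alt(d2)
    return az2 - az1, alt2 - alt1
```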
20110164810IMAGE SIGNATURES FOR USE IN MOTION-BASED THREE-DIMENSIONAL RECONSTRUCTION - A family of one-dimensional image signatures is obtained to represent each one of a sequence of images in a number of translational and rotational orientations. By calculating these image signatures as images are captured, a new current view can be quickly compared to historical views in a manner that is less dependent on the relative orientation of a target and search image. These and other techniques may be employed in a three-dimensional reconstruction process to generate a list of candidate images from among which full three-dimensional registration may be performed to test for an adequate three-dimensional match. In another aspect this approach may be supplemented with a Fourier-based approach that is selectively applied to a subset of the historical images. By alternating between spatial signatures for one set of historical views and spatial frequency signatures for another set of historical views, a pattern matching system may be implemented that more rapidly reattaches to a three-dimensional model in a variety of practical applications.07-07-2011
20080205749Polyp detection using smoothed shape operators - Improved surface feature recognition in CT images is provided by extracting a triangulated mesh representation of the surface of interest. Shape operators are computed at each vertex of the mesh from finite differences of vertex normals. The shape operators at each vertex are smoothed according to an iterative weighted averaging procedure. Principal curvatures at each vertex are computed from the smoothed shape operators. Vertices are marked as maxima and/or minima according to the signs of the principal curvatures. Vertices marked as having the same feature type are clustered together by adjacency on the mesh to provide candidate patches. Feature scores are computed for each candidate patch and the scores are provided as output to a user or for further processing.08-28-2008
20110164811IMAGE PROCESSING DEVICE, IMAGE PROCESSING METHOD AND IMAGE PROCESSING PROGRAM - An image processing device calculates, from a registration image representing a photographed object and three-dimensional shape data in which respective points of a three-dimensional shape of the object are correlated with pixels of the registration image, by assuming uniform albedo, a shadow base vector group having components from which an image under an arbitrary illumination condition can be generated through linear combination. A shadow in the registration image is estimated using the vector group. A perfect diffuse component image including the shadow is generated, and based on that image a highlight removal image is generated in which a specular reflection component is removed from the registration image. Thus, an image recognition system generates illumination base vectors from the highlight removal image and thereby can obtain illumination base vectors based on which an accurate image recognition process can be carried out without the influence of specular reflection.07-07-2011
20100246938Image Processing Method for Providing Depth Information and Image Processing System Using the Same - An image processing method for providing corresponding depth information according to an input image is provided. This method includes the following steps. First, a reference image is generated according to the input image. Next, the input image and the reference image are divided into a number of input image blocks and a number of reference image blocks, respectively. Then, according to a number of input pixel data of each input image block and a number of reference pixel data of each reference image block, respective variance magnitudes of the input image blocks are obtained. Next, the input image is divided into a number of segmentation regions. Then, the depth information is generated according to the corresponding variance magnitudes of the input image blocks which each segmentation region covers substantially.09-30-2010
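The depth-information entry above divides the input and a generated reference image into blocks and obtains a variance magnitude per block. A minimal sketch of that step is shown below, assuming a low-pass filtered copy as the reference image and per-block difference energy as the variance magnitude; block size and filter width are arbitrary.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def block_variance_depth(image, block=16):
    """Coarse per-block depth cue (a sketch of the general idea).

    A low-pass reference image is generated from the input; the
    per-block energy of the difference between input and reference
    acts as a "variance magnitude": sharper (typically in-focus)
    blocks score higher. Returns a (rows, cols) map of block scores.
    """
    reference = uniform_filter(image.astype(float), size=5)
    diff = (image.astype(float) - reference) ** 2
    h, w = image.shape
    rows, cols = h // block, w // block
    scores = np.zeros((rows, cols))
    for r in range(rows):
        for c in range(cols):
            tile = diff[r * block:(r + 1) * block, c * block:(c + 1) * block]
            scores[r, c] = tile.mean()
    return scores
```

Segmentation regions could then aggregate these block scores into depth information, as the abstract describes.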
20100246937METHOD AND SYSTEM FOR INSPECTION OF CONTAINERS - A method and system for producing images of at least one object of interest in a container. The method includes receiving three-dimensional volumetric scan data from a scan of the container, reconstructing a three-dimensional representation of the container from the three-dimensional volumetric scan data, and inspecting the three-dimensional representation to detect the at least one object of interest within the container. The method also includes re-projecting a two-dimensional image from one of the three-dimensional volumetric scan data and the three-dimensional representation, and identifying a first plurality of image elements in the two-dimensional image corresponding to a location of the at least one object of interest. The method further includes outputting the two-dimensional image with the first plurality of image elements highlighted.09-30-2010
20110019904METHOD FOR DISPLAYING A VIRTUAL IMAGE - A method for displaying a virtual image of three-dimensional objects in an area uses stereo recordings of the area, storing a pixel and a height for each point of the area. The method enables the display of vertical surfaces, or even slightly downwards and inwards inclined surfaces. Stereo recordings from at least three different solid angles are used. For each solid angle, at least one database including point-wise data about texture and height is established. Data for displaying the virtual image are combined from the different databases depending on the direction in which the virtual image is to be displayed.01-27-2011
20100284607METHOD AND SYSTEM FOR GENERATING A 3D MODEL FROM IMAGES - A method for generating a three dimensional (3D) model of an object from a series of two dimensional (2D) images is described. The series of 2D images depict varying views of the object and have associated camera parameter information. The method includes the steps of tracing the object in a first 2D image selected from the series of 2D images to provide a first set of tracing information, then tracing the object in a second 2D image selected from the series of 2D images to provide a second set of tracing information. The 3D model of the object is then generated based on the camera parameter information and the first and second sets of tracing information.11-11-2010
20100284606IMAGE PROCESSING DEVICE AND METHOD THEREOF - An image processing device and a method thereof are provided. In the method, an original image and a corresponding depth image are received, wherein the depth image includes a plurality of depth values, and the depth values indicate depth of field of a plurality of blocks in the original image respectively. Further, each of the blocks is processed to obtain a corresponding smoothness and/or sharpness effect according to each of the depth values. Thereby, a stereoscopic sensation of the original image can be enhanced.11-11-2010
20100284605Methodology to Optimize and Provide Streaming Object Rotation Using Composite Images - Optimizing and presenting various sequences of images and/or photographs for viewing with a Web browser is accomplished without the necessity of loading the entire image set, for example in connection with the 3D display of a product of interest. To represent an object that is rotating, a set of images must be taken. These images are taken at various angles, typically using either a fixed camera or a turntable. The illusion of an object being rotated is created when the captured images are displayed based on the angle being viewed. To ensure a seamless rotation of an object, a technique is taught that concentrates on reducing the loading time of the captured images by prioritizing which images should be transferred first according to their size and their number of object views or view angles. A seamless rotation is thus achieved while less than the total number of images is loaded. In fact, an embodiment of the invention teaches that, by selectively loading certain images with specific angular values, it is possible to achieve an object rotation using horizontally and vertically adjacent image positioning.11-11-2010
20100329542Method for Determining a Location From Images Acquired of an Environment with an Omni-Directional Camera - A location and orientation in an environment is determined by first acquiring a real omni-directional image of an unknown skyline in the environment. A set of virtual omni-directional images of known skylines are synthesized from a 3D model of the environment, wherein each virtual omni-directional image is associated with a known location and orientation. The real omni-directional image is compared with each virtual omni-directional image to determine a best matching virtual omni-directional image with the associated known location and orientation.12-30-2010
20110135190OBJECT POSITIONING WITH VISUAL FEEDBACK - A positioning system comprises a pattern projector …06-09-2011
20110262031CONCAVE SURFACE MODELING IN IMAGE-BASED VISUAL HULL - Apparatus and methods disclosed herein provide for a set of reference images obtained from a camera and a reference image obtained from a viewpoint to capture an entire concave region of an object; a silhouette processing module for obtaining a silhouette image of the concave region of the object; and a virtual-image synthesis module connected to the silhouette processing module for synthesizing a virtual inside-out image of the concave region from the computed silhouette images and for generating a visual hull of the object having the concave region.10-27-2011
20120148147STEREOSCOPIC IMAGE DISPLAY SYSTEM, DISPARITY CONVERSION DEVICE, DISPARITY CONVERSION METHOD AND PROGRAM - Disparity in a stereoscopic image is converted according to features of a configuration element of an image that influences depth perception of a stereoscopic image. A disparity detecting unit …06-14-2012
20110255775METHODS, SYSTEMS, AND COMPUTER-READABLE STORAGE MEDIA FOR GENERATING THREE-DIMENSIONAL (3D) IMAGES OF A SCENE - Disclosed herein are methods, systems, and computer-readable storage media for generating three-dimensional (3D) images of a scene. According to an aspect, a method includes capturing a real-time image and a first still image of a scene. Further, the method includes displaying the real-time image of the scene on a display. The method also includes determining one or more properties of the captured images. The method also includes calculating an offset in a real-time display of the scene to indicate a target camera positional offset with respect to the first still image. Further, the method includes determining that a capture device is in a position of the target camera positional offset. The method also includes capturing a second still image. Further, the method includes correcting the captured first and second still images. The method also includes generating the three-dimensional image based on the corrected first and second still images.10-20-2011
20110096982METHOD AND APPARATUS FOR GENERATING PROJECTING PATTERN - A pattern generating apparatus includes a sequence generating unit and an image data generating unit. The sequence generating unit generates a sequence formed by terms having M-value numeric values. The image data generating unit generates the image data by converting each numeric value of the sequence into a gray-level value according to each numeric value, and the sequence is generated by the sequence generating unit. The sequence generating unit generates the sequence such that vectors expressed by sub-sequences have different directions for the sub-sequence constituting the generated sequence.04-28-2011
20100215248Method for Determining Dense Disparity Fields in Stereo Vision - In a stereo vision system comprising two cameras shooting the same scene from different positions, a method is performed for determining dense disparity fields between digital images shot by the two cameras, including the steps of capturing a first and a second image of the scene, and determining, for each pixel of the second image, the displacement from a point in the first image to such pixel of the second image minimising an optical flow objective function, wherein the optical flow objective function includes, for each pixel of the second image, a term depending in a monotonously increasing way on the distance between the epipolar line associated with such pixel and the above point in the first image, such term depending on calibration parameters of the two cameras and being weighed depending on the uncertainty of the calibration data.08-26-2010
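The dense disparity entry above adds, to the optical flow objective, a term that increases monotonically with the distance between a candidate point and the epipolar line associated with the pixel. A minimal sketch of such a per-candidate cost is shown below; the quadratic penalty, the weight `lam`, and the function name are assumptions, not the patent's exact objective.

```python
import math

def matching_cost(intensity_diff, point, epipolar_line, lam=1.0):
    """Cost for a candidate correspondence (a sketch).

    intensity_diff: photometric data term (e.g. squared difference).
    point: (x, y) candidate location in the first image.
    epipolar_line: (a, b, c) with a*x + b*y + c = 0, derived from the
    calibration parameters of the two cameras.
    lam: weight, which the patent ties to calibration uncertainty.
    """
    a, b, c = epipolar_line
    x, y = point
    dist = abs(a * x + b * y + c) / math.hypot(a, b)  # point-to-line distance
    return intensity_diff + lam * dist ** 2           # monotonically increasing in dist
```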
20100215250SYSTEM AND METHOD OF INDICATING TRANSITION BETWEEN STREET LEVEL IMAGES - A system and method of displaying transitions between street level images is provided. In one aspect, the system and method creates a plurality of polygons that are both textured with images from a 2D street level image and associated with 3D positions, where the 3D positions correspond with the 3D positions of the objects contained in the image. These polygons, in turn, are rendered from different perspectives to convey the appearance of moving among the objects contained in the original image.08-26-2010
20110176721Method and apparatus for composition coating for enhancing white light scanning of an object - The invention is directed to a method and apparatus for pretreating an object to be white-light scanned to enable accurate and consistent scanning. In those instances where the object has a reflective or refractive surface, or is made from a material having translucent or transparent properties, the object must be pretreated to ensure accurate data collection during the scanning process. The object is coated with a composition forming a thin and uniform film of non-destructive material to enhance the surface contrast characteristics for the monochromatic fringe pattern employed in the white light scanning process.07-21-2011
20110052045IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, AND COMPUTER READABLE MEDIUM - A system is provided to compress, at a high compression ratio, an image of a subject captured in a plurality of directions. The image processing apparatus includes: a model storage section that stores a reference model that is a three-dimensional model representing an object; a model generating section that generates, based on a plurality of captured images of an object, an object model that is a three-dimensional model that matches the object captured in the plurality of captured images; and an output section that outputs a position and a direction of the object captured in each of the plurality of captured images, in association with difference information between the reference model and the object model.03-03-2011
20110052044METHOD AND APPARATUS FOR CROSS-SECTION PROCESSING AND OBSERVATION - A cross-section processing and observation method includes: forming a cross section in a sample by a focused ion beam through etching processing; obtaining a cross-section observation image through cross-section observation by the focused ion beam; and forming a new cross section by performing etching processing in a region including the cross section and obtaining a cross-section observation image of the new cross section. A surface observation image of a region including a mark on the sample and the cross section is obtained. A position of the mark is recognized in the surface observation image and etching processing is performed on the cross section by setting, in reference to the position of the mark, a focused ion beam irradiation region in which to form the new cross section. Cross-section processing and observation is thus enabled continuously and efficiently using a focused ion beam apparatus having no SEM apparatus.03-03-2011
20110052043METHOD OF MOBILE PLATFORM DETECTING AND TRACKING DYNAMIC OBJECTS AND COMPUTER-READABLE MEDIUM THEREOF - Disclosed herein is a computer-readable medium and method of a mobile platform detecting and tracking dynamic objects in an environment having the dynamic objects. The mobile platform acquires a three-dimensional (3D) image using a time-of-flight (TOF) sensor, removes a floor plane from the acquired 3D image using a random sample consensus (RANSAC) algorithm, and individually separates objects from the 3D image. Movement of the respective separated objects is estimated using a joint probability data association filter (JPDAF).03-03-2011
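The mobile-platform entry above removes the floor plane from a TOF-acquired 3D image using a RANSAC algorithm before separating and tracking objects. A minimal sketch of that floor-removal step on an N x 3 point cloud follows; the iteration count, inlier threshold, and function name are illustrative assumptions.

```python
import numpy as np

def remove_floor_plane(points, n_iter=200, threshold=0.02, rng=None):
    """Fit a dominant plane with RANSAC and drop its inliers (a sketch
    of the floor-removal step; parameters are illustrative).

    points: (N, 3) array, e.g. from a TOF sensor.
    Returns the points that are NOT on the detected plane.
    """
    rng = np.random.default_rng() if rng is None else rng
    best_inliers = np.zeros(len(points), dtype=bool)
    for _ in range(n_iter):
        sample = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(normal)
        if norm < 1e-9:
            continue  # degenerate (collinear) sample
        normal /= norm
        d = -normal @ sample[0]
        dist = np.abs(points @ normal + d)
        inliers = dist < threshold
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return points[~best_inliers]
```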
20110052042PROJECTING LOCATION BASED ELEMENTS OVER A HEADS UP DISPLAY - Methods and systems for projecting location-based elements over a heads-up display. One method includes: generating a three-dimensional (3D) model of a scene, based on a source of digital mapping of the scene; associating a position of at least one selected LAE contained within the scene with a respective position in the 3D model; and superimposing, by projecting onto a specified position on a transparent screen facing a viewer and associated with the vehicle, at least one graphic indicator associated with the at least one LAE, wherein the specified position is calculated based on: the respective position of the LAE in the 3D model, the screen's geometrical and optical properties, the viewer's viewing angle, the viewer's distance from the screen, and the vehicle's position and angle within the scene, such that the viewer, the graphic indicator, and the LAE are substantially on a common line.03-03-2011
20110052041METHOD FOR DETERMINING THE ROTATIONAL AXIS AND THE CENTER OF ROTATION OF A VEHICLE WHEEL - The invention relates to a method for determining the rotational axis and the rotating center of a vehicle wheel by means of at least two image capture units assigned to each other in position and situation during the journey of the vehicle, and by means of an analysis unit arranged downstream of said units, processing the recorded image information, taking into account multiple wheel features …03-03-2011
20100215249AUTOMATED IMAGE SEPARATION METHOD - A method of decomposing a set of scans of different views of overlapping objects into constituent objects is presented. The method involves an initialization process whereby keypoints in two views are determined and matched, disparity between keypoint pairs are computed, and the keypoints are grouped into clusters based on their disparities. Following the initialization process is an iterative optimization process whereby a cost function is calculated and minimized assuming a fixed composition matrix and re-solved assuming a fixed attenuation coefficient. Then, the composition matrix and the attenuation coefficient are updated simultaneously, and the solving, the re-solving, and the updating steps are repeated until there is no significant improvement in the result.08-26-2010
20100166295METHOD AND SYSTEM FOR SEARCHING FOR GLOBAL MINIMUM - A method and a system for searching for a global minimum are provided. First, a subclass of a plurality of space points in a multidimensional space is clustered into a plurality of clusters through a clustering algorithm, wherein each of the space points is corresponding to an error value in an evaluation function. Then, ellipsoids for enclosing the clusters in the multidimensional space are respectively calculated. Next, a designated space corresponding to each of the ellipsoids is respectively inputted into a recursive search algorithm to search for a local minimum among the error values corresponding to the space points within each designated space. Finally, the local minimums of all the clusters are compared to obtain the space point corresponding to the minimum local minimum.07-01-2010
20100166293IMAGE FORMING METHOD AND OPTICAL COHERENCE TOMOGRAPH APPARATUS USING OPTICAL COHERENCE TOMOGRAPHY - An image forming method uses optical coherence tomography to obtain plural pieces of image information of an object along an optical axis direction. First image information of the object is obtained at a first focus with respect to the optical axis direction to the object. A focusing position is changed by dynamic focusing from the first focus to a second focus along the optical axis. Second image information of the object is obtained at the second focus. Third image information, which is tomography image information of the object including a tomography image at the first focus or the second focus, is obtained by Fourier-domain optical coherence tomography. A tomography image or a three-dimensional image of the object is formed in positional relation, in the optical axis direction, between the first image information and the second image information using the third image information.07-01-2010
20100195899DETECTION OF PEOPLE IN REAL WORLD VIDEOS AND IMAGES - Systems and methods for detecting people in video data streams or image data are provided. The method includes using a plurality of training images for learning spatial distributions associated with a plurality of body parts, detecting a plurality of detections of body parts in an input image, clustering the detections of body parts located within a predetermined distance from one another to create one effective detection for each cluster of detections, and determining a position of each person associated with each effective detection. The detections of body parts can be associated with respective previously learned spatial distributions.08-05-2010
20100195900APPARATUS AND METHOD FOR ENCODING AND DECODING MULTI-VIEW IMAGE - An apparatus and method for encoding and decoding a multi-view image including a stereoscopic image are provided. The apparatus for encoding a multi-view image includes a base layer encoding unit that encodes a base layer image to generate a base layer bit stream, a view-based conversion unit that performs view-based conversion of the base layer image to generate a view-converted base layer image, a subtractor obtaining a residual between an enhancement layer image and the view-converted base layer image, and an enhancement layer encoding unit that encodes the obtained residual to generate an enhancement layer bit stream.08-05-2010
20110262030Recovering 3D Structure Using Blur and Parallax - A system and method for generating a focused image of an object is provided. The method comprises obtaining a plurality of images of an object, estimating an initial depth profile of the object, estimating a parallax parameter and a blur parameter for each pixel in each of the plurality of images, and generating a focused image and a corrected depth profile of the object using a posterior energy function. The posterior energy function is based on the estimated parallax parameter and the blur parameter of each pixel in the plurality of images.10-27-2011
20110182498Image Processing Apparatus, Image Processing Method, and Program - An image processing apparatus includes a viewing situation analyzing unit configured to obtain information representing a user's viewing situation of 3D content stored in a certain storage unit, and, based on a preset saving reference in accordance with a viewing situation of 3D content, determine a data reduction level of content data of the 3D content stored in the storage unit; and a data conversion unit configured to perform data compression of the content data of the 3D content stored in the storage unit in accordance with the determined data reduction level.07-28-2011
20110188738FACE EXPRESSIONS IDENTIFICATION - In the last few years, face expression measurement has been receiving significant attention, mainly due to advancements in areas such as face detection, face tracking and face recognition. For face recognition systems, detecting the locations in two-dimensional (2D) images where faces are present is a first step to be performed before face expressions can be measured. However, face detection from a 2D image is a challenging task because of variability in imaging conditions, image orientation, pose, presence/absence of facial artefacts, facial expression and occlusion. Existing efforts to address the shortcomings of existing face recognition systems deal with technologies for creation of three-dimensional (3D) models of a human subject's face based on a digital photograph of the human subject. However, such technologies are computationally intensive in nature and susceptible to errors, and hence might not be suitable for deployment. An embodiment of the invention describes a method for identifying face expressions of image objects.08-04-2011
20110188741SYSTEM AND METHOD FOR DIMENSIONING OBJECTS USING STEREOSCOPIC IMAGING - A method and configuration to estimate the dimensions of a cuboid. The configuration includes two image acquisition units offset from each other with at least one of the units positioned at a defined acquisition height above a background surface. Image processing techniques are used to extract a perimeter of a top surface of the cuboid, placed on the background surface, from pairs of acquired images. A height estimation technique, which corrects for spatial drift of the configuration, is used to calculate an absolute height of the cuboid. The absolute height of the cuboid is used, along with the extracted perimeter of the top surface of the cuboid, to calculate an absolute length and an absolute width of the cuboid. The height, length, and width may be used to calculate an estimated volume of the cuboid.08-04-2011
20110188740DEVICE FOR IMPROVING STEREO MATCHING RESULTS, METHOD OF IMPROVING STEREO MATCHING RESULTS USING THE DEVICE, AND SYSTEM FOR RECEIVING STEREO MATCHING RESULTS - Provided is a device for improving stereo matching results. The device for improving stereo matching results includes: a stereo camera unit outputting binocular disparity images by using binocular disparity between two images preprocessed according to a plurality of preprocessing conditions; a discrete cosine transform (DCT) unit generating DCT coefficients by performing DCT on the binocular disparity images; a streak estimation unit receiving the DCT coefficients and estimating amounts of streaks distributed on a screen by using AC coefficients, including streak patterns, of the DCT coefficients; a condition estimation unit estimating a preprocessing condition, corresponding to the smallest amount of streaks of the estimated amounts of streaks, of the plurality of preprocessing conditions, as an optimal condition, and a streak removal unit generating binocular disparity images without the streaks by changing predetermined AC coefficients of the DCT coefficients and performing inverse DCT on the changed DCT coefficients.08-04-2011
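The streak-removal entry above changes selected AC coefficients of a DCT-transformed binocular disparity image and applies the inverse DCT. A minimal sketch of that coefficient manipulation follows; which coefficients carry the streak pattern is an assumption here (a fixed band), whereas the patent estimates the streak amounts to pick them.

```python
import numpy as np
from scipy.fft import dctn, idctn

def suppress_streaks(disparity, band=(1, 4)):
    """Zero a band of AC DCT coefficients and invert (a sketch).

    disparity: 2D disparity image. band: rows of vertical-frequency
    coefficients to zero (rows 1..3 here; row 0 holds the DC/low band).
    """
    coeffs = dctn(disparity.astype(float), norm="ortho")
    coeffs[band[0]:band[1], :] = 0.0  # assumed streak-carrying coefficients
    return idctn(coeffs, norm="ortho")
```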
20110188739Image processing apparatus and method - An image processing apparatus configures a single frame by determining a central image of a certain viewpoint at an original resolution, and configures another single frame by combining a left image of a left viewpoint and a right image of a right viewpoint. The image processing apparatus may generate three-dimensional (3D) image data configured using the frames, and may encode, decode, and render an image based on the 3D image data.08-04-2011
20110188737SYSTEM AND METHOD FOR OBJECT RECOGNITION BASED ON THREE-DIMENSIONAL ADAPTIVE FEATURE DETECTORS - Method and system for imaging an object in three dimensions, binning data of the imaged object into three-dimensional bins, determining a density value p of the data in each bin, and creating receptive fields of three-dimensional feature maps, including processing elements O, each processing element O of a same feature map having a same adjustable parameter, weight Wc …08-04-2011
20110188736Reduced-Complexity Disparity MAP Estimation - Image processing herein reduces the computational complexity required to estimate a disparity map of a scene from a plurality of monoscopic images. Image processing includes calculating a disparity and associated matching cost for at least one pixel block in a reference image, and then predicting, based on this disparity and associated matching cost, a disparity and associated matching cost for a pixel block that neighbors the at least one pixel block. Image processing continues with calculating a tentative disparity and associated matching cost for the neighboring pixel block, by searching for a corresponding pixel block in a different monoscopic image over a reduced range of candidate pixel blocks focused around the disparity predicted. Searching over a reduced range avoids significant computational complexity. Image processing concludes with determining the disparity for the neighboring pixel block based on comparing the matching costs associated with the tentative disparity and the disparity predicted.08-04-2011
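The reduced-complexity entry above searches for a neighboring block's disparity over a reduced range of candidates centered on a disparity predicted from an already-matched block. A minimal SAD-based sketch of that reduced search is given below; block size, search radius, and the absence of the fallback to a wider search are assumptions.

```python
import numpy as np

def disparity_reduced_search(ref, other, y, x, block=8,
                             predicted=0, radius=2, full_range=64):
    """SAD block matching over a search window centred on a predicted
    disparity (a sketch of the reduced-complexity idea; the patent also
    compares against the predicted cost to decide the final disparity).
    """
    target = ref[y:y + block, x:x + block].astype(float)

    def cost(d):
        xs = x - d
        if xs < 0 or xs + block > other.shape[1]:
            return np.inf
        cand = other[y:y + block, xs:xs + block].astype(float)
        return np.abs(target - cand).sum()

    # Reduced range around the neighbour's (predicted) disparity.
    candidates = range(max(0, predicted - radius),
                       min(full_range, predicted + radius) + 1)
    costs = {d: cost(d) for d in candidates}
    best = min(costs, key=costs.get)
    return best, costs[best]
```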
20100021052System and method for generating a terrain model for autonomous navigation in vegetation - The disclosed terrain model is a generative, probabilistic approach to modeling terrain that exploits the 3D spatial structure inherent in outdoor domains and an array of noisy but abundant sensor data to simultaneously estimate ground height, vegetation height and classify obstacles and other areas of interest, even in dense non-penetrable vegetation. Joint inference of ground height, class height and class identity over the whole model results in more accurate estimation of each quantity. Vertical spatial constraints are imposed on voxels within a column via a hidden semi-Markov model. Horizontal spatial constraints are enforced on neighboring columns of voxels via two interacting Markov random fields and a latent variable. Because of the rules governing abstracts, this abstract should not be used to construe the claims.01-28-2010
20100215252HAND HELD PORTABLE THREE DIMENSIONAL SCANNER - Embodiments of the invention may include a scanning device to scan three dimensional objects. The scanning device may generate a three dimensional model. The scanning device may also generate a texture map for the three dimensional model. Techniques utilized to generate the model or texture map may include tracking scanner position, generating depth maps of the object and generating a composite image of the surface of the object.08-26-2010
20100215251METHOD AND DEVICE FOR PROCESSING A DEPTH-MAP - The present invention relates to a method and device for processing a depth-map …08-26-2010
20120308119IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, AND PROGRAM - A parallax detection unit generates a parallax map indicating a parallax of each pixel of an image formed by right and left images and generates a reliability map indicating reliability of the parallax. A depth information estimation unit generates a depth information map indicating the depth of a subject on the image based on the right and left images. A depth parallax conversion unit converts the depth information map into a pseudo-parallax map using a conversion equation used to convert depth information to parallax information. A parallax synthesis unit synthesizes the parallax map and the pseudo-parallax map to generate a corrected parallax map based on the reliability map. The present technology is applicable to an image processing apparatus.12-06-2012
20120308120METHOD FOR ESTIMATING DEFECTS IN AN OBJECT AND DEVICE FOR IMPLEMENTING SAME - The invention relates to a device and method for estimating defects potentially present in an object comprising an outer surface, wherein the method comprises the steps of: a) illuminating the outer surface of the object with an inductive wave field at a predetermined frequency; b) measuring an induced wave field H at the outer surface of the object; c) developing, from the properties of the object's material, a coupling matrix T associated with a depth Z of the object from the outer surface; d) solving the matrix system …12-06-2012
20120308116HEAD ROTATION TRACKING FROM DEPTH-BASED CENTER OF MASS - The rotation of a user's head may be determined as a function of depth values from a depth image. In accordance with some embodiments, an area of pixels from a depth image containing a user's head is identified as a head region. The depth values for pixels in the head region are used to calculate a center of depth-mass for the user's head. The rotation of the user's head may be determined based on the center of depth-mass for the user's head.12-06-2012
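The head-tracking entry above derives rotation from a center of depth-mass computed over the head region of a depth image. A minimal sketch of one way to turn that idea into a rotation cue is shown below: the depth-weighted centroid is compared with the geometric centroid of the head region. The inverse-depth weighting, the normalization, and the function name are assumptions.

```python
import numpy as np

def head_rotation_cue(head_depth):
    """Return (yaw_cue, pitch_cue) from a head-region depth patch
    (a sketch: the depth-mass centroid shifts toward the side of the
    face that is closer to the sensor when the head turns).

    head_depth: 2D array of depth values, with 0 marking non-head pixels.
    """
    mask = head_depth > 0
    ys, xs = np.nonzero(mask)
    if len(xs) == 0:
        return 0.0, 0.0
    # Weight each pixel by inverse depth so nearer pixels count more.
    w = 1.0 / head_depth[mask]
    cx_geo, cy_geo = xs.mean(), ys.mean()
    cx_mass = np.average(xs, weights=w)
    cy_mass = np.average(ys, weights=w)
    # Normalised offsets act as rotation cues (left/right, up/down).
    height, width = mask.shape
    return (cx_mass - cx_geo) / width, (cy_mass - cy_geo) / height
```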
20120308114VOTING STRATEGY FOR VISUAL EGO-MOTION FROM STEREO - Methods and systems for egomotion estimation (e.g. of a vehicle) from visual inputs of a stereo pair of video cameras are described. 3D egomotion estimation is a six degrees of freedom problem in general. In embodiments of the present invention, this is simplified to four dimensions and further decomposed to two two-dimensional sub-solutions. The decomposition allows use of a voting strategy that identifies the most probable solution. An input is a set of image correspondences between two temporally consecutive stereo pairs, i.e. feature points do not need to be tracked over time. The experiments show that even if a trajectory is put together as a simple concatenation of frame-to-frame increments, the results are reliable and precise.12-06-2012
20120308115Method for Adjusting 3-D Images by Using Human Visual Model - The present disclosure provides a method for adjusting 3-D images converted from 2-D images by using a human visual model. Steps of the method include inputting a 2-D image, dividing the 2-D image into a plurality of blocks, forming a matrix of blocks, obtaining a depth value of each of the plurality of blocks, adjusting the depth value of each of the plurality of blocks according to a position of each of the plurality of blocks, obtaining adjusted depth information of the 2-D image, wherein the adjusted depth information comprises an adjusted depth value of each of the plurality of blocks of the 2-D image, and using depth image based rendering (DIBR) to generate a set of 3-D images according to the adjusted depth information and the 2-D image.12-06-2012
20120308117STORAGE MEDIUM, IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, AND IMAGE PROCESSING SYSTEM - An example of a game apparatus as an image processing apparatus includes a CPU, and the CPU controls a movement, etc. of a player object according to an instruction from a player. In a case that a predetermined condition is satisfied, a two-dimensional surface is displayed within a virtual three-dimensional space. When the player moves a first controller, a two-dimensional image is depicted on the two-dimensional surface in response thereto. Then, it is determined whether or not the depicted two-dimensional image is a predetermined image. If it is determined that the two-dimensional image is the predetermined image, a three-dimensional object corresponding to the predetermined image appears, and the two-dimensional surface and the two-dimensional image depicted thereon are erased.12-06-2012
20120308118APPARATUS AND METHOD FOR 3D IMAGE CONVERSION AND A STORAGE MEDIUM THEREOF - An apparatus and method for converting a two-dimensional (2D) input image into a three-dimensional (3D) image, and a storage medium thereof are provided, the method being implemented by the 3D-image conversion apparatus including receiving an input image including a plurality of frames; selecting a first frame corresponding to a preset condition among the plurality of frames; extracting a first object from the selected first frame; inputting selection for one depth information setting mode among a plurality of depth information setting modes with regard to the first object; generating first depth information corresponding to the selected setting mode with regard to the first object; and rendering the input image based on the generated first depth information.12-06-2012
20110002532Data Reconstruction Using Directional Interpolation Techniques - Approaches to three-dimensional (3D) data reconstruction are presented. The 3D data comprises 2D images. In some embodiments, the 2D images are directionally interpolated to generate directionally-interpolated 3D data. The directionally-interpolated 3D data are then segmented to generate segmented directionally-interpolated 3D data. The segmented directionally-interpolated 3D data is then meshed. In other embodiments, a 3D data set, which includes 2D flow images, is accessed. The accessed 2D flow images are then directionally interpolated to generate 2D intermediate flow images.01-06-2011
20080285843Camera-Projector Duality: Multi-Projector 3D Reconstruction - A system and method are disclosed for calibrating a plurality of projectors for three-dimensional scene reconstruction. The system includes a plurality of projectors and at least one camera, a camera-projector calibration module and a projector-projector calibration module. The camera-projector calibration module is configured to calibrate a first projector with the camera and generate a first camera-projector calibration data using camera-projector duality. The camera-projector calibration module is also configured to calibrate a second projector with the camera and generate a second camera-projector calibration data. The projector-projector calibration module is configured to calibrate the first and the second projector using the first and the second camera-projector calibration data.11-20-2008
20100284608FEATURE-BASED SEGMENTATION METHOD, FOR SEGMENTING A PLURALITY OF LOOSELY-ARRANGED DUPLICATE ARTICLES AND A GROUP FOR ACTUATING THE METHOD FOR SUPPLYING A PACKAGING MACHINE - The invention relates to a feature-based segmentation method for segmenting a plurality of duplicate articles …11-11-2010
20110116706METHOD, COMPUTER-READABLE MEDIUM AND APPARATUS ESTIMATING DISPARITY OF THREE VIEW IMAGES - Provided are a method, computer-readable medium and apparatus that may estimate a disparity of three view images. A global matching may be performed to calculate a global path by performing dynamic programming on the three view images, and a local matching for supplementing an occlusion region of the calculated global path may be performed, and thereby a disparity estimation of the three view images may be performed.05-19-2011
20120121166METHOD AND APPARATUS FOR THREE DIMENSIONAL PARALLEL OBJECT SEGMENTATION - A method and apparatus for parallel object segmentation. The method includes retrieving at least a portion of a 3-dimensional point cloud data x, y, z of a frame, dividing the frame into sub-image frames if the sub-frame based object segmentation is enabled, …05-17-2012
20120121165Method and apparatus for time of flight sensor 2-dimensional and 3-dimensional map generation - A method and apparatus for Time Of Flight sensor 2-dimensional and 3-dimensional map generation. The method includes retrieving Time Of Flight sensor fixed point data to obtain four phases of Time Of Flight fixed point raw data, computing Gray scale image array and phase differential signal arrays utilizing four phases of TOF fixed point raw data, computing Gray image array and Amplitude image array for fixed point, converting the phase differential signal array from fixed point to floating point, performing the floating point division for computing Arctan, TOF depthmap, and 3-dimensional point cloud map for Q format fixed point, and generating depthmap, 3-dimensional cloud coefficients and 3-dimensional point cloud for Q format fixed point.05-17-2012
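The TOF entry above converts four phase samples into grayscale, amplitude, and depth maps via an arctangent of phase differentials. Below is a minimal sketch using one common continuous-wave TOF convention in floating point (the patent performs the corresponding steps in Q-format fixed point); the sample ordering, modulation frequency, and scaling are assumptions.

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def tof_four_phase(a0, a1, a2, a3, mod_freq=20e6):
    """Convert four phase samples to grayscale, amplitude and depth
    maps (a sketch of a common four-phase TOF convention)."""
    i = a3.astype(float) - a1.astype(float)     # one phase differential
    q = a0.astype(float) - a2.astype(float)     # the other phase differential
    phase = np.mod(np.arctan2(i, q), 2 * np.pi)
    amplitude = 0.5 * np.hypot(i, q)
    gray = 0.25 * (a0 + a1 + a2 + a3)
    depth = C * phase / (4 * np.pi * mod_freq)  # unambiguous range = C / (2 * f)
    return gray, amplitude, depth
```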
201201211633D DISPLAY APPARATUS AND METHOD FOR EXTRACTING DEPTH OF 3D IMAGE THEREOF - A three-dimensional (3D) display apparatus and a method for extracting a depth of a 3D image of the 3D display apparatus are provided. The 3D display apparatus includes: an image input unit which receives an image; a 3D image generator which generates a 3D image of which a depth is adjusted according to a relative motion between global and local motions of the image; and an image output unit which outputs the 3D image.05-17-2012
20120121164IMAGE PROCESSING APPARATUS AND CONTROL METHOD THEREFOR - An image processing apparatus according to the present invention, comprises a calculation unit that calculates a sharpness of a 2D image for each region thereof; an image processing unit that performs image processing, in a region with a sharpness calculated by the calculation unit being higher than a first predetermined value, to increase that sharpness, and performing image processing, in a region with a sharpness calculated by the calculation unit being lower than a second predetermined value which is equal to or lower than the first predetermined value, to reduce that sharpness; and a generation unit that generates, from the 2D image processed by the image processing unit, an image for a left eye and an image for a right eye by shifting the 2D image in a horizontal direction.05-17-2012
20120121162Filtering apparatus and method for high precision restoration of depth image - A high speed filtering apparatus and a method for high precision restoration of a depth image are provided. The high speed filtering apparatus for high precision restoration of the depth image may include a block setting unit to set a first block including a target pixel, and to set a second block with respect to a central pixel distributed around the target pixel based on a size of the first block, a weight determining unit to determine a pixel weight with respect to each pixel in the second block, and to determine a block weight with respect to the second block by applying the pixel weight, and a processor to filter the target pixel based on the block weight, thereby accurately filtering the target pixel.05-17-2012
20110305383APPARATUS AND METHOD PROCESSING THREE-DIMENSIONAL IMAGES - Provided is a 3D image processing apparatus and method. The 3D image processing apparatus may determine, with a small amount of calculation, a quantization parameter to be used for compressing a depth image, based on a quantization parameter used for compressing a color image and characteristics of the color image and the depth image.12-15-2011
20090148037Color-coded target, color code extracting device, and three-dimensional measuring system - To provide a color-coded target having a color code of colors chosen not to cause code reading errors, and a technique for automatically detecting and processing the targets. The color-coded target CT …06-11-2009
20120039526Volume-Based Coverage Analysis for Sensor Placement in 3D Environments - Coverage of sensors in a CCTV system in a three-dimensional environment is analyzed by partitioning a 3D model of the environment into a set of voxels. A ray is cast from each pixel in each sensor through the 3D model to determine coverage data for each voxel. The coverage data are analyzed to determine a result indicative of an effective arrangement of the set of sensors.02-16-2012
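The coverage-analysis entry above casts a ray from each sensor pixel through a voxelized 3D model and accumulates per-voxel coverage data. A minimal sketch using fixed-step ray sampling is shown below; exact voxel traversal and occlusion testing against the model are omitted, and the grid parameters and function name are assumptions.

```python
import numpy as np

def voxel_coverage(rays, grid_shape, voxel_size, origin,
                   max_range=30.0, step=0.05):
    """Count how many sensor rays reach each voxel (a sketch).

    rays: iterable of (origin_xyz, unit_direction_xyz), one per pixel.
    grid_shape: (nx, ny, nz); voxel_size: edge length in metres;
    origin: world coordinates of the grid corner.
    Returns an integer array of grid_shape with per-voxel hit counts.
    """
    coverage = np.zeros(grid_shape, dtype=int)
    ts = np.arange(0.0, max_range, step)
    for ray_origin, direction in rays:
        pts = np.asarray(ray_origin) + ts[:, None] * np.asarray(direction)
        idx = np.floor((pts - np.asarray(origin)) / voxel_size).astype(int)
        inside = np.all((idx >= 0) & (idx < np.array(grid_shape)), axis=1)
        # A voxel is counted once per ray, however many samples fall in it.
        for vox in {tuple(v) for v in idx[inside]}:
            coverage[vox] += 1
    return coverage
```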
20120039525APPARATUS AND METHOD FOR PROVIDING THREE DIMENSIONAL MEDIA CONTENT - A system that incorporates teachings of the exemplary embodiments may include, for example, means for generating a disparity map based on a depth map, means for determining accuracy of pixels in the depth map where the determining means identifies the pixels as either accurate or inaccurate based on a confidence map and the disparity map, and means for providing an adjusted depth map where the providing means adjusts inaccurate pixels of the depth map using a cost function associated with the inaccurate pixels. Other embodiments are disclosed.02-16-2012
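The 3D media entry above generates a disparity map based on a depth map. A minimal sketch of the standard pinhole depth-to-disparity relation that such a step typically relies on is given below; the focal length and baseline parameters are assumptions, and the patent's subsequent confidence-based refinement is not shown.

```python
import numpy as np

def depth_to_disparity(depth, focal_px, baseline_m, eps=1e-6):
    """Standard pinhole relation d = f * B / Z (a sketch of the
    conversion step only).

    depth: depth map in metres; focal_px: focal length in pixels;
    baseline_m: camera baseline in metres.
    """
    return focal_px * baseline_m / np.maximum(depth, eps)
```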
20120148146SYSTEM FOR MAKING 3D CONTENTS PROVIDED WITH VISUAL FATIGUE MINIMIZATION AND METHOD OF THE SAME - Disclosed are a system for making 3D contents provided with visual fatigue minimization and a method of the same. More particularly, an exemplary embodiment of the present invention provides a system for making 3D contents including: a human factor information unit generating guide information for making 3D contents by considering factors causing visual fatigue of the 3D contents; and a 3D contents making unit applying guide information generated by the human factor information unit to 3D contents data inputted for making the 3D contents to make the 3D contents, and a method of making 3D contents.06-14-2012
20120148145SYSTEM AND METHOD FOR FINDING CORRESPONDENCE BETWEEN CAMERAS IN A THREE-DIMENSIONAL VISION SYSTEM - This invention provides a system and method for determining correspondence between camera assemblies in a 3D vision system implementation having a plurality of cameras arranged at different orientations with respect to a scene, so as to acquire contemporaneous images of a runtime object and determine the pose of the object, and in which at least one of the camera assemblies includes a non-perspective lens. The searched 2D object features of the acquired non-perspective image, corresponding to trained object features in the non-perspective camera assembly, can be combined with the searched 2D object features in images of other camera assemblies (perspective or non-perspective), based on their trained object features to generate a set of 3D image features and thereby determine a 3D pose of the object. In this manner the speed and accuracy of the overall pose determination process is improved. The non-perspective lens can be a telecentric lens.06-14-2012
20120099782IMAGE PROCESSING APPARATUS AND METHOD - Provided is an image processing apparatus for extracting a three-dimensional (3D) feature point from a depth image. An input processing unit may receive a depth image and may receive, via a user interface, selection information of at least one region that is selected as a target region in the depth image. A geometry information analyzer of the image processing apparatus may analyze geometry information of the target region within the input depth image, and a feature point extractor may extract at least one feature point from the target region based on the geometry information of the target region.04-26-2012
20110091096Real-Time Stereo Image Matching System - A real-time stereo image matching system for stereo image matching of a pair of images captured by a pair of cameras.04-21-2011
201000983273D Imaging system - The present invention provides a system (method and apparatus) for creating photorealistic 3D models of environments and/or objects from a plurality of stereo images obtained from a mobile stereo camera and optional monocular cameras. The cameras may be handheld, mounted on a mobile platform, manipulator or a positioning device. The system automatically detects and tracks features in image sequences and self-references the stereo camera in 6 degrees of freedom by matching the features to a database to track the camera motion, while building the database simultaneously. A motion estimate may be also provided from external sensors and fused with the motion computed from the images. Individual stereo pairs are processed to compute dense 3D data representing the scene and are transformed, using the estimated camera motion, into a common reference and fused together. The resulting 3D data is represented as point clouds, surfaces, or volumes. The present invention also provides a system (method and apparatus) for enhancing 3D models of environments or objects by registering information from additional sensors to improve model fidelity or to augment it with supplementary information by using a light pattern projector. The present invention also provides a system (method and apparatus) for generating photo-realistic 3D models of underground environments such as tunnels, mines, voids and caves, including automatic registration of the 3D models with pre-existing underground maps.04-22-2010
20120155750METHOD AND APPARATUS FOR RECEIVING MULTIVIEW CAMERA PARAMETERS FOR STEREOSCOPIC IMAGE, AND METHOD AND APPARATUS FOR TRANSMITTING MULTIVIEW CAMERA PARAMETERS FOR STEREOSCOPIC IMAGE - Provided is a method of receiving multiview camera parameters for a stereoscopic image. The method includes: extracting multiview camera parameter information for a predetermined data section from a received stereoscopic image data stream; extracting matrix information including at least one of translation matrix information and rotation matrix information for the predetermined data section from the multiview camera parameter information; and restoring coordinate systems of multiview cameras by using the extracted matrix information.06-21-2012
20110064299IMAGE PROCESSING APPARATUS AND IMAGE PROCESSING METHOD - An image processing method, includes: detecting a correspondence of each pixel between images acquired by imaging a subject from a plurality of viewpoints; calculating depth information of a non-occlusion pixel and creating a depth map including the depth information; regarding a region consisting of occlusion pixels as an occlusion region and determining an image reference region including the occlusion region and a peripheral region; dividing the image reference region into clusters on the basis of an amount of feature in the image reference region; calculating the depth information of the occlusion pixel in each cluster on the basis of the depth information in at least one cluster from the focused cluster, and clusters selected on the basis of the amount of feature of the focused cluster in the depth map; and adding the depth information of the occlusion pixel to the depth map.03-17-2011
20110064298APPARATUS FOR EVALUATING IMAGES FROM A MULTI CAMERA SYSTEM, MULTI CAMERA SYSTEM AND PROCESS FOR EVALUATING - An apparatus for evaluating images from a multi camera system is proposed, the multi camera system comprising a main camera for generating a main image and at least two satellite cameras for generating at least a first and a second satellite image. The cameras can be orientated to a common observation area. The apparatus is operable to estimate a combined positional data of a point in the 3D-space of the observation area corresponding to a pixel or group of pixels of interest of the main image. The apparatus comprises first disparity means for estimating at least a first disparity data concerning the pixel or group of pixels of interest derived from the main image and the first satellite image, second disparity means for estimating at least a second disparity data concerning the pixel or group of pixels of interest derived from the main image and the second satellite image, and positional data means for estimating the combined positional data of the point in the 3D-space of the observation area corresponding to the pixel or group of pixels of interest. The positional data means is operable to estimate first positional data on the basis of the first disparity data and second positional data on the basis of the second disparity data, and to combine the first positional data and the second positional data into the combined positional data.03-17-2011
20110064300INFORMATION PROCESSING DEVICE, INFORMATION PROCESSING METHOD, AND PROGRAM - The present invention relates to an information processing device, an information processing method, and a program capable of drawing, in 3D image display, an image for the left eye and an image for the right eye of graphics in a matched state.03-17-2011
20110158507METHOD FOR VISION FIELD COMPUTING - A method for vision field computing may comprise the following steps of: forming a sampling system for a multi-view dynamic scene; controlling cameras in the sampling system for the multi-view dynamic scene to perform spatial interleaved sampling, temporal interleaved exposure sampling and exposure-variant sampling; performing spatial intersection to the sampling information in the view subspace of the dynamic scene and temporal intersection to the sampling information in the time subspace of the dynamic scene to reconstruct a dynamic scene geometry model; performing silhouette back projection based on the dynamic scene geometry model to obtain silhouette motion constraints for the view angles of the cameras; performing temporal decoupling for motion de-blurring with the silhouette motion constraints; and reconstructing a dynamic scene 3D model with a resolution larger than nominal resolution of each camera by a 3D reconstructing algorithm.06-30-2011
20110317910IMAGE ANALYSIS METHOD AND IMAGE ANALYSIS APPARATUS - An image analysis method includes acquiring images of spatially different analysis regions. Each of the images of the analysis regions is constituted by pixels including a plurality of data acquired simultaneously or time-serially. The method further includes obtaining a cross-correlation between two analysis regions by using data of pixels of images of the analysis regions.12-29-2011
20120002863Depth image encoding apparatus and depth image decoding apparatus using loop-filter, method and medium - A depth image encoding apparatus and a depth image decoding apparatus are provided. The depth image encoding apparatus may compute coefficients used to restore an edge region and a smooth region of a depth image, and may restore the depth image using the depth image and a color image.01-05-2012
20120002867FEATURE POINT GENERATION SYSTEM, FEATURE POINT GENERATION METHOD, AND FEATURE POINT GENERATION PROGRAM - A feature point generation system capable of generating a feature point that satisfies a preferred condition from a three-dimensional shape model is provided. Image group generation means 31 generates a plurality of images obtained by varying conditions with respect to the three-dimensional shape model. Evaluation means 33 calculates a first evaluation value that decreases steadily as a feature point group is distributed more uniformly on the three-dimensional shape model and a second evaluation value that decreases steadily as extraction of a feature point in an image corresponding to a feature point on the three-dimensional shape model becomes easier, and calculates an evaluation value relating to a designated feature point group as a weighted sum of the respective evaluation values. Feature point arrangement means 32 arranges the feature point group on the three-dimensional shape model so that the evaluation value calculated by the evaluation means 33 is minimized.01-05-2012
20120002864IMAGE PROCESSING UNIT, IMAGE PROCESSING METHOD, AND COMPUTER PROGRAM - An image processing unit includes a statistical information calculating section which calculates statistical information in macroblock units with regard to image data with a plurality of fields, a region determination section which executes region determination with regard to the image data with the level of recognition of three-dimensional images as a determination standard using the statistical information calculated by the statistical information calculating section, and an encoding processing section which encodes the image data of each field and generates an encoded stream while changing the content of the encoding process for each of the macroblocks according to the result of the region determination executed by the region determination section.01-05-2012
20120002862APPARATUS AND METHOD FOR GENERATING DEPTH SIGNAL - According to one embodiment, a depth signal generating apparatus includes following units. The calculating unit is configured to calculate a statistic value for pixel values for each of predefined areas in the first image, and calculate, for each of predetermined base depth models, a first evaluation value based on the calculated statistic value. The correcting unit is configured to correct, based on a second evaluation value previously derived for the second image and a first degree of similarity indicating a similarity between the predetermined base depth models, the first evaluation value to derive second evaluation values for the predetermined base depth models. The selecting unit is configured to select a base depth model having the highest second evaluation value from the predetermined base depth models. The generating unit is configured to generate a depth signal based on the selected base depth model.01-05-2012
20120002865METHOD FOR PERFORMING AUTOMATIC CLASSIFICATION OF IMAGE INFORMATION - The method is characterised in that the method comprises the steps that a computer or several interconnected computers are caused to a) store, in the form of a pixel set in which set each pixel is associated with image information in at least one channel for light intensity, a first image to be classified onto a digital storage medium; b) carry out a first classification of the image, which classification is caused to be based upon the image information of each respective pixel and which classification is caused to associate each pixel with a certain class in a first set of classes, and to store these associations in a first database; c) calculate, for each pixel and for several classes in the first set of classes, the smallest distance in the image between the pixel in question and the closest pixel which is associated with the class in question in the database, and to store an association between each pixel and the calculated smallest distance for the pixel in a second database for each class for which a distance has been calculated; d) carry out a second classification of the data in the second database, which classification is caused to be based upon the smallest distance for each pixel to each respective class, and to associate each pixel to a certain class in a second set of classes; and e) store the classified image in the form of a set of pixels onto a digital storage medium, where each pixel comprises data regarding the association of the pixel to the certain class in the second set of classes, and where the classified image has the same dimensions as the first image.01-05-2012
20120045116METHOD FOR 3D DIGITALIZATION OF AN OBJECT WITH VARIABLE SURFACE - In a method for the 3D digitalization of an object with a variable surface, a plurality of camera pictures of partial surfaces of the object are recorded.02-23-2012
20090148036Image processing apparatus, image processing method, image processing program and position detecting apparatus as well as mobile object having the same - There is provided an image processing apparatus capable of reducing a memory amount to be used and a processing time in processing images captured stereoscopically in wide-angle. In order to find pixel positions of an object as information for use in detecting position of the object from images captured by two cameras that are capable of imaging the object in wide-angle and are disposed on a straight line, the image processing apparatus includes an image input means for inputting the images captured by the two cameras, an image projecting means for projecting the images inputted from the respective cameras on a cylindrical plane having an axial line disposed in parallel with the straight line on which the respective cameras are disposed while correcting distortions and a pixel position detecting means for detecting the pixel positions corresponding to the object in the image projected on the cylindrical plane.06-11-2009
20120207383METHOD AND APPARATUS FOR PERFORMING SEGMENTATION OF AN IMAGE - A method and system for segmenting a plurality of images. The method comprises the steps of segmenting the image through a novel clustering technique, that is, generating a composite depth map including temporally stable segments of the image as well as segments in subsequent images that have changed. These changes may be determined by determining one or more differences between the temporally stable depth map and segments included in one or more subsequent frames. Thereafter, the portions of the one or more subsequent frames that include segments containing changes from their corresponding segments in the temporally stable depth map are processed and combined with the segments from the temporally stable depth map to compute their associated disparities in one or more subsequent frames. The images may include a pair of stereo images acquired through a stereo camera system at a substantially similar time.08-16-2012
20120207384Representing Object Shapes Using Radial Basis Function Support Vector Machine Classification - A shape of an object is represented by a set of points inside and outside the shape. A decision function is learned from the set of points of the object. Feature points in the set of points are selected using the decision function, or a gradient of the decision function, and then a local descriptor is determined for each feature point.08-16-2012
20090016598METHOD FOR COMPUTER-AIDED IDENTIFICATION OF THE CHILD OCTANTS OF A PARENT OCTANT, WHICH ARE INTERSECTED BY A BEAM, IN AN OCTREE DATA STRUCTURE BY MEANS OF LOOK-UP TABLES - The present invention relates to a method for computer-aided identification of the child octants of a parent octant, which are intersected by a beam, in an octree data structure. The method firstly determines the number of the child octants of the parent octant which are intersected by the beam and, on the basis thereof, the child octants of the parent octant which are intersected by the beam. It is characterised in that, for determination of intermediate octants which do not correspond to the entry and the exit octant and nevertheless are intersected by the beam, look-up tables are used for identification.01-15-2009
20120057777IMAGE PROCESSING APPARATUS AND IMAGE PROCESSING METHOD - An image processing apparatus includes a two-dimensional orthogonal transform unit configured to perform two-dimensional orthogonal transform on a plurality of images, a one-dimensional orthogonal transform unit configured to perform one-dimensional orthogonal transform in a direction in which the images are arranged on two-dimensional orthogonal transform coefficient data obtained by performing the two-dimensional orthogonal transform on the images using the two-dimensional orthogonal transform unit, and a three-dimensional orthogonal transform coefficient data encoder configured to encode three-dimensional orthogonal transform coefficient data obtained by performing the one-dimensional orthogonal transform on the two-dimensional orthogonal transform coefficient data using the one-dimensional orthogonal transform unit.03-08-2012
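As an illustration of the separable 3-D transform idea in the abstract above, the following sketch applies a 2-D DCT to each image and then a 1-D DCT along the stacking axis; the choice of the DCT (and of scipy.fft) is an assumption, since the abstract only speaks of orthogonal transforms.

```python
# Sketch only: a 3-D separable orthogonal transform built from a 2-D DCT per
# image followed by a 1-D DCT along the stacking axis. The DCT is a stand-in
# for the general orthogonal transforms named in the abstract.
import numpy as np
from scipy.fft import dctn, idctn, dct, idct

def forward_3d(stack):
    """stack: array of shape (num_images, H, W)."""
    coeff2d = dctn(stack, axes=(1, 2), norm='ortho')   # 2-D transform per image
    return dct(coeff2d, axis=0, norm='ortho')          # 1-D transform across images

def inverse_3d(coeff3d):
    coeff2d = idct(coeff3d, axis=0, norm='ortho')
    return idctn(coeff2d, axes=(1, 2), norm='ortho')
```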
20120008853THREE-DIMENSIONAL (3D) IMAGE PROCESSING METHOD AND SYSTEM - A three-dimensional (3D) image processing method is provided. The method includes receiving from an image source a 3D image containing compressed first image pixel data and compressed second image pixel data, and storing the received compressed first image pixel data and compressed second image pixel data in a line register group. The method also includes determining a relationship between lines of the compressed first image pixel data and compressed second image pixel data, and using reading and writing operations on the line register group based on the relationship and a predetermined timing sequence to decompress the compressed first image pixel data and compressed second image pixel data.01-12-2012
20120008857METHOD OF TIME-EFFICIENT STEREO MATCHING - Unlike previous works that emphasize hardware-level optimization for processing-time reduction in stereo matching, the present invention provides a time-efficient stereo matching method which is applicable at the algorithm level and which is compatible with, and thus can be employed in, any type of stereo matching implementation.01-12-2012
20120008855STEREOSCOPIC IMAGE GENERATION APPARATUS AND METHOD - According to embodiments, a stereoscopic image generation apparatus for generating a disparity image based on at least one image and depth information corresponding to the at least one image is provided. The apparatus includes a calculator, selector and generator. The calculator calculates, based on the depth information, evaluation values that assume larger values with increasing hidden surface regions generated upon generation of disparity images for respective viewpoint sets each including two or more viewpoints. The selector selects one of the viewpoint sets based on the evaluation values calculated for the viewpoint sets. The generator generates, from the at least one image and the depth information, the disparity image at a viewpoint corresponding to the one of the viewpoint sets selected by the selector.01-12-2012
20120008854Method and apparatus for rendering three-dimensional (3D) object - Provided is a method and apparatus that may generate a three-dimensional (3D) object from a two-dimensional (2D) image, and render the generated 3D object.01-12-2012
20120008852SYSTEM AND METHOD OF ENHANCING DEPTH OF A 3D IMAGE - A system and method of enhancing depth of a three-dimensional (3D) image are disclosed. A depth generator generates at least one depth map associated with an image. A depth enhancer enhances the depth map by stretching a depth histogram associated with the depth map, wherein the depth histogram is a distribution of depth levels of pixels of the image.01-12-2012
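A minimal sketch of depth-histogram stretching in the spirit of the abstract above, assuming a percentile-based stretch; the clipping percentiles and output range are hypothetical parameters.

```python
# Sketch only: depth-range stretching via a percentile-based histogram
# stretch. The clipping percentiles and level count are hypothetical.
import numpy as np

def stretch_depth(depth, lo_pct=2.0, hi_pct=98.0, levels=255):
    d = depth.astype(np.float64)
    lo, hi = np.percentile(d, [lo_pct, hi_pct])
    if hi <= lo:
        return depth.copy()                      # degenerate histogram, no change
    stretched = (np.clip(d, lo, hi) - lo) / (hi - lo) * levels
    return stretched.astype(depth.dtype)
```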
20100008565Method of object location in airborne imagery using recursive quad space image processing - A method and computer workstation are disclosed which determine the location in the ground space of a selected point in a digital image of the earth obtained by an airborne camera. The image is rectangular and has four corners and corresponds to an image space. The image is associated with data indicating the geo-location coordinates for the points in the ground space corresponding to the four corners of the image, e.g., an image formatted in accordance with the NITF standard. The method includes the steps of: (a) performing independently and in parallel a recursive partitioning of the image space and the ground space into successively smaller quadrants until a pixel coordinate in the image assigned to the selected point is within a predetermined limit (Δ) of the center of a final recursively partitioned quadrant in the image space. The method further includes a step of (b) calculating a geo-location of the point in the ground space corresponding to the selected point in the image space from the final recursively partitioned quadrant in the ground space corresponding to the final recursively partitioned quadrant in the image space. The methods are particularly useful for geo-location from oblique reconnaissance imagery.01-14-2010
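The parallel recursive quad partitioning described above can be sketched as follows; the corner ordering, the convergence limit, and the helper names are assumptions, and reading the corner geo-coordinates from NITF metadata is left out.

```python
# Sketch only: parallel recursive quad partitioning of image space and ground
# space. Corners are ordered UL, UR, LR, LL; the limit delta is hypothetical.
import numpy as np

def geolocate(pixel, img_corners, gnd_corners, delta=0.5):
    """pixel: (x, y) inside the image. Corner arrays are 4x2, ordered UL, UR, LR, LL."""
    img = np.asarray(img_corners, dtype=float)
    gnd = np.asarray(gnd_corners, dtype=float)
    px = np.asarray(pixel, dtype=float)
    while True:
        centre = img.mean(axis=0)
        if np.linalg.norm(px - centre) <= delta:
            return gnd.mean(axis=0)                # geo-location estimate
        quad = _pick_quadrant(px, centre)          # same index in both spaces
        img, gnd = _children(img)[quad], _children(gnd)[quad]

def _children(c):
    """Return the four child quadrants (each 4x2, ordered UL, UR, LR, LL)."""
    ul, ur, lr, ll = c
    top, right, bottom, left = (ul+ur)/2, (ur+lr)/2, (lr+ll)/2, (ll+ul)/2
    ctr = c.mean(axis=0)
    return [np.array([ul, top, ctr, left]),      # upper-left child
            np.array([top, ur, right, ctr]),     # upper-right child
            np.array([ctr, right, lr, bottom]),  # lower-right child
            np.array([left, ctr, bottom, ll])]   # lower-left child

def _pick_quadrant(px, centre):
    right, below = px[0] >= centre[0], px[1] >= centre[1]
    if not right and not below: return 0         # image y grows downward
    if right and not below:     return 1
    if right and below:         return 2
    return 3
```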
20120008856Automatic Convergence Based on Face Detection for Stereoscopic Imaging - A method for automatic convergence of stereoscopic images is provided that includes receiving a stereoscopic image, selecting a face detected in the stereoscopic image, and shifting at least one of a left image in the stereoscopic image and a right image in the stereoscopic image horizontally, wherein horizontal disparity between the selected face in the left image and the selected face in the right image before the shifting is reduced. In some embodiments, the horizontal disparity is reduced to zero.01-12-2012
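The convergence step described above can be illustrated with a sketch that assumes a face bounding box has already been detected in the left image; the SAD disparity search and the wrap-around shift are simplifications, not the publication's implementation.

```python
# Sketch only: estimate the horizontal disparity of a detected face by a
# brute-force SAD search, then shift the right image to cancel it. Face
# detection itself is outside this sketch.
import numpy as np

def converge_on_face(left, right, face_box, max_disp=64):
    """face_box: (x, y, w, h) in left-image coordinates; images are 2-D arrays."""
    x, y, w, h = face_box
    patch = left[y:y+h, x:x+w].astype(np.float64)
    best_d, best_cost = 0, np.inf
    for d in range(-max_disp, max_disp + 1):
        xs = x + d
        if xs < 0 or xs + w > right.shape[1]:
            continue
        cand = right[y:y+h, xs:xs+w].astype(np.float64)
        cost = np.abs(patch - cand).mean()          # mean absolute difference
        if cost < best_cost:
            best_cost, best_d = cost, d
    # Shift the right image so that the face disparity becomes (close to) zero.
    return np.roll(right, -best_d, axis=1), best_d
```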
20120057775INFORMATION PROCESSING DEVICE, INFORMATION PROCESSING METHOD, AND PROGRAM - An information processing device includes a feature amount extracting unit configured to extract the feature amount of each frame of an image of a content for detector learning of interest that is a content to be used for learning of a highlight detector which is a model for detecting a scene in which the user is interested as a highlight scene; a clustering unit configured to use cluster information that is the information of the cluster obtained by performing cluster learning; a highlight label generating unit configured to generate a highlight label sequence; and a highlight detector learning unit configured to perform learning of the highlight detector.03-08-2012
20120057780IMAGE SIGNAL PROCESSING DEVICE AND IMAGE SIGNAL PROCESSING METHOD - When crosstalk is cancelled without considering the contents of an image signal, the effect of the crosstalk cancellation is sometimes obtained effectively, and sometimes not. In order to solve this problem, an image signal processing unit which cancels crosstalk in a three-dimensional image signal includes image adaptation control units.03-08-2012
20120057779Method and Apparatus for Confusion Learning - A method and apparatus for processing image data is provided. The method includes the steps of employing a main processing network for classifying one or more features of the image data, employing a monitor processing network for determining one or more confusing classifications of the image data, and spawning a specialist processing network to process image data associated with the one or more confusing classifications.03-08-2012
20120057776THREE-DIMENSIONAL DISPLAY SYSTEM WITH DEPTH MAP MECHANISM AND METHOD OF OPERATION THEREOF - A method of operation of a three-dimensional display system includes: calculating an edge pixel image from a source image; generating a line histogram from the edge pixel image by applying a transform; calculating a candidate line from the line histogram meeting or exceeding a line category threshold for a horizontal line category, a vertical line category, a diagonal line category, or a combination thereof; calculating a vanishing point on the candidate line; and generating a depth map for the vanishing point for displaying the source image on a first device.03-08-2012
20090148038DISTANCE IMAGE PROCESSING APPARATUS AND METHOD - A distance image processing apparatus including a distance image obtaining unit for obtaining distance values that include depth information and position information, and represent a three-dimensional shape of a subject obtained by photographing the subject, a conversion unit for converting the depth information with a quantization number such that the smaller the depth information the larger the quantization number, and an image file generation unit for generating an image file of a distance image with distance values that include the converted depth information as the pixel value of each pixel, the image file including information related to the conversion attached thereto.06-11-2009
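One common way to realize the depth-dependent quantization described above is inverse-depth coding, which assigns more quantization levels to small depths; the sketch below uses hypothetical near/far limits and a 16-bit code.

```python
# Sketch only: storing inverse depth gives near (small) depth values finer
# quantization steps than far ones. The constants are hypothetical.
import numpy as np

def encode_depth(z, z_near=0.5, z_far=100.0, levels=65535):
    """Map metric depth z to an integer code; near depths get finer steps."""
    z = np.clip(np.asarray(z, dtype=np.float64), z_near, z_far)
    inv = (1.0 / z - 1.0 / z_far) / (1.0 / z_near - 1.0 / z_far)   # 1 near .. 0 far
    return np.round(inv * levels).astype(np.uint16)

def decode_depth(code, z_near=0.5, z_far=100.0, levels=65535):
    inv = np.asarray(code, dtype=np.float64) / levels
    return 1.0 / (inv * (1.0 / z_near - 1.0 / z_far) + 1.0 / z_far)
```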
20120027292THREE-DIMENSIONAL OBJECT DETERMINING APPARATUS, METHOD, AND COMPUTER PROGRAM PRODUCT - According to one embodiment, a three-dimensional object determining apparatus includes: a detecting unit configured to detect a plurality of feature points of an object included in an image data that is acquired; a pattern normalizing unit configured to generate a normalized pattern that is normalized by a three-dimensional model from the image data using the plurality of feature points; an estimating unit configured to estimate an illumination direction in which light is emitted to the object in the image data from the three-dimensional model and the normalized pattern; and a determining unit configured to determine whether or not the object in the image data is a three-dimensional object on the basis of the illumination direction.02-02-2012
20120027291MULTI-VIEW IMAGE CODING METHOD, MULTI-VIEW IMAGE DECODING METHOD, MULTI-VIEW IMAGE CODING DEVICE, MULTI-VIEW IMAGE DECODING DEVICE, MULTI-VIEW IMAGE CODING PROGRAM, AND MULTI-VIEW IMAGE DECODING PROGRAM - The disclosed multi-view image coding/decoding device first obtains depth information for an object photographed in an area subject to processing. Next, a group of pixels in an already-coded (decoded) area which is adjacent to the area subject to processing and in which the same object as in the area subject to processing has been photographed is determined using the depth information and set as a sample pixel group. Then, a view synthesis image is generated for the pixels included in the sample pixel group and the area subject to processing. Next, correction parameters to correct illumination and color mismatches in the sample pixel group are estimated from the view synthesis image and the decoded image. A predicted image is then generated by correcting the view synthesis image relative to the area subject to processing using the estimated correction parameters.02-02-2012
20120027290OBJECT RECOGNITION USING INCREMENTAL FEATURE EXTRACTION - In one example, an apparatus includes a processor configured to extract a first set of one or more keypoints from a first set of blurred images of a first octave of a received image, calculate a first set of one or more descriptors for the first set of keypoints, receive a confidence value for a result produced by querying a feature descriptor database with the first set of descriptors, wherein the result comprises information describing an identity of an object in the received image, and extract a second set of one or more keypoints from a second set of blurred images of a second octave of the received image when the confidence value does not exceed a confidence threshold. In this manner, the processor may perform incremental feature descriptor extraction, which may improve computational efficiency of object recognition in digital images.02-02-2012
20120250977Method and System for Determining Projections in Non-Central Catadioptric Optical Systems - A three-dimensional (3D) location of a reflection point of a ray between a point in a scene (PS) and a center of projection (COP) of a camera of a catadioptric system is determined. The catadioptric system is non-central and includes the camera and a reflector, wherein a surface of the reflector is a quadric surface rotationally symmetric around an axis of symmetry. The 3D location of the reflection point is determined based on a law of reflection, an equation of the reflector, and an equation describing a reflection plane defined by the COP, the PS, and a point of intersection of a normal to the reflector at the reflection point with the axis of symmetry.10-04-2012
20120106830Texture Identification - Technologies are generally described for determining a texture of an object. In some examples, a method for determining a texture of an object includes receiving a two-dimensional image representative of a surface of the object, estimating a three-dimensional (3D) projection of the image, transforming the 3D projection into a frequency domain, projecting the 3D projection in the frequency domain onto a spherical co-ordinate system, and determining the texture of the surface by analyzing spectral signatures extracted from the 3D projection on the spherical co-ordinate system.05-03-2012
20120301013ENHANCED OBJECT RECONSTRUCTION - Processing images includes projecting an infra-red pattern onto a three-dimensional object and producing a first image, a second image, and a third image of the three-dimensional object while the pattern is projected on the three-dimensional object. The first image and the second image include the three-dimensional object and the pattern. The first image and the second image are produced by capturing at a first camera and a second camera, respectively, light filtered through an infra-red filter. The third image includes the three-dimensional object but not the pattern. Processing the images also includes establishing a first-pair correspondence between a portion of pixels in the first image and a portion of pixels in the second image. Processing the images further includes constructing, based on the first-pair correspondence and the third image, a two-dimensional image that depicts a three-dimensional construction of the three-dimensional object.11-29-2012
20120301012IMAGE SIGNAL PROCESSING DEVICE AND IMAGE SIGNAL PROCESSING METHOD - When super-resolution processing is applied to an entire screen image at the same intensity, a blur contained in an input image is uniformly reduced over the entire screen image. Therefore, the screen image may be seen differently from when it is naturally seen. As one of methods for addressing the problem, there is such a method that: when a first image for a left eye and a second image for a right eye are inputted, each of parameters concerning image-quality correction is determined based on a magnitude of a positional deviation between associated pixels in the first image and second image respectively; and the parameters are used to perform image-quality correction processing for adjusting a sense of depth of an image.11-29-2012
20120301011DEVICES, METHODS, AND APPARATUSES FOR HOMOGRAPHY EVALUATION INVOLVING A MOBILE DEVICE - Components, methods, and apparatuses are provided that may be used to access information pertaining to a two-dimensional image of a three-dimensional object, to detect homography between said image of said three-dimensional object captured in said two-dimensional image indicative of said three-dimensional object and a reference object image and to determine whether said homography indicates pose suitable for image augmentation based, at least in part, on characteristics of an elliptically-shaped area that encompasses at least some of a plurality of inliers distributed in said two-dimensional image.11-29-2012
20120250979IMAGE PROCESSING APPARATUS, METHOD, AND PROGRAM - An image processing apparatus includes a depth control signal generation unit generating a depth control signal controlling emphasis of the feel of each region of an input image based on the depth position of a subject in each region of the input image; a face skin region control signal generation unit generating a face skin region control signal controlling emphasis of the feel of each region in the input image based on the human face skin region in the input image; a person region control signal generation unit generating a person region control signal controlling emphasis of the feel of each region in the input image based on the region of the person in the input image; and a control signal synthesis unit synthesizing the depth control signal, the face skin region control signal, and the person region control signal to generate a control signal.10-04-2012
20120063669Automatic Convergence of Stereoscopic Images Based on Disparity Maps - A method for automatic convergence of stereoscopic images is provided that includes receiving a stereoscopic image, generating a disparity map comprising a plurality of blocks for the stereoscopic image, clustering the plurality of blocks into a plurality of clusters based on disparities of the blocks, selecting a cluster of the plurality of clusters with a smallest disparity as a foreground cluster, determining a first shift amount and a first shift direction and a second shift amount and a second shift direction based on the smallest disparity, and shifting a left image in the stereoscopic image in the first shift direction by the first shift amount and a right image in the stereoscopic image in the second shift direction by the second shift amount, wherein the smallest disparity is reduced.03-15-2012
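A rough sketch of the disparity-clustering convergence idea above, using a tiny 1-D k-means and wrap-around shifts; the cluster count, the initialization, and the disparity sign convention are assumptions.

```python
# Sketch only: group block disparities with a small 1-D k-means, take the
# cluster with the smallest mean disparity as foreground (following the
# abstract), and split its disparity between the two views.
# Disparity here is defined as x_right - x_left for corresponding points.
import numpy as np

def auto_converge(block_disparities, left, right, k=3, iters=20):
    d = np.asarray(block_disparities, dtype=np.float64).ravel()
    centres = np.percentile(d, np.linspace(5, 95, k))        # initial centres
    for _ in range(iters):
        labels = np.argmin(np.abs(d[:, None] - centres[None, :]), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centres[j] = d[labels == j].mean()
    fg = int(round(centres.min()))                            # foreground disparity
    half = fg // 2
    # Opposite horizontal shifts that together cancel the foreground disparity.
    left_out = np.roll(left, half, axis=1)
    right_out = np.roll(right, -(fg - half), axis=1)
    return left_out, right_out, fg
```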
20120155744IMAGE GENERATION METHOD - A method of generating output image data representing a view from a specified spatial position in a real physical environment. The method comprises receiving data identifying the spatial position in the physical environment, receiving image data, the image data having been acquired using a first sensing modality and receiving positional data indicating positions of a plurality of objects in the real physical environment, the positional data having been acquired using a second sensing modality. At least part of the received image data is processed based upon the positional data and the data representing the specified spatial position to generate the output image data.06-21-2012
20120155742IMAGE PROCESSING DEVICE AND IMAGE PROCESSING METHOD - According to one embodiment, an image processing device includes a plurality of parallax image generators. Each of the parallax image generators is configured to generate a first image and a second image based on an input image and a parameter for setting a distance between viewpoints. There is a first parallax between the first image and the second image. The first parallax depends on the parameter for setting the distance between viewpoints. The input image is inputted to the parallax image generators in common. A plurality of parameters for setting the distance between viewpoints different from each other are inputted to the parallax image generators, respectively.06-21-2012
20120155748APPARATUS AND METHOD FOR PROCESSING STEREO IMAGE - Proposed are a stereo image processing apparatus and method for processing a stereo image using infrared images. The apparatus and method may generate a correction pattern for stereo matching by analyzing in real time at least one of the stability of a stereo image, the number of feature points included in a camera image, and an illumination condition, and may emit the correction pattern toward a subject as a feedback value. According to exemplary embodiments of the present invention, it is possible to improve the stability and accuracy of the stereo image.06-21-2012
20120155747STEREO IMAGE MATCHING APPARATUS AND METHOD - The present invention relates to a stereo image matching apparatus and method. The stereo matching apparatus includes a window image extraction unit for extracting window images, each having a predetermined size around a selected pixel, for individual pixels of images that constitute stereo images. A local support-area determination unit extracts a similarity mask having similarities equal to or greater than a threshold and a local support-area mask having neighbor connections to a center pixel of the similarity mask, from each of similarity images generated depending on differences in similarity between pixels of the window images. A similarity extraction unit calculates a local support-area similarity from a sum of similarities of a local support-area. A disparity selection unit selects a pair of window images for which the local support-area similarity is maximized, from among the window images, and then determines a disparity for the stereo images.06-21-2012
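The window similarity, similarity mask, and local support area described above can be sketched roughly as follows; the exponential similarity measure, thresholds, and window sizes are assumptions, and border handling is ignored.

```python
# Sketch only: for one pair of windows, build a similarity mask, keep only the
# component of that mask connected to the centre pixel (the local support
# area), and score the pair by the sum of similarities inside that area.
import numpy as np
from scipy.ndimage import label

def local_support_similarity(win_l, win_r, thresh=0.8, sigma=16.0):
    diff = win_l.astype(np.float64) - win_r.astype(np.float64)
    sim = np.exp(-np.abs(diff) / sigma)              # per-pixel similarity in [0, 1]
    mask = sim >= thresh
    labels, _ = label(mask)                          # 4-connected components
    cy, cx = win_l.shape[0] // 2, win_l.shape[1] // 2
    centre_lab = labels[cy, cx]
    if centre_lab == 0:
        return 0.0
    support = labels == centre_lab                   # local support area
    return float(sim[support].sum())

def best_disparity(left, right, x, y, half=7, max_disp=32):
    """Pick the disparity whose windows maximise the local-support similarity.
    Assumes (x, y) lies far enough from the image borders."""
    wl = left[y-half:y+half+1, x-half:x+half+1]
    scores = []
    for d in range(max_disp + 1):
        if x - d - half < 0:
            break
        wr = right[y-half:y+half+1, x-d-half:x-d+half+1]
        scores.append(local_support_similarity(wl, wr))
    return int(np.argmax(scores)) if scores else 0
```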
20120155746ADAPTIVE HIGH SPEED/HIGH RESOLUTION 3D IMAGE RECONSTRUCTION METHOD FOR ANY MEASUREMENT DISTANCE - Disclosed is a method for performing a 3D image reconstruction at a high speed and high resolution, regardless of a measurement distance. Specifically, a weight for image reconstruction is previously set, and a 3D image reconstruction algorithm is performed at a high speed, without reducing a resolution, by a parallel processing for image reconstruction, a computation of a partial region using a database based on a measurement result, and a generation of a variable pulse waveform.06-21-2012
20120155745APPARATUS AND METHOD FOR EXTRACTING CORRESPONDENCES BETWEEN AERIAL IMAGES - Disclosed herein is an apparatus and method for extracting correspondences between aerial images. The apparatus includes a line extraction unit, a line direction determination unit, a building top area extraction unit, and a correspondence extraction unit. The line extraction unit extracts lines corresponding to buildings from aerial images. The line direction determination unit defines the directions of the lines as x, y and z axis directions based on a two-dimensional (2D) coordinate system. The building top area extraction unit rotates lines in the x and y axis directions so that the lines are arranged in parallel with the horizontal and vertical directions of the 2D image, and then extracts building top areas from rectangles. The correspondence extraction unit extracts correspondences between the aerial images by comparing the locations of the building top areas extracted from the aerial images.06-21-2012
20120155743APPARATUS AND METHOD FOR CORRECTING DISPARITY MAP - Disclosed herein are an apparatus and method for correcting a disparity map. The apparatus includes a disparity map area setting unit, a pose estimation unit, and a disparity map correction unit. The apparatus removes the noise of the disparity map attributable to stereo matching and also fills in holes attributable to occlusion using information about the depth of a 3-dimensional (3D) model produced in a preceding frame of a current frame, thereby improving a disparity map and depth performance and providing high-accuracy depth information to an application to be used.06-21-2012
20110075916MODELING METHODS AND SYSTEMS - Methods and/or systems for modeling 3-dimensional objects (for example, human faces). In certain example embodiments, the methods and/or systems are usable for computer animation, static manipulation or modification of modeled images (e.g., faces), image processing, or facial (or other object) recognition.03-31-2011
20120155749METHOD AND DEVICE FOR CODING A MULTIDIMENSIONAL DIGITAL SIGNAL - The present invention relates to a method and a device for coding a multidimensional signal.06-21-2012
20100054578Method and apparatus for interactive visualization and distribution of very large image data sets - The present invention discloses a system for real-time visualization and distribution of very large image data sets using on-demand loading and dynamic view prediction. A robust image representation scheme is used for efficient adaptive rendering and a perspective view generation module is used to extend the applicability of the system to panoramic images. The effectiveness of the system is demonstrated by applying it both to imagery that does not require perspective correction and to very large panoramic data sets requiring perspective view generation. The system permits smooth, real-time interactive navigation of very large panoramic and non-panoramic image data sets on average personal computers without the use of specialized hardware.03-04-2010
20100172572Focus-Based Edge Detection - A model generator computes a first image perimeter color difference value for each of a plurality of first pixels included in a first image that is captured using a first focal length, and selects one of the first image perimeter color difference values that exceeds a perimeter color difference threshold. Next, the model generator computes a second image perimeter color difference value for each of a plurality of second pixels included in a second image that is captured using a second focal length, and selects one of the second image perimeter color difference values that exceeds the perimeter color difference threshold. The model generator then determines that an edge is located at the first focal length by detecting that the selected first image perimeter color difference value is greater than the selected second image perimeter color difference value, and generates an image accordingly.07-08-2010
201101036813D atomic scale imaging methods - The present invention is directed generally toward atom probe and TEM data and associated systems and methods. Other aspects of the invention are directed toward combining APT data and TEM data into a unified data set. Other aspects of the invention are directed toward using the data from one instrument to improve the quality of the data obtained from another instrument.05-05-2011
20110103680METHOD AND APPARATUS FOR PROCESSING THREE-DIMENSIONAL IMAGES - A three-dimensional sense adjusting unit displays three-dimensional images to a user. If a displayed image reaches a limit of parallax, the user responds to the three-dimensional sense adjusting unit. According to acquired appropriate parallax information, a parallax control unit generates parallax images to realize the appropriate parallax in the subsequent stereo display. The control of parallaxes is realized by optimally setting camera parameters by going back to three-dimensional data. Functions to realize the appropriate parallax are packaged into and provided as a library.05-05-2011
20120121168COMPOUND OBJECT SEPARATION - Representations of an object in an image generated by an imaging apparatus can comprise two or more separate sub-objects, producing a compound object. Compound objects can negatively affect the quality of object visualization and threat identification performance. As provided herein, a compound object can be separated into sub-objects. Topology score map data, representing topological differences in the potential compound object, may be computed and used in a statistical distribution to identify modes that may be indicative of the sub-objects. The identified modes may be assigned a label and a voxel of the image data indicative of the potential compound object may be relabeled based on the label assigned to a mode that represents data corresponding to properties of a portion of the object that the voxel represents to create image data indicative of one or more sub-objects.05-17-2012
20100290698Distance-Varying Illumination and Imaging Techniques for Depth Mapping - A method for mapping includes projecting a pattern onto an object.11-18-2010
20100290697METHODS AND SYSTEMS FOR COLOR CORRECTION OF 3D IMAGES - A system and method for color correction of 3D images including at least two separate image streams captured for a same scene include determining three-dimensional properties of at least a portion of a selected image stream, the three-dimensional properties including light and surface reflectance properties, surface color, reflectance properties, scene geometry and the like. A look of the portion of the selected image stream is then modified by altering the value of at least one of the determined three-dimensional properties and, in one embodiment, applying image formation theory. The modifications are then rendered in an output 3D picture either automatically and/or according to user inputs. In various embodiments, corrections made to the selected one of the at least two image streams can be automatically applied to the other of the image streams.11-18-2010
20100092071SYSTEM AND METHODS FOR NAVIGATION USING CORRESPONDING LINE FEATURES - A method for navigating identifies line features in a first three-dimensional (3-D) image and a second 3-D image as a navigation platform traverses an area and compares the line features in the first 3-D image that correspond to the line features in the second 3-D image. When the line features compared in the first and the second 3-D images are within a prescribed tolerance threshold, the method uses a conditional set of geometrical criteria to determine whether the line features in the first 3-D image match the corresponding line features in the second 3-D image.04-15-2010
20120121167FINITE DATASET INTERPOLATION METHOD - The invention provides a fast method for high-quality interpolation of a finite multidimensional dataset. It has particular application in digital image processing, including, but not limited to, processing of both still images and real-time image/data processing. The method uses discrete cosine and sine transforms of appropriate types to convert, in blocks of desired size, the initial dataset to the frequency domain. Proposed interpolators calculate a chain of inverse transforms of non-square sizes that perform the interpolation. The larger transform is broken into smaller transforms of non-square size using a recursive size-reduction process of FFT type, and the smaller transforms are calculated directly, exploiting the symmetry properties of the smaller interpolator functions involved. An output dataset is then assembled using the calculated transforms. The method avoids the computationally costly process of inflating the coefficient space by padding zeros, previously exploited for DCT-based interpolations.05-17-2012
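For reference, the conventional zero-padding DCT interpolation that the abstract contrasts itself with looks roughly like the sketch below; the publication's own contribution, factoring non-square inverse transforms, is not reproduced here.

```python
# Sketch only: the conventional DCT zero-padding interpolation used here as a
# point of comparison, not the publication's own faster method.
import numpy as np
from scipy.fft import dct, idct

def dct_upsample_1d(x, factor=2):
    """Interpolate a 1-D signal by zero-padding its DCT-II spectrum."""
    x = np.asarray(x, dtype=np.float64)
    n, m = len(x), len(x) * factor
    coeff = dct(x, norm='ortho')
    padded = np.zeros(m)
    padded[:n] = coeff
    # The orthonormal DCT needs a sqrt(m/n) gain to preserve amplitudes.
    return idct(padded, norm='ortho') * np.sqrt(m / n)
```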
20110110579SYSTEMS AND METHODS FOR PHOTOGRAMMETRICALLY FORMING A 3-D RECREATION OF A SURFACE OF A MOVING OBJECT USING PHOTOGRAPHS CAPTURED OVER A PERIOD OF TIME - A method for creating a 3-D data set of a surface of a moving object includes rigidly coupling a reference frame with targets to the object such that a change in position or orientation of the object causes a corresponding change in the reference frame. A first photograph is captured of at least a portion of the object and at least some of the plurality of targets at a first camera location. A second photograph is captured of at least a portion of the object and at least some of the plurality of targets at a second camera position. The object moves between the capturing of the first photograph and the capturing of the second photograph. The captured photographs are input to a computing device that is configured and arranged to determine 3-D data points corresponding to the surface of the object captured in the photographs.05-12-2011
20110129143METHOD AND APPARATUS AND COMPUTER PROGRAM FOR GENERATING A 3 DIMENSIONAL IMAGE FROM A 2 DIMENSIONAL IMAGE - A method of generating a three dimensional image from a two dimensional image is described. In the method, the two dimensional image has a background and a first foreground object and a second foreground object located thereon, the method comprising the steps of: applying a transformation to a copy of the background, generating stereoscopically for display the background and the transformed background, generating stereoscopically for display the first and second foreground object located on the stereoscopically displayable background and the transformed background and determining whether the first and second foreground objects occlude with one another, wherein in the event of occlusion, the occluded combination of the first and second object forms a third foreground object and, the method further comprises the step of: applying a transformation to the third foreground object, wherein the transformation applied to the third foreground object is less than or equal to the transformation applied to the background; generating a copy of the third foreground object with the transformation applied thereto and generating stereoscopically for display the third foreground object with the transform applied thereto and the copy of the third foreground object displaced relative to one another by an amount determined in accordance with the position of one of the first or second foreground objects in the image.06-02-2011
20120128235APPARATUS AND METHOD FOR RECONSTRUCTING COMPUTED TOMOGRAPHY IMAGE USING COLOR CHANNEL OF GRAPHIC PROCESSING UNIT - Provided is an apparatus and method for reconstructing a computed tomography (CT) image using a color channel of a graphic processing unit (GPU) that reconstructs a three-dimensional (3D) image using a projection image obtained from a CT device. According to an embodiment of the present invention, an apparatus for reconstructing a CT image may include a tomography unit to acquire a plurality of projection images, a filter application unit to load the plurality of projection images on a texture memory having a color channel, and filter the plurality of projection images, and a back-projection application unit to apply a back-projection scheme to the plurality of projection images loaded on the texture memory having a color channel.05-24-2012
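For orientation only, the back-projection step itself can be sketched for a simple parallel-beam geometry as below; the cone-beam geometry and the GPU color-channel packing that the abstract is actually about are not attempted here.

```python
# Sketch only: CPU illustration of back-projection for a parallel-beam
# geometry; the publication targets cone-beam data packed into GPU color
# channels, which this sketch does not reproduce.
import numpy as np

def backproject(sinogram, angles_deg, size):
    """sinogram: (num_angles, num_detectors); returns a size x size image."""
    recon = np.zeros((size, size))
    centre = (size - 1) / 2.0
    det_centre = (sinogram.shape[1] - 1) / 2.0
    ys, xs = np.mgrid[0:size, 0:size]
    xs, ys = xs - centre, ys - centre
    for proj, ang in zip(sinogram, np.deg2rad(angles_deg)):
        # Detector coordinate of every reconstruction pixel for this angle.
        t = xs * np.cos(ang) + ys * np.sin(ang) + det_centre
        recon += np.interp(t, np.arange(sinogram.shape[1]), proj,
                           left=0.0, right=0.0)
    return recon * np.pi / (2 * len(angles_deg))
```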
20120128234System for Generating Images of Multi-Views - The present invention provides a system for generating images of multi-views. The system includes a processing unit; an image range calculating module coupled to the processing unit to calculate the ranges of a background image and a main body image of a 2D original image of an article; a depth model generating module coupled to the processing unit to generate a depth model according to an equation; an image cutting module coupled to the processing unit to cut the 2D original image of the article or the depth model to generate a cut 2D image of the article or a depth model with a main body image outline; a pixel shifting module coupled to the processing unit to shift every pixel in the main body image of the 2D original image of the article according to the depth model with the main body image outline to obtain shifted main body images of multi-views; and an image synthesizing module coupled to the processing unit to synthesize the shifted main body images of multi-views and background figures of multi-views to obtain final images of multi-views for 3D image reconstruction.05-24-2012
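The per-pixel shifting of the main-body image according to a depth model can be illustrated with a naive depth-image-based rendering sketch; the view count, the maximum shift, and the omission of hole filling are simplifications, not details from the publication.

```python
# Sketch only: naive depth-image-based rendering that shifts pixels
# horizontally per view according to a depth map; occlusion and hole
# handling are deliberately omitted.
import numpy as np

def render_views(image, depth, num_views=9, max_shift=12):
    """image: (H, W) or (H, W, 3); depth: (H, W) in [0, 1], larger = nearer."""
    h, w = depth.shape
    views = []
    cols = np.arange(w)
    for v in range(num_views):
        pos = 2.0 * v / (num_views - 1) - 1.0       # view position in [-1, 1]
        shift = np.round(pos * max_shift * depth).astype(int)   # per-pixel shift
        out = np.zeros_like(image)
        for y in range(h):
            tx = np.clip(cols + shift[y], 0, w - 1)
            out[y, tx] = image[y, cols]              # forward mapping, no hole fill
        views.append(out)
    return views
```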
20120128236METHOD AND APPARATUS FOR STEREO MISALIGNMENT ESTIMATION USING MODIFIED AFFINE OR PERSPECTIVE MODEL - A method and apparatus for estimating stereo misalignment using a modified affine or perspective model. The method includes dividing a left frame and a right frame into blocks, comparing horizontal and vertical boundary signals in the left frame and the right frame, estimating the horizontal and the vertical motion vector for each block in a reference frame, selecting reliable motion vectors from a set of motion vectors, dividing the selected block into smaller features, feeding the data to an affine or a perspective transformation model to solve for the model parameters, running the model parameters through a temporal filter, portioning the estimated misalignment parameters between the left frame and right frame, and modifying the left frame and the right frame to save some boundary space.05-24-2012
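The affine part of the model fitting described above can be sketched as a linear least-squares problem over matched block centres; the temporal filtering and the portioning of parameters between the two frames are not shown, and the function names are hypothetical.

```python
# Sketch only: fit a 2-D affine model to reliable block motion vectors by
# linear least squares; later stages of the abstract are not shown.
import numpy as np

def fit_affine(src_pts, dst_pts):
    """src_pts, dst_pts: (N, 2) arrays of matched block centres (x, y)."""
    src = np.asarray(src_pts, dtype=np.float64)
    dst = np.asarray(dst_pts, dtype=np.float64)
    A = np.hstack([src, np.ones((len(src), 1))])    # N x 3 design matrix
    # Solve A @ M ~= dst for the 3x2 affine matrix M (least squares).
    M, *_ = np.linalg.lstsq(A, dst, rcond=None)
    residuals = dst - A @ M
    return M, residuals

def apply_affine(M, pts):
    pts = np.asarray(pts, dtype=np.float64)
    return np.hstack([pts, np.ones((len(pts), 1))]) @ M
```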
20120163700IMAGE PROCESSING DEVICE AND IMAGE PROCESSING METHOD - An image processing device includes: a sorting circuit which receives a stereoscopic image signal formed of a left eye image and a right eye image and outputs the left and right eye images at the same timing line by line; a parallax generation circuit which generates respective parallax images from the left eye image and the right eye image output from the sorting circuit; delay circuits for the left and right eye images, which delay the left and right eye images output from the sorting circuit by the processing time of the parallax generation circuit before outputting them; and an image combining circuit which synthesizes the images output from the delay circuits and the parallax generation circuit, respectively, to obtain the multi-viewpoint images.06-28-2012
20120163702IMAGE PROCESSING APPARATUS AND IMAGE PROCESSING METHOD - An image processing apparatus generating a multi-viewpoint image includes a parallax detection unit that receives only one of a plurality of actually-taken images including a left-eye image and a right-eye image and detects parallax of the received image so as to generate a parallax map, a first pseudo three-dimensional image generation unit that receives the left-eye image and generates one or more externally-provided or internally-provided images, based on the parallax map generated by the parallax detection unit, a first delay unit that receives the left-eye image and outputs the left-eye image with elapse of delay time, a second pseudo three-dimensional image generation unit that receives the right-eye image and generates one or more externally-provided or internally-provided images, based on the parallax map, and a second delay unit that receives the right-eye image and outputs the right-eye image with elapse of delay time.06-28-2012
20120163701IMAGE PROCESSING DEVICE, IMAGE PROCESSING METHOD, AND PROGRAM - An image processing device including an image input unit that inputs a two-dimensional image signal; a depth information output unit that inputs or generates depth information; a depth information reliability output unit that inputs or generates the reliability of depth information that the depth information output unit outputs; an image conversion unit that inputs an image signal, depth information, and depth information reliability, and generates and outputs a left eye image and a right eye image for realizing binocular stereoscopic vision; and an image output unit that outputs a left eye image and a right eye image, wherein the image conversion unit has a configuration of performing image generation of at least one of a left eye image and a right eye image and executes a conversion process in which the depth information and the depth information reliability are applied as conversion control data during the image conversion.06-28-2012
20120257815METHOD AND APPARATUS FOR ANALYZING STEREOSCOPIC OR MULTI-VIEW IMAGES - A method for analyzing the colors of stereoscopic or multi-view images is described. The method comprises the steps of retrieving one or more disparity maps for the stereoscopic or multi-view images, aligning one or more of the images to a reference image by warping the one or more images according to the retrieved disparity maps, and performing an analysis of discrepancies on one or more of the aligned images.10-11-2012
20100208981Method for visualization of point cloud data based on scene content - Systems and methods for associating color with spatial data are provided. In the system and method, a scene tag is selected for a portion of the point cloud data.08-19-2010
20100208982HOUSE CHANGE JUDGMENT METHOD AND HOUSE CHANGE JUDGMENT PROGRAM - It is an object to improve the accuracy of a house change judgment based on images and the like acquired by an airplane. A terrain altitude is subtracted from an altitude value of a digital surface model (DSM) acquired from an airplane or the like to generate a normalized DSM (NDSM). A judgment target region is segmented into a plurality of regions of elevated part for each elevated part with a size corresponding to a house appearing on the NDSM. An outline of the house is extracted from each region of elevated part and a house object containing three-dimensional information on the house is defined by the outline and NDSM data within the outline. The house objects acquired at two different time points, respectively, are compared to detect a variation between the two different time points, and a judgment as to a house change is made based on the variation.08-19-2010
20110182499METHOD FOR DETERMINING THE SURFACE COVERAGE OBTAINED BY SHOT PEENING - In a method for determining the surface coverage obtained by shot peening to ensure uniform and complete strengthening of the surface of components, in particular blisk blades, a shot-peened surface topography is digitalized by an optical digital recording unit. A three-dimensional height profile is then prepared by measuring and evaluation software which includes both indentations and excrescences due to shot peening and also roughnesses due to manufacturing, which are smaller than the excrescences and indentations. The roughnesses are subsequently filtered out from the height image by a software filter using mathematical methods. A height diagram with the indentations situated below a zero line is established, with the size of these indentations being calculated in relation to the total area in the height diagram and the extent of coverage of the entire shot-peened surface being determined therefrom.07-28-2011
20110182497CASCADE STRUCTURE FOR CLASSIFYING OBJECTS IN AN IMAGE - A cascade object classification structure for classifying one or more objects in an image is provided. The cascade object classification structure includes a plurality of nodes arranged in one or more layers. Each layer includes at least one parent node and each subsequent layer includes at least two child nodes. A parent node in a layer is operatively linked to two child nodes in a subsequent layer. Further, at least one child node in one of the subsequent layers is operatively linked to two or more parent nodes in a preceding layer. Each node includes classifiers for classifying the objects as a positive object and a negative object. The positive object and the negative object classified by the parent node in each layer are further classified by one or more operatively linked child nodes in the subsequent layer.07-28-2011
20120170831PRODUCING STEREOSCOPIC IMAGE - A method of producing a digital stereoscopic image using a processor is disclosed. The method includes providing a plurality of digital image files which include digital images and the time of capture of each image and using time of capture to identify candidate pairs of images. The method further includes using the processor to analyze the image content of the candidate pairs of images to identify at least one image pair that can be used to produce a stereoscopic image; and using an identified image pair to produce the digital stereoscopic image.07-05-2012
20110176723Motion Correction in Cone-Beam CT by Tracking Internal and External Markers Using Cone-Beam Projection From a kV On-Board Imager: Four-Dimensional Cone-Beam CT and Tumor Tracking Implications - An apparatus comprising a processor configured to receive a sequence of Cone-Beam Computed Tomography (CBCT) projections of a three dimensional (3D) object over a scanning period, wherein the 3D object is displaced during the scanning period, and wherein each of the CBCT projections is associated with a discrete point during the scanning period, locate a marker position in a plurality of the CBCT projections, wherein each marker position corresponds to the location of an internal marker at the corresponding discrete point during the scanning period, extract a 3D motion trajectory based on the plurality of marker positions and a plurality of time-tagged angular views, and correct the CBCT projections based on the 3D motion trajectory.07-21-2011
20120314935METHOD AND APPARATUS FOR INFERRING THE GEOGRAPHIC LOCATION OF CAPTURED SCENE DEPICTIONS - A method and apparatus for determining a geographic location of a scene in a captured depiction comprising extracting a first set of features from the captured depiction by algorithmically analyzing the captured depiction, matching the extracted features of the captured depiction against a second set of extracted features associated with reference depictions with known geographic locations and when the matching is successful, identifying the geographic location of the scene in the captured depiction based on a known geographic location of a matching reference depiction from the reference depictions.12-13-2012
20120314934INFORMATION PROCESSING DEVICE, INFORMATION PROCESSING METHOD AND PROGRAM - An information processing device is provided that includes: an image generation portion that generates a composite image by synthesizing a stereoscopic image with a two-dimensional image that is associated with the stereoscopic image, the stereoscopic image being generated from a right eye image and a left eye image which have parallax therebetween and on which perspective correction is performed; and an identification portion that, when a user operation on the two-dimensional image is detected, identifies the stereoscopic image with which the two-dimensional image is associated, as a selected target.12-13-2012
20120314933IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, AND PROGRAM - An image processing apparatus includes an attention region estimation unit that estimates an attention region on a stereoscopic image, that is, a region to which a user is estimated to be paying attention, a parallax detection unit that detects a parallax of the stereoscopic image and generates a parallax map indicating a parallax of each region of the stereoscopic image, a setting unit that sets conversion characteristics for correcting a parallax of the stereoscopic image based on the attention region and the parallax map, and a parallax conversion unit that corrects the parallax map based on the conversion characteristics.12-13-2012
20120314932IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, AND COMPUTER PROGRAM PRODUCT FOR IMAGE PROCESSING - According to one embodiment, an image processing apparatus includes a first setting unit, a second setting unit, and a specifying unit. The first setting unit detects a position of at least a part of an object in an image so as to obtain, for one pixel or each of a plurality of pixels in the image, a first likelihood that indicates whether the corresponding pixel is included in a region where the object is present. The second setting unit obtains, for one pixel or each of a plurality of pixels in the image, a second likelihood indicating whether the pixel is a pixel corresponding to a 3D body by using a feature amount of the pixel. The specifying unit specifies a region, in the image, where the object is present by using the first likelihood and the second likelihood.12-13-2012
20120134575Systems and Methods for Tracking a Model - An image such as a depth image of a scene may be received, observed, or captured by a device and a model of a user in the depth image may be generated. The background of a received depth image may be removed to isolate a human target in the received depth image. A model may then be adjusted to fit within the isolated human target in the received depth image. To adjust the model, a joint or a bone may be magnetized to the closest pixel of the isolated human target. The joint or the bone may then be refined such that the joint or the bone may be further adjusted to a pixel equidistant between two edges of the body part of the isolated human target where the joint or bone may have been magnetized.05-31-2012
20120134574IMAGE PROCESSING APPARATUS, DISPLAY APPARATUS, IMAGE PROCESSING METHOD AND IMAGE PROCESSING PROGRAM - Disclosed herein is an image processing apparatus including: a depth-information extraction section; a luminance extraction section; a contrast extraction section; a gain generation section; and a correlation estimation section.05-31-2012
20120163704APPARATUS AND METHOD FOR STEREO MATCHING - An image matching apparatus includes a bilateral filter that filters a left image and a right image to output a second left image and a second right image; a census cost calculation unit performing census transform on a window based on a first pixel of the second left image and a window based on a second pixel of the second right image to calculate a census cost corresponding to a pair of pixels of the first and second pixels; a support weight calculation unit obtaining support weights of the left and right images or the second left and second right images; a cost aggregation unit obtaining energy values of nodes corresponding to the pair of pixels of the first and second pixels using the census cost and the support weights; and a dynamic programming unit performing image matching using dynamic programming based on the energy values obtained for each node.06-28-2012
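As a rough illustration of the census-cost step only (not the claimed apparatus), the sketch below computes a census signature per pixel and uses the Hamming distance between signatures as the matching cost; the window radius and data layout are assumptions:

```python
import numpy as np

def census_transform(img, radius=2):
    """Census transform: each pixel becomes a bit vector recording whether
    each neighbour in a (2r+1)x(2r+1) window is darker than the centre."""
    h, w = img.shape
    padded = np.pad(img, radius, mode='edge')
    bits = []
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            if dy == 0 and dx == 0:
                continue
            shifted = padded[radius + dy:radius + dy + h,
                             radius + dx:radius + dx + w]
            bits.append((shifted < img).astype(np.uint8))
    return np.stack(bits, axis=-1)          # H x W x (window size - 1)

def census_cost(census_left, census_right, x_left, y, x_right):
    """Matching cost for a candidate pixel pair = Hamming distance between
    their census signatures."""
    return int(np.count_nonzero(census_left[y, x_left] != census_right[y, x_right]))
```

In the apparatus these per-pair costs are then aggregated with the support weights before the dynamic-programming pass.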
20120170833MULTI-VIEW IMAGE GENERATING METHOD AND APPARATUS - According to an embodiment, a multi-view image generating method includes synthesizing images having a same depth value into a single image from among a plurality of images, based on depth values each being associated with one of the plurality of images and indicating image position in the depth direction of the image; shifting, with respect to each of a plurality of viewpoints each giving a different disparity, a synthesized image obtained at the synthesizing, according to a shift vector corresponding to the viewpoint and the depth value of the synthesized image in a direction and with an amount indicated in the shift vector, so as to generate an image having disparity given thereto; and generating a multi-view image in which the images that are shifted and that are given disparity at the shifting are arranged in a predetermined format.07-05-2012
20120170832DEPTH MAP GENERATION MODULE FOR FOREGROUND OBJECT AND METHOD THEREOF - The present invention discloses a depth map generation module for a foreground object and the method thereof. The depth map generation method for a foreground object comprises the following steps: receiving image sequence data, wherein the image sequence data includes a plurality of image frames; selecting at least one key image frame from the image sequence data; providing at least one piece of depth indicative information and a contour of a first segment in the at least one key image frame; and performing signal processing steps using a microprocessor.07-05-2012
20120250976Wavelet transform on incomplete image data and its applications in image processing - A system and method for effectively performing wavelet transforms on incomplete image data includes an image processor that performs a green-pixel transformation procedure on incomplete color pixel matrices. The image processor then rearranges the red, blue and transformed green pixels into four quadrants of contiguous pixels and applies two-dimensional (2D) wavelet thresholding schemes to each quadrant. After thresholding, an inverse procedure is applied to reconstruct the pixel values on the incomplete color pixel matrices. For further de-correlation of image data, the image processor may stack similar image patches in a three dimensional (3D) array and apply incomplete-data wavelet thresholding on the 3D array. The incomplete-data wavelet thresholding procedure may be put in an improved local similarity measurement framework to achieve better performance of image processing tasks. A CPU device typically controls the image processor to effectively perform the image processing procedure.10-04-2012
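The green-pixel transformation itself is not spelled out in this listing, so the sketch below only illustrates the general idea of rearranging a Bayer mosaic into same-colour quadrants and soft-thresholding each quadrant's 2D wavelet coefficients with PyWavelets; the RGGB layout, wavelet and threshold value are assumptions:

```python
import numpy as np
import pywt  # PyWavelets

def denoise_bayer_quadrants(mosaic, threshold=10.0, wavelet='haar'):
    """Illustrative only: split an RGGB Bayer mosaic into four quadrants of
    contiguous same-colour samples, soft-threshold each quadrant's 2D wavelet
    coefficients, and write the result back into the mosaic."""
    out = mosaic.astype(float).copy()
    for dy in (0, 1):
        for dx in (0, 1):
            quad = out[dy::2, dx::2]
            cA, (cH, cV, cD) = pywt.dwt2(quad, wavelet)
            cH, cV, cD = (pywt.threshold(c, threshold, mode='soft')
                          for c in (cH, cV, cD))
            rec = pywt.idwt2((cA, (cH, cV, cD)), wavelet)
            out[dy::2, dx::2] = rec[:quad.shape[0], :quad.shape[1]]
    return out
```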
20090129667DEVICE AND METHOD FOR ESTIMATING DEPTH MAP, AND METHOD FOR GENERATING INTERMEDIATE IMAGE AND METHOD FOR ENCODING MULTI-VIEW VIDEO USING THE SAME - The present invention relates to a device and a method for estimating a depth map, and a method for making an intermediate image and a method for encoding multi-view video using the same. More particularly, the present invention relates to a device and a method for estimating a depth map that are capable of acquiring a depth map that reduces errors and complexity, and is resistant to external influence by dividing an area into segments on the basis of similarity, acquiring a segment-unit initial depth map by using a three-dimensional warping method and a self adaptation function to which an extended gradient map is reflected, and refining the initial depth map by performing a belief propagation method by the segment unit, and achieving smoother view conversion and improved encoding efficiency by generating an intermediate image with the depth map and utilizing the intermediate image for encoding a multi-view video, and a method for generating the intermediate image and a method for encoding the multi-view video using the same.05-21-2009
20120076400METHOD AND SYSTEM FOR FAST THREE-DIMENSIONAL IMAGING USING DEFOCUSING AND FEATURE RECOGNITION - Described is a method and system for fast three-dimensional imaging using defocusing and feature recognition. The method comprises acts of capturing a plurality of defocused images of an object on a sensor, identifying segments of interest in each of the plurality of images using a feature recognition algorithm, and matching the segments with three-dimensional coordinates according to the positions of the images of the segments on the sensor to produce a three-dimensional position of each segment of interest. The disclosed imaging method is “aware” in that it uses a priori knowledge of a small number of object features to reduce computation time as compared with “dumb” methods known in the art which exhaustively calculate positions of a large number of marker points.03-29-2012
20120076399THREE-DIMENSIONAL IMAGE EDITING DEVICE AND THREE-DIMENSIONAL IMAGE EDITING METHOD - Even when the size of a three-dimensional image is changed, the pop-out amount is automatically adjusted to one intended by the user. The pop-out amount is adjusted based on a conversion characteristic defining a relationship between the size and the pop-out amount of a three-dimensional image as the size of the three-dimensional image is changed, and therefore the pop-out amount of the three-dimensional image can be automatically adjusted to a given pop-out amount preferred by the user or intended by the user.03-29-2012
20120076398STEREOSCOPIC IMAGE PASTING SYSTEM, AND METHOD AND PROGRAM FOR CONTROLLING OPERATION OF SAME - It is arranged so that stereoscopic images will not overlap one another. A stereoscopic image to be pasted in a free-layout electronic album is selected. The amount of parallax of the stereoscopic image to be pasted in this electronic album is set. When this is done, the selected stereoscopic image is enlarged or reduced in size so as to take on the set amount of parallax. Automatic layout for pasting enlarged or reduced stereoscopic images on each page of the electronic album in such a manner that these stereoscopic images will not overlap one another is carried out. The result of the layout is displayed.03-29-2012
20120314936INFORMATION PROCESSING DEVICE, INFORMATION PROCESSING METHOD, AND PROGRAM - An information processing apparatus that acquires first posture information corresponding to the information processing apparatus and a first distance coordinate corresponding to the information processing apparatus, and second posture information corresponding to another information processing apparatus and a second distance coordinate corresponding to the another information processing apparatus. The information processing apparatus then calculates an object's position in a virtual space based on the first and second posture information and the first and second distance coordinates.12-13-2012
20120314937METHOD AND APPARATUS FOR PROVIDING A MULTI-VIEW STILL IMAGE SERVICE, AND METHOD AND APPARATUS FOR RECEIVING A MULTI-VIEW STILL IMAGE SERVICE - Provided are an apparatus and a method of providing a multiview still image service. The method includes: configuring a multiview still image file format including a plurality of image areas into which a plurality of pieces of image information forming a multiview still image are inserted; inserting the plurality of pieces of image information into the plurality of image areas, respectively; inserting three-dimensional (3D) basic attribute information to three-dimensionally reproduce the multiview still image into a first image area of the plurality of image areas into which main-view image information from among the plurality of pieces of image information is inserted; and outputting multiview still image data comprising the plurality of pieces of image information based on the multiview still image file format.12-13-2012
20120177284FORMING 3D MODELS USING MULTIPLE IMAGES - A method for determining a three-dimensional model from three or more images comprises: receiving three or more images, each image being captured from a different viewpoint and including a two-dimensional image together with a corresponding range map; designating a plurality of pairs of received images, each pair including a first image and a second image. For each of the designated pairs, a geometric transform is determined by identifying a set of corresponding features in the two-dimensional images; removing any extraneous corresponding features to produce a refined set of corresponding features; and determining the geometrical transformation for transforming three-dimensional coordinates for the first image to three-dimensional coordinates for the second image responsive to three-dimensional coordinates for the refined set of corresponding features. A three-dimensional model is determined responsive to the three or more received images and the geometrical transformations for the designated pairs of received images.07-12-2012
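One common way to compute a geometrical transformation between two views from a refined set of corresponding 3D features is a least-squares rigid fit (Kabsch/Procrustes); the sketch below assumes matched Nx3 point arrays and is not necessarily the estimator used in the method above:

```python
import numpy as np

def rigid_transform_3d(src, dst):
    """Least-squares rotation R and translation t such that dst ~= src @ R.T + t,
    estimated from matched 3D points (Kabsch / Procrustes)."""
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    U, _, Vt = np.linalg.svd(src_c.T @ dst_c)        # cross-covariance SVD
    d = np.sign(np.linalg.det(Vt.T @ U.T))           # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t
```

Running this only on the refined (outlier-free) correspondences mirrors the abstract's point that extraneous matches are removed before the transformation is estimated.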
20120177285STEREO IMAGE PROCESSING APPARATUS, STEREO IMAGE PROCESSING METHOD AND PROGRAM - An imaging device (100) includes: an imaging element (103) obtained by repeatedly arranging a pixel W for entire wavelength band, a W-R pixel for R, a W-G pixel for G, and a W-B pixel for B; a filter (102) configured such that a portion corresponding to the pixel W allows the entire wavelength band of a wavelength band within a certain range to pass and portions corresponding to the W-R pixel, the W-G pixel, and the W-B pixel reflect wavelength bands of corresponding colors, respectively; a reflection amount calculating unit (113) for calculating signal values of R, G, and B by subtracting a value of an image reading signal of each of the W-R pixel, the W-G pixel, and the W-B pixel from a value of an image reading signal of the pixel W.07-12-2012
20100284609Apparatus and method for measuring size distribution of granular matter - A method and apparatus for measuring the size distribution of bulk matter consisting of randomly oriented granules, such as wood chips, make use of scanning the exposed surface of the granular matter to generate three-dimensional profile image data defined with respect to a three-coordinate reference system. The image data is segmented to reveal regions associated with distinct granules, and values of the size-related parameter for the revealed regions are estimated. Then, a geometric correction is applied to each of the estimated size-related parameter values to compensate for the random orientation of the corresponding distinct granules. Finally, the size distribution of the bulk matter is statistically estimated from the corrected size-related parameter values.11-11-2010
20090060321SYSTEM FOR COMMUNICATING AND METHOD - A system communicates a representation of a scene, which includes a plurality of objects disposed on a plane, to one or more client devices. The representation is generated from one or more video images of the scene captured by a video camera. The system comprises an image processing apparatus operable to receive the video images of the scene which includes a view of the objects on the plane, to process the captured video images so as to extract one or more image features from each object, to compare the one or more image features with sample image features from a predetermined set of possible example objects which the video images may contain, to identify the objects from the comparison of the image features with the predetermined image features of the possible example objects, and to generate object path data for each object, which identifies the respective object and provides a position of the identified object on a three dimensional model of the plane in the video images with respect to time. The image processing apparatus is further operable to calculate a projection matrix for projecting the position of each of the objects according to the object path data from the plane in the video image into the three dimensional model of the plane. A distribution server is operable to receive the object path data and the projection matrix generated by the image processing apparatus for distribution of the object path data and the projection matrix to one or more client devices. The system is arranged to generate a representation of an event, such as a sporting event, which provides a substantial reduction in the amount of information which must be communicated to represent the event. As such, the system can be used to communicate the representation of the event, via a bandwidth limited communications network, such as the internet, from the server to one or more client devices in real time. Furthermore, the system can be used to view one or more of the objects within the video images by extracting the objects from the video images.03-05-2009
20090060319METHOD, A SYSTEM AND A COMPUTER PROGRAM FOR SEGMENTING A SURFACE IN A MULTI-DIMENSIONAL DATASET - The method according to the invention is arranged to segment a surface in a multi-dimensional dataset comprising a plurality of images, which may be acquired using a suitable data-acquisition unit at a preparatory step 03-05-2009
20090060320INFORMATION PRESENTATION SYSTEM, INFORMATION PRESENTATION APPARATUS, INFORMATION PRESENTATION METHOD, PROGRAM, AND RECORDING MEDIUM ON WHICH SUCH PROGRAM IS RECORDED - Disclosed is an information presentation system that includes a plurality of movable information presentation apparatuses displaying images of a plurality of objects, and a control apparatus outputting control signals for controlling the information presentation apparatuses. Each information presentation apparatus includes a display unit, a moving unit, a driving unit, a position sensor, a first communication unit, and a control unit. The control apparatus includes an object position information obtaining unit, a second communication unit, and a control unit. The control unit of the information presentation apparatus controls the display unit to display an image of the object, for which position information has been obtained by the object position information obtaining unit of the control apparatus, and controls the driving unit based on the control signal received by the first communication unit.03-05-2009
20120257814IMAGE COMPLETION USING SCENE GEOMETRY - Image completion using scene geometry is described, for example, to remove marks from digital photographs or complete regions which are blank due to editing. In an embodiment an image depicting, from a viewpoint, a scene of textured objects has regions to be completed. In an example, geometry of the scene is estimated from a depth map and the geometry used to warp the image so that at least some surfaces depicted in the image are fronto-parallel to the viewpoint. An image completion process is guided using distortion applied during the warping. For example, patches used to fill the regions are selected on the basis of distortion introduced by the warping. In examples where the scene comprises regions having only planar surfaces the warping process comprises rotating the image. Where the scene comprises non-planar surfaces, geodesic distances between image elements may be scaled to flatten the non-planar surfaces.10-11-2012
20120257816ANALYSIS OF 3D VIDEO - An image analysis apparatus for processing a 3D pair of images representing respective left eye and right eye views of a scene comprises an image crop detector configured to detect the presence of an image crop at a lateral edge of one of the images; and a frame violation detector configured to detect, within areas of the images excluding any detected image crops, an image feature within a threshold distance of the left edge of the left image which is not found in the right image, or an image feature within a threshold distance of the right edge of the right image which is not found in the left image.10-11-2012
20120189191METHODS FOR MATCHING GAIN AND COLOR FOR STEREOSCOPIC IMAGING SYSTEMS - Stereoscopic imaging devices may include stereoscopic imagers, stereoscopic displays, and processing circuitry. The processing circuitry may be used to collect auto white balance (AWB) statistics for each image captured by the stereoscopic imager. A stereoscopic imager may include two image modules that may be color calibrated relative to each other or relative to a standard calibrator. AWB statistics may be used by the processing circuitry to determine global, local and spatial offset gain adjustments to provide intensity matched stereoscopic images for display. AWB statistics may be combined by the processing circuitry with color correction offsets determined during color calibration to determine color-transformation matrices for displaying color matched stereoscopic images using the stereoscopic display. Gain and color-transformation corrections may be continuously applied during operation of a stereoscopic imaging device to provide intensity-matched, color-matched stereoscopic images in any lighting condition.07-26-2012
20120189190AUTOMATIC DETECTION AND GROUPING OF STRAIGHT LINES IN IMAGES FOR PERSONALIZATION - As set forth herein, a computer-implemented method is employed to place personalized text into an image. A location within the image is selected where the text is to be placed, and a region is grown around the selected location. The 3D geometry of the surface is estimated proximate to the location and sets of parallel straight lines in the image are identified and selected to define a bounding polygon into which text may be inserted. Optionally, a user is permitted to adjust the bounding polygon once it has been automatically generated.07-26-2012
20120263374DEVICE AND METHOD FOR TRANSFORMING 2D IMAGES INTO 3D IMAGES - A device for transforming 2D images into 3D images includes a position calculation unit and an image processing block. The position calculation unit generates multiple start points corresponding to multiple pixel lines of a panel according to a display type of the panel. The image processing block reshapes multiple input enable signals into multiple output enable signals according to the start points. The pixel lines of the panel display the output data signal as multiple image signals respectively according to the output enable signals. The image signals include multiple left-eye image signals and multiple right-eye image signals.10-18-2012
20120082370MATCHING DEVICE, MATCHING METHOD AND MATCHING PROGRAM - Provided is a matching device capable of improving the accuracy of the degree of similarity in the calculation of the degree of similarity between data sets. Element selection means 04-05-2012
20120082369IMAGE COMPOSITION APPARATUS, IMAGE RETRIEVAL METHOD, AND STORAGE MEDIUM STORING PROGRAM - There is provided an image composition apparatus including a parallax deriving unit configured to derive a parallax of one area in a background image, the one area corresponding to one object in the background image, an image selection unit configured to select an image which has a parallax different from the parallax of the one area in the background image, as a material image, from a plurality of three-dimensional images, each of which is viewed as a specific object in a three-dimensional manner, and an image composition unit configured to superpose the material image selected by the image selection unit on the background image.04-05-2012
20120082368DEPTH CORRECTION APPARATUS AND METHOD - According to one embodiment, a depth correction apparatus includes a clusterer, a calculator and a corrector. The clusterer is configured to apply clustering to at least one of pixel values and depth values of a plurality of pixels in a calculation range corresponding to a correction target pixel, and to classify the plurality of pixels in the calculation range into a plurality of classes. The calculator is configured to calculate pixel value statistics of the respective classes using pixel values of pixels in the respective classes. The corrector is configured to determine a corresponding class of the correction target pixel based on a pixel value of the correction target pixel and the pixel value statistics of the respective classes, and to apply correction which replaces a depth value of the correction target pixel by a representative depth value of the corresponding class.04-05-2012
20120230580ANALYSIS OF STEREOSCOPIC IMAGES - A method of processing in an image processor a pair of images intended for stereoscopic presentation to identify left-eye and right-eye images of the pair. The method includes dividing both images of the pair into a plurality of like image regions, determining for each region a disparity value between the images of the pair to produce a set of disparity values, deriving for each region a confidence factor for the disparity value, determining a correlation parameter between the set of disparity values and a corresponding set of disparity values from a disparity model, in which the contribution of the disparity value for a region to the said correlation parameter is weighted in dependence on the confidence factor for that region, and identifying from said correlation parameter the left-eye and right-eye images of the pair, wherein the left eye and right images form a stereoscopic pair.09-13-2012
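The correlation parameter with confidence-weighted contributions can be pictured as a weighted Pearson correlation between the measured per-region disparities and the model disparities; the sketch below is illustrative only, and the actual disparity model and the left/right decision rule are defined by the method, not shown here:

```python
import numpy as np

def weighted_correlation(disparities, model_disparities, confidence):
    """Confidence-weighted Pearson correlation between per-region disparity
    values and a disparity model; its sign/magnitude can then be used to
    decide which image of the pair is the left-eye view."""
    w = confidence / confidence.sum()
    mx = np.sum(w * disparities)
    my = np.sum(w * model_disparities)
    cov = np.sum(w * (disparities - mx) * (model_disparities - my))
    sx = np.sqrt(np.sum(w * (disparities - mx) ** 2))
    sy = np.sqrt(np.sum(w * (model_disparities - my) ** 2))
    return cov / (sx * sy)
```

Regions with low-confidence disparity estimates thus contribute little to the final decision, which is the point of the weighting described in the abstract.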
20120230581INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, AND PROGRAM - An information processing apparatus includes an image acquisition unit for acquiring a real-world image, a space analysis unit for analyzing a three-dimensional space structure of the real-world image, a scale reference detection unit for detecting a length, in a three-dimensional space, of an object to be a scale reference that is included in the real-world image, and a scale determination unit for determining, from the length of the object detected by the scale reference detection unit, a scale of the three-dimensional space.09-13-2012
20110123099SENSING DEVICE AND METHOD OF DETECTING A THREE-DIMENSIONAL SPATIAL SHAPE OF A BODY - A method for identifying a best fitting shoe includes the steps of scanning a foot using a photogrammetric 3D foot scanner for obtaining a digital 3D model of the foot, and providing a database in which 3D models of shapes of the interiors of available shoes are stored. The 3D model of the digitized foot of the customer is compared with the 3D models of available shoes stored in the database and a shoe of which the 3D model of internal shape is the most similar to the 3D model of the customer foot is selected. The steps of comparing and selecting are performed using a computing unit. A sensing device for detecting a three-dimensional spatial shape of a body includes a sensing end and a camera. A method of detecting a three-dimensional interior spatial shape includes providing the sensing device and scanning the spatial shape.05-26-2011
20110123098System and a Method for Three-dimensional Modeling of a Three-dimensional Scene Features with a Cooling System - A method and a system for three-dimensional modeling of three-dimensional scene features are described.05-26-2011
20110123097Method and computer program for improving the dimensional acquisition of an object - The present invention relates to a method for improving the efficiency of dimensional acquisition of an object by a dimensional measurement device directed over the object, comprising the steps: a) directing the measurement device over the object to acquire its dimensions, b) providing an indication of the resolution of the acquired regions, c) re-directing the measurement device over at least part of the acquired regions indicating insufficient resolution according to predetermined criteria, d) updating the indication of the resolution of the acquired regions, and e) repeating steps c) and d) until sufficient resolution is indicated according to the predetermined criteria, thereby efficiently acquiring the dimensions of the object at sufficient resolution. It also relates to a computer program therefor.05-26-2011
20110123095Sparse Volume Segmentation for 3D Scans - A computer readable medium is provided embodying instructions executable by a processor to perform a method for sparse volume segmentation for a 3D scan of a target. The method includes learning prior knowledge, providing volume data comprising the target, selecting a plurality of key contours of the image of the target, building a 3D sparse model of the image of the target given the plurality of key contours, segmenting the image of the target given the 3D sparse model, and outputting a segmentation of the image of the target.05-26-2011
20090003687Segmenting Image Elements - A method of segmenting image elements into a foreground and background is described, such that only the foreground elements are part of a volume of interest for stereo matching. This reduces computational burden as compared with computing stereo matching over the whole image. An energy function is defined using a probabilistic framework and that energy function approximated to require computation only over foreground disparities. An optimization algorithm is used on the energy function to perform the segmentation.01-01-2009
20100254592CALCULATING Z-DEPTHS AND EXTRACTING OBJECTS IN IMAGES - The dual cameras produce two simultaneous images IM10-07-2010
20100254593System for Draping Meteorological Data on a Three Dimensional Terrain Image - A system for draping meteorological data on a three dimensional terrain image has been developed. The system includes a central processing server that receives meteorological data in real time and drapes the meteorological data over a three dimensional terrain image. The image is then transmitted to a display computer for use by an end user.10-07-2010
20080298673THREE-DIMENSIONAL DATA REGISTRATION METHOD FOR VISION MEASUREMENT IN FLOW STYLE BASED ON DOUBLE-SIDED TARGET - The present disclosure is directed to a three-dimensional data registration method for vision measurement in flow style based on a double-sided target. An embodiment of the disclosed method comprises: A. Setting up two digital cameras which can observe the entire measured object; B. Calibrating intrinsic parameters and a transformation between the two digital camera coordinate frames; C. A double-sided target being placed near the measured area of the measured object, the two digital cameras and a vision sensor taking images of at least three non-collinear feature points of the double-sided target; D. Removing the target, measuring the measured area by using the vision sensor; E. Respectively computing the three dimensional coordinates of the feature points in the global coordinate frame and in the vision sensor coordinate frame; F. Estimating the transformation from the vision sensor coordinate frame to the global coordinate frame through the three dimensional coordinates of the three or more non-collinear feature points obtained at step E, then transforming the three dimensional data of the measured area to the global coordinate frame; and G. Repeating steps C, D, E, and F, then completing three dimensional data registration for all measured areas. The present disclosure improves three dimensional data registration precision and efficiency.12-04-2008
20120263373INVERSE STEREO IMAGE MATCHING FOR CHANGE DETECTION - A system and method for finding real terrain matches in a stereo image pair is presented. A method for finding differences of underlying terrain between a first stereo image and a second stereo image includes performing epipolar rectification on a stereo image pair to produce rectified image data. The method performs a hybrid stereo image matching on the rectified image data to produce image matching data. A digital surface model (DSM) is generated based on the image matching data. Next, the method identifies areas in the DSM where the stereo image matching should fail based on the image matching data and the DSM to generate predicted failures. The method can then determine real terrain changes based on the predicted failures and the image matching data.10-18-2012
20110002533IMAGE PROCESSING METHOD, IMAGE PROCESSING DEVICE AND RECORDING MEDIUM - An image processing method and an image processing device which can improve sharpness by producing a binocular rivalry intentionally are provided. An image processing device 01-06-2011
20110002530SUB-DIFFRACTION LIMIT IMAGE RESOLUTION IN THREE DIMENSIONS - The present invention generally relates to sub-diffraction limit image resolution and other imaging techniques, including imaging in three dimensions. In one aspect, the invention is directed to determining and/or imaging light from two or more entities separated by a distance less than the diffraction limit of the incident light. For example, the entities may be separated by a distance of less than about 1000 nm, or less than about 300 nm for visible light. In some cases, the position of the entities can be determined in all three spatial dimensions (i.e., in the x, y, and z directions), and in certain cases, the positions in all three dimensions can be determined to an accuracy of less than about 1000 nm. In one set of embodiments, the entities may be selectively activatable, i.e., one entity can be activated to produce light, without activating other entities. A first entity may be activated and determined (e.g., by determining light emitted by the entity), then a second entity may be activated and determined. The emitted light may be used to determine the x and y positions of the first and second entities, for example, by determining the positions of the images of these entities, and in some cases, with sub-diffraction limit resolution. In some cases, the z positions may be determined using one of a variety of techniques that uses intensity information or focal information (e.g., a lack of focus) to determine the z position. Non-limiting examples of such techniques include astigmatism imaging, off-focus imaging, or multi-focal-plane imaging.01-06-2011
20120237115Method for acquiring a 3D image dataset freed of traces of a metal object - An interpolation of data values is performed during the acquisition of a 3D image dataset which is free of traces of a metal object imaged in the underlying 2D image datasets. A target function is defined into which data values of the 3D image dataset that are dependent on said substitute data values are incorporated following preprocessing. The substitute data values are then varied iteratively until the value of the target function satisfies a predetermined criterion. Residual artifacts that still persist following the interpolation can thus be effectively reduced.09-20-2012
20120237114METHOD AND APPARATUS FOR FEATURE-BASED STEREO MATCHING - Disclosed are a method and apparatus for feature-based stereo matching. A method for stereo matching of a reference image and at least one comparative image captured by at least two cameras from different points of view using a computer device includes collecting the reference image and the at least one comparative image, extracting feature points from the reference image, tracking points corresponding to the feature points in the at least one comparative image using an optical flow technique, and generating a depth map according to correspondence-point tracking results.09-20-2012
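A minimal sketch of the tracking step using OpenCV's pyramidal Lucas-Kanade optical flow, assuming rectified cameras with known focal length and baseline; the parameter values, the reliability threshold and the triangulation-by-disparity shortcut are assumptions, not the disclosed apparatus:

```python
import cv2

def sparse_depth_from_stereo(left_gray, right_gray, focal_px, baseline_m):
    """Track reference-image feature points into the comparative image with
    pyramidal Lucas-Kanade optical flow and convert the resulting horizontal
    disparities into sparse depth samples (rectified cameras assumed)."""
    pts_l = cv2.goodFeaturesToTrack(left_gray, maxCorners=500,
                                    qualityLevel=0.01, minDistance=7)
    pts_r, status, _ = cv2.calcOpticalFlowPyrLK(left_gray, right_gray, pts_l, None)
    depths = {}
    for p_l, p_r, ok in zip(pts_l.reshape(-1, 2), pts_r.reshape(-1, 2), status.ravel()):
        disparity = p_l[0] - p_r[0]
        if ok and disparity > 0.5:           # keep reliable, non-degenerate matches
            depths[(int(p_l[0]), int(p_l[1]))] = focal_px * baseline_m / disparity
    return depths    # sparse samples; a dense depth map would be interpolated from these
```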
20120237113ELECTRONIC DEVICE AND METHOD FOR OUTPUTTING MEASUREMENT DATA - A method outputs measurement data automatically using an electronic device. The method obtains measurement data of feature elements from a two dimensional (2D) image of a measured object, determines a type of measurement applied to each feature element, obtains feature elements from planes of a three dimensional (3D) image of the measured object, and maps each of the obtained feature elements in the 3D image to the 2D image. The method further obtains sequential marked numbers from the 2D image, determines a feature element which is nearest to any marked number from the mapped feature elements, determines an output axis for each of the determined feature elements, and outputs measured results and measurement codes of the determined feature elements by reference to the measurement data, the type of measurement and the output axis of each determined feature element.09-20-2012
20120237112Structured Light for 3D Shape Reconstruction Subject to Global Illumination - Depth values in a scene are measured by projecting sets of patterns on the scene, wherein each set of patterns is structured with a different spatial frequency using different encoding functions. Sets of images of the scene are acquired, wherein there is one image for each pattern in each set. Depth values are determined for each pixel at corresponding locations in the sets of images. The depth values of each pixel are analyzed, and the depth value is returned if the depth values at the corresponding locations are similar. Otherwise, the depth value is marked as having an error.09-20-2012
20120237111Performing Structure From Motion For Unordered Images Of A Scene With Multiple Object Instances - A technology is described for performing structure from motion for unordered images of a scene with multiple object instances. An example method can include obtaining a pairwise match graph using interest point detection for obtaining interest points in images of the scene to identify pairwise image matches using the interest points. Multiple metric two-view and three-view partial reconstructions can be estimated by performing independent structure from motion computation on a plurality of match-pairs and match-triplets selected from the pairwise match graph. Pairwise image matches can be classified into correct matches and erroneous matches using expectation maximization to generate geometrically consistent match labeling hypotheses and a scoring function to evaluate the match labeling hypotheses. A structure from motion computation can then be performed on the subset of match pairs which have been inferred as correct.09-20-2012
20120263371Method of image fusion - A method of fusing images includes the steps of providing at least two images of the same object, each image being a digital image or being transformed into a digital image formed by an array of pixels or voxels, and of combining together the pixels or voxels of the at least two images to obtain a new image formed by the combined pixels or voxels.10-18-2012
20120263372Method And Apparatus For Processing 3D Image - A first image and a second image make a stereo pair. A parallax between each subject image in the first image and a corresponding subject image in the second image is calculated. A 3D image formed by the first image and the second image is divided into a plurality of areas. Detection is made as to which of the areas each parallax calculated by the parallax calculator is present in. A desired parallax is determined on the basis of the calculated parallax or parallaxes present in one of the areas. An object image is superimposed on the first image and the second image in said one of the areas in a manner such that a parallax between the object image superimposed on the first image and the object image superimposed on the second image will be equal to the desired parallax.10-18-2012
20110038530ENHANCED OBJECT RECONSTRUCTION - Processing images includes projecting an infra-red pattern onto a three-dimensional object and producing a first image, a second image, and a third image of the three-dimensional object while the pattern is projected on the three-dimensional object. The first image and the second image include the three-dimensional object and the pattern. The first image and the second image are produced by capturing at a first camera and a second camera, respectively, light filtered through an infra-red filter. The third image includes the three-dimensional object but not the pattern. Processing the images also includes establishing a first-pair correspondence between a portion of pixels in the first image and a portion of pixels in the second image. Processing the images further includes constructing, based on the first-pair correspondence and the third image, a two-dimensional image that depicts a three-dimensional construction of the three-dimensional object.02-17-2011
20110038529IMAGE PROCESSING APPARATUS AND IMAGE PROCESSING METHOD - Image retargeting is appropriately performed on stereo pair images composed of at least two images such as in three-dimensional displays. A path of connected pixels in first image data is calculated based on pixel gradient energy. Each pixel in the second image data corresponding to each pixel in connected pixels in the first image data is calculated as an initial search point, based on the stereo correspondence relationship between the first image data and the second image data. Pixels that minimize energy between pixels of the first image data and pixels of the second image data in the proximity of the initial search point are calculated as a path of connected pixels in the second image data. A path of optimal connected pixels in the first image data is calculated using the energy.02-17-2011
20120093395METHOD AND SYSTEM FOR HIERARCHICALLY MATCHING IMAGES OF BUILDINGS, AND COMPUTER-READABLE RECORDING MEDIUM - The present invention relates to a method for hierarchically matching a building image. The method includes the steps of: matching a wall of a specific building in the building image inputted as a query with a wall(s) of a building(s) in at least one panoramic image by using a technology of matching a building's shape or repeated pattern; selecting a candidate panoramic image(s) which includes a building(s) recognized to have the same or similar wall to the specific building in the panoramic image(s) as a result of matching its wall with others; matching at least one local region, if containing a recognizable string or figure, in the specific building with local region(s) in the building(s) of the candidate panoramic image(s) by using a technology of recognizing a string or a figure; and determining top n panoramic image(s) as the result of matching the local region.04-19-2012
20120093394METHOD FOR COMBINING DUAL-LENS IMAGES INTO MONO-LENS IMAGE - A method for combining dual-lens images into a mono-lens image, suitable for a three-dimensional camera having a left lens and a right lens is provided. First, the left lens and the right lens are used to capture a left-eye image and a right-eye image. Next, a disparity between each of a plurality of corresponding pixels in the left-eye image and the right-eye image is calculated. Then, an overlap area of the left-eye image and the right-eye image is determined according to the calculated disparities of pixels. Finally, the images within the overlap area of the left-eye image and the right-eye image are combined into the mono-lens image.04-19-2012
20120093393CAMERA TRANSLATION USING ROTATION FROM DEVICE - A method, apparatus, system, article of manufacture, and computer readable storage medium provides the ability to determine two or more camera viewpoint optical centers. A first image and a second image captured by camera devices (and the rotations for the camera devices) are obtained. For each pair of matched points between the first image and the second image, a linear equation is defined that utilizes the rotations, pixel coordinates of the matched points and optical centers. A matrix A04-19-2012
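The abstract above is truncated in this listing, so the following is only one standard way such a per-match linear equation in the optical centers can be written once the rotations are known: the two back-projected rays of a matched point pair must intersect, which eliminates the unknown depths.

```latex
% Rays through matched pixels x_1, x_2, with known rotations R_1, R_2 and intrinsics K:
%   X = C_i + \lambda_i d_i, \qquad d_i = R_i^{\top} K^{-1} \tilde{x}_i .
% Eliminating X and the depths \lambda_i gives one equation per match that is
% linear in the unknown optical centers C_1, C_2:
\[
  (d_1 \times d_2)^{\top} \, (C_2 - C_1) = 0 .
\]
% Stacking these rows over all matches yields a homogeneous system A c = 0 in the
% stacked center coordinates c, which can be solved in least squares (e.g. via SVD).
```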
20120269423Analytical Multi-View Rasterization - Multi-view rasterization may be performed by calculating visibility over a camera line. Edge equations may be evaluated iteratively along a scanline. The edge equations may be evaluated using single instruction multiple data instruction sets.10-25-2012
20110216962METHOD OF EXTRACTING THREE-DIMENSIONAL OBJECTS INFORMATION FROM A SINGLE IMAGE WITHOUT META INFORMATION - Disclosed herein is a method of extracting three-dimensional object information by shadow analysis from a single image without meta information; the technical problem to be solved is to extract three-dimensional information of an object, such as the height of the object and the footprint surface position of the object, from a single image without meta information.09-08-2011
20110216961INFORMATION PROCESSING DEVICE, INFORMATION PROCESSING METHOD, AND PROGRAM - There is provided an information processing device that includes a virtual space recognition unit for analyzing the 3D space structure of a real space to recognize a virtual space, a storage unit for storing an object to be arranged in the virtual space, a display unit for displaying the object arranged in the virtual space, on a display device, a detection unit for detecting device information of the display device, and an execution unit for executing predetermined processing toward the object based on the device information.09-08-2011
20110235898MATCHING PROCESS IN THREE-DIMENSIONAL REGISTRATION AND COMPUTER-READABLE STORAGE MEDIUM STORING A PROGRAM THEREOF - The matching process includes: finding first and second three-dimensional reconstruction point sets that contain three-dimensional position coordinates of segments, and first and second feature sets that contain three-dimensional information regarding vertices of the segments, from image data of an object (S09-29-2011
20110235897DEVICE AND PROCESS FOR THREE-DIMENSIONAL LOCALIZATION AND POSE ESTIMATION USING STEREO IMAGE, AND COMPUTER-READABLE STORAGE MEDIUM STORING THE PROGRAM THEREOF - The device includes: 09-29-2011
20120321171IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, AND PROGRAM - Provided is an image processing apparatus including an image input unit configured to receive at least one of a first left eye image and a first right eye image photographed from different viewpoints and applicable to stereoscopic vision, and a stereoscopic image generation processing unit configured to receive one of the first left eye image and the first right eye image and generate a second left eye image and a second right eye image applicable to the stereoscopic vision through an image conversion process. Among the first left eye image and the first right eye image input to the image input unit and the second left eye image and the second right eye image generated by the stereoscopic image generation processing unit, two images are output as images to be applied to the stereoscopic vision.12-20-2012
20100232684CALIBRATION APPARATUS AND METHOD FOR ASSISTING ACCURACY CONFIRMATION OF PARAMETER FOR THREE-DIMENSIONAL MEASUREMENT - When computation of a three-dimensional measurement processing parameter is completed, accuracy of a computed parameter can easily be confirmed. After a parameter for three-dimensional measurement is computed through calibration processing using a calibration workpiece in which plural feature points whose positional relationship is well known can be extracted from an image produced by imaging, three-dimensional coordinate computing processing is performed using the computed parameter for the plural feature points included in the stereo image used to compute the parameter. Perspective transformation of each computed three-dimensional coordinate is performed to produce a projection image in which each post-perspective-transformation three-dimensional coordinate is expressed by a predetermined pattern, and the projection image is displayed on a monitor device.09-16-2010
20100232683Method For Displaying Recognition Result Obtained By Three-Dimensional Visual Sensor And Three-Dimensional Visual Sensor - Display suitable to an actual three-dimensional model or a recognition-target object is performed when stereoscopic display of a three-dimensional model is performed while correlated to an image used in three-dimensional recognition processing. After a position and a rotation angle of a workpiece are recognized through recognition processing using the three-dimensional model, coordinate transformation of the three-dimensional model is performed based on the recognition result, and a post-coordinate-transformation Z-coordinate is corrected according to an angle (elevation angle f) formed between a direction of a line of sight and an imaging surface. Then perspective transformation of the post-correction three-dimensional model into a coordinate system of a camera of a processing object is performed, and a height according to a pre-correction Z-coordinate at a corresponding point of the pre-coordinate-transformation three-dimensional model is set to each point of a produced projection image. Projection processing is performed from a specified direction of a line of sight to a point group that is three-dimensionally distributed by the processing, thereby producing a stereoscopic image of the three-dimensional model.09-16-2010
20100232681THREE-DIMENSIONAL VISION SENSOR - An object of the present invention is to enable performing height recognition processing by setting a height of an arbitrary plane to zero for convenience of the recognition processing. A parameter for three-dimensional measurement is calculated and registered through calibration and, thereafter, an image pickup with a stereo camera is performed on a plane desired to be recognized as having a height of zero in actual recognition processing. Further, three-dimensional measurement using the registered parameter is performed on characteristic patterns (marks m09-16-2010
20110243425Methods For Analyzing Absorbent Articles - A method for analyzing an absorbent article may include providing a three-dimensional computed tomography data set comprising a mannequin image and an article image. The article image may be constructed from projections collected while the absorbent article is fitted to a mannequin. An outer surface of the mannequin image may be identified. A desired distance may be provided. A volumetric demarcation may be spaced the desired distance away from the outer surface of the mannequin image. An image volume may be disposed between the outer surface of the mannequin image and the volumetric demarcation. A relevant portion of the article image may be enhanced using a processor. The relevant portion of the article image may be coincident with the image volume.10-06-2011
20110262032ENHANCED OBJECT RECONSTRUCTION - Processing images includes projecting an infra-red pattern onto a three-dimensional object and producing a first image, a second image, and a third image of the three-dimensional object while the pattern is projected on the three-dimensional object. The first image and the second image include the three-dimensional object and the pattern. The first image and the second image are produced by capturing at a first camera and a second camera, respectively, light filtered through an infra-red filter. The third image includes the three-dimensional object but not the pattern. Processing the images also includes establishing a first-pair correspondence between a portion of pixels in the first image and a portion of pixels in the second image. Processing the images further includes constructing, based on the first-pair correspondence and the third image, a two-dimensional image that depicts a three-dimensional construction of the three-dimensional object.10-27-2011
20120321173INFORMATION PROCESSING METHOD AND INFORMATION PROCESSING APPARATUS - Provided is an information processing method used for generating a multi view-point image composed of a great number of images according to the shape of an object, for generating a three-dimensional model, or for performing image processing for arbitrary view-point object recognition. Based on a plurality of captured images obtained by imaging the object from a plurality of view points with an imaging means, a relative position and orientation of the imaging means with respect to the object is calculated for each of the plurality of view points. Based on the calculated plurality of relative positions and orientations, a missing position and orientation of the imaging means, in a direction in which imaging by the imaging means is missing, is calculated, and an image used for displaying the calculated missing position and orientation on a display means is generated.12-20-2012
20120087571METHOD AND APPARATUS FOR SYNCHRONIZING 3-DIMENSIONAL IMAGE - There are provided a 3-D image synchronization method and apparatus. The method comprises determining a reference region for each of the frames of a first image and determining a counter region for each of the frames of a second image, corresponding to the reference region, for the first image and the second image forming a 3-D image; calculating the feature values of the reference region and the counter region; extracting a frame difference between the first image and the second image based on the feature values; and moving any one of the first image and the second image in the time domain based on the extracted frame difference.04-12-2012
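As a hedged sketch of the synchronization step described in the entry above, the following assumes each frame is reduced to a single feature value (here simply the mean luminance of a fixed reference region, a hypothetical choice) and searches exhaustively for the frame offset that minimizes the feature difference between the two streams; mean_luma, the region, and max_shift are illustrative, not taken from the patent:

    import numpy as np

    def mean_luma(frame):
        # feature value: mean luminance of a fixed reference region (illustrative choice)
        return frame[100:200, 100:200].mean()

    def estimate_frame_offset(left_frames, right_frames, max_shift=10):
        # left_frames, right_frames: lists of 2-D grayscale frames of the two views
        f_left = np.array([mean_luma(f) for f in left_frames])
        f_right = np.array([mean_luma(f) for f in right_frames])
        best_shift, best_err = 0, np.inf
        for s in range(-max_shift, max_shift + 1):
            if s >= 0:
                a, b = f_left[s:], f_right[:len(f_right) - s]
            else:
                a, b = f_left[:len(f_left) + s], f_right[-s:]
            n = min(len(a), len(b))
            err = np.mean(np.abs(a[:n] - b[:n]))    # feature-value mismatch at this offset
            if err < best_err:
                best_shift, best_err = s, err
        return best_shift   # move one stream by this many frames in the time domain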
20120087570Method and apparatus for converting 2D image into 3D image - A method and an apparatus for converting a 2D image into a 3D image are disclosed. The method includes converting an input image having pixel values into a brightness image having brightness values, generating a depth map having depth information from the brightness image, and generating at least one of a left eye image, a right eye image and a reproduction image by parallax-processing the input image using the generated depth map. In the parallax-processing, the pixel value of a delay pixel is substituted for the pixel value of the pixel currently being processed, taking into account the depth information of N pixels (N being an integer of 2 or more) that include the current pixel. The delay pixel is determined according to the arrangement of the depth information of the N pixels and is the pixel located M pixels (M being an integer of 0 or more) before the current pixel.04-12-2012
20120281906Method, System and Computer Program Product for Converting a 2D Image Into a 3D Image - For converting a two-dimensional visual image into a three-dimensional visual image, the two-dimensional visual image is segmented into regions, including a first region having a first depth and a second region having a second depth. The first and second regions are separated by at least one boundary. A depth map is generated that assigns variable depths to pixels of the second region in response to respective distances of the pixels from the boundary, so that the variable depths approach the first depth as the respective distances decrease, and so that the variable depths approach the second depth as the respective distances increase. In response to the depth map, left and right views of the three-dimensional visual image are synthesized.11-08-2012
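The boundary-driven depth assignment above can be approximated with a distance-from-boundary ramp. The sketch below is a guess at that structure rather than the patented procedure: a two-pass city-block distance transform measures how far each pixel of the second region lies from the boundary, and the assigned depth is blended from the first region's depth toward the second region's depth as that distance grows (the falloff parameter is illustrative):

    import numpy as np

    def chamfer_distance(mask):
        # approximate distance (in pixels) from each True pixel of `mask`
        # to the nearest False pixel, via a two-pass city-block sweep
        h, w = mask.shape
        big = h + w
        d = np.where(mask, big, 0).astype(np.float64)
        for y in range(h):                        # forward pass
            for x in range(w):
                if y > 0: d[y, x] = min(d[y, x], d[y - 1, x] + 1)
                if x > 0: d[y, x] = min(d[y, x], d[y, x - 1] + 1)
        for y in range(h - 1, -1, -1):            # backward pass
            for x in range(w - 1, -1, -1):
                if y < h - 1: d[y, x] = min(d[y, x], d[y + 1, x] + 1)
                if x < w - 1: d[y, x] = min(d[y, x], d[y, x + 1] + 1)
        return d

    def depth_from_boundary(region2_mask, depth1, depth2, falloff=30.0):
        # pixels of region 2 near the boundary get depth1; far pixels approach depth2
        dist = chamfer_distance(region2_mask)
        w = np.clip(dist / falloff, 0.0, 1.0)
        return np.where(region2_mask, depth1 * (1 - w) + depth2 * w, depth1)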
20120281905METHOD OF IMAGE PROCESSING AND ASSOCIATED APPARATUS - A method of image processing is provided for separating an image object from a captured or provided image according to a three-dimensional (3D) depth and generating a synthesized image from the image portions identified and selectively modified in the process. The method retrieves or determines a corresponding three-dimensional (3D) depth for each portion of an image, and enables capturing a selective portion of the image as an image object according to the 3D depth of each portion of the image, so as to synthesize the image object with other image objects by selective processing and superimposing of the image objects to provide synthesized imagery.11-08-2012
20120321172CONFIDENCE MAP, METHOD FOR GENERATING THE SAME AND METHOD FOR REFINING A DISPARITY MAP - A method for generating a confidence map comprising a plurality of confidence values, each being assigned to a respective disparity value in a disparity map assigned to at least two stereo images each having a plurality of pixels, wherein a single confidence value is determined for each disparity value, and wherein for determination of the confidence value at least a first confidence value based on a match quality between a pixel or a group of pixels in the first stereo image and a corresponding pixel or a corresponding group of pixels in the second stereo image and a second confidence value based on a consistency of the corresponding disparity estimates is taken into account.12-20-2012
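A rough illustration of combining a match-quality term with a left-right consistency term into one confidence value per disparity, as the entry above outlines; the cost-volume layout, the ratio-based distinctiveness measure, and the mixing weight lam are assumptions made for the sketch:

    import numpy as np

    def confidence_map(cost_volume, disp_left, disp_right, lam=0.5):
        # cost_volume: H x W x D matching costs for the left view
        # disp_left, disp_right: integer disparity maps of the two views
        h, w, _ = cost_volume.shape
        ys, xs = np.mgrid[0:h, 0:w]
        d = disp_left.astype(int)
        best = cost_volume[ys, xs, d]                       # cost of the chosen disparity
        second = np.partition(cost_volume, 1, axis=2)[:, :, 1]
        # first term: how distinctive the winning match cost is
        c_match = np.clip(1.0 - best / (second + 1e-6), 0.0, 1.0)
        # second term: left-right consistency of the two disparity estimates
        xr = np.clip(xs - d, 0, w - 1)
        lr_err = np.abs(disp_left - disp_right[ys, xr]).astype(np.float64)
        c_consist = np.exp(-lr_err)
        return lam * c_match + (1 - lam) * c_consist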
20120328182IMAGE FORMAT DISCRIMINATION DEVICE, METHOD OF DISCRIMINATING IMAGE FORMAT, IMAGE REPRODUCING DEVICE AND ELECTRONIC APPARATUS - An image format discrimination device includes a correlation candidate extraction unit that obtains a gradient amount of each pixel position, based on pixel data of a horizontal line of input image data and extracts as a correlation candidate, a pixel of a position where a sign of the gradient amount is changed; a correlation inspection unit that inspects whether or not a first correlation candidate range and a second correlation candidate range having correlation to each other in the horizontal line are present, based on the correlation candidate that is extracted by the correlation candidate extraction unit; and a discriminating image format unit that discriminates whether or not the input image data are three-dimensional image data of a side-by-side type, based on the inspection result of the correlation inspection unit.12-27-2012
20090220144STEREO PHOTOGRAMMETRY FROM A SINGLE STATION USING A SURVEYING INSTRUMENT WITH AN ECCENTRIC CAMERA - A method for determining, in relation to a surveying instrument, target coordinates of a point of interest, or target, identified in two images captured by a camera in the surveying instrument. The method comprises determining coordinates of the surveying instrument, capturing a first image using the camera in the first camera position; identifying, in the first image, an object point associated with the target; measuring first image coordinates of the object point in the first image; rotating the surveying instrument around the horizontal axis and the vertical axis in order to position the camera in a second camera position; capturing a second image using the camera in the second camera position; identifying, in the second image, the object point identified in the first image; measuring second image coordinates of the object point in the second image; and determining the coordinates of the target in relation to the surveying instrument.09-03-2009
20120288185IMAGE CONVERSION APPARATUS AND IMAGE CONVERSION METHOD - According to one embodiment, an image conversion apparatus includes a 3D conversion instruction module, a determination module, and a converter. The 3D conversion instruction module is configured to instruct execution of a 3D conversion required to convert an input image into a 3D image. The determination module is configured to determine validity or invalidity of the 3D conversion instruction based on a type of the input image. The converter is configured to convert, based on validity determination of the 3D conversion instruction, the input image into the 3D image in response to the 3D conversion instruction.11-15-2012
20120288184METHOD AND SYSTEM FOR ADJUSTING DEPTH VALUES OF OBJECTS IN A THREE DIMENSIONAL (3D) DISPLAY - A method of setting a plurality of depth values of a plurality of objects in a scene. The method comprises providing an image dataset depicting a scene comprising a plurality of objects having a plurality of depth values with a plurality of depth differences thereamong, selecting a depth range, simultaneously adjusting the plurality of depth values while maintaining the plurality of depth differences, the adjusting being limited by the depth range, and instructing the generation of an output image depicting the scene with the plurality of objects having the adjusted depth values.11-15-2012
20100166296METHOD AND PROGRAM FOR EXTRACTING SILHOUETTE IMAGE AND METHOD AND PROGRAM FOR CONSTRUCTING THREE DIMENSIONAL MODEL - The present invention provides a method and a program for extracting a high-accuracy silhouette by a relatively simple process that requires neither manual labor nor a special photography environment. The method comprises: extracting a number of first silhouettes from a number of object images and background images by background subtraction; constructing a first visual hull from the first silhouettes by a shape-from-silhouette method; constructing a second visual hull by repairing missing parts and/or removing unwanted regions in the first visual hull; and extracting a number of second silhouettes from the second visual hull.07-01-2010
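The first two steps of that pipeline (background-subtraction silhouettes and a shape-from-silhouette hull) might look roughly like the following; the camera projection matrices, the threshold, and the voxel grid are assumed inputs, and the carving is the plain test that a voxel projects inside every silhouette:

    import numpy as np

    def extract_silhouette(object_img, background_img, threshold=30):
        # object_img, background_img: H x W x 3 uint8 images from the same viewpoint
        diff = np.abs(object_img.astype(np.int16) - background_img.astype(np.int16))
        return diff.max(axis=2) > threshold        # boolean foreground mask

    def visual_hull(silhouettes, projections, grid_points):
        # silhouettes: list of H x W boolean masks; projections: list of 3x4 camera matrices
        # grid_points: N x 3 voxel centres; returns a boolean occupancy vector
        keep = np.ones(len(grid_points), dtype=bool)
        homog = np.hstack([grid_points, np.ones((len(grid_points), 1))])
        for sil, P in zip(silhouettes, projections):
            uvw = homog @ P.T
            u = (uvw[:, 0] / uvw[:, 2]).round().astype(int)
            v = (uvw[:, 1] / uvw[:, 2]).round().astype(int)
            inside = (u >= 0) & (u < sil.shape[1]) & (v >= 0) & (v < sil.shape[0])
            hit = np.zeros(len(grid_points), dtype=bool)
            hit[inside] = sil[v[inside], u[inside]]
            keep &= hit                             # carve away voxels outside any silhouette
        return keep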
20130011047Method, System and Computer Program Product for Switching Between 2D and 3D Coding of a Video Sequence of Images - A video sequence of images includes at least first and second images. In response to at least first and second conditions being satisfied, an encoding mode is switched between two-dimensional video coding and three-dimensional video coding. The first condition is that the second image represents a scene change in comparison to the first image. The second image is encoded according to the switched encoding mode.01-10-2013
20130011046DEPTH IMAGE CONVERSION APPARATUS AND METHOD - Provided are an apparatus and method for converting a low-resolution depth image to a depth image having a resolution identical to a resolution of a high-resolution color image. The depth image conversion apparatus may generate a discrete depth image by quantizing a depth value of an up-sampled depth image, estimate a high-resolution discrete depth image by optimizing an objective function of the discrete depth image based on the high-resolution color image and an up-sampled depth border, and convert the up-sampled depth image to a high-resolution depth image by filtering the up-sampled depth image when a difference between discrete depth values of neighboring pixels in the high-resolution discrete depth image is less than a predetermined threshold value.01-10-2013
20130011045APPARATUS AND METHOD FOR GENERATING THREE-DIMENSIONAL (3D) ZOOM IMAGE OF STEREO CAMERA - An apparatus and method for generating a three-dimensional (3D) zoom image of a stereo camera are provided that may compute a baseline variation or a convergence angle that is associated with a magnification of a zoom image acquired from the stereo camera, may warp the zoom image using the computed baseline variation or the computed convergence angle, and may perform inpainting on the warped image to prevent a distortion of 3D information, so that a 3D zoom image may be generated without a distortion of 3D information using a zoom lens.01-10-2013
20130011048THREE-DIMENSIONAL IMAGE PROCESSING DEVICE, AND THREE-DIMENSIONAL IMAGE PROCESSING METHOD - In the three-dimensional imaging device (three-dimensional image processing device), the depth acquisition unit acquires L depth information and R depth information from a three-dimensional image. The image correction unit adjusts disparities of edge portion areas of a subject based on the L depth information and the R depth information such that the normal positions of the edge portion areas of the subject are farther away. Accordingly, when a three-dimensional image acquired by the three-dimensional imaging device is three-dimensionally displayed, the edge areas of the subject are displayed having a sense of roundness. As a result, a three-dimensional image that has been subjected to processing by this three-dimensional image processing device is a high-quality three-dimensional image that can appropriately reproduce the three-dimensional appearance and sense of thickness of the subject and has little of the cardboard effect.01-10-2013
20130011044OBJECT CONTOUR DETECTION DEVICE AND METHOD - An object contour detection method includes: repeatedly moving a lens with a shallow depth of field to a plurality of positions so that an image sensor senses a plurality of images of an object, while recording the plurality of lens positions and the plurality of images corresponding one-to-one to those positions; removing the unclear areas in the plurality of images to obtain a plurality of clear images, and obtaining a plurality of depth-of-field displacement quantities from the displacement between each two adjacent positions; and extending the depth of each front image by the corresponding depth-of-field displacement quantity and then combining it with the rear image in sequence, so that the plurality of clear images are combined into a stereoscopic image corresponding to the object contour.01-10-2013
20130016897METHOD AND APPARATUS FOR PROCESSING MULTI-VIEW IMAGE USING HOLE RENDERING - A method and apparatus for processing a multi-view image are provided. A priority may be assigned to each hole pixel in a hole region generated when an output view is generated. The priority of each hole pixel may be generated by combining a structure priority, a confidence priority, and a disparity priority. Hole rendering may be applied to a target patch including a hole pixel having a highest priority. The hole pixel may be restored by searching for a source patch most similar to a background of the target patch, and copying a pixel in the found source patch into a hole pixel of the target patch.01-17-2013
201300168963D Visualization of Light Detection and Ranging Data - In accordance with particular embodiments, a method includes receiving LIDAR data associated with a geographic area and generating a three-dimensional image of the geographic area based on the LIDAR data. The method further includes presenting at least a first portion of the three-dimensional image to a user based on a camera at a first location. The first portion of the three-dimensional image is presented from a walking perspective. The method also includes navigating the three-dimensional image based on a first input received from the user. The first input is used to direct the camera to move along a path in the walking perspective based on the first input and the three-dimensional image. The method further includes presenting at least a second portion of the three-dimensional image to the user based on navigating the camera to a second location. The second portion of the three-dimensional image is presented from the walking perspective.01-17-2013
20130016898METHOD AND APPARATUS FOR LOW-BANDWIDTH CONTENT-PRESERVING ENCODING OF STEREOSCOPIC 3D IMAGES - A method and apparatus are described including accepting a first and a second stereoscopic eye frame line image, determining a coarse image shift between the first stereoscopic eye frame line image and the second stereoscopic eye frame line image, determining a fine image shift responsive to the coarse image shift, forwarding one of the first stereoscopic eye frame line image and the second stereoscopic eye frame line image and forwarding data corresponding to the fine image shift and metadata for further processing. Also described are a method and apparatus including receiving a transmitted first full stereoscopic eye frame line image, extracting a difference between a first stereoscopic eye frame line image and a second stereoscopic image, subtracting the extracted difference from the first stereoscopic eye frame line image, storing the second stereoscopic eye frame line image, extracting a shift line value from metadata included in the first full stereoscopic eye frame line image and shifting the second stereoscopic eye frame line image to its original position responsive to the shift value.01-17-2013
20110158509IMAGE STITCHING METHOD AND APPARATUS - The present invention relates to an image processing technology, and discloses an image stitching method and apparatus to solve the problem of severe ghosting of an image stitched in the prior art. In the embodiments of the present invention, the overlap region of two images is found, a depth image of the overlap region is obtained, and the two images are stitched together according to the depth image. In the stitching process, the 3-dimensional information of the images is obtained by using the depth image to deghost the image. The method and apparatus under the present invention are applicable to multi-scene videoconferences and the occasions of making wide-view images or videos.06-30-2011
20110158508DEPTH-VARYING LIGHT FIELDS FOR THREE DIMENSIONAL SENSING - A method for mapping includes projecting onto an object a pattern of multiple spots having respective positions and shapes, such that the positions of the spots in the pattern are uncorrelated, while the shapes share a common characteristic. An image of the spots on the object is captured and processed so as to derive a three-dimensional (3D) map of the object.06-30-2011
20110158506METHOD AND APPARATUS FOR GENERATING 3D IMAGE DATA - A method and apparatus for generating three-dimensional (3D) image data by using 2D image data including a dummy component and an image component relating to an input image, wherein the dummy component is used to adjust a resolution of the input image, are provided. The method includes: generating a depth map that corresponds to the 2D image data; detecting a dummy area including the dummy component from the 2D image data; and correcting depth values of pixels that correspond to the dummy area in the depth map.06-30-2011
20110158505STEREO PRESENTATION METHOD OF DISPLAYING IMAGES AND SPATIAL STRUCTURE - An image stereo presentation method, comprising steps of: establishing at least one three-dimensional (3D) model corresponding to a physical stereo object; projecting a planar image onto the 3D model at a specific angle, wherein the 3D model has an interface adjoining at least two surfaces in the range of projecting the planar image, and the said two surfaces are not on the same plane; extracting at least two sub-images from the said two surfaces onto which the planar image is projected; and forming the said two sub-images on two physical surfaces of the physical stereo object corresponding to the said two surfaces. The image stereo presentation method is capable of presenting a special visual effect.06-30-2011
20110158504APPARATUS AND METHOD FOR INDICATING DEPTH OF ONE OR MORE PIXELS OF A STEREOSCOPIC 3-D IMAGE COMPRISED FROM A PLURALITY OF 2-D LAYERS - Implementations of the present invention involve methods and systems for converting a 2-D image to a stereoscopic 3-D image and displaying the depth of one or more pixels of the 3-D image through an output image of a user interface. The pixels of the output image display the perceived depth of the corresponding 3-D image such that the user may determine the relative depth of the pixels of the image. In addition, one or more x-offset values or z-axis positions may be individually selected such that any pixel of the output image that correspond to the selected values is indicated in the output image. By providing the user with a visualization tool to quickly determine the perceived position of any pixel of a stereoscopic image, the user may confirm the proper alignment of the objects of the image in relation to the image as a whole.06-30-2011
20130022262HEAD RECOGNITION METHOD - Described herein is a method for recognising a human head in a source image. The method comprises detecting a contour of at least part of a human body in the source image, calculating a depth of the human body in the source image. From the source image, a major radius size and a minor radius size of an ellipse corresponding to a human head at the depth is calculated, and, for at least several of a set of pixels of the detected contour, generating in an accumulator array at least one segment of an ellipse centred on the position of the contour pixel and having the major and minor radius sizes. Positions of local intensity maxima in the accumulator array are selected as corresponding to positions of the human head candidates in the source image.01-24-2013
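A sketch of the accumulator idea in the entry above: the expected on-screen radii of a head ellipse are derived from the measured body depth with a pinhole model, and each contour pixel votes for candidate head centres along an ellipse of those radii. The focal length and head dimensions are placeholder values, not figures from the patent:

    import numpy as np

    def head_accumulator(contour_pixels, depth, focal_px=525.0,
                         head_h_m=0.25, head_w_m=0.18, shape=(480, 640)):
        # contour_pixels: list of (y, x) pixels on the detected body contour
        # depth: estimated depth of the body (metres)
        a = 0.5 * head_h_m * focal_px / depth      # expected major radius in pixels
        b = 0.5 * head_w_m * focal_px / depth      # expected minor radius in pixels
        acc = np.zeros(shape, dtype=np.float64)
        t = np.linspace(0, 2 * np.pi, 64)
        for (y, x) in contour_pixels:
            ys = np.round(y + a * np.sin(t)).astype(int)
            xs = np.round(x + b * np.cos(t)).astype(int)
            ok = (ys >= 0) & (ys < shape[0]) & (xs >= 0) & (xs < shape[1])
            acc[ys[ok], xs[ok]] += 1.0             # vote for possible head centres
        # local maxima of the accumulator are head-centre candidates
        return acc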
20120243777SYSTEM AND METHOD FOR SEGMENTATION OF THREE-DIMENSIONAL IMAGE DATA - In one embodiment, a system for computing class identifiers for three-dimensional pixel data has been developed. The system comprises a plurality of class identifying processors, and a data grouper operatively connected to a first memory. Each class identifying processor has a plurality of inputs for at least one pixel value and a plurality of class identifiers for pixel values neighboring the at least one pixel value and each class identifying processor is configured to generate a class identifier for the at least one pixel value input with reference to the class identifiers for the neighboring pixel values. The data grouper is configured to retrieve a plurality of pixel values from the first memory and a plurality of class identifiers for pixel values neighboring the retrieved pixel values.09-27-2012
20120243775WIDE BASELINE FEATURE MATCHING USING COLLOBRATIVE NAVIGATION AND DIGITAL TERRAIN ELEVATION DATA CONSTRAINTS - A method for wide baseline feature matching comprises capturing one or more images from an image sensor on each of two or more platforms when the image sensors have overlapping fields of view, performing a 2-D feature extraction on each of the captured images in each platform using local 2-D image feature descriptors, and calculating 3-D feature locations on the ellipsoid of the Earth surface from the extracted features using a position and attitude of the platform and a model of the image sensor. The 3-D feature locations are updated using digital terrain elevation data (DTED) as a constraint, and the extracted features are matched using the updated 3-D feature locations to create a common feature zone. A subset of features from the common feature zone is selected, and the subset of features is inputted into a collaborative filter in each platform. A convergence test is then performed on other subsets in the common feature zone, and falsely matched features are pruned from the common feature zone.09-27-2012
20080240549METHOD AND APPARATUS FOR CONTROLLING DYNAMIC DEPTH OF STEREO-VIEW OR MULTI-VIEW SEQUENCE IMAGES - A method and an apparatus for controlling a dynamic depth of stereo-view or multi-view images. The method includes receiving stereo-view or multi-view images, generating a disparity histogram by estimating the disparity of two images corresponding to the received images and measuring the frequency of the estimated disparity, determining the disparity control amount of the stereo-view or multi-view images by convolving the generated disparity histogram with a characteristic function, and rearranging the stereo-view or multi-view input images by controlling parallax according to the determined disparity control amount.10-02-2008
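As a hedged illustration of the histogram step, the sketch below builds a disparity histogram, convolves it with a characteristic function that grows with disparity magnitude, and collapses the response to a single control amount; both the characteristic function and the reduction to a scalar are guesses, not the patent's definitions:

    import numpy as np

    def disparity_control_amount(disparity_map, bins=256, d_range=(-128, 128)):
        # histogram of estimated disparities between the two views
        hist, edges = np.histogram(disparity_map, bins=bins, range=d_range)
        hist = hist.astype(np.float64) / max(hist.sum(), 1)
        centers = 0.5 * (edges[:-1] + edges[1:])
        # characteristic function: weight grows with the magnitude of the disparity
        charf = np.abs(centers) / np.abs(centers).max()
        response = np.convolve(hist, charf, mode='same')
        # collapse the convolved response to a single scalar control amount
        return float(response.sum())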
20080232680Two dimensional/three dimensional digital information acquisition and display device - A two dimensional/three dimensional (2D/3D) digital acquisition and display device for enabling users to capture 3D information using a single device. In an embodiment, the device has a single movable lens with a sensor. In another embodiment, the device has a single lens with a beam splitter and multiple sensors. In another embodiment, the device has multiple lenses and multiple sensors. In yet another embodiment, the device is a standard digital camera with additional 3D software. In some embodiments, 3D information is generated from 2D information using a depth map generated from the 2D information. In some embodiments, 3D information is acquired directly using the hardware configuration of the camera. The 3D information is then able to be displayed on the device, sent to another device to be displayed or printed.09-25-2008
20080226160SYSTEMS AND METHODS FOR FILLING LIGHT IN FRAMES DURING 2-D TO 3-D IMAGE CONVERSION - The present invention is directed to systems and methods for processing 2-D to 3-D image conversion. The systems and methods fill in light among image frames when objects have been removed or otherwise changed. In one embodiment, light is treated as an object and can be removed during image processing. The light is added back during the rendering process using the created light object. In other embodiments, light from other frames is filled in using weighted averaging of the light depending upon temporal distance from a particular frame and a base frame.09-18-2008
20080226159Method and System For Calculating Depth Information of Object in Image - A method and a system for calculating depth information of objects in an image are disclosed. In accordance with the method and the system, an area occupied by two or more objects in the image is classified into an object area and an occlusion area using outline information to obtain accurate depth information for each of the objects.09-18-2008
20130177234SYSTEM AND METHOD FOR IDENTIFYING AN APERTURE IN A REPRESENTATION OF AN OBJECT - An iterative process for determining an aperture in a representation of an object is disclosed. The object is received and a bounding box corresponding thereto is determined. The bounding box includes a plurality of initial voxels and the object is embedded therein. An intersecting set of initial voxels is determined, as well as an internal set and an external set of initial voxels. The resolution of the voxels is iteratively decreased until the ratio of internal voxels to external voxels exceeds a predetermined threshold. The voxels corresponding to the final iteration are the final voxels. An internal set of final voxels is determined. A union set of initial voxels is determined indicating an intersection between the external set of initial voxels and the internal set of final voxels. From the union set of initial voxels and the external set of initial voxels, a location of an aperture is determined.07-11-2013
20130177235Evaluation of Three-Dimensional Scenes Using Two-Dimensional Representations - A system adapted to implement a learning rule in a three-dimensional (3D) environment is described. The system includes: a renderer adapted to generate a two-dimensional (2D) image based at least partly on a 3D scene; a computational element adapted to generate a set of appearance features based at least partly on the 2D image; and an attribute classifier adapted to generate at least one set of learned features based at least partly on the set of appearance features and to generate a set of estimated scene features based at least partly on the set of learned features. A method labels each image from among the set of 2D images with scene information regarding the 3D scene; selects a set of learning modifiers based at least partly on the labeling of at least two images; and updates a set of weights based at least partly on the set of learning modifiers.07-11-2013
20130177236METHOD AND APPARATUS FOR PROCESSING DEPTH IMAGE - An apparatus and method for processing a depth image. A depth image may be generated with reduced noise and motion blur, using depth images generated during different integration times that are generated based on the noise and motion blur of the depth image.07-11-2013
20130177237STEREO-VISION OBJECT DETECTION SYSTEM AND METHOD - An object in a visual scene is detected responsive to one or more void regions in an associated range-map image generated from associated stereo image components. In one aspect, each element of a valid-count vector contains a count of a total number of valid range values at a corresponding column position from a plurality of rows of the range-map image. The valid-count vector, or a folded version thereof, is filtered, and an integer approximation thereof is differentiated so as to provide for identifying one or more associated void regions along the plurality of rows of the range-map image. For each void region, the image pixels of an associated prospective near-range object are identified as corresponding to one or more modes of a histogram providing a count of image pixels with respect to image pixel intensity, for image pixels from one of the stereo image components within the void region.07-11-2013
20130170736DISPARITY ESTIMATION DEPTH GENERATION METHOD - A disparity estimation depth generation method computes the depth of an original left map and an original right map of a stereo color image after they are input, comprising the following steps: filter said original left and right maps to generate a left map and a right map; perform edge detection on an object in said left and right maps to determine the size of at least one matching block, based on information from the two detected edges in an edge-adaptive approach; compute the matching cost to generate a preliminary depth map for each map, and perform a cross-check to find at least one unreliable depth region in said preliminary depth map for refinement; and refine the errors in said unreliable depth region to obtain the correct depth of said left and right maps.07-04-2013
20130170737STEREOSCOPIC IMAGE CONVERTING APPARATUS AND STEREOSCOPIC IMAGE DISPLAYING APPARATUS - A stereoscopic image converting apparatus is capable of displaying a stereoscopic image. The apparatus comprises a photographing condition extracting portion for extracting convergent angle conversion information when right/left images are captured; and an image converting portion for changing the convergence angle of the time when the right/left images are captured. The image converting portion comprises a convergent angle correction value calculating portion which calculates the maximum disparity value of the right/left images on the basis of the convergent angle conversion information and display size information and calculates a convergent angle correction value at which the calculated maximum disparity value is equal to or lower than a previously designated maximum disparity value; and a convergent angle conversion processing portion which generates an image in which the convergent angle when the right/left images are captured is changed on the basis of the calculated convergent angle correction value.07-04-2013
20110274343SYSTEM AND METHOD FOR EXTRACTION OF FEATURES FROM A 3-D POINT CLOUD - A method of extracting a feature from a point cloud comprises receiving a three-dimensional (3-D) point cloud representing objects in a scene, the 3-D point cloud containing a plurality of data points; generating a plurality of hypothetical features based on data points in the 3-D point cloud, wherein the data points corresponding to each hypothetical feature are inlier data points for the respective hypothetical feature; and selecting the hypothetical feature having the most inlier data points as representative of an object in the scene.11-10-2011
20130142415System And Method For Generating Robust Depth Maps Utilizing A Multi-Resolution Procedure - A system and method for generating robust depth maps includes a depth estimator that creates a depth map pyramid structure that includes a plurality of depth map levels that each have different resolution characteristics. In one embodiment, the depth map levels include a fine-scale depth map, a medium-scale depth map, and a coarse scale depth map. The depth estimator evaluates depth values from the fine-scale depth map by utilizing fine-scale confidence features, and evaluates depth values from the medium-scale depth map and the coarse-scale depth map by utilizing coarse-scale confidence features. The depth estimator then fuses optimal depth values from the different depth map levels into an optimal depth map.06-06-2013
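A compact sketch of fusing depth estimates from several pyramid levels by per-pixel confidence, in the spirit of the entry above; it assumes every level has already been upsampled to a common resolution and that confidence maps are supplied, whereas the patent derives them from its own fine-scale and coarse-scale features:

    import numpy as np

    def fuse_depth_pyramid(depth_maps, confidences):
        # depth_maps / confidences: lists ordered fine -> coarse, all resampled to the
        # same resolution beforehand; pick, per pixel, the level with highest confidence
        depth_stack = np.stack(depth_maps)        # L x H x W
        conf_stack = np.stack(confidences)        # L x H x W
        best_level = conf_stack.argmax(axis=0)    # H x W index of the winning level
        h, w = best_level.shape
        ys, xs = np.mgrid[0:h, 0:w]
        return depth_stack[best_level, ys, xs]    # fused optimal depth map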
20130083994Semi-Global Stereo Correspondence Processing With Lossless Image Decomposition - A method for disparity cost computation for a stereoscopic image is provided that includes computing path matching costs for external paths of at least some boundary pixels of a tile of a base image of the stereoscopic image, wherein a boundary pixel is a pixel at a boundary between the tile and a neighboring tile in the base image, storing the path matching costs for the external paths, computing path matching costs for pixels in the tile, wherein the stored path matching costs for the external paths of the boundary pixels are used in computing some of the path matching costs of some of the pixels in the tile, and computing aggregated disparity costs for the pixels in the tile, wherein the path matching costs computed for each pixel are used to compute the aggregated disparity costs for the pixel.04-04-2013
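The tile-based scheme above builds on the standard semi-global matching recurrence, which a textbook sketch (not the patent's tiled variant with stored boundary path costs) can show; P1 and P2 are the usual small and large disparity-change penalties and the values here are illustrative:

    import numpy as np

    def sgm_path_costs(cost, p1=10.0, p2=120.0):
        # cost: H x W x D pixelwise matching costs; aggregate along the left-to-right path
        h, w, d = cost.shape
        L = np.zeros_like(cost, dtype=np.float64)
        L[:, 0, :] = cost[:, 0, :]
        for x in range(1, w):
            prev = L[:, x - 1, :]                       # H x D path costs of previous column
            prev_min = prev.min(axis=1, keepdims=True)  # H x 1
            same = prev
            plus = np.roll(prev, 1, axis=1) + p1        # disparity d-1 with small penalty
            plus[:, 0] = np.inf
            minus = np.roll(prev, -1, axis=1) + p1      # disparity d+1 with small penalty
            minus[:, -1] = np.inf
            far = prev_min + p2                         # any larger jump with large penalty
            best = np.minimum(np.minimum(same, plus), np.minimum(minus, far))
            L[:, x, :] = cost[:, x, :] + best - prev_min
        return L

    def aggregate(cost):
        # sum path costs from two horizontal directions (a full SGM uses more paths)
        left_right = sgm_path_costs(cost)
        right_left = sgm_path_costs(cost[:, ::-1, :])[:, ::-1, :]
        return left_right + right_left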
20120250980METHOD, APPARATUS AND SYSTEM - A method of providing, over a network, an image for recreation in a device, the image containing a background and a foreground object and the method comprising: detecting the position of the foreground object in the image and generating position information in dependence thereon; removing the foreground object from the image; and transferring to the device i) the image with the foreground object removed, ii) the removed foreground object and iii) the position information.10-04-2012
20120250978SCENE ANALYSIS USING IMAGE AND RANGE DATA - Image and range data associated with an image can be processed to estimate planes within the 3D environment in the image. By utilizing image segmentation techniques, the image data can be used to identify regions of visible pixels having common features. These regions can be used as candidate regions for fitting planes to the range data based on a RANSAC technique.10-04-2012
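A minimal RANSAC plane fit of the kind hinted at above, applied to the 3D range points of one segmented region; the inlier tolerance and iteration count are illustrative, and each hypothesis is fit exactly to three sampled points:

    import numpy as np

    def ransac_plane(points, iters=200, inlier_tol=0.02, rng=None):
        # points: N x 3 array of range-data points belonging to one image segment
        rng = np.random.default_rng() if rng is None else rng
        best_inliers, best_plane = None, None
        for _ in range(iters):
            sample = points[rng.choice(len(points), size=3, replace=False)]
            normal = np.cross(sample[1] - sample[0], sample[2] - sample[0])
            norm = np.linalg.norm(normal)
            if norm < 1e-9:
                continue                                # degenerate (collinear) sample
            normal = normal / norm
            d = -normal.dot(sample[0])
            dist = np.abs(points @ normal + d)          # point-to-plane distances
            inliers = dist < inlier_tol
            if best_inliers is None or inliers.sum() > best_inliers.sum():
                best_inliers, best_plane = inliers, (normal, d)
        return best_plane, best_inliers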
20130136338Methods and Apparatus for Correcting Disparity Maps using Statistical Analysis on Local Neighborhoods - Methods and apparatus for disparity map correction through statistical analysis on local neighborhoods. A disparity map correction technique may be used to correct mistakes in a disparity or depth map. The disparity map correction technique may detect and mark invalid pixel pairs in a disparity map, segment the image, and perform a statistical analysis of the disparities in each segment to identify outliers. The invalid and outlier pixels may then be corrected using other disparity values in the local neighborhood. Multiple iterations of the disparity map correction technique may be performed to further improve the output disparity map.05-30-2013
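A simplified take on the neighbourhood statistics described above: pixels that are marked invalid, or whose disparity lies far from the local mean, are replaced by the local median of valid disparities. The window size and z-score threshold are illustrative, and the patent's segment-based analysis is reduced here to a sliding window:

    import numpy as np

    def correct_disparity(disp, invalid_mask, win=5, z_thresh=2.5):
        # disp: H x W float disparity map; invalid_mask: True where the estimate is known bad
        h, w = disp.shape
        out = disp.copy()
        r = win // 2
        for y in range(h):
            for x in range(w):
                y0, y1 = max(0, y - r), min(h, y + r + 1)
                x0, x1 = max(0, x - r), min(w, x + r + 1)
                patch = disp[y0:y1, x0:x1]
                valid = ~invalid_mask[y0:y1, x0:x1]
                if valid.sum() < 3:
                    continue                            # not enough trustworthy neighbours
                vals = patch[valid]
                mu, sigma = vals.mean(), vals.std() + 1e-6
                is_outlier = abs(disp[y, x] - mu) > z_thresh * sigma
                if invalid_mask[y, x] or is_outlier:
                    out[y, x] = np.median(vals)         # replace from the local neighbourhood
        return out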
20130136337Methods and Apparatus for Coherent Manipulation and Stylization of Stereoscopic Images - Methods and apparatus for coherent manipulation and stylization of stereoscopic images. A stereo image manipulation method may use the disparity map for a stereo image pair to divide the left and right images into a set of slices, each of which is the portion of the images that correspond to a certain, small depth range. The method may merge the left and right slices for a depth into a single image. The method may then apply a stylization technique to each slice. The method may then extract the left and right portions of each stylized slice, and stack them together to create a coherent stylized stereo image. As an alternative to first extracting slices from a merged image and then applying a stylization technique to the slices, the method may first apply the stylization technique to the merged image and then extract slices from the stylized merged image.05-30-2013
20130094753FILTERING IMAGE DATA - Systems, methods, and machine-readable and executable instructions are provided for filtering image data. Filtering image data can include determining a desired depth of field of an image, determining a distance between a pixel of the image and the desired depth of field. Filtering image data can also include adjusting a contrast of the pixel in proportion to a magnitude of a weight of the pixel, wherein the weight is based on the distance.04-18-2013
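A small sketch of that filtering idea, assuming a per-pixel depth map and a scalar strength (both hypothetical parameters) that convert depth distance into a contrast weight; pixels far from the desired depth of field have their contrast pulled toward the global mean:

    import numpy as np

    def depth_of_field_filter(image, depth, focus_depth, strength=0.005):
        # reduce local contrast for pixels far from the desired depth of field
        dist = np.abs(depth.astype(np.float64) - focus_depth)
        weight = np.clip(1.0 - strength * dist, 0.0, 1.0)   # 1 near the focal plane
        mean = image.mean()
        return mean + weight * (image.astype(np.float64) - mean)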
20130094755METHOD FOR THE MICROSCOPIC THREE-DIMENSIONAL REPRODUCTION OF A SAMPLE - A method for the three-dimensional imaging of a sample is provided, in which image information from different depth planes of the sample is stored in a spatially resolved manner and the three-dimensional image of the sample is subsequently reconstructed from this stored image information. A reference structure is applied to the illumination light, at least one fluorescing reference object is positioned next to or in the sample, and images of the reference structure of the illumination light and of the reference object are recorded from at least one detection direction and evaluated. The light sheet is brought into an optimal position based on the results, and image information of the reference object and of the sample from a plurality of detection directions is stored. Transformation operators are obtained on the basis of the stored image information, and the reconstruction of the three-dimensional image of the sample is based on these transformation operators.04-18-2013
20130094754IMAGE OUTPUT APPARATUS AND METHOD FOR OUTPUTTING IMAGE THEREOF - An image output apparatus and a method for outputting an image thereof are provided. The method of the image output apparatus determines whether a difference in grayscale values between a current image frame and a previous image frame is greater than or equal to a pre-set value, if the difference in the grayscale values is greater than or equal to the pre-set value, at least one of a maximum grayscale value and a minimum grayscale value of the current image frame is adjusted, a grayscale of the current image frame according to an input and output grayscale function having the adjusted maximum grayscale value and minimum grayscale value is adjusted and an image is output. Accordingly, a crosstalk phenomenon of a 04-18-2013
20130114887STEREO DISTANCE MEASUREMENT APPARATUS AND STEREO DISTANCE MEASUREMENT METHOD - Provided is a stereo distance measurement apparatus wherein a camera image itself is adjusted to correct the blur, thereby preventing the distance measurement time from being long, while improving the precision of disparity detection. In the apparatus (05-09-2013
20130114886POSITION AND ORIENTATION MEASUREMENT APPARATUS, POSITION AND ORIENTATION MEASUREMENT METHOD, AND STORAGE MEDIUM - A position and orientation measurement apparatus for measuring a position and orientation of a target object, comprising: storage means for storing a three-dimensional model representing three-dimensional shape information of the target object; obtaining means for obtaining a plurality of measurement data about the target object sensed by image sensing means; reliability calculation means for calculating reliability for each of the pieces of measurement data; selection means for selecting the measurement data by a predetermined number from the plurality of measurement data based on the reliability; association means for associating planes forming the three-dimensional model with each of the measurement data selected by the selection means; and decision means for deciding the position and orientation of the target object based on the result associated by the association means.05-09-2013
20130114885METHOD AND APPARATUS FOR CREATING STEREO IMAGE ACCORDING TO FREQUENCY CHARACTERISTICS OF INPUT IMAGE AND METHOD AND APPARATUS FOR REPRODUCING THE CREATED STEREO IMAGE - A method and an apparatus for creating a stereo image adaptively according to the characteristic of an input image and a method and an apparatus for reproducing the created stereo image are provided. The method for creating a stereo image includes selecting one of a left view image and a right view image that constitute the stereo image and measuring the directivity of high frequency components of the selected image, and synthesizing the left view image and the right view image into a stereo image in a format depending on the measured directivity.05-09-2013
20130114884THREE-DIMENSION IMAGE PROCESSING METHOD AND A THREE-DIMENSION IMAGE DISPLAY APPARATUS APPLYING THE SAME - A three-dimension (3D) image processing method is disclosed. A plurality of asymmetric filtering is performed on an input depth map to obtain a plurality of asymmetric filtering results. One among the asymmetric filtering results is selected as an output depth map. A two-dimension (2D) image is converted into a 3D image according to the output depth map.05-09-2013
20130114883APPARATUS FOR EVALUATING VOLUME AND METHOD THEREOF - An apparatus for evaluating a volume of an object and a method thereof are provided. The provided apparatus and the method can precisely evaluate the volume of the object with a single camera, and the required evaluation time is short. Accordingly, shipping companies can utilize the most appropriate container or cargo space for each object to deliver, thereby reducing operation costs and optimizing the transportation fleet.05-09-2013
20130101207Systems and Methods for Detecting a Tilt Angle from a Depth Image - A depth image of a scene may be received, observed, or captured by a device. A human target in the depth image may then be scanned for one or more body parts such as shoulders, hips, knees, or the like. A tilt angle may then be calculated based on the body parts. For example, a first portion of pixels associated with an upper body part such as the shoulders and a second portion of pixels associated with a lower body part such as a midpoint between the hips and knees may be selected. The tilt angle may then be calculated using the first and second portions of pixels.04-25-2013
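A sketch of turning scanned body-part pixels into a tilt angle as described above: approximate 3D centroids are taken for the shoulder pixels and for a lower-body pixel set (for example the region around the midpoint between hips and knees), and the lean is the angle between their depth difference and their vertical separation. The masks, the pinhole back-projection, and the focal length are assumptions of this sketch:

    import numpy as np

    def tilt_angle(depth_m, shoulder_mask, lower_mask, focal_px=525.0):
        # depth_m: H x W depth image in metres; masks: boolean pixel sets for the
        # scanned upper body part (shoulders) and lower body part (hips/knees midpoint)
        def centroid(mask):
            ys, xs = np.nonzero(mask)
            return ys.mean(), depth_m[ys, xs].mean()
        y_up, z_up = centroid(shoulder_mask)
        y_lo, z_lo = centroid(lower_mask)
        z_avg = 0.5 * (z_up + z_lo)
        # convert the vertical pixel separation to metres with a pinhole approximation
        dy_m = (y_lo - y_up) * z_avg / focal_px       # torso extent along the image
        dz_m = z_lo - z_up                            # lower body farther => leaning forward
        return np.degrees(np.arctan2(dz_m, abs(dy_m) + 1e-6))   # 0 degrees = upright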
20130101206Method, System and Computer Program Product for Segmenting an Image - A first depth map is generated in response to a first stereoscopic image from a camera. The first depth map includes first pixels having valid depths and second pixels having invalid depths. A second depth map is generated in response to a second stereoscopic image from the camera. The second depth map includes third pixels having valid depths and fourth pixels having invalid depths. A first segmentation mask is generated in response to the first pixels and the third pixels. A second segmentation mask is generated in response to the second pixels and the fourth pixels. In response to the first and second segmentation masks, a determination is made of whether the second stereoscopic image includes a change in comparison to the first stereoscopic image.04-25-2013
20130129195IMAGE PROCESSING METHOD AND APPARATUS USING THE SAME - An image processing method for obtaining a saliency map of an input image includes the steps of: determining a depth map and an initial saliency map; selecting a (j,i)th depth on the depth map as a target depth, wherein i and j are natural numbers respectively smaller than or equal to integers m and n; selecting 2R+1 depths with a one-dimensional window centered on the target depth, wherein R is a natural number greater than 1; for each of the 2R+1 selected depths, determining whether it is greater than the target depth and, if so, adjusting the corresponding (j,i)th saliency value by a difference; and varying the parameters i and j so that every saliency value of the initial saliency map is adjusted, thereby obtaining the saliency map.05-23-2013
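Read literally, the window-based adjustment could be sketched as follows; R and the adjustment step are placeholders, and the choice that a deeper neighbour increments the target's own saliency by the depth difference is one reading of wording the abstract leaves ambiguous:

    import numpy as np

    def refine_saliency(depth, saliency, R=4, step=0.1):
        # depth, saliency: n x m arrays; returns the adjusted saliency map
        n, m = depth.shape
        out = saliency.astype(np.float64).copy()
        for j in range(n):
            for i in range(m):
                target = depth[j, i]
                for k in range(i - R, i + R + 1):      # 1-D window of 2R+1 depths
                    if 0 <= k < m and depth[j, k] > target:
                        # a deeper neighbour adjusts the (j,i)th saliency by a difference
                        out[j, i] += step * (depth[j, k] - target)
        return out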
20130129191Methods and Apparatus for Image Rectification for Stereo Display - A set of features in a pair of images is associated to selected cells within a set of cells using a base mesh. Each image of the pair of images is divided using the base mesh to generate the set of cells. The set of features is defined in terms of the selected cells. A stereo image pair is generated by transforming the set of cells with a mesh-based transformation function. A transformation of the set of cells is computed by applying an energy minimization function to the set of cells. A selected transformed mesh and another transformed mesh are generated by applying the transformation of the set of cells to the base mesh. The mesh-based transformation function preserves selected properties of the set of features in the pair of images.05-23-2013
20130136342IMAGE PROCESSING DEVICE AND IMAGE PROCESSING METHOD - There are provided a data input unit configured to receive an input image, depth data, and a shooting parameter, a parameter input unit that receives a transformation parameter as a parameter for projective transformation of a three-dimensional model, a transformed image generating unit that generates a transformed image by performing projective transformation based on the transformation parameter on the three-dimensional model obtained from the input image, the depth data, and the shooting parameter, a blank area detecting unit that detects a blank area in the transformed image, the blank area being a group of blank pixels having no corresponding pixel in the input image, and an output unit configured to output the transformed image in the case where a blank value indicating the size of the blank area is smaller than or equal to a threshold value.05-30-2013
20130142416DETECTION DEVICE AND DETECTION METHOD - A detection device capable of reliably detecting an object to be detected. An intersection region pattern setting unit (06-06-2013
20080205748Structural light based depth imaging method and system using signal separation coding, and error correction thereof - Provided is a structural light based three-dimensional depth imaging method and system using signal separation coding and error correction thereof capable of detecting, removing and correcting corresponding errors between a projection apparatus and an image photographing apparatus caused by phenomena such as reflection on an object surface, blurring by a focus, and so on, using geometrical constraints between the projection apparatus and the image photographing apparatus. Here, the projection apparatus projects light, and the image photographing apparatus obtains the light. The depth imaging method includes projecting light from a projection apparatus, obtaining the light using an image photographing apparatus, and measuring a distance or a three-dimensional depth image. Therefore, it is possible to provide a structural light based three-dimensional depth imaging method and system using geometrical conditions capable of precisely obtaining three-dimensional depth information of the target environment.08-28-2008
20110216963ROTATE AND SLANT PROJECTOR FOR FAST FULLY-3D ITERATIVE TOMOGRAPHIC RECONSTRUCTION - Disclosed herein are embodiments of a rotate-and-slant projector that takes advantage of symmetries in the geometry to compute truly volumetric projections to multiple oblique sinograms in a computationally efficient manner. It is based upon the 2D rotation-based projector using the fast three-pass method of shears, and it conserves the 2D rotator computations for multiple projections to each oblique sinogram set. The projector is equally applicable to both conventional evenly-spaced projections and unevenly-spaced line-of-response (LOR) data (where the arc correction is modeled within the projector). The LOR-based version models the exact location of the direct and oblique LORs, and provides an ordinary Poisson reconstruction framework. Speed optimizations of various embodiments of the projector include advantageously utilizing data symmetries such as the vertical symmetry of the oblique projection process, a coarse-depth compression, and array indexing schemes which maximize serial memory access.09-08-2011
20080199071CREATING 3D IMAGES OF OBJECTS BY ILLUMINATING WITH INFRARED PATTERNS - According to a general aspect, processing images includes projecting an infra-red pattern onto a three-dimensional object and producing a first image, a second image, and a third image of the three-dimensional object while the pattern is projected on the three-dimensional object. The first image and the second image include the three-dimensional object and the pattern. The first image and the second image are produced by capturing at a first camera and a second camera, respectively, light filtered through an infra-red filter. The third image includes the three-dimensional object but not the pattern. Processing the images also includes establishing a first-pair correspondence between a portion of pixels in the first image and a portion of pixels in the second image. Processing the images further includes constructing, based on the first-pair correspondence and the third image, a two-dimensional image that depicts a three-dimensional construction of the three-dimensional object.08-21-2008
20080199070THREE-DIMENSIONAL IMAGE DISPLAY APPARATUS AND METHOD FOR ENHANCING STEREOSCOPIC EFFECT OF IMAGE - A three-dimensional (3D) image display apparatus for enhancing a stereoscopic effect of an image is provided. The 3D image display apparatus includes a disparity estimator which estimates the disparity between a first image and a second image which are obtained by photographing the same object from different angles; a computing unit which computes the adjustment disparity between the first image and the second image using a histogram obtained by computing the frequency of the estimated disparity; and an output unit which applies the computed adjustment disparity to the first image and the second image and outputs the first image and the second image in which the disparity is adjusted. Therefore, the input disparity between the first image and the second image is adjusted, and an image with an enhanced stereoscopic effect may be provided to a user.08-21-2008
20080199069Stereo Camera for a Motor Vehicle - A device is described for a motor vehicle, having at least one first camera and at least one second camera, the first camera and the second camera acting as a stereo camera, the first camera and the second camera being different with respect to at least one camera property, in particular the light sensitivity of the first camera and the light sensitivity of the second camera being different. Furthermore the device is configured in such a way that the driver assistance functions of night vision support and/or traffic sign recognition and/or object recognition and/or road boundary recognition and/or lane recognition and/or other functions are ensured.08-21-2008
20110229012ADJUSTING PERSPECTIVE FOR OBJECTS IN STEREOSCOPIC IMAGES - A method for manipulating a stereoscopic image, comprising receiving an original stereoscopic image including a left image and a right image; identifying one or more objects; determining actual object sizes and actual object locations in both the left and right images; determining original perceived three-dimensional object locations and new perceived three-dimensional object locations for the identified one or more objects; determining size magnification factors and location displacement values for each of the one or more objects; generating a new stereoscopic image by changing the actual object sizes and the actual object locations responsive to the corresponding size magnification factors and location displacement values; and storing the new stereoscopic image in a processor-accessible memory system.09-22-2011
20120275689SYSTEMS AND METHODS 2-D TO 3-D CONVERSION USING DEPTH ACCESS SEGIMENTS TO DEFINE AN OBJECT - The present invention is directed to systems and methods for controlling 11-01-2012
20120275688METHOD FOR AUTOMATED 3D IMAGING - A method for automated construction of 3D images is disclosed, in which a range measurement device is used to initiate and control the processing of 2D images in order to produce a 3D image. The range measurement device may be integrated with an image sensor, for example the range sensor from a digital camera, or may be a separate device. Data indicating the distance to a specific feature obtained from the range sensor may be used to control and automate the construction of the 3D image.11-01-2012
20120275687System and Method for Processing Video Images - Embodiments use point clouds to form a three dimensional image of an object. The point cloud of the object may be formed from analysis of two dimensional images of the object. Various techniques may be used on the point cloud to form a three dimensional model of the object which is then used to create a stereoscopic representation of the object.11-01-2012
20120275686INFERRING SPATIAL OBJECT DESCRIPTIONS FROM SPATIAL GESTURES - Three-dimensional (3-D) spatial image data may be received that is associated with at least one arm motion of an actor based on free-form movements of at least one hand of the actor, based on natural gesture motions of the at least one hand. A plurality of sequential 3-D spatial representations that each include 3-D spatial map data corresponding to a 3-D posture and position of the hand at sequential instances of time during the free-form movements may be determined, based on the received 3-D spatial image data. An integrated 3-D model may be generated, via a spatial object processor, based on incrementally integrating the 3-D spatial map data included in the determined sequential 3-D spatial representations and comparing a threshold time value with model time values indicating numbers of instances of time spent by the hand occupying a plurality of 3-D spatial regions during the free-form movements.11-01-2012
20120257817IMAGE OUTPUT APPARATUS - An image output apparatus (10-11-2012
20100296724Method and System for Estimating 3D Pose of Specular Objects - A method estimates a 3D pose of a 3D specular object in an environment. In a preprocessing step, a set of pairs of 2D reference images are generated using a 3D model of the object, and a set of poses of the object, wherein each pair of reference images is associated with one of the poses. Then, a pair of 2D input images are acquired of the object. A rough 3D pose of the object is estimated by comparing features in the pair of 2D input images and the features in each pair of 2D reference images using a rough cost function. The rough estimate is optionally refined using a fine cost function.11-25-2010
20100303337Methods and Apparatus for Practical 3D Vision System - A method and system for specifying an area of interest in a 3D imaging system including a plurality of cameras that include at least first and second cameras wherein each camera has a field of view arranged along a camera distinct trajectory, the method comprising the steps of presenting a part at a location within the fields of view of the plurality of cameras, indicating on the part an area of interest that is within the field of view of each of the plurality of cameras, for each of the plurality of cameras: (i) acquiring at least one image of the part including the area of interest, (ii) identifying a camera specific field of interest within the field of view of the camera associated with the area of interest in the at least one image and (iii) storing the field of interest for subsequent use.12-02-2010
20100316282Derivation of 3D information from single camera and movement sensors - In various embodiments, a camera takes pictures of at least one object from two different camera locations. Measurement devices coupled to the camera measure the change in location and the change in direction of the camera from one location to the other, and derive 3-dimensional information on the object from that information and, in some embodiments, from the images in the pictures.12-16-2010
20100316281Method and device for determining the pose of a three-dimensional object in an image and method and device for creating at least one key image for object tracking - The invention relates to a method and a device for determining the pose of a three-dimensional object in an image, characterised in that it comprises the following steps: acquiring a three-dimensional generic model of the object, projecting the three-dimensional generic model according to at least one two-dimensional representation and associating with each two-dimensional representation pose information of the three-dimensional object, selecting and positioning a two-dimensional representation onto the object in said image, and determining the three-dimensional pose of the object in the image from at least the pose information associated with the selected two-dimensional representation.12-16-2010
20100316280DESIGN DRIVEN SCANNING ALIGNMENT FOR COMPLEX SHAPES - Methods and systems for accurately determining dimensional accuracy of a complex three dimensional shape are disclosed. The invention in one respect includes determining at least a non-critical feature and at least a critical feature of the 3-D component, determining a first datum using at least the non-critical feature, aligning the first datum to at least a portion of a reference shape, determining a second datum corresponding to the critical feature subsequent to the aligning, and determining the dimensional accuracy of the 3-D component by comparing the second datum to another portion of the reference shape.12-16-2010
20130156294DEPTH MAP GENERATION BASED ON SOFT CLASSIFICATION - A method for generating a depth map for a 2D image and video includes receiving the 2D image and video; defining a plurality of object classes; analyzing content of the received 2D image and video; calculating probabilities that the received 2D image belongs to the object classes; and determining a final depth map based on a result of the analyzed content and the calculated probabilities for the object classes.06-20-2013
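One way to realise the soft-classification idea above is to blend per-class depth priors using the class probabilities; the dictionary layout and the fixed per-class depth templates are assumptions for illustration, since the entry only states that the final map is determined from the analysed content and the calculated probabilities:

    import numpy as np

    def depth_from_soft_classification(class_probs, depth_templates):
        # class_probs: dict class_name -> probability that the image belongs to that class
        # depth_templates: dict class_name -> H x W prior depth map for that class
        h, w = next(iter(depth_templates.values())).shape
        depth = np.zeros((h, w))
        total = sum(class_probs.values()) + 1e-9
        for name, p in class_probs.items():
            depth += (p / total) * depth_templates[name]   # soft blend of class priors
        return depth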
20130156295METHOD OF FILTERING A DISPARITY MESH OBTAINED FROM PIXEL IMAGES - A method of filtering a disparity mesh from pixel images according to the invention, where the disparity mesh comprises a plurality of points, where each point is associated with values of two planar coordinates (X, Y) and a disparity value (D) and where the values are quantization pitches, comprises the step of filtering planes by filtering 2D-lines in the 2D-spaces (X-D, Y-D) of the planar coordinates (X, Y) and the disparity (D).06-20-2013
20130156296Three Dimensional Gesture Recognition in Vehicles - A method and system for performing gesture recognition of a vehicle occupant employing a time of flight (TOF) sensor and a computing system in a vehicle. An embodiment of the method of the invention includes the steps of receiving one or more raw frames from the TOF sensor, performing clustering to locate one or more body part clusters of the vehicle occupant, calculating the location of the tip of the hand of the vehicle occupant, determining whether the hand has performed a dynamic or a static gesture, retrieving a command corresponding to one of the determined static or dynamic gestures, and executing the command.06-20-2013
201301829442D TO 3D IMAGE CONVERSION - A method (and system) of processing image data in which a depth map is processed to derive a modified depth map by analysing luminance and/or chrominance information in respect of the set of pixels of the image data. The depth map is modified using a function which correlates depth with pixel height in the pixellated image and which has a different correlation between depth and pixel height for different luminance and/or chrominance values.07-18-2013
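A minimal sketch of the core idea in this entry, assuming a simple correlation: depth follows pixel height (row index), with the strength of that correlation modulated by luminance. The particular gain mapping is hypothetical, not the patented function.

```python
import numpy as np

def height_based_depth(luma):
    """luma: HxW luminance in [0, 1]; returns a depth map in [0, 1] (1 = far)."""
    h, w = luma.shape
    rows = np.linspace(1.0, 0.0, h)[:, None]   # top of frame assumed farther away
    gain = 0.5 + 0.5 * luma                    # brighter pixels follow the height cue more strongly (assumption)
    return np.clip(gain * rows, 0.0, 1.0)
```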
20130182945IMAGE PROCESSING METHOD AND APPARATUS FOR GENERATING DISPARITY VALUE - A method and apparatus for processing an image is provided. The image processing apparatus may adjust or generate a disparity of a pixel by assigning similar disparities to two pixels that are adjacent to each other and have similar pixel values. The image processing apparatus may generate a final disparity map that may minimize energy, based on an image and an initial disparity map, under a predetermined constraint. A soft constraint or a hard constraint may be used as the constraint.07-18-2013
20130182943SYSTEMS AND METHODS FOR DEPTH MAP GENERATION - Various embodiments are disclosed for generating depth maps. One embodiment is a method implemented in an image processing device. The method comprises retrieving, by the image processing device, a 2D image; and determining, by the image processing device, at least one region within the 2D image having a high gradient characteristic relative to other regions within the 2D image. The method further comprises identifying, by the image processing device, an out-of-focus region based on the at least one region having a high gradient characteristic; and deriving, by the image processing device, a color model according to the out-of-focus region. Based on the color model, the image processing device provides a depth map for 2D-to-stereoscopic conversion.07-18-2013
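A sketch of the pipeline described in this entry under stated assumptions: the out-of-focus region is taken as the complement of the high-gradient regions, a single Gaussian colour model is fit to it, and per-pixel depth is derived from the distance to that model. The threshold and the distance-to-depth mapping are illustrative choices, not the patent's.

```python
import numpy as np

def depth_from_focus_cue(image_rgb, grad_thresh=0.1):
    """image_rgb: HxWx3 floats in [0, 1]; returns a relative depth map in [0, 1]."""
    gray = image_rgb.mean(axis=2)
    gy, gx = np.gradient(gray)
    out_of_focus = np.hypot(gx, gy) < grad_thresh         # complement of high-gradient regions (assumption)
    if not out_of_focus.any():                            # degenerate case: no low-gradient pixels found
        return np.zeros_like(gray)
    samples = image_rgb[out_of_focus]
    mu, var = samples.mean(axis=0), samples.var(axis=0) + 1e-6
    dist = (((image_rgb - mu) ** 2) / var).sum(axis=2)    # distance to the out-of-focus colour model
    return dist / (dist.max() + 1e-6)                     # larger distance ~ in-focus foreground ~ nearer
```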
20090041339Pseudo 3D image generation device, image encoding device, image encoding method, image transmission method, image decoding device, and image decoding method - A pseudo 3D image generation device includes frame memories that store a plurality of basic depth models used for estimating depth data based on a non-3D image signal and generating a pseudo 3D image signal; a depth model combination unit that combines the plurality of basic depth models for generating a composite depth model based on a control signal indicating composite percentages for combining the plurality of basic depth models; an addition unit that generates depth estimation data from the non-3D image signal and the composite depth models; and a texture shift unit that shifts the texture of the non-3D image for generating the pseudo 3D image signal.02-12-2009
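A rough sketch of the pseudo-3D pipeline in this entry: blend the basic depth models by the supplied composite percentages, add image-derived data to obtain depth estimation data, then shift texture horizontally by an amount proportional to depth. The blend weights, luminance term, and shift scale are illustrative assumptions, not the encoder's actual parameters.

```python
import numpy as np

def pseudo_3d_view(image, basic_depth_models, percentages, shift_scale=8):
    """image: HxWx3 floats in [0, 1]; basic_depth_models: list of HxW arrays; percentages sum to ~1."""
    composite = sum(p * m for p, m in zip(percentages, basic_depth_models))  # composite depth model
    depth = np.clip(composite + 0.3 * image.mean(axis=2), 0.0, 1.0)          # crude depth estimation data
    h, w = depth.shape
    shifted = np.zeros_like(image)
    cols = np.arange(w)
    for y in range(h):
        new_cols = np.clip(cols + (shift_scale * depth[y]).astype(int), 0, w - 1)
        shifted[y, new_cols] = image[y, cols]                                # per-pixel texture shift
    return shifted
```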
20110293172IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, AND IMAGE DISPLAY APPARATUS - An image processing apparatus 12-01-2011
20110293171METHODS AND APPARATUS FOR HIGH-RESOLUTION CONTINUOUS SCAN IMAGING - A continuous scanning method employs one or more moveable sensors and one or more reference sensors deployed in the environment around a test subject. Each sensor is configured to sense an attribute of the test subject (e.g., sound energy, infrared energy, etc.) while continuously moving along a path and recording the sensed attribute, the position, and the orientation of each of the moveable sensors and each of the reference sensors. The system then constructs a set of transfer functions corresponding to points in space between the moveable sensors, wherein each of the transfer functions relates the test data of the moveable sensors to the test data of the reference sensors. In this way, a graphical representation of the attribute in the vicinity of test subject can be produced.12-01-2011
20110311128DIGITAL WATERMARK DETECTION IN 2D-3D CONTENT CONVERSION - A system and method are provided for analyzing 3D digital content to determine whether a watermark is detectable. The watermark may exist in 2D content that is converted to 3D, and in such cases, the survivability of the watermark to the conversion process is evaluated. An anticipated location of the watermark in left and right 3D images may be determined, and the detectability based upon the anticipated location. A report may indicate whether the watermark survived the conversion in one or both images, or neither. The process may be performed for single frames, sequences of single frames, or entire files containing many image frames. Watermark placement may also be proposed for locations in 2D content, 3D content, or both. Watermarks may similarly be placed in the content.12-22-2011
20110311131DATA RESTORATION METHOD AND APPARATUS, AND PROGRAM THEREFOR - Three-dimensional data is compressed at a high compression ratio without deteriorating resolution and accuracy, by computing a coupling coefficient from input three-dimensional data and a three-dimensional base data group obtained from a plurality of objects and outputting the coupling coefficient as compressed data. Specifically, the three-dimensional data is input to corresponding point determination means. The corresponding point determination means generates three-dimensional data to be synthesized in which vertexes of the three-dimensional data are made to correspond to vertexes of three-dimensional reference data serving as a reference to determine association relationship between vertexes. Coefficient computation means computes a coupling coefficient for coupling a three-dimensional base data group used for synthesis of three-dimensional data to synthesize three-dimensional data to be synthesized, and outputs the computed coupling coefficient as the compressed data of the three-dimensional data.12-22-2011
20110311130IMAGE PROCESSING APPARATUS, METHOD, PROGRAM, AND RECORDING MEDIUM - Extracting information corresponding to a three-dimensional object from an image captured by plural imaging apparatuses is implemented with a simple configuration and simple processing.12-22-2011
20110311129TRAINING-FREE GENERIC OBJECT DETECTION IN 2-D AND 3-D USING LOCALLY ADAPTIVE REGRESSION KERNELS - The present invention provides a method of learning-free detection and localization of actions that includes providing a query video action of interest and providing a target video, obtaining at least one query space-time localized steering kernel (3-D LSK) from the query video action of interest and obtaining at least one target 3-D LSK from the target video, determining at least one query feature from the query 3-D LSK and determining at least one target patch feature from the target 3-D LSK, and outputting a resemblance map, where the resemblance map provides a likelihood of similarity between each query feature and each target patch feature to output learning-free detection and localization of actions, where the steps of the method are performed by using an appropriately programmed computer.12-22-2011
20130188860MEASUREMENT DEVICE, MEASUREMENT METHOD, AND COMPUTER PROGRAM PRODUCT - According to an embodiment, a second calculator calculates a three-dimensional position of a measurement position and error in the three-dimensional position using a first image, the measurement position, a second image, and a correspondence position. A selection unit determines whether there is an image pair, in which error in the three-dimensional position becomes smaller than the error calculated by the second calculator, from among image pairs of the plurality of images, when there is an image pair, selects the image pair, and when there is no image pair, decides on the three-dimensional position. Each time an image pair is selected, the second calculator calculates a new three-dimensional position of the measurement position and error using new first and second images each included in the image pair, and first and second projection positions where the three-dimensional positions are projected onto the new first and second images, respectively.07-25-2013
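A sketch of the pair-selection loop described in this entry: triangulate the measurement position from each candidate image pair and keep the pair with the smallest reprojection error. The helper functions `triangulate` and `project` are placeholders standing in for the device's actual geometry routines, not functions defined by the patent.

```python
import numpy as np

def best_three_d_position(pairs, triangulate, project):
    """pairs: iterable of (P1, x1, P2, x2) camera matrices and 2D measurement/correspondence points."""
    best_X, best_err = None, np.inf
    for P1, x1, P2, x2 in pairs:
        X = triangulate(P1, x1, P2, x2)                                 # candidate 3D position
        err = (np.linalg.norm(project(P1, X) - x1)                      # error at the first projection
               + np.linalg.norm(project(P2, X) - x2))                   # plus error at the second
        if err < best_err:
            best_X, best_err = X, err
    return best_X, best_err
```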
20130188861APPARATUS AND METHOD FOR PLANE DETECTION - A plane detection apparatus for detecting at least one plane model from an input depth image. The plane detection apparatus may include an image divider to divide the input depth image into a plurality of patches, a plane model estimator to calculate one or more plane models with respect to the plurality of patches including a first patch and a second patch, and a patch merger to iteratively merge patches having a plane model a similarity greater than or equal to a first threshold by comparing plane models of the plurality of patches. When a patch having the plane model similarity greater than or equal to the first threshold is absent, the plane detection apparatus may determine at least one final plane model with respect to the input depth image using previously merged patches.07-25-2013
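A simplified sketch of the patch-merging idea in this entry: fit a plane to each patch of the depth image, then merge patches whose plane normals are sufficiently similar. The similarity measure, the threshold, and the greedy pairwise merge (no transitive closure) are illustrative simplifications.

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane z = ax + by + c for an Nx3 array of patch points; returns (a, b, c)."""
    A = np.c_[points[:, 0], points[:, 1], np.ones(len(points))]
    coeffs, *_ = np.linalg.lstsq(A, points[:, 2], rcond=None)
    return coeffs

def merge_similar_patches(patch_points, sim_thresh=0.95):
    """patch_points: list of Nx3 arrays, one per patch; returns a group label per patch."""
    planes = [fit_plane(p) for p in patch_points]
    labels = list(range(len(planes)))
    for i in range(len(planes)):
        for j in range(i + 1, len(planes)):
            ni = np.append(planes[i][:2], -1.0)                  # normal of plane i: (a, b, -1)
            nj = np.append(planes[j][:2], -1.0)
            sim = abs(ni @ nj) / (np.linalg.norm(ni) * np.linalg.norm(nj))
            if sim >= sim_thresh:
                labels[j] = labels[i]                            # merge patch j into patch i's group
    return labels
```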
20130188862METHOD AND ARRANGEMENT FOR CENSORING CONTENT IN IMAGES - A method for censoring content in a three-dimensional image comprises a step of identifying, in said three-dimensional image, a three-dimensional object to be censored, and a step of replacing said three-dimensional object to be censored with three-dimensional replacement content in said three-dimensional image.07-25-2013
20120020549Apparatus and method for depth-image encoding with rate-distortion optimization - Provided is a rate-distortion optimizing apparatus and method for encoding a depth image. The rate-distortion optimizing apparatus may reduce a resolution in an area that does not include an edge that significantly affects image synthesis, and may use a high quantization parameter and thus, may provide a high compression performance.01-26-2012
20120020548Method for Generating Images of Multi-Views - The present invention provides a method for generating images of multi-views. The method includes obtaining a 2D original image of an article and background figures of multi-views; calculating the background image range and the main body image range of the 2D original image of the article; cutting the main body image out; generating a depth model according to an equation; cutting the depth model according to the main body image range of the cut 2D image of the article; shifting every pixel in the main body image of the 2D original image of the article according to the cut depth model to obtain shifted main body images of multi-views; and synthesizing the shifted main body images of multi-views and the background figures of multi-views to obtain the final images of multi-views for 3D image reconstruction.01-26-2012
20130195347IMAGE PROCESSING APPARATUS AND IMAGE PROCESSING METHOD - An image processing apparatus includes a modifying unit configured to modify depth information representing depths in individual pixels of an image in accordance with content included in the image, thereby generating modified depth information, and an enhancing unit configured to perform a stereoscopic effect enhancement process of enhancing a stereoscopic effect of the image by using the modified depth information generated by the modifying unit.08-01-2013
20130195348Image processing apparatus and method - An image processing apparatus is provided. The image processing apparatus may include a determining unit configured to determine at least one pixel having a pixel value difference between a first image and a second image lower than a critical value, among a plurality of input frame images to compute a hologram pattern, and a computing unit configured to compute a hologram pattern of the first image and to compute a hologram pattern of the second image using a computation result for the at least one pixel of the first image.08-01-2013
20130195349THREE-DIMENSIONAL IMAGE PROCESSING APPARATUS, THREE-DIMENSIONAL IMAGE-PICKUP APPARATUS, THREE-DIMENSIONAL IMAGE-PICKUP METHOD, AND PROGRAM - A sense of three-dimensionality and thickness is restored to a subject, and a high-quality three-dimensional image with a reduced cardboard cutout effect is obtained, regardless of the cause of the cardboard cutout effect. In a three-dimensional image capturing apparatus (three-dimensional image processing apparatus) (08-01-2013
20130195350IMAGE ENCODING DEVICE, IMAGE ENCODING METHOD, IMAGE DECODING DEVICE, IMAGE DECODING METHOD, AND COMPUTER PROGRAM PRODUCT - According to an embodiment, an image encoding device according to an embodiment includes an image generating unit, a first filtering unit, a prediction image generating unit, and an encoding unit. The image generating unit is configured to generate a first parallax image corresponding to a first viewpoint of an image to be encoded, with the use of at least one of depth information and parallax information of a second parallax image corresponding to a second viewpoint being different than the first viewpoint. The first filtering unit is configured to perform filtering on the first parallax image based on first filter information. The prediction image generating unit is configured to generate a prediction image with a reference image, the reference image being the first parallax image on which the filtering has been performed. The encoding unit is configured to generate encoded data from the image and the prediction image.08-01-2013
20080260237Method for Determination of Stand Attributes and a Computer Program for Performing the Method - The invention is concerned with a method for forest inventory and for determination of stand attributes. Stand information of trees, sample plots, stands and larger forest areas can be determined by measuring or deriving the most important attributes for individual trees. The method uses a laser scanner and overlapping images. A densification of the laser point clouds is performed and the achieved denser point clouds are used to identify individual trees and groups of trees. The invention is also concerned with a computer program for performing the method.10-23-2008
20120057778IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, AND COMPUTER PROGRAM - There is provided an image processing apparatus including an operation recognition unit for recognizing an operation signal for identifying a focused image among images displayed on a screen of an image display unit and an image drawing unit for drawing an image on the screen so as to display the image as a stereoscopic image or a planar image on the screen, on the basis of a recognition result provided by the operation recognition unit.03-08-2012
20130202192APPARATUS AND METHOD FOR OPTICALLY MEASURING CREEP - A method of measuring creep strain in a gas turbine engine component, where at least a portion of the component has a material disposed thereon, and where the material has a plurality of markings providing a visually distinct pattern. The method may include capturing an image of at least a portion of the markings after an operational period of the gas turbine engine, and determining creep strain information of the component. The creep strain information may be determined by correlating the image captured after the operational period to an image captured before the operational period.08-08-2013
20130202193FRACTAL METHOD FOR DETECTING AND FILLING DATA GAPS WITHIN LIDAR DATA - Method for improving the quality of a set of three-dimensional (3D) point cloud data representing a physical surface by detecting and filling null spaces (08-08-2013
20130202194Method for generating high resolution depth images from low resolution depth images using edge information - A method interpolates and filters a depth image with reduced resolution to recover a high resolution depth image using edge information, wherein each depth image includes an array of pixels at locations and wherein each pixel has a depth. The reduced depth image is first up-sampled, interpolating the missing positions by repeating the nearest-neighboring depth value. Next, a moving window is applied to the pixels in the up-sampled depth image. The window covers a set of pixels centred at each pixel. The pixels covered by the window are selected according to their position relative to the edge, and only pixels that are on the same side of the edge as the centre pixel are used for the filtering procedure. A single representative depth from the set of selected pixels in the window is assigned to the pixel to produce a processed depth image.08-08-2013
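A minimal sketch of the edge-aware up-sampling and filtering described in this entry, assuming an edge-side label map is already available at full resolution: nearest-neighbour up-sampling followed by a windowed median over pixels lying on the same side of the edge as the centre pixel. The window size and the median as the representative depth are assumptions.

```python
import numpy as np

def upsample_depth_with_edges(low_depth, edge_side, scale=2, win=3):
    """low_depth: hxw depth; edge_side: (h*scale)x(w*scale) label map (same side of an edge shares a label)."""
    up = np.repeat(np.repeat(low_depth, scale, axis=0), scale, axis=1)   # nearest-neighbour up-sampling
    h, w = up.shape
    out = up.copy()
    r = win // 2
    for y in range(h):
        for x in range(w):
            y0, y1 = max(0, y - r), min(h, y + r + 1)
            x0, x1 = max(0, x - r), min(w, x + r + 1)
            patch = up[y0:y1, x0:x1]
            mask = edge_side[y0:y1, x0:x1] == edge_side[y, x]            # keep same-side-of-edge pixels only
            out[y, x] = np.median(patch[mask])                           # representative depth for the pixel
    return out
```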
20130202190IMAGE PROCESSING APPARATUS AND IMAGE PROCESSING METHOD - An image processing apparatus includes an image detector and a controller. The image detector is utilized for receiving a surrounding image, and analyzing the surrounding image to determine a user's position. The controller is coupled to the image detector, and is utilized for receiving a stereo image, and modifying the stereo image to generate a modified stereo image by at least rotating the stereo image according to the user's position.08-08-2013
20130202195DEVICE AND METHOD FOR ACQUISITION AND RECONSTRUCTION OF OBJECTS - The purpose of this invention is a device and a method which permit the acquisition and subsequent reconstruction of objects with volume over their entire external surface. This invention is characterised by a particular mode of acquisition in which the object is in free fall, so that there is no support surface preventing acquisition of the surface that would otherwise be hidden by said support. The invention is also characterised by special modes of distribution of the cameras which optimise image capturing and provide useful information in the subsequent reconstruction of the volume through computer means.08-08-2013
20130202197System and Method for Manipulating Data Having Spatial Co-ordinates - Systems and methods are provided for extracting various features from data having spatial coordinates. The systems and methods may identify and extract data points from a point cloud, where the data points are considered to be part of the ground surface, a building, or a wire (e.g. power lines). Systems and methods are also provided for enhancing a point cloud using external data (e.g. images and other point clouds), and for tracking a moving object by comparing images with a point cloud. An objects database is also provided which can be used to scale point clouds to be of similar size. The objects database can also be used to search for certain objects in a point cloud, as well as recognize unidentified objects in a point cloud.08-08-2013
20130202196METHOD AND APPARATUS FOR REMOTE SENSING OF OBJECTS UTILIZING RADIATION SPECKLE - Disclosed are systems and methods to extract information about the size and shape of an object by observing variations of the radiation pattern caused by illuminating the object with coherent radiation sources and changing the wavelengths of the source. Sensing and image-reconstruction systems and methods are described for recovering the image of an object utilizing projected and transparent reference points and radiation sources. Sensing and image-reconstruction systems and methods are also described for rapid sensing of such radiation patterns. A computational system and method is also described for sensing and reconstructing the image from its autocorrelation. This computational approach uses the fact that the autocorrelation is the weighted sum of shifted copies of an image, where the shifts are obtained by sequentially placing each individual scattering cell of the object at the origin of the autocorrelation space.08-08-2013
20130202191MULTI-VIEW IMAGE GENERATING METHOD AND APPARATUS USING THE SAME - A multi-view image generating method adapted to a 2D-to-3D conversion apparatus is provided. The multi-view image generating method includes the following steps. A pair of images is received. The pair of images is captured from different angles by a single image capturing apparatus rotating a rotation angle. A disparity map is generated based on one of the pair of images. A remapped disparity map is generated based on the disparity map by using a non-constant function. A depth map is generated based on the remapped disparity map. Multi-view images are generated based on the one of the pair of images and the depth map. Furthermore, a multi-view image generating apparatus adapted to the 2D-to-3D conversion apparatus is also provided.08-08-2013
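A sketch of the remapping step in this entry only, under assumptions: a non-constant (here gamma-like) function is applied to the disparity map before it is converted to depth for view synthesis. The particular remapping curve and the inverse disparity-to-depth relation are illustrative, not the patented functions.

```python
import numpy as np

def remap_disparity(disparity, gamma=0.6):
    d = disparity / max(disparity.max(), 1e-6)
    return d ** gamma                              # non-constant remapping: preserves ordering, stretches mid-range

def depth_from_disparity(remapped, focal_times_baseline=1.0):
    return focal_times_baseline / (remapped + 1e-6)  # standard inverse relation, assumed here
```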
20120070070LEARNING-BASED POSE ESTIMATION FROM DEPTH MAPS - A method for processing data includes receiving a depth map of a scene containing a humanoid form. Respective descriptors are extracted from the depth map based on the depth values in a plurality of patches distributed in respective positions over the humanoid form. The extracted descriptors are matched to previously-stored descriptors in a database. A pose of the humanoid form is estimated based on stored information associated with the matched descriptors.03-22-2012
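A sketch of the descriptor-matching stage described in this entry: simple patch descriptors are extracted from the depth map at positions over the humanoid form and matched to the nearest stored descriptors in a database, whose associated pose annotations would then drive the pose estimate. The flattened, normalized patch used as a descriptor is a placeholder for the patent's actual descriptor.

```python
import numpy as np

def patch_descriptor(depth, y, x, size=16):
    """Flatten and normalize a size x size depth patch anchored at (y, x)."""
    patch = depth[y:y + size, x:x + size].astype(np.float32).ravel()
    return (patch - patch.mean()) / (patch.std() + 1e-6)

def match_descriptors(descriptors, database):
    """database: list of (descriptor, pose_info); returns pose_info of the nearest match per query."""
    matches = []
    for d in descriptors:
        dists = [np.linalg.norm(d - ref) for ref, _ in database]
        matches.append(database[int(np.argmin(dists))][1])
    return matches
```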
20120087573Eliminating Clutter in Video Using Depth Information - A method of clutter elimination in digital images is provided that includes identifying a foreground blob in an image, determining a depth of the foreground blob, and indicating that the foreground blob is clutter when the depth indicates that the foreground blob is too close to be an object of interest. Methods for obstruction detection in depth images such as those captured by stereoscopic cameras and structured light cameras are also provided.04-12-2012
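A minimal sketch of the clutter test in this entry: a detected foreground blob whose representative depth is closer than a minimum working distance is flagged as clutter. The use of the median as the blob depth and the threshold value are assumptions.

```python
import numpy as np

def is_clutter(depth_map, blob_mask, min_object_distance=0.5):
    """blob_mask: boolean HxW mask of the foreground blob; depths in the same units as the threshold."""
    blob_depth = np.median(depth_map[blob_mask])      # representative depth of the blob
    return blob_depth < min_object_distance           # too close to be an object of interest
```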
20120087572Use of Three-Dimensional Top-Down Views for Business Analytics - A method of analyzing a depth image in a digital system is provided that includes detecting a foreground object in a depth image, wherein the depth image is a top-down perspective of a scene, and performing data extraction and classification on the foreground object using depth information in the depth image.04-12-2012
20130208975Stereo Matching Device and Method for Determining Concave Block and Convex Block - A stereo matching device used in a stereoscopic display system for determining a concave block and a convex block is provided. The stereo matching device comprises a receiving module for receiving a first and a second view-angle frames, a computation module, a feature extraction module and an estimation module. The computation module generates a disparity map having disparity entries respectively corresponding to blocks of the first view-angle frame. The feature extraction module generates feature maps each having feature entries respectively corresponding to the blocks. The estimation module comprises a reliability computation unit for computing a feature reliability of each of the blocks based on the feature maps and a comparator unit for filtering out unqualified blocks according to at least one reliability threshold to generate a plurality of candidate blocks and further determining the concave block and the convex block.08-15-2013
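A sketch of the reliability filtering step in this entry, under assumptions: per-block feature maps are combined into a reliability score (here a simple mean), and only blocks at or above the threshold survive as candidates for the concave/convex decision. The averaging used as the score is an illustrative choice, not the device's actual computation.

```python
import numpy as np

def candidate_blocks(feature_maps, reliability_threshold=0.6):
    """feature_maps: list of 1-D arrays (one value per block per map), values in [0, 1]."""
    reliability = np.mean(np.stack(feature_maps, axis=0), axis=0)    # feature reliability per block
    return np.nonzero(reliability >= reliability_threshold)[0]       # indices of the qualified blocks
```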
20130208976SYSTEM, METHOD, AND COMPUTER PROGRAM PRODUCT FOR CALCULATING ADJUSTMENTS FOR IMAGES - A system, method, and computer program product are provided for calculating adjustments for images. In use, a plurality of images is identified. Additionally, one or more discrepancies are determined between the plurality of images. Further, one or more adjustments are calculated for one or more of the plurality of images, utilizing the determined one or more discrepancies.08-15-2013