Picture signal generator

Subclass of:

348 - Television

348042000 - STEREOSCOPIC

Patent class list (only non-empty classes are listed)

Deeper subclasses:

Class | Description | Number of patent applications
348047000 | Multiple cameras | 461
348049000 | Single camera with optical path division | 156
348050000 | Single camera from multiple positions | 87
Entries
Document | Title | Date
20130044186Plane-based Self-Calibration for Structure from Motion - Robust techniques for self-calibration of a moving camera observing a planar scene. Plane-based self-calibration techniques may take as input the homographies between images estimated from point correspondences and provide an estimate of the focal lengths of all the cameras. A plane-based self-calibration technique may be based on the enumeration of the inherently bounded space of the focal lengths. Each sample of the search space defines a plane in the 3D space and in turn produces a tentative Euclidean reconstruction of all the cameras that is then scored. The sample with the best score is chosen and the final focal lengths and camera motions are computed. Variations on this technique handle both constant focal length cases and varying focal length cases.02-21-2013
20130044188STEREOSCOPIC IMAGE REPRODUCTION DEVICE AND METHOD, STEREOSCOPIC IMAGE CAPTURING DEVICE, AND STEREOSCOPIC DISPLAY DEVICE - A stereoscopic image is displayed with an appropriate amount of parallax based on auxiliary information recorded in a three-dimensional-image file. The size of a display which performs 3D display is acquired (Step S02-21-2013
201300441873D CAMERA AND METHOD OF MONITORING A SPATIAL ZONE - A 3D camera (02-21-2013
20110175984METHOD AND SYSTEM OF EXTRACTING THE TARGET OBJECT DATA ON THE BASIS OF DATA CONCERNING THE COLOR AND DEPTH - Provided are a method and system for extracting a target object from a background image, the method including: generating a scalar image of differences between the object image and the background, using a lightness and a color difference between the background and current video frame; initializing a mask to have a value equal to a value for a corresponding pixel of a mask of a previous video frame, where a value of the scalar image of differences for the pixel is less than a threshold, and to have a predetermined value otherwise; clustering the scalar image of differences and the depth data; filling the mask for each pixel position in the current video frame, using a centroid of a cluster of the scalar image of differences and the depth data; and updating the background image on the basis of the filled mask and the scalar image of differences.07-21-2011
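The difference-image and mask-initialization steps in this abstract are concrete enough to sketch. The NumPy fragment below shows one possible reading; the luma/chroma weights, the threshold, and the "undecided" value are illustrative assumptions, not values from the application.

```python
import numpy as np

def scalar_difference(frame_ycc, background_ycc, w_luma=0.5, w_chroma=0.5):
    """Scalar image of differences between the current frame and the background,
    combining a lightness (Y) and a colour (Cb/Cr) difference.
    Both inputs are float arrays of shape (H, W, 3) in YCbCr order."""
    d_luma = np.abs(frame_ycc[..., 0] - background_ycc[..., 0])
    d_chroma = np.linalg.norm(frame_ycc[..., 1:] - background_ycc[..., 1:], axis=-1)
    return w_luma * d_luma + w_chroma * d_chroma

def init_mask(diff, prev_mask, threshold=12.0, undecided=255):
    """Initialise the object mask: keep the previous frame's decision where the
    scalar difference is below the threshold, mark the pixel as undecided
    (a predetermined value) otherwise."""
    return np.where(diff < threshold, prev_mask, undecided).astype(np.uint8)
```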
20110175982METHOD OF FLUORESCENT NANOSCOPY - An analysis of an object dyed with fluorescent coloring agents carried out with the aid of a fluorescent microscope which is modified for improved resolving power and called a nanoscope. The method is carried out with a microscope having an optical system for visualizing and projecting a sample image to a video camera which records and digitizes images of individual fluorescence molecules and nanoparticles at a low noise, a computer for recording and processing images, a sample holder arranged in front of an object lens, a fluorescent radiation exciting source and a set of replaceable suppression filters for separating the sample fluorescent light. Separately fluorescing visible molecules and nanoparticles are periodically formed in different object parts, the laser produces the oscillation thereof which is sufficient for recording the non-overlapping images of the molecules and nanoparticles and for decoloring already recorded fluorescent molecules, wherein tens of thousands of pictures of recorded individual molecule and nanoparticle images, in the form of stains having a diameter on the order of a fluorescent light wavelength multiplied by a microscope amplification, are processed by a computer for searching the coordinates of the stain centers and building the object image according to millions of calculated stain center co-ordinates corresponding to the co-ordinates of the individual fluorescent molecules and nanoparticles. With this invention it is possible to obtain a two-dimensional and a three-dimensional image with a resolving power better than 20 nm and to record a color image by dyeing proteins, nucleic acids and lipids with different coloring agents.07-21-2011
20100165082Detection apparatus - The invention relates to an apparatus for the spatially resolved detection of objects located in a monitored zone, having a transmission device for the transmission of electromagnetic radiation into a transmission region, having a reception device for the reception of radiation reflected from a reception region, wherein the transmission region and the reception region overlap or intersect in a detection region which is disposed within the monitored zone, which covers a detection angle and in which the transmitted radiation is reflected by objects, having an imaging arrangement which is disposed in the propagation path of the transmitted radiation and/or of the reflected radiation and which covers the total detection region at the transmission side and/or at the reception side at all times; and having spatial resolution means for the influencing of the propagation direction of the radiation and/or for the operation of the reception device, in particular in a time varying manner, such that the position, in particular the spacing, of the detection region relative to the imaging arrangement and/or the size of the detection region determined by the degree to which the transmission region and the reception region overlap or intersect can be changed.07-01-2010
20100165081IMAGE PROCESSING METHOD AND APPARATUS THEREFOR - An image processing method, including extracting shot information from metadata, wherein the shot information is used to classify a series of two-dimensional images into predetermined units; if it is determined, by using shot type information included in the shot information, that frames classified as a predetermined shot can be reproduced as a three-dimensional image, extracting background depth information from the metadata, wherein the background depth information is about a background of the frames classified as the predetermined shot; generating a depth map about the background of the frames by using the background depth information; if the frames classified as the predetermined shot comprise object, extracting object depth information about the object from the metadata; and generating a depth map about the object by using the object depth information.07-01-2010
20120162376Three-Dimensional Data Preparing Method And Three-Dimensional Data Preparing Device - The invention provides a three-dimensional data preparing method, using image pickup units 06-28-2012
20120162374METHODS, SYSTEMS, AND COMPUTER-READABLE STORAGE MEDIA FOR IDENTIFYING A ROUGH DEPTH MAP IN A SCENE AND FOR DETERMINING A STEREO-BASE DISTANCE FOR THREE-DIMENSIONAL (3D) CONTENT CREATION - Methods, systems, and computer program products for determining a depth map in a scene are disclosed herein. According to one aspect, a method includes collecting focus statistical information for a plurality of focus windows. The method may also include determining a focal distance for each window. Further, the method may include determining near, far, and target focus distances. The method may also include calculating a stereo-base and screen plane using the focus distances.06-28-2012
20120162371THREE-DIMENSIONAL MEASUREMENT APPARATUS, THREE-DIMENSIONAL MEASUREMENT METHOD AND STORAGE MEDIUM - A three-dimensional measurement apparatus comprises a light irradiation unit adapted to irradiate a measurement target with pattern light, an image capturing unit adapted to capture an image of the measurement target, and a measurement unit adapted to measure a three-dimensional shape of the measurement target from the captured image, the three-dimensional measurement apparatus further comprising: a change region extraction unit adapted to extract a change region where a change has occurred when comparing an image of the measurement target captured in advance with the captured image of the measurement target; and a light characteristic setting unit adapted to set characteristics of the pattern light from the change region, wherein the measurement unit measures the three-dimensional shape of the measurement target at the change region in a captured image after irradiation of the change region with the pattern light with the characteristics set by the light characteristic setting unit.06-28-2012
20120162370Apparatus and method for generating depth image - Provided is a depth image generating apparatus. The depth image generating apparatus may include a filtering unit, a modulation unit, and a sensing unit. The filtering unit may band pass filter an infrared light of a first wavelength band among infrared lights received from an object. The modulation unit may modulate the infrared light of the first wavelength band to an infrared light of a second wavelength band. The sensing unit may generate an electrical signal by sensing the modulated infrared light of the second wavelength band.06-28-2012
20110205337Motion Capture with Low Input Data Constraints - Systems, devices, method and arrangements are implemented in a variety of embodiments to facilitate motion capture of objects. Consistent with one such system, three-dimensional representations are determined for at least one object. Depth-based image data is used in the system, which includes a processing circuit configured and arranged to render a plurality of orientations for at least one object. Orientations from the plurality of orientations are assessed against the depth-based image data. An orientation is selected from the plurality of orientations as a function of the assessment of orientations from the plurality of orientations.08-25-2011
20110193943Stabilized Stereographic Camera System - A stabilized stereographic camera system may include an upright support having a longitudinal axis. A camera support coupled to one end of the upright support may mount first and second cameras, the first camera movable with respect to the second cameras. A ballast may be coupled to another end of the upright support. A balance mechanism may move a movable portion of the stereographic camera system relative to the upright support in response to movement of the first camera with respect to the second camera to maintain a constant weight distribution of the stereographic camera system.08-11-2011
20110193942Single-Lens, Single-Aperture, Single-Sensor 3-D Imaging Device - A device and method for three-dimensional (3-D) imaging using a defocusing technique is disclosed. The device comprises a lens having a substantially oblong aperture, a sensor operable for capturing light transmitted from an object through the lens and the substantially oblong aperture, and a processor communicatively connected with the sensor for processing the sensor information and producing a 3-D image of the object. The aperture may have an asymmetrical shape for distinguishing objects in front of versus in back of the focal plane. The aperture may also be rotatable, where the orientation of the observed pattern relative to the oblong aperture is varied with time thereby removing the ambiguity generated by image overlap. The disclosed device further comprises a light projection system configured to project a predetermined pattern onto a surface of the desired object thereby allowing for mapping of unmarked surfaces in three dimensions.08-11-2011
20110193941IMAGE PROCESSING APPARATUS, IMAGING APPARATUS, IMAGE PROCESSING METHOD, AND PROGRAM - An image processing apparatus includes an image evaluation unit evaluating properness of synthesized images as the 3-dimensional images. The image evaluation unit performs the process of evaluating the properness through analysis of a block correspondence difference vector calculated by subtracting a global motion vector indicating movement of an entire image from a block motion vector which is a motion vector of a block unit of the synthesized images, compares a predetermined threshold value to one of a block area of a block having the block correspondence difference vector and a movement amount addition value, and performs a process of determining that the synthesized images are not proper as the 3-dimensional images, when the block area is equal to or greater than a predetermined area threshold value or when the movement amount addition value is equal to or greater than a predetermined movement amount threshold value.08-11-2011
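The evaluation rule described here (subtract the global motion vector from each block motion vector, then compare a block area and an accumulated movement amount against thresholds) can be sketched directly. The helper below is a hedged illustration; the threshold values and the way the block area is counted are assumptions.

```python
import numpy as np

def evaluate_synthesis(block_vectors, global_vector,
                       diff_threshold=4.0, area_threshold=200,
                       movement_threshold=1500.0):
    """Decide whether synthesized images are proper as a 3-D pair.

    block_vectors : (N, 2) array of per-block motion vectors (pixels)
    global_vector : (2,) global motion vector of the whole image
    Returns False (not proper) when either the area of blocks with a large
    correspondence difference vector or the movement amount addition value
    reaches its threshold."""
    diff = block_vectors - global_vector              # block correspondence difference vectors
    magnitude = np.linalg.norm(diff, axis=1)
    moving_blocks = magnitude > diff_threshold
    block_area = int(np.count_nonzero(moving_blocks))        # blocks that move against the global motion
    movement_amount = float(magnitude[moving_blocks].sum())  # movement amount addition value
    return not (block_area >= area_threshold or movement_amount >= movement_threshold)
```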
201101939403D CMOS Image Sensors, Sensor Systems Including the Same - A three-dimensional (3D) CMOS image sensor (CIS) that sufficiently absorbs incident infrared-rays (IRs) and includes an infrared-ray (IR) receiving unit formed in a thin epitaxial film, thereby being easily manufactured using a conventional CIS process, a sensor system including the 3D CIS, and a method of manufacturing the 3D CIS, the 3D CIS including an IR receiving part absorbing IRs incident thereto by repetitive reflection to produce electron-hole pairs (EHPs); and an electrode part formed on the IR receiving part and collecting electrons produced by applying a predetermined voltage thereto.08-11-2011
20110193939PHYSICAL INTERACTION ZONE FOR GESTURE-BASED USER INTERFACES - In a motion capture system having a depth camera, a physical interaction zone of a user is defined based on a size of the user and other factors. The zone is a volume in which the user performs hand gestures to provide inputs to an application. The shape and location of the zone can be customized for the user. The zone is anchored to the user so that the gestures can be performed from any location in the field of view. Also, the zone is kept between the user and the depth camera even as the user rotates his or her body so that the user is not facing the camera. A display provides feedback based on a mapping from a coordinate system of the zone to a coordinate system of the display. The user can move a cursor on the display or control an avatar.08-11-2011
20130083164ACTIVE 3D TO PASSIVE 3D CONVERSION - Various arrangements for using an active 3D signal to create passive 3D images are presented. An active 3D frame may be received. The active 3D frame may comprise a first perspective image and a second perspective image. The first perspective image may be representative of a different perspective than the second perspective image. The first perspective image may be tinted with a first color. The second perspective image may be tinted with a second color different from the first color. The first perspective image tinted with the first color may be displayed. The second perspective image tinted with the second color may be displayed.04-04-2013
20130076865POSITION/ORIENTATION MEASUREMENT APPARATUS, PROCESSING METHOD THEREFOR, AND NON-TRANSITORY COMPUTER-READABLE STORAGE MEDIUM - A position/orientation measurement apparatus holds a three-dimensional shape model of a object, acquires approximate value indicating a position and an orientation of the object, acquires a two-dimensional image of the object, projects a geometric feature of the three-dimensional shape model on the two-dimensional image based on the approximate value, calculates the direction of the geometric feature of the three-dimensional shape model projected on the two-dimensional image, detects an image feature based on the two-dimensional image, calculates the direction of the image feature, associates the image feature and the geometric feature by comparing the direction of the image feature calculated based on the two-dimensional image and the direction of the geometric feature calculated based on the three-dimensional shape model, and calculates the position and orientation of the object by correcting the approximate value based on the distance between the geometric feature and the image feature associated therewith.03-28-2013
20130076863SURGICAL STEREO VISION SYSTEMS AND METHODS FOR MICROSURGERY - Surgical stereo vision systems and methods for microsurgery are described that enable hand-eye collocation, high resolution, and a large field of view. A digital stereo microscope apparatus, an operating system with a digital stereo microscope, and a method are described using a display unit located over an area of interest such that a human operator places hands, tools, or a combination thereof in the area of interest and views a magnified and augmented live stereo view of the area interest with eyes of the human operator substantially collocated with the hands of the human operator.03-28-2013
20130076864LIQUID CRYSTAL DISPLAY DEVICE - A liquid crystal display device for carrying out a 03-28-2013
20130076861METHOD AND APPARATUS FOR PROBING AN OBJECT, MEDIUM OR OPTICAL PATH USING NOISY LIGHT - A method and apparatus for optically probing an object(s) and/or a medium and/or an optical path using noisy light. Applications disclosed include but are not limited to 3D digital camera, detecting material or mechanical properties of optical fiber(s), intrusion detection, and determining an impulse response. In some embodiments, an optical detector is illuminated by a superimposition of a combination of noisy light signals. Various signal processing techniques are also disclosed herein.03-28-2013
20130076860THREE-DIMENSIONAL RELATIONSHIP DETERMINATION - Example embodiments disclosed herein relate to determining relationships between locations based on beacon information. At least three sensors of a device can be used to determine locations of a beacon. The device can determine a three-dimensional relationship between the locations.03-28-2013
20090244262IMAGE PROCESSING APPARATUS, IMAGE DISPLAY APPARATUS, IMAGING APPARATUS, AND IMAGE PROCESSING METHOD - An image processing apparatus inputs stereoscopic images, detects a depth of each of the inputted stereoscopic images, lays out the stereoscopic images at least in partial overlap in such a manner that the larger the depth is, the more forward the corresponding stereoscopic image is placed, and records the stereoscopic images having been laid out.10-01-2009
20130038692Remote Control System - A remote control system comprises a mobile object, a remote controller for remotely controlling the mobile object, and a storage unit where background images to simulate a driving room or an operation room of the mobile object are stored. The mobile object has a stereo camera, a camera control unit for controlling image pickup direction of the stereo camera, and a first communication unit for communicating information including at least images photographed by the stereo camera. The remote controller has a second communication unit for communicating to and from the first communication unit, a control unit for controlling the mobile object, and a display unit for synthesizing at least a part of the images photographed by the stereo camera and the background images and for displaying the images so that a stereoscopic view can be displayed.02-14-2013
20130208094APPARATUS AND METHOD FOR PROVIDING THREE DIMENSIONAL MEDIA CONTENT - A system that incorporates teachings of the exemplary embodiments may include, for example, means for generating a disparity map based on a depth map, means for determining accuracy of pixels in the depth map where the determining means identifies the pixels as either accurate or inaccurate based on a confidence map and the disparity map, and means for providing an adjusted depth map where the providing means adjusts inaccurate pixels of the depth map using a cost function associated with the inaccurate pixels. Other embodiments are disclosed.08-15-2013
20100073462Three dimensional image sensor - A three-dimensional (3D) image sensor includes a plurality of color pixels, and a plurality of distance measuring pixels. Where the plurality of color pixels and the plurality of distance measuring pixels are arranged in an array, and a group of distance measuring pixels, from among the plurality of distance measuring pixels, are disposed so that a corner of each distance measuring pixel in the group of distance-measuring pixels is adjacent to a corner of an adjacent distance-measuring pixel in the group of distance-measuring pixels. The group of distance measuring pixels is capable of jointly outputting one distance measurement signal.03-25-2010
20130083166IMAGING APPARATUS AND METHOD FOR CONTROLLING SAME - An imaging element includes a plurality of photoelectric conversion units that output an image signal for each pixel through a micro lens. An imaging signal processing circuit separates image signals output from the imaging element into a left-eye image signal and a right-eye image signal. An image combining circuit generates combined image data by performing arithmetic average processing for left-eye image data and right-eye image data. A recording medium control I/F unit controls to record left-eye image data and right-eye image data for use in 3D display and combined image data for use in 2D display in different regions in an image file.04-04-2013
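The 2D output described here is simply the arithmetic average of the two viewpoint images; a short NumPy sketch of that combining step (assuming the left/right separation has already been performed) is:

```python
import numpy as np

def combine_for_2d(left_eye, right_eye):
    """Arithmetic average of the left-eye and right-eye images, giving the
    combined image to be recorded for 2D display. Inputs are arrays of
    identical shape and dtype."""
    mean = (left_eye.astype(np.float32) + right_eye.astype(np.float32)) / 2.0
    return mean.astype(left_eye.dtype)
```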
20130083167PROJECTOR APPARATUS AND VIDEO DISPLAY METHOD - There is provided a projector apparatus including a terminal unit supplied with video data output by a source apparatus; a video projection processing unit that generates a projection video based on the video data and projects the generated projection video through a projection lens, a distance detection unit that detects a distance to a display surface on which the projection video projected through the projection lens is displayed, a projection angle detection unit that detects a projection angle of the projection video projected through the projection lens, and a control unit that calculates a display size of the projection video on the display surface based on the distance detected by the distance detection unit and the projection angle detected by the projection angle detection unit and transmits the calculated display size as the data regarding the display capability of the projector apparatus from the terminal unit to the source apparatus.04-04-2013
20130083165APPARATUS AND METHOD FOR EXTRACTING TEXTURE IMAGE AND DEPTH IMAGE - Provided are an apparatus and method for extracting a texture image and a depth image. The apparatus of the present invention may project a pattern image on a target object, may capture a scene image on which the target image is reflected from the target object, and may simultaneously extract the texture image and the depth image using the scene image.04-04-2013
20130076862Image Acquiring Device And Image Acquiring System - An image acquiring device comprises a first camera 14 for acquiring video images, consisting of frame images continuous in time series, a second camera 15 being in a known relation with the first camera and used for acquiring two or more optical spectral images of an object to be measured, and an image pickup control device 21, and in the image acquiring device, the image pickup control device is configured to extract two or more feature points from one of the frame images, to sequentially specify the feature points in the frame images continuous in time series, to perform image matching between the frame images regarding the frame images corresponding to the two or more optical spectral images based on the feature points, and to synthesize the two or more optical spectral images according to the condition obtained by the image matching.03-28-2013
20130208092SYSTEM FOR CREATING THREE-DIMENSIONAL REPRESENTATIONS FROM REAL MODELS HAVING SIMILAR AND PRE-DETERMINED CHARACTERISTICS - A particular subject of the invention is the creation of three-dimensional representations from real models having similar and predetermined characteristics, using a system comprising a support suitable for receiving such a real object, configured to present the object in a pose similar to that of use of said at least one real object, an image acquisition device configured to obtain at least two distinct images of the real object from at least two separate viewpoints and a data processing device configured to receive these images, to clip a representation of the real object in each of these images in order to obtain at least two textures of the real object, obtaining a generic three-dimensional model of the real object and creating a three-dimensional model of the real object from the textures and the generic three-dimensional model obtained.08-15-2013
20110310228METHOD FOR ADJUSTING ROI AND 3D/4D IMAGING APPARATUS USING THE SAME - A three-dimensional/four-dimensional (3D/4D) imaging apparatus and a region of interest (ROI) adjustment method and device are provided. An ROI is adjusted through an E image in a 3D/4D imaging mode, in which the E image is refreshed in real time when the ROI is adjusted and has a scan line range larger than that of the ROI.12-22-2011
20110310227MOBILE DEVICE BASED CONTENT MAPPING FOR AUGMENTED REALITY ENVIRONMENT - Methods, apparatuses, and systems are provided to facilitate the deployment of media content within an augmented reality environment. In at least one implementation, a method is provided that includes extracting a three-dimensional feature of a real-world object captured in a camera view of a mobile device, and attaching a presentation region for a media content item to at least a portion of the three-dimensional feature responsive to a user input received at the mobile device.12-22-2011
20110310226USE OF WAVEFRONT CODING TO CREATE A DEPTH IMAGE - A 3-D depth camera system, such as in a motion capture system, tracks an object such as a human in a field of view using an illuminator, where the field of view is illuminated using multiple diffracted beams. An image sensing component obtains an image of the object using a phase mask according to a double-helix point spread function, and determines a depth of each portion of the image based on a relative rotation of dots of light of the double-helix point spread function. In another aspect, dual image sensors are used to obtain a reference image and a phase-encoded image. A relative rotation of features in the images can be correlated with a depth. Depth information can be obtained using an optical transfer function of a point spread function of the reference image.12-22-2011
20130038694METHOD FOR MOVING OBJECT DETECTION USING AN IMAGE SENSOR AND STRUCTURED LIGHT - A method for detecting moving objects including people. Enhanced monitoring, safety and security is provided through the use of a monocular camera and a structured light source, by trajectory computation, velocity computation, or counting of people and other objects passing through a laser plane arranged perpendicular to the ground, and which can be setup anywhere near a portal, a hallway or other open area. Enhanced security is provided for portals such as revolving doors, mantraps, swing doors, sliding doors, etc., using the monocular camera and structured light source to detect and, optionally, prevent access violations such as “piggybacking” and “tailgating”.02-14-2013
20130038693METHOD AND APPARATUS FOR REDUCING FRAME REPETITION IN STEREOSCOPIC 3D IMAGING - The present invention is directed towards enhancing the reproduction of three-dimensional dynamic scenes on digital light processing (DLP) and (liquid crystal display) LCD projectors and displays by adding optimal amount of motion blur to stimulate the covered eye to continue perceiving scene picture changes. Too much blur would bring smearing, but a lack of blur induces motion breaking.02-14-2013
20130038695PLAYBACK APPARATUS, DISPLAY APPARATUS, RECORDING APPARATUS AND STORAGE MEDIUM - A terminal device (02-14-2013
20130038691ASYMMETRIC ANGULAR RESPONSE PIXELS FOR SINGLE SENSOR STEREO - Depth sensing imaging pixels include pairs of left and right pixels forming an asymmetrical angular response to incident light. A single microlens is positioned above each pair of left and right pixels. Each microlens spans across each of the pairs of pixels in a horizontal direction. Each microlens has a length that is substantially twice the length of either the left or right pixel in the horizontal direction; and each microlens has a width that is substantially the same as a width of either the left or right pixel in a vertical direction. The horizontal and vertical directions are horizontal and vertical directions of a planar image array. A light pipe in each pixel is used to improve light concentration and reduce cross talk.02-14-2013
20130038690METHOD AND APPARATUS FOR GENERATING THREE-DIMENSIONAL IMAGE INFORMATION - A method and apparatus for generating three-dimensional image information is disclosed. The method involves directing light captured within the field of view of the lens to an aperture plane of the lens, receiving the captured light at a spatial discriminator located proximate the aperture plane, the discriminator including a first portion disposed to transmit light having a first optical state through a first portion of the single imaging path and a second portion disposed to transmit light having a second optical state through a second portion of the single imaging path. The first and second portions of the single imaging path provide respective first and second perspective viewpoints within the field of view of the lens for forming respective first and second images at an image sensor disposed at an image plane of the lens. The first image represents objects within the field of view from the first perspective viewpoint and the second image represents the objects from the second perspective viewpoint, the first and second images together being operable to represent three dimensional spatial attributes of the objects. The method also involves receiving the first image at a first plurality of sensor elements on the image sensor, the first plurality of elements being responsive to light having the first optical state, and receiving the second image at a second plurality of sensor elements on the image sensor, the second plurality of elements being responsive to light having the second optical state.02-14-2013
20130033577MULTI-LENS CAMERA WITH A SINGLE IMAGE SENSOR - A multiple-lens camera has only one image sensor to capture a number of images at different viewing angles. Using a single image sensor, instead of a number of separate image sensors, to capture multiple images simultaneously, one can avoid the calibration process to calibrate the different image sensors to make sure that color balance and the gain are the same for all the image sensors used. The camera has an adjustment mechanism for adjusting the distance between the image lenses, and a processor to receive from the image sensor electronic signals indicative of image data of the captured of images. The camera has a connector to transfer the processed image data to an external device or to an image display. The image display device is configured to display one of said plurality of images.02-07-2013
20130033576IMAGE PROCESSING DEVICE AND METHOD, AND PROGRAM - There is provided an image processing device including capturing portions that respectively capture a first image and a second image that form an image for a right eye and an image for a left eye which can be stereoscopically viewed in three dimensions, a comparison portion that compares the first image and the second image captured by the capturing portions, a determination portion that determines, based on a comparison result of the comparison portion, which of the first image and the second image is the image for the right eye and which is the image for the left eye, and an output portion that outputs each of the first image and the second image, as the image for the right eye and the image for the left eye, based on a determination result of the determination portion.02-07-2013
20130033575IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, AND PROGRAM - There is provided an imaging element that photographs multiple viewing point images corresponding to images observed from different viewing points and an image processing unit separates an output signal of the imaging element, acquires the plurality of viewing point images corresponding to the images observed from the different viewing points, and generates a left eye image and a right eye image for three-dimensional image display, on the basis of the plurality of acquired viewing point images. The image processing unit generates parallax information on the basis of the plurality of viewing point images obtained from the imaging element and generates a left eye image and a right eye image for three-dimensional image display by 2D3D conversion processing using the generated parallax information. By this configuration, a plurality of viewing point images are acquired on the basis of one photographed image and images for three-dimensional image display are generated.02-07-2013
20130033573OPTICAL PHASE EXTRACTION SYSTEM HAVING PHASE COMPENSATION FUNCTION OF CLOSED LOOP TYPE AND THREE-DIMENSIONAL IMAGE EXTRACTION METHOD THEREOF - Provided is an image extraction method of an optical phase extraction system. The image extraction method may include checking whether a phase error due to an environmental disturbance of optical fiber occurs by monitoring an output signal obtained by interfering reflection optical signals reflected through two paths. When a phase error occurs, an error is compensated using a phase compensation control method of closed loop type through one of the two paths and an image is extracted by capturing an image of the object in a state where the image of the object is shifted by the set phase value when a phase error is compensated. According to the inventive concept, a phase error occurring in an optical fiber type interferometer due to an environmental disturbance is minimized or compensated. Also, since an interference image accurately shifted by the phase value set among arbitrary various phase values is obtained through a camera, reliability of three-dimensional phase information being extracted is guaranteed.02-07-2013
20130033574METHOD AND SYSTEM FOR UNVEILING HIDDEN DIELECTRIC OBJECT - The invention relates to the remote measurement of the dielectric permittivity of dielectrics. A 3D microwave and a 3D optical range images of an interrogated scene are recorded at the same time moment. The images are digitized and overlapped. A space between the microwave and optical image is measured, and a dielectric permittivity of the space between these images is determined. If the dielectric permittivity is about 3, then hidden explosive materials or components of thereof are suspected. The invention makes it possible to remotely determine the dielectric permittivity of a moving, irregularly-shaped dielectric objects.02-07-2013
20130033572OPTIMIZING USAGE OF IMAGE SENSORS IN A STEREOSCOPIC ENVIRONMENT - The invention is directed to systems, methods and computer program products for optimizing usage of image sensors in a stereoscopic environment. The method includes: (a) providing a first image sensor, where the first image sensor is associated with a first image sensor area and a first imaging area; (b) determining a distance from the camera to an object to be captured; and (c) shifting the first imaging area along a length of the first image sensor area, where the amount of the shifting is based at least partially on the distance from the camera to the object, and where the first imaging area can shift along an entire length of the first image sensor area. The invention optimizes usage of an image sensor by permitting an increase in disparity control. Additionally, the invention reduces the closest permissible distance of an object to be captured using a stereoscopic camera.02-07-2013
20100045779THREE-DIMENSIONAL VIDEO APPARATUS AND METHOD OF PROVIDING ON SCREEN DISPLAY APPLIED THERETO - A three-dimensional (3D) video apparatus and a method of providing an OSD object applied thereto are provided. The 3D video apparatus includes an on-screen display (OSD) generation unit which receives an OSD object and generates a reduced OSD object to be displayed on the 3D image on a screen, wherein the reduced OSD object is smaller than the received OSD object. An OSD insertion unit inserts the reduced OSD object into input 3D image data.02-25-2010
20130135439STEREOSCOPIC IMAGE GENERATING DEVICE AND STEREOSCOPIC IMAGE GENERATING METHOD - A stereoscopic image generating device includes: a correction parameter calculating unit that calculates correction parameters based on a plurality of pairs of feature points corresponding to the same points on the object, from the first image and a second image photographing the object; a correction error calculating unit that, for each pair of feature points, corrects the position of the feature point on at least one image, using the correction parameters, and calculates the amount of correction error; a maldistribution degree calculating unit that finds the degree of maldistribution of feature points; a threshold value determining unit that determines a threshold value such that the threshold value is smaller when the degree of maldistribution increases; and a correction unit that, when the amount of correction error is equal to or lower than the threshold value, corrects the position of the object in the images using the correction parameters.05-30-2013
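The key idea above is that the acceptance threshold shrinks as the matched feature points become more unevenly distributed. A minimal sketch follows; the maldistribution measure used here (centroid offset from the image centre) and the linear threshold schedule are assumptions, since the abstract does not commit to particular formulas. Feature points are assumed to be (x, y) pixel coordinates.

```python
import numpy as np

def maldistribution_degree(points, image_shape):
    """Rough degree of maldistribution of feature points: how far the centroid
    of the matched points sits from the image centre, normalised by the
    half-diagonal (0 = centred, 1 = at a corner). points: (N, 2) array."""
    h, w = image_shape
    centre = np.array([w / 2.0, h / 2.0])
    offset = np.linalg.norm(points.mean(axis=0) - centre)
    return offset / np.linalg.norm(centre)

def correction_threshold(degree, base=2.0, min_threshold=0.3):
    """Threshold value that becomes smaller as the degree of maldistribution grows."""
    return max(min_threshold, base * (1.0 - degree))

def apply_if_reliable(correction_error, degree):
    """Correct the image positions only when the correction error does not
    exceed the maldistribution-dependent threshold."""
    return correction_error <= correction_threshold(degree)
```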
20120212582Systems and methods for monitoring caregiver and patient protocol compliance - A system and methods are provided for facilitating, monitoring and recording caregiver and patient compliance with established hospital hand hygiene protocols. The system comprises a 3-D imaging and monitoring assembly and an optional intelligent programmable monitor/sanitizer. Three dimensional imagery tracks a caregiver's movements and location while generating a representative image value. Information acquired by the imaging system determines the proximity of a caregiver to the patient and/or contamination source and determines if the sanitizers provided have been utilized and if so, at an appropriate time and distance from the patient per hospital protocol. While being monitored, a representative Avatar based on physical characteristics derived from three dimensional images of the caregiver and patient may be generated so as to maintain anonymity of both unless a violation of institutional protocol occurs which may be forensically recorded in real-time for analysis.08-23-2012
20120212580COMPUTER-READABLE STORAGE MEDIUM HAVING DISPLAY CONTROL PROGRAM STORED THEREIN, DISPLAY CONTROL APPARATUS, DISPLAY CONTROL SYSTEM, AND DISPLAY CONTROL METHOD - A display control apparatus displays, by a virtual stereo camera taking an image of a virtual three-dimensional space in which a player object is positioned, a stereoscopically viewable image of the virtual three-dimensional space. At this time, when an object distance represents a distance from a point of view position of the virtual stereo camera to the player object, and a stereoscopic view reference distance represents a distance from the point of view position of the virtual stereo camera to a reference plane corresponding to a position at which a parallax is not generated when the image of the virtual three-dimensional space is taken by the virtual stereo camera, a camera parameter is set based on a stereoscopic view ratio which is a ratio of the stereoscopic view reference distance to the object distance. The stereoscopically viewable image is generated based on the camera parameter.08-23-2012
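The camera-parameter rule here hinges on a single quantity, the stereoscopic view ratio. The sketch below computes that ratio and shows one illustrative way a virtual inter-camera distance might be derived from it; the scaling and clamping are assumptions, not the mapping used in the application.

```python
def stereo_view_ratio(reference_distance, object_distance):
    """Ratio of the stereoscopic-view reference distance (distance to the
    zero-parallax reference plane) to the distance from the virtual stereo
    camera to the player object."""
    return reference_distance / object_distance

def camera_separation(ratio, base_separation=1.0, max_separation=2.0):
    """Scale a base inter-camera distance by the ratio and clamp it;
    this particular mapping is illustrative only."""
    return min(max_separation, base_separation * ratio)
```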
201300275203D IMAGE RECORDING DEVICE AND 3D IMAGE SIGNAL PROCESSING DEVICE - A 3D image signal processing device performs a signal processing on at least one image signal of a first viewpoint signal as an image signal generated at a first viewpoint and a second viewpoint signal as an image signal generated at a second viewpoint different from the first viewpoint. The device includes an image processor that executes a predetermined image processing on at least one image signal of the first viewpoint signal and the second viewpoint signal, and a controller that controls the image processor. The controller controls the image processor to perform a feathering process on at least one image signal of the first viewpoint signal and the second viewpoint signal, the feathering process being a process for smoothing pixel values of pixels positioned on a boundary between an object included in the image represented by the at least one image signal and an image adjacent to the object.01-31-2013
20130027519AUTOMATED THREE DIMENSIONAL MAPPING METHOD - An automated three dimensional mapping method estimating three dimensional models taking advantage of a plurality of images. Positions and attitudes for at least one camera are recorded when images are taken. The at least one camera is geometrically calibrated to indicate the direction of each pixel of an image. A stereo disparity is calculated for a plurality of image pairs covering a same scene position setting a disparity and a certainty measure estimate for each stereo disparity. The different stereo disparity estimates are weighted together to form a 3D model. The stereo disparity estimates are reweighted automatically and adaptively based on the estimated 3D model.01-31-2013
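The weighting and automatic re-weighting of disparity estimates can be sketched for a single scene position. Gaussian down-weighting of residuals is an assumption; the abstract only states that the estimates are reweighted adaptively against the estimated 3D model.

```python
import numpy as np

def fuse_disparities(disparities, certainties, iterations=3, sigma=1.0):
    """Weight several stereo disparity estimates of one scene position together
    using their certainty measures, then re-weight them against the fused value
    so that estimates disagreeing with the current 3D model are down-weighted.

    disparities : (N,) disparity estimates for one ground position
    certainties : (N,) certainty measures produced with each estimate"""
    estimates = np.asarray(disparities, dtype=float)
    certainties = np.asarray(certainties, dtype=float)
    weights = certainties.copy()
    fused = np.average(estimates, weights=weights)
    for _ in range(iterations):
        residuals = estimates - fused
        weights = certainties * np.exp(-(residuals / sigma) ** 2)
        fused = np.average(estimates, weights=weights)
    return fused
```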
20130027517METHOD AND APPARATUS FOR CONTROLLING AND PLAYING A 3D IMAGE - A 3D image playing apparatus is provided. The 3D image playing apparatus includes a plurality of speakers which output a plurality of test sounds, a receiver which receives the test sounds output from the plurality of speakers, a location detector which detects a location of the receiver by receiving feedback of the test sounds received at the receiver and analyzing the test sounds, a 3D processor which adjusts a 3D effect of a 3D image according to the location detected by the location detector, and a display unit which outputs the 3D image having the 3D effect adjusted by the 3D processor.01-31-2013
20130027518MICROSCOPE STABILITY USING A SINGLE OPTICAL PATH AND IMAGE DETECTOR - Stabilization, via active-feedback positional drift-correction, of an optical microscope imaging system in up to 3-dimensions is achieved using the optical measurement path of an image sensor. Nanometer-scale stability of the imaging system is accomplished by correcting for positional drift using fiduciary references sparsely distributed within or in proximity to the experimental sample.01-31-2013
20130033579PROCESSING MULTI-APERTURE IMAGE DATA - A method and a system for processing multi-aperture image data is described wherein the method comprises: capturing image data associated of one or more objects by simultaneously exposing an image sensor in an imaging system to spectral energy associated with at least a first part of the electromagnetic spectrum using at least a first aperture and to spectral energy associated with at least a second part of the electromagnetic spectrum using at least a second aperture; generating first image data associated with said first part of the electromagnetic spectrum and second image data associated with said second part of the electro-magnetic spectrum; and, generating depth information associated with said captured image on the basis of first sharpness information in at least one area of said first image data and second sharpness information in at least one area of said second image data.02-07-2013
20130033580THREE-DIMENSIONAL VISION SENSOR - Enabling height recognition processing by setting a height of an arbitrary plane to zero for convenience of the recognition processing. A parameter for three-dimensional measurement is calculated and registered through calibration and, thereafter, an image pickup with a stereo camera is performed on a plane desired to be recognized as having a height of zero in actual recognition processing. Three-dimensional measurement using the registered parameter is performed on characteristic patterns (marks m02-07-2013
20130033578PROCESSING MULTI-APERTURE IMAGE DATA - A method and a system for processing multi-aperture image data are described, wherein the method comprises: capturing image data associated with one or more objects by simultaneously exposing an image sensor in an imaging system to spectral energy associated with at least a first part of the electromagnetic spectrum using at least a first aperture and to spectral energy associated with at least a second part of the electromagnetic spectrum using at least a second and third aperture; generating first image data associated with said first part of the electromagnetic spectrum and second image data associated with said second part of the electromagnetic spectrum; and, generating depth information associated with said captured image on the basis displacement information in said second image data, preferably on the basis of displacement information in an auto-correlation function of the high-frequency image data associated with said second image data.02-07-2013
20130033571METHOD AND SYSTEM FOR CROPPING A 3-DIMENSIONAL MEDICAL DATASET - A method and gesture-based control system for manipulating a 3-dimensional medical dataset include translating a body part, detecting the translation of the body part with a camera system. The method and system include translating a crop plane in the 3-dimensional medical dataset based on the translating the body part. The method and system include cropping the 3-dimensional medical dataset at the location of the crop plane after translating the crop plane and displaying the cropped 3-dimensional medical dataset using volume rendering.02-07-2013
20100066813STEREO PROJECTION WITH INTERFERENCE FILTERS - The invention relates to a stereo projection system and a method for generating an optically perceptible three-dimensional pictorial reproduction. For each of the two perspective partial images (left or right) of the stereo image, regions of the visible spectrum, which are defined differently by colour filters, are masked in such a way that a plurality of only limited spectral intervals is transmitted in the region of the colour perception blue (B), green (G), and red (R). The position of the transmitted intervals is selected differently for the two perspective partial images. The number of transmitted intervals for the two perspective partial images is, according to the invention, selected as lower than 6 (b) either for the image generation or for the image detection by the stereo glasses, and equal to 6 (a). In the event of a reduced number (b), at least one transmitted interval for one of the perspective partial images is selected in transmission in the region of two colour perceptions blue (B), green (G) or red (R), and created by right-left permutation and subsequent combination with an adjacent interval. According to the permutated intervals, the associated image data is analogously permutated. In this way, the cost of the filters, especially interference filters for the stereo projection system or for the method for producing an optically perceptible, three-dimensional pictorial reproduction, is significantly reduced without considerably affecting the reproduction quality. The unpleasant flickering is also reduced.03-18-2010
20100328432IMAGE REPRODUCING APPARATUS, IMAGE CAPTURING APPARATUS, AND CONTROL METHOD THEREFOR - An image reproducing apparatus for reproducing a stereoscopic image shot by a stereoscopic image capturing apparatus, the image reproducing apparatus comprises: an input unit which inputs image data of the stereoscopic image and additional data recorded in association with the image data; an acquisition unit which acquires depth information indicating a depth of a point of interest in the stereoscopic image set, during shooting, on the basis of the additional data; a generation unit which generates images to be superimposed on right and left images of the stereoscopic image, the images to be superimposed having parallax corresponding to the depth indicated by the depth information, on the basis of the depth information; and a display output unit which combines the right and left images of the stereoscopic image with the images to be superimposed, and outputs the combined right and left images of the stereoscopic image to a display apparatus.12-30-2010
20100328430LENS MODULE FOR FORMING STEREO IMAGE - The disclosure provides a lens module for forming a stereo image. The lens module includes a point light source, a two-dimensional scanning unit, a camera sensor unit, and a data processing unit. The two-dimensional scanning unit is configured for controlling the light from the point light source to project onto an object to obtain image points, which are reflected and arrayed in a matrix on the object, and for scanning the image points. The camera sensor unit is configured for receiving the light reflected by the object and capturing an image of the image points. The data processing unit is configured for receiving the image from the camera sensor unit and performing an analysis on the image to obtain depth information of the object.12-30-2010
20120182395THREE-DIMENSIONAL IMAGING DEVICE AND OPTICAL TRANSMISSION PLATE - A 3D image capture device includes: a light-transmitting section 07-19-2012
201201823943D IMAGE SIGNAL PROCESSING METHOD FOR REMOVING PIXEL NOISE FROM DEPTH INFORMATION AND 3D IMAGE SIGNAL PROCESSOR THEREFOR - A three-dimensional (3D) image signal processing method increases signal-to-noise ratio by performing pixel binning on depth information obtained by a 3D image sensor, without changing a filter array detecting the depth information. The processing method may be used in a 3D image signal processor, and a 3D image processing system including the 3D image signal processor.07-19-2012
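Pixel binning of a depth map is easy to illustrate: average each block of neighbouring depth pixels to raise the signal-to-noise ratio at the cost of spatial resolution. The sketch assumes the map dimensions are divisible by the binning factor; the factor of 2 is illustrative.

```python
import numpy as np

def bin_depth(depth, factor=2):
    """Average each factor x factor block of a 2-D depth map (pixel binning)."""
    h, w = depth.shape
    blocks = depth.reshape(h // factor, factor, w // factor, factor)
    return blocks.mean(axis=(1, 3))
```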
20120182393PORTABLE APPARATUS AND MICROCOMPUTER - The data processing unit generates image data such that the camera unit is caused to acquire a plurality of data captured with a focal length changed in response to instructions for imaging operation by an operation unit and three-dimensional display data are generated from the plurality of captured data based on the correlation of focused images which are different according to the focal lengths of the acquired plurality of captured data with the focal length thereof. Since each of the plurality of data captured with a focal length changed is different in a focused image according to the focal length, the plurality of captured data is subjected to the processing for generating three-dimensional display data based on the correlation of a focused image different according to the focal length with the focal length to allow the three-dimensional display data to be generated.07-19-2012
20120182392Mobile Human Interface Robot - A method of object detection for a mobile robot includes emitting a speckle pattern of light onto a scene about the robot while maneuvering the robot across a work surface, receiving reflections of the emitted speckle pattern off surfaces of a target object in the scene, determining a distance of each reflecting surface of the target object, constructing a three-dimensional depth map of the target object, and classifying the target object.07-19-2012
20120182391DETERMINING A STEREO IMAGE FROM VIDEO - A method of producing a stereo image from a digital video includes receiving a digital video including a plurality of digital images captured by an image capture device; and using a processor to produce stereo suitability scores for at least two digital images from the plurality of digital images. The method further includes selecting a stereo candidate image based on the stereo suitability scores; producing a stereo image from the selected stereo candidate image wherein the stereo image includes the stereo candidate image and an associated stereo companion image based on the plurality of digital images from the digital video; and storing the stereo image whereby the stereo image can be presented for viewing by a user.07-19-2012
20120182390Counting system for vehicle riders - There is provided a system and method for counting riders arbitrarily positioned in a vehicle. There is provided a method comprising receiving, from at least one camera filtered to capture non-visible light, video data corresponding to the vehicle passing through a light source filtered for non-visible light, converting the video data into a 3D height map, and analyzing the 3D height map to determine a number of riders in the vehicle. The camera and light source may be mounted in a permanent position using a gantry or another suitable system where the vehicle travels across the camera and light system in a determined manner, for example through a vehicle track. Multiple cameras may be used to increase detection accuracy. To detect persons in the 3D height map, the analysis may search for height patterns indicating heads and shoulders of persons, compare against height map templates, or use machine-learning methods.07-19-2012
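One of the detection strategies mentioned above (searching the 3D height map for height patterns indicating heads) can be approximated with a simple blob count. The thresholds below are illustrative assumptions, and a real system would also use the template-matching or machine-learning approaches the abstract names.

```python
import numpy as np
from scipy import ndimage

def count_riders(height_map, min_head_height=0.8, min_blob_pixels=40):
    """Count riders in a 3D height map of a vehicle interior by counting
    connected regions that rise above the seat level by a head-like height."""
    heads = height_map > min_head_height
    labels, n = ndimage.label(heads)
    sizes = ndimage.sum(heads, labels, index=np.arange(1, n + 1))
    return int(np.count_nonzero(sizes >= min_blob_pixels))
```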
20130070056METHOD AND APPARATUS TO MONITOR AND CONTROL WORKFLOW - A method to monitor a vehicle inspection workflow, includes (i) identifying an inspector in a vehicle inspection area, (ii) determining an inspection procedure being performed by the inspector, the inspection procedure being at least a portion of a vehicle inspection, (iii) determining a workflow rule that corresponds to the inspection procedure, the workflow rule including at least one workflow limit, (iv) generating workflow data by monitoring the inspector within the inspection area, the workflow data being determined from recorded video data, (v) determining whether the inspector violated the workflow rule by comparing the workflow data to the at least one workflow limit of the workflow rule, and (vi) indicating that the inspector did not adequately perform the inspection procedure if it is determined that the inspector violated the workflow rule.03-21-2013
20130070058SYSTEMS AND METHODS FOR TRACKING A MODEL - An image such as a depth image of a scene may be received, observed, or captured by a device. A grid of voxels may then be generated based on the depth image such that the depth image may be downsampled. A model may be adjusted based on a location or position of one or more extremities estimated or determined for a human target in the grid of voxels. The model may also be adjusted based on a default location or position of the model in a default pose such as a T-pose, a DaVinci pose, and/or a natural pose.03-21-2013
20130070055SYSTEM AND METHOD FOR IMPROVING METHODS OF MANUFACTURING STEREOSCOPIC IMAGE SENSORS - Described herein are methods, systems and apparatus to improve imaging sensor production yields. In one method, a stereoscopic image sensor pair is provided from a manufacturing line. One or more images of a correction pattern are captured by the image sensor pair. Correction angles of the sensor pair are determined based on the images of the correction pattern. The correction angles of the sensor pair are represented graphically in a three dimensional space. Analysis of the graphical representation of the correction angles through statistical processing results in a set of production correction parameters that may be input into a manufacturing line to improve sensor pair yields.03-21-2013
20130070057METHOD AND SYSTEM FOR GENERATING A HIGH RESOLUTION IMAGE - A method for generating an image is provided. The method includes estimating a high resolution image from a plurality of low resolution images and downsampling the estimated high resolution image to obtain estimates of a plurality of low resolution images. The method also includes generating a desired high resolution image based upon comparison of the downsampled low resolution images and the plurality of low resolution images.03-21-2013
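The estimate–downsample–compare loop described here is essentially iterative back-projection. A minimal aligned-image version (ignoring the sub-pixel shifts and blur kernels a real pipeline would model, and assuming image sizes divisible by the factor) could look like this:

```python
import numpy as np

def downsample(img, factor):
    """Average-pool a 2-D image by the given factor."""
    h, w = img.shape
    return img.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

def upsample(img, factor):
    """Nearest-neighbour expansion by the given factor."""
    return np.repeat(np.repeat(img, factor, axis=0), factor, axis=1)

def super_resolve(low_res_images, factor=2, iterations=10, step=0.5):
    """Estimate a high-resolution image, downsample the estimate, and push the
    residual against each observed low-resolution image back into the estimate."""
    high = upsample(np.mean(low_res_images, axis=0), factor)
    for _ in range(iterations):
        for low in low_res_images:
            residual = low - downsample(high, factor)
            high = high + step * upsample(residual, factor)
    return high
```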
20130070053IMAGE CAPTURING DEVICE AND IMAGE CAPTURING METHOD OF STEREO MOVING IMAGE, AND DISPLAY DEVICE, DISPLAY METHOD, AND PROGRAM OF STEREO MOVING IMAGE - If there is a discrepancy between a capturing system and a display system of a stereoscopic moving image, the phenomena, such as that the contour of a motion area of an image looks double, may occur, thus posing a problem in the quality of a reproduced image. An image capturing device capturing a stereoscopic moving image includes: an image capturing unit configured to capture a right-eye moving image and a left-eye moving image constituting the stereoscopic moving image, respectively; a unit configured to set an image capturing mode corresponding to the display system of a display device displaying the stereoscopic moving image; and a synchronous signal control unit configured to supply to the image capturing unit a synchronous signal used for capturing the right-eye moving image and the left-eye moving image, respectively, and control a phase of the synchronous signal to supply in accordance with the set image capturing mode.03-21-2013
20130070054IMAGE PROCESSING APPARATUS, FLUORESCENCE MICROSCOPE APPARATUS, AND IMAGE PROCESSING PROGRAM - A three-dimensional image without luminance irregularity is generated while achieving good contrast. An image processing apparatus is provided, including an image combining portion that generates combined images by combining, for each depth position in a specimen, a plurality of fluorescence images captured with differing exposure levels at each of different depth positions of the specimen; a smoothed-luminance calculating portion that calculates a representative luminance from the individual combined images and that calculates a smoothed luminance for the individual combined images by smoothing the calculated representative luminance in the depth direction; a luminance correcting portion that generates corrected images by correcting the luminance of the individual combined images on the basis of differences between the smoothed luminance and the representative luminance calculated; and a three-dimensional image generating portion that generates a three-dimensional image of the specimen from the plurality of corrected images.03-21-2013
20100265317IMAGE PICKUP APPARATUS - An image pickup apparatus of the present invention includes: a photographing section that can photograph a subject from a plurality of viewpoints with parallax, and can photograph a 2D moving image of the subject obtained by photographing from at least one of the viewpoints and a 3D image of the subject obtained by photographing from the plurality of the viewpoints; a recording section that records the 2D moving image and the 3D image; a subject situation determination section that determines a timing suitable for photographing the 3D image while photographing the 2D moving image; and a photographing control section that controls the photographing section so as to photograph the 3D image when the subject situation determination section determines that the timing is suitable for photographing the 3D image.10-21-2010
20130088575METHOD AND APPARATUS FOR OBTAINING DEPTH INFORMATION USING OPTICAL PATTERN - Provided is an apparatus and method for obtaining depth information using an optical pattern. The apparatus for obtaining depth information using the optical pattern may include: a pattern projector to generate the optical pattern using a light source and an optical pattern projection element (OPPE), and to project the optical pattern towards an object area, the OPPE comprising a pattern that includes a plurality of pattern descriptors; an image obtaining unit to obtain an input image by photographing the object area; and a depth information obtaining unit to measure a change in a position of at least one of the plurality of pattern descriptors in the input image, and to obtain depth information of the input image based on the change in the position.04-11-2013
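Measuring the change in position of a pattern descriptor and converting it into depth is, in the usual structured-light setup, a triangulation step. The sketch below assumes a rectified projector-camera pair with a known baseline and a focal length expressed in pixels; none of these specifics come from the application.

```python
def depth_from_pattern_shift(shift_px, focal_px, baseline_m):
    """Triangulated depth under an assumed rectified projector-camera geometry.

    shift_px   : observed displacement of a pattern descriptor, in pixels
    focal_px   : camera focal length expressed in pixels (assumption)
    baseline_m : projector-to-camera baseline in metres (assumption)
    """
    if shift_px <= 0:
        raise ValueError("shift must be positive for a finite depth")
    return focal_px * baseline_m / shift_px

# e.g. a descriptor shifted by 12 px with f = 600 px and a 75 mm baseline
print(depth_from_pattern_shift(12.0, 600.0, 0.075))  # ~3.75 m
```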
20130088577MOBILE DEVICE, SERVER ARRANGEMENT AND METHOD FOR AUGMENTED REALITY APPLICATIONS - A mobile device (04-11-2013
20130088574Detective Adjusting Apparatus for Stereoscopic Image and Related Method - A detective adjusting apparatus for a stereoscopic image and a related method is disclosed in the present invention. The apparatus includes an image capturing device, an image processing device, a stereoscopic display and an image analyzing/displaying device. The present invention utilizes the method to calculate an angle between eyes' position and a central position according to the face position, to determine the stereoscopic image by analyzing parameters of the stereoscopic image and the stereoscopic display, and to display the continuous stereoscopic image having at least two viewpoints, so the a viewer can watch the stereoscopic image clearly. The present invention is applied to an optical grating or an auto-stereoscopic screen made by lens.04-11-2013
20130088576OPTICAL TOUCH SYSTEM - An image system comprises a light source, an image sensing device, and a computing apparatus. The light source is configured to illuminate an object comprising at least one portion. The image sensing device is configured to generate a picture comprising an image. The image is produced by the object and comprises at least one part corresponding to the at least one portion of the object. The computing apparatus is configured to determine an intensity value representing the at least one part and to determine at least one distance between the at least one portion and the image sensing device using the intensity value and a dimension of the at least one part of the image.04-11-2013
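One plausible reading of "determine a distance using the intensity value" is an inverse-square falloff model calibrated at a known reference distance. The sketch below encodes that assumption; the application itself does not specify the model.

```python
import math

def distance_from_intensity(observed, reference_intensity, reference_distance):
    """Inverse-square model: I(d) = I_ref * (d_ref / d)^2, solved for d.

    reference_intensity and reference_distance come from a one-point calibration
    (an assumption; the listed application does not specify the model).
    """
    return reference_distance * math.sqrt(reference_intensity / observed)

# calibrated: intensity 200 at 0.5 m; an object reflecting intensity 50 is then ~1.0 m away
print(distance_from_intensity(50.0, 200.0, 0.5))
```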
20130088573METHODS FOR CONTROLLING SCENE, CAMERA AND VIEWING PARAMETERS FOR ALTERING PERCEPTION OF 3D IMAGERY - Mathematical relationships between the scene geometry, camera parameters, and viewing environment are used to control stereography to obtain various results influencing the viewer's perception of 3D imagery. The methods may include setting a horizontal shift, convergence distance, and camera interaxial parameter to achieve various effects. The methods may be implemented in a computer-implemented tool for interactively modifying scene parameters during a 2D-to-3D conversion process, which may then trigger the re-rendering of the 3D content on the fly.04-11-2013
20120218386Systems and Methods for Comprehensive Focal Tomography - A method and system for forming a three-dimensional image of a three-dimensional scene using a two-dimensional image sensor are disclosed. Formation of a three-dimensional image is enabled by locating a coded aperture in an image field provided by a collector lens, wherein the coded aperture modulates the image field to form a modulated image at the image sensor. The three-dimensional image is reconstructed by deconvolving the modulation code from the image data, thereby enabling high-resolution images to be formed at a plurality of focal ranges.08-30-2012
20130057651METHOD AND SYSTEM FOR POSITIONING OF AN ANTENNA, TELESCOPE, AIMING DEVICE OR SIMILAR MOUNTED ONTO A MOVABLE PLATFORM - Method and system for positioning an antenna (03-07-2013
20130057654METHOD AND SYSTEM TO SEGMENT DEPTH IMAGES AND TO DETECT SHAPES IN THREE-DIMENSIONALLY ACQUIRED DATA - A method and system analyzes data acquired by image systems to more rapidly identify objects of interest in the data. In one embodiment, z-depth data are segmented such that neighboring image pixels having similar z-depths are given a common label. Blobs, or groups of pixels with a same label, may be defined to correspond to different objects. Blobs preferably are modeled as primitives to more rapidly identify objects in the acquired image. In some embodiments, a modified connected component analysis is carried out where image pixels are pre-grouped into regions of different depth values preferably using a depth value histogram. The histogram is divided into regions and image cluster centers are determined. A depth group value image containing blobs is obtained, with each pixel being assigned to one of the depth groups.03-07-2013
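The pre-grouping of pixels into depth regions using a depth-value histogram, followed by blob labeling, can be sketched as below. Binning by histogram edges and using scipy.ndimage for the connected-component step are implementation choices made here for illustration, not details taken from the application.

```python
import numpy as np
from scipy import ndimage

def segment_depth_image(depth, n_bins=8):
    """Group pixels by depth-histogram bin, then label connected blobs within each group."""
    valid = depth > 0                                        # zero depth treated as "no return"
    edges = np.histogram_bin_edges(depth[valid], bins=n_bins)
    groups = np.digitize(depth, edges[1:-1])                 # depth-group value per pixel
    groups[~valid] = -1

    blobs = np.zeros_like(groups)
    next_label = 1
    for g in range(n_bins):
        labeled, count = ndimage.label(groups == g)          # connected components in this group
        blobs[labeled > 0] = labeled[labeled > 0] + next_label - 1
        next_label += count
    return groups, blobs

# toy depth map: a near square object on a far background
depth = np.full((60, 80), 4.0)
depth[20:40, 30:50] = 1.2
groups, blobs = segment_depth_image(depth)
print(int(blobs.max()), "blob(s) found")
```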
20130057653APPARATUS AND METHOD FOR RENDERING POINT CLOUD USING VOXEL GRID - A method for rendering a point cloud using a voxel grid includes generating a bounding box containing all of the point cloud and dividing the generated bounding box into voxels to make the voxel grid; and allocating at least one texture plane to each of the voxels of the voxel grid. Further, the method includes orthogonally projecting points within each voxel onto the allocated texture planes to generate texture images; and rendering each voxel of the voxel grid by selecting one of the texture planes within the voxel using the central position of the voxel and the 3D camera position, and rendering using the texture images corresponding to the selected texture plane.03-07-2013
20130057652Handheld Scanning Device - A handheld, cordless scanning device for the three-dimensional image capture of patient anatomy without the use of potentially hazardous lasers, optical reference targets for frame alignment, magnetic reference receivers, or the requirement that the scanning device be plugged in while scanning. The device generally includes a housing having a front end and a rear end. The rear end includes a handle and trigger. The front end includes a pattern projector for projecting a unique pattern onto a target object and a camera for capturing live video of the projected pattern as it is deformed around the object. The front end of the housing also includes a pair of focus beam generators and an indexing beam generator. By utilizing data collected with the present invention, patient anatomy such as anatomical features and residual limbs may be digitized to create accurate three-dimensional representations which may be utilized in combination with computer-aided-drafting programs.03-07-2013
20130057650OPTICAL GAGE AND THREE-DIMENSIONAL SURFACE PROFILE MEASUREMENT METHOD - An optical gage (03-07-2013
20120307014METHOD AND APPARATUS FOR ULTRAHIGH SENSITIVE OPTICAL MICROANGIOGRAPHY - Embodiments herein provide an ultrahigh sensitive optical microangiography (OMAG) system that provides high sensitivity to slow flow information, such as that found in blood flow in capillaries, while also providing a relatively low data acquisition time. The system performs a plurality of fast scans (i.e., B-scans) on a fast scan axis, where each fast scan includes a plurality of A-scans. At the same time, the system performs a slow scan (i.e., C-scan), on a slow scan axis, where the slow scan includes the plurality of fast scans. A detector receives the spectral interference signal from the sample to produce a three dimensional (3D) data set. An imaging algorithm is then applied to the 3D data set in the slow scan axis to produce at least one image of the sample. In some embodiments, the imaging algorithm may separate flow information from structural information of the sample.12-06-2012
20100097444Camera System for Creating an Image From a Plurality of Images - Methods and apparatus to create and display stereoscopic and panoramic images are disclosed. Apparatus is provided to control the position of a lens in relation to a reference lens. Methods and apparatus are provided to generate multiple images that are combined into a stereoscopic or a panoramic image. An image may be a static image. It may also be a video image. A controller provides correct camera settings for different conditions. An image processor creates a stereoscopic or a panoramic image from the correct settings provided by the controller. A panoramic video wall system is also disclosed.04-22-2010
20090268014Method and apparatus for generating a stereoscopic image - A method of generating a stereoscopic image is disclosed. The method includes defining at least two regions in a scene, representing a region of interest, a near region and/or a far region. This is followed by forming an image pair for each region, each image pair containing the information relating to objects in or partially in its respective region. The perceived depth within the regions is altered to provide the ideal or best perceived depth within the region of interest and acceptable or more compressed perceived depths in the other regions. The image pairs are then mapped together to form a display image pair for viewing on a display device.10-29-2009
20090268013STEREO CAMERA UNIT - An adjuster plate is provided between a front rail and a camera unit body having cameras. Pre-dimensioned positioning pins protrude from upper and lower surfaces of the adjuster plate. The positioning pins protruding from the upper surface of the adjuster plate are positioned by being fitted in pin fitting holes provided in the front rail. The positioning pins protruding from the lower surface of the adjuster plate are positioned by being fitted in pin fitting holes provided in the camera unit body. Even when the positions of the pin fitting holes in the front rail are changed, it is possible to cope with the change by only changing the protruding positions of the positioning pins.10-29-2009
20120224027STEREO IMAGE ENCODING METHOD, STEREO IMAGE ENCODING DEVICE, AND STEREO IMAGE ENCODING PROGRAM - The stereo image encoding device 09-06-2012
20130063566DETERMINING A DEPTH MAP FROM IMAGES OF A SCENE - A technique determines a depth measurement associated with a scene captured by an image capture device. The technique receives at least first and second images of the scene, in which the first image is captured using at least one different camera parameter than that of the second image. At least first and second image patches are selected from the first and second images, respectively, the selected patches corresponding to a common part of the scene. The selected image patches are used to determine which of the selected image patches provides a more focused representation of the common part. At least one value is calculated based on a combination of data in the first and second image patches, the combination being dependent on the more focused image patch. The depth measurement of the common part of the scene is determined from the at least one calculated value.03-14-2013
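Deciding which of two corresponding patches is the more focused representation is commonly done with a local sharpness measure. The sketch below uses the variance of a 4-neighbour Laplacian as an assumed focus metric; the application does not name a specific measure.

```python
import numpy as np

def laplacian_variance(patch):
    """Simple sharpness measure: variance of a 4-neighbour Laplacian (assumed focus metric)."""
    lap = (-4 * patch[1:-1, 1:-1]
           + patch[:-2, 1:-1] + patch[2:, 1:-1]
           + patch[1:-1, :-2] + patch[1:-1, 2:])
    return lap.var()

def more_focused(patch_a, patch_b):
    """Return 0 if patch_a is sharper, 1 if patch_b is sharper."""
    return 0 if laplacian_variance(patch_a) >= laplacian_variance(patch_b) else 1

# toy usage: a textured patch versus a box-blurred copy of it
rng = np.random.default_rng(1)
sharp = rng.random((32, 32))
blurred = (sharp[:-1, :-1] + sharp[1:, :-1] + sharp[:-1, 1:] + sharp[1:, 1:]) / 4.0
print(more_focused(sharp, blurred))   # expected: 0 (the un-blurred patch)
```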
20130063565INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, PROGRAM, AND INFORMATION PROCESSING SYSTEM - Provided is an information processing apparatus including a correction unit that corrects at least position coordinates used to specify a position of an enlarged-image group generated in accordance with a scanning method of a microscope that scans a sample in vertical, horizontal, and depth directions and generates an image data group corresponding to the enlarged-image group of the sample, and a stereoscopic image generation unit that generates a stereoscopic image of the enlarged-image group by giving parallax to the corrected image data group.03-14-2013
20130063567PORTAL WITH RFID TAG READER AND OBJECT RECOGNITION FUNCTIONALITY, AND METHOD OF UTILIZING SAME - An RFID/object recognition system monitors the passage of an object through a portal into a space. An RFID reader adjacent the portal communicates with an RFID tag within a preselected distance from the RFID reader. A data processor processes data from the RFID reader. A 3-dimensional scanner has an RGB camera and a depth sensor with an infrared laser projector and a monochrome CMOS sensor. An infrared laser controller is electronically coupled with the infrared laser projector, and a monochrome CMOS processor is electronically coupled with the monochrome CMOS sensor. The infrared laser controller, monochrome CMOS processor, and RGB camera are electronically coupled with a processor. The RFID reader receives data from an RFID tag when an RFID-tagged object passes within the preselected distance from the RFID reader through the portal. The 3-dimensional object recognition assembly identifies where the RFID-tagged object is located within the defined space.03-14-2013
20130063561VIRTUAL ADVERTISING PLATFORM - In embodiments, a virtual advertising platform may use a three-dimensional mapping algorithm to insert a virtual image within a digital video stream. The virtual advertising platform may apply a three-dimensional mapping algorithm to the virtual digital image, wherein the three-dimensional mapping algorithm causes the virtual digital image to be recomposited within a plurality of frames within a received two-dimensional digital data feed in place of a spatial region within the two-dimensional data feed. The mapping algorithm may enable application of analogous geometric changes to the virtual digital image that are present in the spatial region within the plurality of video frames within the two-dimensional digital video data feed, and may send the recomposited digital data feed for display to a user, wherein the recomposited digital data feed is a virtualized digital data feed that includes the virtual digital image in place of the spatial region.03-14-2013
20130063569IMAGE-CAPTURING APPARATUS AND IMAGE-CAPTURING METHOD - This invention relates to capturing an image of a subject as a three-dimensional image using a single image-capturing apparatus. The image-capturing apparatus includes a first polarization means, a lens system, and an image-capturing device array having a second polarization means. The first polarization means includes first and second regions arranged along a first direction, and the second polarization means includes multiple third and fourth regions arranged alternately along a second direction. First region transmission light having passed the first region passes the third region and reaches the image-capturing device, and second region transmission light having passed the second region passes the fourth region and reaches the image-capturing device. Thus, an image is captured to obtain a three-dimensional image in which a distance between a barycenter BC03-14-2013
20130063564IMAGE PROCESSOR, IMAGE PROCESSING METHOD AND PROGRAM - Disclosed herein is an image processor including: a first emission section for emitting light at a first wavelength to a subject; a second emission section for emitting light at a second wavelength longer than the first wavelength to the subject; an imaging section for capturing an image of the subject; a detection section for detecting a body region representing at least one of the skin and eyes of the subject based on a first captured image acquired by image capture at the time of emission of the light at the first wavelength and a second captured image acquired by image capture at the time of emission of the light at the second wavelength; a calculation section for calculating viewpoint information; and a display control section for controlling a display mechanism adapted to allow the subject to visually recognize an image as a stereoscopic image.03-14-2013
20130063563TRANSPROJECTION OF GEOMETRY DATA - Systems and methods for transprojection of geometry data acquired by a coordinate measuring machine (CMM). The CMM acquires geometry data corresponding to 3D coordinate measurements collected by a measuring probe that are transformed into scaled 2D data that is transprojected upon various digital object image views captured by a camera. The transprojection process can utilize stored image and coordinate information or perform live transprojection viewing capabilities in both still image and video modes.03-14-2013
20130063562METHOD AND APPARATUS FOR OBTAINING GEOMETRY INFORMATION, LIGHTING INFORMATION AND MATERIAL INFORMATION IN IMAGE MODELING SYSTEM - A method and apparatus for obtaining geometry information, material information, and lighting information in an image modeling system are provided. Geometry information, material information, and lighting information of an object may be extracted from a single-view image captured in a predetermined light condition, by applying pixel values defined by a geometry function, a material function, and a lighting function.03-14-2013
20130063560COMBINED STEREO CAMERA AND STEREO DISPLAY INTERACTION - One embodiment of the present invention provides a system that facilitates interaction between a stereo image-capturing device and a three-dimensional (3D) display. The system comprises a stereo image-capturing device, a plurality of trackers, an event generator, an event processor, and a 3D display. During operation, the stereo image-capturing device captures images of a user. The plurality of trackers track movements of the user based on the captured images. Next, the event generator generates an event stream associated with the user movements, before the event processor in a virtual-world client maps the event stream to state changes in the virtual world. The 3D display then displays an augmented reality with the virtual world.03-14-2013
20130063568CAMERA SYSTEM COMPRISING COLOR DISPLAY AND PROCESSOR FOR DECODING DATA BLOCKS IN PRINTED CODING PATTERN - A camera system including: a substrate having a coding pattern printed thereon and03-14-2013
20120194648VIDEO/ AUDIO CONTROLLER - An apparatus for controlling video and/or audio material, the apparatus comprising: at least one sensor that generates a signal responsive to a physiological parameter in a user's body; a processor that receives the signal and determines an emotional state of the user responsive to the signal; and a controller that controls the V/A material in accordance with the emotional state of the user.08-02-2012
20130162780STEREOSCOPIC IMAGING DEVICE AND SHADING CORRECTION METHOD - Provided is a technique for improving the quality of an image obtained by a pupil-division-type stereoscopic imaging device. First and second images obtained by the stereoscopic imaging device according to the present invention have the shading of an object in a pupil division direction. Therefore, when the first and second images are composed, reference data in which shading is cancelled is generated. The amount of shading correction for the first and second images is determined on the basis of the reference data and shading correction is performed on the first and second images on the basis of the determined amount of shading correction.06-27-2013
20130162779IMAGING DEVICE, IMAGE DISPLAY METHOD, AND STORAGE MEDIUM FOR DISPLAYING RECONSTRUCTION IMAGE - A sub-image extractor extracts a target sub-image from a light field image. A partial area definer defines a predetermined area in the target sub-image as a partial area. A pixel extractor extracts pixels from the partial area, the number of pixels meeting correspondence areas of a generation image. The pixel arranger arranges the extracted pixels to the correspondence areas of the generation image in an arrangement according to the optical path of the optical system which photographs the light field image. Pixels are extracted for all sub-images in the light field image, and are arranged to the generation image to generate a reconstruction image.06-27-2013
201301627773D CAMERA MODULE AND 3D IMAGING METHOD USING SAME - A 3D camera module includes first and second imaging units, a storage unit, a color separation unit, a main processor unit, an image processing unit, a driving unit, an image combining unit and two OIS units. The first and second imaging units capture images of an object(s) from different angles. The color separation unit separates the images into red, green and blue colors. The main processor unit calculates MTF values of the images and determines a shooting mode of the 3D camera module. The image processing unit processes the images to compensate for blurring of the images caused by being out of focus. The driving unit drives the first and second imaging units to optimum focusing positions according to the MTF values. The image combining unit combines the images into a 3D image. The OIS units respectively detect and compensate for shaking of the first and second imaging units.06-27-2013
20120229607SYSTEMS AND METHODS FOR PERSISTENT SURVEILLANCE AND LARGE VOLUME DATA STREAMING - In general, the present disclosure relates to persistent surveillance (PS), wide medium and small in area, and large volume data streaming (LVSD), e.g., from orbiting aircraft or spacecraft, and PS/LVSD specific data compression. In certain aspects, the PS specific data or LVSD compression and image alignment may utilize registration of images via accurate knowledge of camera position, pointing, calibration, and a 3D model of the surveillance area. Photogrammetric systems and methods to compress PS specific data or LVSD while minimizing loss are provided. In certain embodiments, to achieve data compression while minimizing loss, a depth model is generated, and imagery is registered to the depth model.09-13-2012
20090009591Image synthesizing apparatus and image synthesizing method - A stereoscopic image supplier acquires stereoscopic image data in a side-by-side layout format. Visual information supplier acquires visual information to be added to a stereoscopic image. Based on a 3D display for displaying a stereoscopic image, a 3D display information supplier acquires the coordinates of portions that are not used for 3D display representation as the coordinates of the pixels with which visual information is combined. An image synthesizer combines visual information obtained at Step S01-08-2009
20120113227APPARATUS AND METHOD FOR GENERATING A FULLY FOCUSED IMAGE BY USING A CAMERA EQUIPPED WITH A MULTI-COLOR FILTER APERTURE - Provided are an apparatus and method for generating a fully focused image. A depth map generation unit generates a depth map of an input image obtained by a multiple color filter aperture (MCA) camera. A channel shifting & alignment unit extracts subimages which include objects with the same focal distance based on the depth map, and performs color channel alignment and removes out-of-focus blurs for each subimage obtained from the depth map. An image fusing unit fuses the subimages to generate a fully focused image.05-10-2012
201201132263D IMAGING DEVICE AND 3D REPRODUCTION DEVICE - A 3D imaging device is provided that includes an identification unit, a parallax information decision unit, a display position decision unit, and a 3D display control unit. The identification unit calculates parallax information with respect to an object image based on a first image signal and a second image signal. The identification unit also sets identification information for identifying an object image. The identification unit further outputs first parallax information and identification information. The parallax information decision unit decides on second parallax information based on the first parallax information so that the identification information is visually recognizable at a depth separate from the object image. The 3D display control unit is coupled to at least one of the identification unit, the parallax information decision unit, and the display position decision unit. The 3D display control unit displays the identification information superimposed on the first and second image signals based on the second parallax information.05-10-2012
20120113225BIOMETRIC MEASUREMENT SYSTEMS AND METHODS - In various embodiments, the present disclosure provides a method of generating crop biometric information in field conditions that includes scanning top surfaces of various plant crown structures of a plurality of plants in one or more rows of plants within a field to collect scan data of the crown structures. Additionally, the method includes converting the scan data into a high spatial resolution 3-dimensional field contour map that illustrates an aggregate 3-dimensional field contour of the scanned plants. The method further includes extracting, from the high spatial resolution 3-dimensional field contour map, biometric information relating to the plants in each of one or more selected rows of the scanned rows of plants.05-10-2012
20120113224Determining Loudspeaker Layout Using Visual Markers - A method consistent with certain implementations involves at a listening position, capturing a plurality of photographic images with a camera of a corresponding plurality of loudspeakers forming part of an audio system; determining from the plurality of captured images, a geometric configuration representing a positioning of the plurality of loudspeakers connected to the audio system; and outputting the geometric configuration of the plurality of loudspeakers to the audio system. This abstract is not to be considered limiting, since other embodiments may deviate from the features described in this abstract.05-10-2012
20120113223User Interaction in Augmented Reality - Techniques for user-interaction in augmented reality are described. In one example, a direct user-interaction method comprises displaying a 3D augmented reality environment having a virtual object and a real first and second object controlled by a user, tracking the position of the objects in 3D using camera images, displaying the virtual object on the first object from the user's viewpoint, and enabling interaction between the second object and the virtual object when the first and second objects are touching. In another example, an augmented reality system comprises a display device that shows an augmented reality environment having a virtual object and a real user's hand, a depth camera that captures depth images of the hand, and a processor. The processor receives the images, tracks the hand pose in six degrees-of-freedom, and enables interaction between the hand and the virtual object.05-10-2012
20120236122IMAGE PROCESSING DEVICE, METHOD THEREOF, AND MOVING BODY ANTI-COLLISION DEVICE - An image processing device is disclosed that is able to accurately recognize objects at a close distance. The image processing device includes a camera unit, and an image processing unit. The camera unit includes a lens, a focusing unit and an image pick-up unit. The focusing unit drives the lens to sequentially change the focusing distance of the camera unit to perform a focus-sweep operation, so that clear images of objects at different positions in an optical axis of the lens are sequentially formed on the image pick-up unit. The image processing unit receives the plurality of images obtained by the image pick-up unit in the focus-sweep operation, identifies objects with clear images formed in the plurality of images, and produces an object distribution view according to the focusing distances used when picking up the plurality of images to show a position distribution of the identified objects.09-20-2012
20120236121Methods of Operating a Three-Dimensional Image Sensor Including a Plurality of Depth Pixels - In a method of operating a three-dimensional image sensor according to example embodiments, modulated light is emitted to an object of interest, the modulated light that is reflected from the object of interest is detected using a plurality of depth pixels, and a plurality of pixel group outputs respectively corresponding to a plurality of pixel groups are generated based on the detected modulated light by grouping the plurality of depth pixels into the plurality of pixel groups including a first pixel group and a second pixel group that have different sizes from each other.09-20-2012
20120236120AUTOMATIC STEREOLOGICAL ANALYSIS OF BIOLOGICAL TISSUE INCLUDING SECTION THICKNESS DETERMINATION - Systems and methods are provided for automatic determination of slice thickness of an image stack in a computerized stereology system, as well as automatic quantification of biological objects of interest within an identified slice of the image stack. Top and bottom boundaries of a slice can be identified by applying a thresholded focus function to determine just-out-of-focus focal planes. Objects within an identified slice can be quantified by performing a color processing segmentation followed by a gray-level processing segmentation. The two segmentation processes generate unique identifiers for features in an image that can then be used to produce a count of the features.09-20-2012
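A thresholded focus function over an image stack can be sketched as follows: score each focal plane, then take the first and last planes whose score clears a fraction of the peak; the planes just outside that span mark the "just-out-of-focus" boundaries. The gradient-energy score and the 50% threshold are placeholders, not values from the application.

```python
import numpy as np

def focus_score(plane):
    """Gradient-energy focus measure for one focal plane (assumed metric)."""
    gy, gx = np.gradient(plane.astype(float))
    return float((gx ** 2 + gy ** 2).mean())

def slice_boundaries(stack, threshold_fraction=0.5):
    """Return indices of the first and last planes whose focus score exceeds
    a fraction of the peak score."""
    scores = np.array([focus_score(p) for p in stack])
    in_focus = np.flatnonzero(scores >= threshold_fraction * scores.max())
    return int(in_focus[0]), int(in_focus[-1])

# toy stack: a texture that is sharp in the middle planes and washed out elsewhere
rng = np.random.default_rng(2)
texture = rng.random((48, 48))
stack = [texture * w + 0.5 * (1 - w) for w in (0.1, 0.3, 0.9, 1.0, 0.8, 0.2)]
print(slice_boundaries(np.array(stack)))   # prints (2, 4): the in-focus span
```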
20120236119APPARATUS AND METHOD FOR ESTIMATING CAMERA MOTION USING DEPTH INFORMATION, AND AUGMENTED REALITY SYSTEM - Provided is a camera motion estimation method and apparatus that may estimate a motion of a depth camera in real time. The camera motion estimation apparatus may extract an intersection point from plane information of a depth image, calculate a feature point associated with each of planes included in the plane information using the extracted intersection point, and extract a motion of a depth camera providing the depth image using the feature point. Accordingly, in an environment where an illumination environment dynamically varies, or regardless of a texture state within a space, the camera motion estimation apparatus may estimate a camera motion.09-20-2012
20120236118ELECTRONIC DEVICE AND METHOD FOR AUTOMATICALLY ADJUSTING VIEWING ANGLE OF 3D IMAGES - In a method for adjusting a viewing angle of 3D images using an electronic device, the electronic device includes a distance sensor, a camera lens and a 3D display screen. The distance sensor senses a distance between a viewer and the 3D display screen, and the camera lens to capture a digital image of the viewer. The method calculates a viewing angle of the viewer according to the distance and a displacement between the viewer and the 3D display screen, and calculates an angle difference between the viewing angle of the viewer and a viewing angle range of the 3D display screen. The method further adjusts a viewing angle of a 3D image according to the angle difference, and displays the 3D image on the 3D display screen according to the viewing angle of the 3D image.09-20-2012
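The viewing-angle computation from a sensed distance and a lateral displacement reduces to basic trigonometry, as in the sketch below. The geometry (angle measured from the screen normal) and the example numbers are assumptions for illustration.

```python
import math

def viewer_angle_deg(lateral_displacement_m, distance_m):
    """Angle between the viewer and the screen normal (assumed geometry)."""
    return math.degrees(math.atan2(lateral_displacement_m, distance_m))

def angle_difference(viewer_angle, screen_half_angle):
    """How far outside the display's rated viewing-angle range the viewer sits (0 if inside)."""
    return max(0.0, abs(viewer_angle) - screen_half_angle)

# viewer 0.4 m to the side at 1.5 m distance, screen rated for +/- 10 degrees
a = viewer_angle_deg(0.4, 1.5)
print(round(a, 1), round(angle_difference(a, 10.0), 1))   # ~14.9 degrees, ~4.9 degrees outside
```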
20130162778MOTION RECOGNITION DEVICE - A motion recognition device capable of recognizing the motion of an object without contact with the object is provided. Further, a motion recognition device that has a simple structure and can recognize the motion of an object regardless of the state of the object is provided. By using a 3D TOF range image sensor in the motion recognition device, information on changes in position and shape is detected easily. Further, information on changes in position and shape of a fast-moving object is detected easily. Motion recognition is performed on the basis of pattern matching. Imaging data used for pattern matching is acquired from a 3D range measuring sensor. Object data is selected from imaging data on an object that changes over time, and motion data is estimated from a time change in selected object data. The motion recognition device performs operation defined by output data generated from the motion data.06-27-2013
20120268564ELECTRONIC APPARATUS AND VIDEO DISPLAY METHOD - An electronic apparatus includes a camera, a tracking module, a three-dimensional video adjusting module, a display, and an output direction controller. The tracking module is configured to recognize a position of a user based on a video picked up by the camera. The three-dimensional video adjusting module is configured to output one of first video data and second video data by adjusting an input three-dimensional video signal, the first video data corresponding to a first stereoscopic video that appears stereoscopically when viewed from a predetermined position, the second video data corresponding to a second stereoscopic video that appears stereoscopically when viewed from the recognized position of the user.10-25-2012
20110043611DEPTH AND LATERAL SIZE CONTROL OF THREE-DIMENSIONAL IMAGES IN PROJECTION INTEGRAL IMAGING - A method disclosed herein relates to displaying three-dimensional images. The method comprising, projecting integral images to a display device, and displaying three-dimensional images with the display device. Further disclosed herein is an apparatus for displaying orthoscopic 3-D images. The apparatus comprising, a projector for projecting integral images, and a micro-convex-mirror array for displaying the projected images.02-24-2011
20110001795FIELD MONITORING SYSTEM USING A MOBIL TERMINAL - The invention relates to a field monitoring system using a mobile terminal, the system comprising: a mobile terminal, which transmits context information and receives 3D image information corresponding to said context information, the context information consisting of audio-video information generated by photographing images of the field and by recording sounds in the field, and location information obtained by applying sensed signals from an accelerometer and from a gyroscope sensor to a GPS signal including latitude, longitude and time; and a control server which receives the context information, matches location information of the context information with a pre-stored map or architectural drawing information to generate 3D image information for the current location of the mobile terminal, and transmits the generated information to the mobile terminal. Therefore, by using a wireless terminal that utilizes a commercial communication module, GPS and INS, the invention presents advantages in finding the location of each personnel member dispatched even to a wide-area disaster, and in photographing any unexpected accident, situation, or blind spot.01-06-2011
20120307013FOOD PROCESSING APPARATUS FOR DETECTING AND CUTTING TOUGH TISSUES FROM FOOD ITEMS - A food processing apparatus for detecting and cutting tough tissues from food items such as fish, meat, or poultry. At least one x-ray machine is associated with a first conveyor for imaging incoming food items on the first conveyor based on a generated x-ray image indicating the location of the tough tissues in the food items. A vision system supplies second image data of the food items subsequent to the imaging by the x-ray machine. The second image data includes position related data indicating the position of the food items on the second conveyor prior to the cutting. A mapping mechanism determines an estimated coordinate position of the food items on the second conveyor by utilizing the x-ray image and tracking position data. The processor compares the estimated coordinate position of the food items to the actual position on the second conveyor based on the second image data.12-06-2012
20120320157COMBINED LIGHTING, PROJECTION, AND IMAGE CAPTURE WITHOUT VIDEO FEEDBACK - A “Concurrent Projector-Camera” uses an image projection device in combination with one or more cameras to enable various techniques that provide visually flicker-free projection of images or video, while real-time image or video capture is occurring in that same space. The Concurrent Projector-Camera provides this projection in a manner that eliminates video feedback into the real-time image or video capture. More specifically, the Concurrent Projector-Camera dynamically synchronizes a combination of projector lighting (or light-control points) on-state temporal compression in combination with on-state temporal shifting during each image frame projection to open a “capture time slot” for image capture during which no image is being projected. This capture time slot represents a tradeoff between image capture time and decreased brightness of the projected image. Examples of image projection devices include LED-LCD based projection devices, DLP-based projection devices using LED or laser illumination in combination with micromirror arrays, etc.12-20-2012
20120320161DEPTH AND LATERAL SIZE CONTROL OF THREE-DIMENSIONAL IMAGES IN PROJECTION INTEGRAL IMAGING - A method disclosed herein relates to displaying three-dimensional images. The method comprises projecting integral images to a display device, and displaying three-dimensional images with the display device. Further disclosed herein is an apparatus for displaying orthoscopic 3-D images. The apparatus comprises a projector for projecting integral images, and a micro-convex-mirror array for displaying the projected images.12-20-2012
20110261161APPARATUS AND METHOD FOR PHOTOGRAPHING AND DISPLAYING THREE-DIMENSIONAL (3D) IMAGES - A method and apparatus for shooting and displaying a three-dimensional (3D) image are provided. Light intensities and light wavelengths of light entering a single point from various directions may be measured, and the process is repeated to photograph a 3D image. Light may be emitted from each point in the corresponding directions based on the light intensity and light wavelength measured for each direction, thus displaying a natural 3D image.10-27-2011
20110279650ARRANGEMENT AND METHOD FOR DETERMINING A BODY CONDITION SCORE OF AN ANIMAL - An arrangement for determining a body condition score of an animal comprises a three-dimensional camera system directed towards the animal and provided for recording at least one three-dimensional image of the animal; and an image processing device connected to the three-dimensional camera system and provided for forming a three-dimensional surface representation of a portion of the animal from the three-dimensional image recorded by the three-dimensional camera system; for statistically analyzing the surface of the three-dimensional surface representation; and for determining the body condition score of the animal based on the statistically analyzed surface of the three-dimensional surface representation.11-17-2011
20100171814APPARATUS FOR PROCESSING A STEREOSCOPIC IMAGE STREAM - A system is provided for processing a compressed image stream of a stereoscopic image stream, the compressed image stream having a plurality of frames in a first format, each frame consisting of a merged image comprising pixels sampled from a left image and pixels sampled from a right image. A receiver receives the compressed image stream and a decompressing module in communication with the receiver decompresses the compressed image stream. The left and right images of the decompressed image stream are stored in a frame buffer. A serializing unit reads pixels of the frames stored in the frame buffer and outputs a pixel stream comprising pixels of a left frame and pixels of a right frame. A stereoscopic image processor receives the pixel stream, buffers the pixels, performs interpolation in order to reconstruct pixels of the left and right images and outputs a reconstructed left pixel stream and a reconstructed right pixel stream, the reconstructed streams having a format different from the first format. A display signal generator receives the stereoscopic pixel stream to provide an output display signal.07-08-2010
20110279648SCANNED-BEAM DEPTH MAPPING TO 2D IMAGE - A method for constructing a 3D representation of a subject comprises capturing, with a camera, a 2D image of the subject. The method further comprises scanning a modulated illumination beam over the subject to illuminate, one at a time, a plurality of target regions of the subject, and measuring a modulation aspect of light from the illumination beam reflected from each of the target regions. A moving-mirror beam scanner is used to scan the illumination beam, and a photodetector is used to measure the modulation aspect. The method further comprises computing a depth aspect based on the modulation aspect measured for each of the target regions, and associating the depth aspect with a corresponding pixel of the 2D image.11-17-2011
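If the measured modulation aspect is a phase shift, the depth aspect follows the standard continuous-wave ranging relation d = c·φ / (4π·f_mod). That interpretation is an assumption about the modulation scheme; the sketch below simply evaluates the formula.

```python
import math

C = 299_792_458.0  # speed of light, m/s

def depth_from_phase(phase_rad, modulation_hz):
    """Continuous-wave phase ranging: d = c * phi / (4 * pi * f_mod).

    Assumes the measured modulation aspect is a phase shift; the result is
    unambiguous only within half the modulation wavelength."""
    return C * phase_rad / (4.0 * math.pi * modulation_hz)

# a pi/2 phase shift at 20 MHz modulation corresponds to ~1.87 m
print(round(depth_from_phase(math.pi / 2, 20e6), 2))
```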
20090189974Systems Using Eye Mounted Displays - A display device is mounted on and/or inside the eye. The eye mounted display contains multiple sub-displays, each of which projects light to different retinal positions within a portion of the retina corresponding to the sub-display. The projected light propagates through the pupil but does not fill the entire pupil. In this way, multiple sub-displays can project their light onto the relevant portion of the retina. Moving from the pupil to the cornea, the projection of the pupil onto the cornea will be referred to as the corneal aperture. The projected light propagates through less than the full corneal aperture. The sub-displays use spatial multiplexing at the corneal surface. Various electronic devices interface to the eye mounted display.07-30-2009
20120086779IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, AND PROGRAM - An image processing apparatus is provided with a parallax detector configured to detect parallax between a left-eye image and a right-eye image used to display a 3D image, a parallax range computing unit configured to compute a range of parallax between the left-eye image and the right-eye image, a determining unit configured to determine whether or not the sense of depth when viewing the 3D image exceeds a preset range that is comfortable for a viewer, on the basis of the computed range of parallax, and a code generator configured to generate a code corresponding to the determination result of the determining unit.04-12-2012
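Computing the range of parallax and checking it against a comfort range can be sketched as below. Expressing disparity as a percentage of image width and the ±2% limits are placeholder conventions for illustration, not figures from the application.

```python
import numpy as np

def parallax_comfort(disparity_px, image_width_px, comfort_percent=(-2.0, 2.0)):
    """Flag a stereo pair whose disparity range leaves an assumed comfort zone.

    Disparity is expressed as a percentage of image width; the +/- 2 % limits
    are placeholder values, not taken from the listed application."""
    percent = 100.0 * disparity_px / image_width_px
    lo, hi = float(percent.min()), float(percent.max())
    comfortable = comfort_percent[0] <= lo and hi <= comfort_percent[1]
    return comfortable, (lo, hi)

disparity = np.array([[-30.0, 0.0], [12.0, 55.0]])   # toy per-pixel disparity map, in pixels
print(parallax_comfort(disparity, image_width_px=1920))
```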
20110285820USING 3D TECHNOLOGY TO PROVIDE SECURE DATA VIEWING ON A DISPLAY - A method and system using 3D technology provides secure data viewing on a display. Secure data viewing is enabled by having an image source module provide images to a processor module. The processor module receives the images provided by the image source module to create multiple series of images that interfere with each other, resulting in an unreadable series of images displayed on a display module. An authorized viewer is able to view a readable series of images from the multiple interfering series of images displayed by the display module.11-24-2011
20110298896SPECKLE NOISE REDUCTION FOR A COHERENT ILLUMINATION IMAGING SYSTEM - Described are methods and apparatus for reducing speckle noise in images, such as images of objects illuminated by coherent light sources and images of objects illuminated by interferometric fringe patterns. According to one method, an object is illuminated with a structured illumination pattern of coherent radiation projected along a projection axis. An angular orientation of the projection axis is modulated over an angular range during an image acquisition interval. Advantageously, shape features of the structured illumination pattern projected onto the surface of the object remain unchanged during image acquisition and the acquired images exhibit reduced speckle noise. The structured illumination pattern can be a fringe pattern such as an interferometric fringe pattern generated by a 3D metrology system used to determine surface information for the illuminated object.12-08-2011
20110298892IMAGING SYSTEMS WITH INTEGRATED STEREO IMAGERS - An imaging system may include an integrated stereo imager that includes first and second imager arrays on a single integrated circuit. Image readout circuitry may be located between the first and second imager arrays and a horizontal electronic rolling shutter may be used to read image data out of the arrays. The layout of the arrays and image readout circuitry on the integrated circuit may help to reduce the size of the integrated circuit while maximizing the baseline separation between the arrays. Memory buffer circuitry may be used to convert image data from the arrays into raster-scan compliant image data. The raster-scan compliant image data may be provided to a host system.12-08-2011
201102988953D VIDEO FORMATS - Several implementations relate to 3D video formats. One or more implementations provide adaptations to MVC and SVC to allow 3D video formats to be used. According to a general aspect, a set of images including video and depth is encoded. The set of images is related according to a particular 3D video format, and are encoded in a manner that exploits redundancy between the set of images. The encoded images are arranged in a bitstream in a particular order, based on the particular 3D video format that relates to the images. The particular order is indicated in the bitstream using signaling information. According to another general aspect, a bitstream is accessed that includes the encoded set of images. The signaling information is also accessed. The set of images is decoded using the signaling information.12-08-2011
20110298894STEREOSCOPIC FIELD SEQUENTIAL COLOUR DISPLAY CONTROL - The present invention relates to a method, an apparatus, and a computer program suitable for controlling a stereoscopic field sequential colour display to provide a first primary colour component image for a user's first eye and a second primary colour component image for the user's second eye, wherein the first and the second primary colour component images are both provided either at least partially overlapping in time or alternately in uninterrupted succession, and wherein the primary colour components of the first and second primary colour component images are different from each other.12-08-2011
20110298893APPARATUS FOR READING SPECTRAL INFORMATION - The apparatus for reading spectral information out of image patterns includes a solid-state image sensor for taking pictures of image patterns, a unit for forming one-dimensional images from light reflected from the image patterns, a spectroscope introducing the one-dimensional images into the solid-state image sensor, a shutter unit located in front of the solid-state image sensor, and a synchronizer turning the shutter unit on or off in synchronization with movement of the image patterns. The spectroscope disperses the incoming light into individual wavelengths and produces a three-dimensional image spectrum that is wavelength-dispersive for each pixel in association with each location of the image patterns.12-08-2011
20110285823Device and Method for the Three-Dimensional Optical Measurement of Strongly Reflective or Transparent Objects - The invention relates to a device for three-dimensionally measuring an object, comprising a first projection device having a first infrared light source for projecting a displaceable first pattern onto the object, and at least one image capturing device for capturing images of the object in an infrared spectral range. The invention further relates to a method for three-dimensionally measuring an object, comprising the steps of projecting a first infrared pattern onto the object using a first projection device having a first infrared light source; and capturing images of the object using at least one image capturing device sensitive to infrared radiation, wherein the pattern is shifted between image captures.11-24-2011
20110285824METHOD FOR RECONSTRUCTING A THREE-DIMENSIONAL SURFACE OF AN OBJECT - Method for determining a disparity value of a disparity of each of a plurality of points on an object, the method including the procedures of detecting by a single image detector, a first image of the object through a first aperture, and a second image of the object through a second aperture, correcting the distortion of the first image, and the distortion of the second image, by applying an image distortion correction model to the first image and to the second image, respectively, thereby producing a first distortion-corrected image and a second distortion-corrected image, respectively, for each of a plurality of pixels in at least a portion of the first distortion-corrected image representing a selected one of the points, identifying a matching pixel in the second distortion-corrected image, and determining the disparity value according to the coordinates of each of the pixels and of the respective matching pixel.11-24-2011
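After distortion correction, identifying the matching pixel and its disparity is often done by block matching along the epipolar line. The sketch below uses a sum-of-absolute-differences cost over a small window on an assumed rectified pair; the window size and search range are arbitrary choices, not parameters from the application.

```python
import numpy as np

def disparity_at(left, right, row, col, max_disp=32, half_win=3):
    """SAD block matching along one row: returns the shift that best aligns
    the left patch with a patch in the right image (assumed rectified pair)."""
    patch = left[row - half_win:row + half_win + 1, col - half_win:col + half_win + 1]
    best_d, best_cost = 0, np.inf
    for d in range(max_disp + 1):
        c = col - d
        if c - half_win < 0:
            break
        candidate = right[row - half_win:row + half_win + 1, c - half_win:c + half_win + 1]
        cost = np.abs(patch - candidate).sum()
        if cost < best_cost:
            best_cost, best_d = cost, d
    return best_d

# toy pair: the right image is the left image shifted 5 px to the left
rng = np.random.default_rng(3)
left = rng.random((40, 80))
right = np.roll(left, -5, axis=1)
print(disparity_at(left, right, row=20, col=40))   # expected: 5
```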
20110285822METHOD AND SYSTEM FOR FREE-VIEW RELIGHTING OF DYNAMIC SCENE BASED ON PHOTOMETRIC STEREO - A method and a related system of free-view relighting for a dynamic scene based on photometric stereo, the method including the steps of: 1) capturing multi-view dynamic videos of an object using a multi-view camera array under a predetermined controllable varying illumination; 2) obtaining a three-dimensional shape model and surface reflectance peculiarities of the object; 3) obtaining a static relighted three-dimensional model of the object and a three-dimensional trajectory of the object; 4) obtaining a dynamic relighted three-dimensional model; and 5) performing a free-view dependent rendering to the dynamic relighted three-dimensional model of the object.11-24-2011
20110285821INFORMATION PROCESSING APPARATUS AND VIDEO CONTENT PLAYBACK METHOD - According to one embodiment, an information processing apparatus executes a browser and player software plugged in the browser. The player software is configured to play back video content received from a server. A capture module captures two-dimensional video data from the player software, the two-dimensional video data being obtained by playback of the video content. A converter converts the captured two-dimensional video data to three-dimensional video data, the three-dimensional video data includes left-eye video data and right-eye video data. A three-dimensional display control module displays a three-dimensional video on a display based on the left-eye video data and right-eye video data.11-24-2011
20120098937Markerless Geometric Registration Of Multiple Projectors On Extruded Surfaces Using An Uncalibrated Camera - A method for registering multiple projectors on a vertically extruded three dimensional display surface with a known aspect ratio includes recovering both the camera parameters and the three dimensional shape of the surface from a single image of the display surface from an uncalibrated camera, capturing images from the projectors to relate the projector coordinates with the display surface points, and segmenting parts of the image for each projector to register the projectors to create a seamlessly wall-paper projection on the display surface using a representation between the projector coordinates with display surface points using a rational Bezier patch. A method for performing a deterministic geometric auto-calibration to find intrinsic and extrinsic parameters of each projector is included.04-26-2012
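The mapping between projector coordinates and display-surface points via a rational Bezier patch can be illustrated by a standard weighted-Bernstein evaluation. The degree, control points, and unit weights in the sketch are illustrative only; they are not taken from the application.

```python
import numpy as np
from math import comb

def rational_bezier_patch(control_pts, weights, u, v):
    """Evaluate a rational Bezier surface at (u, v) in [0, 1]^2.

    control_pts : (n+1, m+1, 2) array of 2D display-surface points (illustrative)
    weights     : (n+1, m+1) array of positive weights
    """
    n, m = control_pts.shape[0] - 1, control_pts.shape[1] - 1
    bu = np.array([comb(n, i) * u**i * (1 - u)**(n - i) for i in range(n + 1)])
    bv = np.array([comb(m, j) * v**j * (1 - v)**(m - j) for j in range(m + 1)])
    w = weights * np.outer(bu, bv)                      # weighted Bernstein basis
    return (w[..., None] * control_pts).sum(axis=(0, 1)) / w.sum()

# evenly spaced control points with unit weights reduce to bilinear interpolation,
# so (u, v) = (0.25, 0.75) maps to [0.25, 0.75]
pts = np.array([[[x, y] for y in (0.0, 0.5, 1.0)] for x in (0.0, 0.5, 1.0)])
print(rational_bezier_patch(pts, np.ones((3, 3)), 0.25, 0.75))
```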
20120098936PHOTOGRAPHING EQUIPMENT - Photographing equipment includes an image pickup portion, a display portion which displays an image acquired by the image pickup portion, an object detecting portion which detects a reference object of a predetermined size or a larger size within an image pickup range of the image pickup portion among objects in the image acquired by the image pickup portion, and a display controlling portion which displays a representation recommending 3D photographing on the display portion if the object detecting portion detects the reference object.04-26-2012
201200989353D TIME-OF-FLIGHT CAMERA AND METHOD - The present invention relates to a 3D time-of-flight camera for acquiring information about a scene, in particular for acquiring depth images of a scene, information about phase shifts of a scene or environmental information about the scene. The proposed camera particularly compensates motion artifacts by real-time identification of affected pixels and, preferably, corrects its data before actually calculating the desired scene-related information values from the raw data values obtained from radiation reflected by the scene.04-26-2012
20120098934Methods and Systems for Presenting Adjunct Content During a Presentation of a Media Content Instance - An exemplary method includes an adjunct content presentation system including adjunct content within a first image of a media content instance by setting a pixel value of a first group of pixels included in the first image to be greater than a predetermined neutral pixel value, including the adjunct content within a second image of the media content instance by setting a pixel value of a second group of pixels included in the second image and corresponding to the first group of pixels to be less than the predetermined neutral pixel value, and presenting the first and second images. The respective pixel values are set to result in the adjunct content being perceptible to a first viewer viewing only one of the first and second images and substantially imperceptible to a second viewer viewing both the first and second images. Corresponding methods and systems are also disclosed.04-26-2012
20120098933CORRECTING FRAME-TO-FRAME IMAGE CHANGES DUE TO MOTION FOR THREE DIMENSIONAL (3-D) PERSISTENT OBSERVATIONS - An imaging platform minimizes inter-frame image changes when there is relative motion of the imaging platform with respect to the scene being imaged, where the imaging platform may be particularly susceptible to image change, especially when it is configured with a wide field of view or high angular rate of movement. In one embodiment, a system is configured to capture images and comprises: a movable imaging platform having a sensor that is configured to capture images of a scene, each image comprising a plurality of pixels; and an image processor configured to: digitally transform captured images with respect to a common field of view (FOV) such that the transformed images appear to be taken by a non-moving imaging platform, wherein the pixel size and orientation of pixels of each transformed image are the same. A method for measuring and displaying 3-D features is also described.04-26-2012
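Digitally transforming captured images into a common field of view can be sketched as inverse-mapping every output pixel through a 3x3 homography. The nearest-neighbour sampling and the translation-only example homography below are assumptions; the application's actual transform pipeline is not specified here.

```python
import numpy as np

def warp_to_common_fov(image, H, out_shape):
    """Nearest-neighbour warp of `image` into a common frame via homography H.

    H maps common-frame pixel coordinates (x, y, 1) to source coordinates;
    output pixels that fall outside the source stay 0 (assumed fill value)."""
    out_h, out_w = out_shape
    ys, xs = np.mgrid[0:out_h, 0:out_w]
    ones = np.ones_like(xs)
    src = H @ np.stack([xs.ravel(), ys.ravel(), ones.ravel()])
    sx = np.rint(src[0] / src[2]).astype(int).reshape(out_shape)
    sy = np.rint(src[1] / src[2]).astype(int).reshape(out_shape)
    valid = (sx >= 0) & (sx < image.shape[1]) & (sy >= 0) & (sy < image.shape[0])
    out = np.zeros(out_shape, dtype=image.dtype)
    out[valid] = image[sy[valid], sx[valid]]
    return out

# a pure 2-pixel horizontal translation expressed as a homography
H = np.array([[1.0, 0.0, 2.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]])
frame = np.arange(100.0).reshape(10, 10)
print(warp_to_common_fov(frame, H, (10, 10))[0, :5])   # [2. 3. 4. 5. 6.]
```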
20110292179IMAGING SYSTEM AND METHOD - According to one embodiment, an apparatus for determining the gradients of the surface normals of an object includes a receiving unit, establishing unit, determining unit, and selecting unit. The receiving unit is configured to receive data of three 2D images of the object, wherein each image is taken under illumination from a different direction. The establishing unit is configured to establish which pixels of the image are in shadow such that there is only data available from two images from these pixels. The determining unit is configured to determine a range of possible solutions for the gradient of the surface normal of a shadowed pixel using the data available for the two images. The selecting unit is configured to select a solution for the gradient using the integrability of the gradient field over an area of the object as a constraint and minimising a cost function.12-01-2011
20110292178THREE-DIMENSIONAL IMAGE PROCESSING - Systems and methods of 3D image processing are disclosed. In a particular embodiment, a three-dimensional (3D) media player is configured to receive input data including at least a first image corresponding to a scene and a second image corresponding to the scene and to provide output data to a 3D display device. The 3D media player is responsive to user input including at least one of a zoom command and a pan command. The 3D media player includes a convergence control module configured to determine a convergence point of a 3D rendering of the scene responsive to the user input.12-01-2011
20110292180Microscope - A microscope having a night vision apparatus, which apparatus can be impinged upon by beam paths proceeding from a specimen or object to be observed.12-01-2011
20090002483Apparatus for and method of generating image, and computer program product - An apparatus includes a stereoscopic display region calculator calculating a stereoscopic display region to reproduce a three-dimensional positional relationship in image data displayed on a stereoscopic display device, based on two-dimensional or three-dimensional image data, a position of a target of regard of a virtual camera set in processing of rendering the image data, and orientations of the light beams output from the stereoscopic display device. The apparatus also includes an image processor performing image processing on image data in a region representing the outside of the stereoscopic display region calculated by the stereoscopic display region calculator. This image processing is different from the image processing applied to image data in a region representing the inside of the stereoscopic display region. The apparatus also includes an image generator generating stereoscopic display image data from the two-dimensional or three-dimensional image data after it has been processed by the image processor.01-01-2009
20110267429ELECTRONIC EQUIPMENT HAVING LASER COMPONENT AND CAPABILITY OF INSPECTING LEAK OF LASER AND INSPECTING METHOD FOR INSPECTING LEAK OF LASER THEREOF - The invention provides electronic equipment that has a laser component and is capable of inspecting for laser leakage, and an inspecting method for inspecting laser leakage thereof. The electronic equipment according to the invention includes a three-dimensional image-capturing device. According to the invention, the three-dimensional image-capturing device is controlled to capture a two-dimensional image and to measure an actual depth map. The captured two-dimensional image is processed to obtain an estimated depth map. The invention selectively determines whether the laser component is leaking laser light or malfunctioning, in accordance with the estimated depth map and the actual depth map.11-03-2011
20110216166IMAGE PROCESSING DEVICE, IMAGE PROCESSING METHOD AND PROGRAM - An image processing device includes an operation reception portion which receives an instruction operation for displaying a desired image from a plane image or a stereoscopic image that is stored in a recording medium; an information output portion that is connected to a display device which displays the plane image or the stereoscopic image to output image information for displaying the image stored in the recording medium on the display device; and a control portion.09-08-2011
20110216165Electronic apparatus, image output method, and program therefor - Provided is an electronic apparatus including: a storage to store digital photograph images, shooting date and time information, and shooting location information; a current date and time obtaining unit to obtain a current date and time; a current location obtaining unit to obtain a current location; a controller to draw each of digital photograph images at a drawing position based on the shooting date and time and the shooting location in a virtual three-dimensional space, and to image the virtual three-dimensional space, in which each of digital photograph images is drawn; and an output unit to output the imaged virtual three-dimensional space.09-08-2011
20110261160IMAGE INFORMATION PROCESSING APPARATUS, IMAGE CAPTURE APPARATUS, IMAGE INFORMATION PROCESSING METHOD, AND PROGRAM - The present invention relates to an image information processing apparatus, an image capture apparatus, an image information processing method, and a program that allow a depth value to smoothly transition in a scene change of stereoscopic content.10-27-2011
20110261165MODEL FORMING APPARATUS, MODEL FORMING METHOD, PHOTOGRAPHING APPARATUS AND PHOTOGRAPHING METHOD - The present invention provides a model forming apparatus that can simply and efficiently form a three-dimensional model of an object using previously obtained three-dimensional model data of the object as a starting point. The apparatus comprises a photographing section 10-27-2011
20120140041METHOD AND SYSTEM FOR THE REMOTE INSPECTION OF A STRUCTURE - The invention relates to a method for the remote inspection of a structure, comprising the following operations: 06-07-2012
20110169921METHOD FOR PERFORMING OUT-FOCUS USING DEPTH INFORMATION AND CAMERA USING THE SAME - A method for performing out-focus with a camera having a first lens and a second lens, comprising: photographing a first image with the first lens and a second image with the second lens; extracting depth information from the photographed first and second images; and performing out-focus on the first image or the second image using the extracted depth information.07-14-2011
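A rough illustration of the out-focus step, assuming a per-pixel depth map has already been extracted from the two-lens pair (for example by stereo matching); the Gaussian blur, the depth tolerance, and the use of OpenCV are choices made for this sketch, not part of the claimed method.

import cv2
import numpy as np

def out_focus(image, depth_map, focus_depth, depth_tolerance=0.5, blur_ksize=21):
    """Blur every pixel whose depth differs from the chosen in-focus depth.

    `image` is a BGR frame from one lens and `depth_map` a per-pixel depth at
    the same resolution. Pixels within `depth_tolerance` of `focus_depth` stay
    sharp; the rest are replaced by a Gaussian-blurred version, approximating
    a shallow depth of field (bokeh).
    """
    blurred = cv2.GaussianBlur(image, (blur_ksize, blur_ksize), 0)
    in_focus = np.abs(depth_map - focus_depth) <= depth_tolerance
    mask = in_focus[..., None].astype(image.dtype)   # broadcast mask over color channels
    return image * mask + blurred * (1 - mask)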
20100007719Method and apparatus for 3D digitization of an object - In a method for 3D digitization of an object (01-14-2010
20100007718MONOCULAR THREE-DIMENSIONAL IMAGING - A three-dimensional imaging system uses a single primary optical lens along with various configurations of apertures, refocusing facilities, and the like to obtain three offset optical channels each of which can be separately captured with an optical sensor.01-14-2010
20100118121Device and Method for Visually Recording Two-Dimensional or Three-Dimensional Objects - A device for visually recording two-dimensional or three-dimensional objects, which comprises a camera for recording images of the two-dimensional or three-dimensional object and which is provided with, can be connected to or is connected to at least one evaluation unit for evaluating the recorded images. A single camera and at least one adjustable or pivotal mirror element are provided. According to the method for visually recording two-dimensional or three-dimensional objects while using a device of the aforementioned type, a camera and at least one adjustable mirror element are arranged relative to one another so that the objects to be recorded are situated in the coverage area of the at least one mirror element. The adjustable mirror element for recording the objects to be recorded is displaced or pivoted about one or two axes with an adjustable velocity. The camera records the objects projected in the at least one mirror element, and the recorded objects are routed from the camera to an evaluation unit for evaluation and are processed.05-13-2010
20100123771Pixel circuit, photoelectric converter, and image sensing system including the pixel circuit and the photoelectric converter - Provided are a pixel circuit, a photoelectric converter, and an image sensing system thereof. The pixel circuit includes a photodiode and an output unit. The photodiode generates a first photo charge to detect the distance from an object and a second photo charge to detect the color of the object. The output unit generates at least one depth signal used to detect the distance based on the first photo charge generated by the photodiode, and a color signal used to detect the color of the object based on the second photo charge.05-20-2010
20100128108APPARATUS AND METHOD FOR ACQUIRING WIDE DYNAMIC RANGE IMAGE IN AN IMAGE PROCESSING APPARATUS - A method is provided for acquiring a Wide Dynamic Range (WDR) image in an image processing apparatus, in which images corresponding to each of consecutive frames are photographed with a stereo camera including at least two image pickup devices with different exposure times, a correlation between images photographed in each frame at different exposure times is checked, and the images photographed at the different exposure times are synthesized into one image based on the correlation.05-27-2010
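One way such exposure synthesis is commonly done, shown here as a hedged sketch: the two frames are assumed to be already registered (the abstract checks inter-image correlation for this), near-clipped pixels of the long exposure are taken from the radiometrically rescaled short exposure, and the remaining pixels are averaged. The saturation threshold of 250 and the linear blending are assumptions.

import numpy as np

def fuse_wdr(short_exp, long_exp, exposure_ratio):
    """Fuse registered short- and long-exposure frames into one WDR radiance map.

    `exposure_ratio` is t_long / t_short, used to bring the short exposure onto
    the long exposure's radiometric scale before blending.
    """
    long_lin = long_exp.astype(np.float64)
    short_lin = short_exp.astype(np.float64) * exposure_ratio
    saturated = long_exp >= 250                      # near-clipped pixels in the long exposure
    fused = np.where(saturated, short_lin, 0.5 * (short_lin + long_lin))
    return fused                                     # floating-point radiance; tone-map before display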
20110007135Image processing device, image processing method, and program - An image processing apparatus includes: an imaging unit configured to generate an imaged image by imaging a subject; a depth information generating unit configured to generate depth information relating to the imaged image; an image processing unit configured to extract, from the imaged image, an image of an object region including a particular subject out of subjects included in the imaged image and a surrounding region of the subject, based on the depth information, and generate a difference image to display a stereoscopic image in which the subjects included in the imaged image are viewed stereoscopically based on the extracted image; and a recording control unit configured to generate a data stream in which data corresponding to the imaged image and data corresponding to the difference image are correlated, and record the data stream as a moving image file.01-13-2011
20130188021MOBILE TERMINAL AND CONTROL METHOD THEREOF - A mobile terminal and a control method of the mobile terminal are disclosed. A mobile terminal and a control method of the mobile terminal according to the present invention comprise a memory storing captured stereoscopic images; a display displaying the stored stereoscopic images; and a controller obtaining at least one piece of information about a user viewing the display, changing attributes of the stereoscopic images according to the obtained information of the user, and displaying the attribute-changed images. The present invention, by changing attributes of stereoscopic images according to obtained information of a user and displaying the attribute-changed images, can provide stereoscopic images optimized for a user watching a display apparatus.07-25-2013
20100118125METHOD AND APPARATUS FOR GENERATING THREE-DIMENSIONAL (3D) IMAGE DATA - A method of generating three-dimensional (3D) image data from first and second image data obtained by photographing the same subject at different points of time, the method including generating third image data by adjusting locations of pixels in the second image data so that the second image data corresponds to the first image data, and generating the 3D image data based on a relationship between the third image data and the first image data.05-13-2010
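A sketch of one common way to "adjust locations of pixels in the second image data so that the second image data corresponds to the first": estimate a homography from feature matches and warp the second photograph into the first image's frame. The specific detector (ORB), matcher, and RANSAC threshold are assumptions made here, and the inputs are assumed to be 8-bit grayscale images.

import cv2
import numpy as np

def align_second_to_first(first, second):
    """Warp `second` so its pixel locations correspond to `first` (the 'third image data')."""
    orb = cv2.ORB_create(2000)
    k1, d1 = orb.detectAndCompute(first, None)
    k2, d2 = orb.detectAndCompute(second, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)
    matches = sorted(matches, key=lambda m: m.distance)[:500]
    src = np.float32([k2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([k1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)   # maps second -> first
    h, w = first.shape[:2]
    return cv2.warpPerspective(second, H, (w, h))

The warped result and the first image can then be treated as a corresponding pair from which 3D image data is generated.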
20100118123DEPTH MAPPING USING PROJECTED PATTERNS - Apparatus for mapping an object includes an illumination assembly (05-13-2010
201101878333-D Camera Rig With No-Loss Beamsplitter Alternative - An apparatus and method for stereoscopic photography in which a direct view camera with a direct view lens is mounted to a support to obtain a direct view camera shot while a reflected view camera with a reflected view lens is mounted to the support in a down-looking camera configuration to obtain a reflected view camera shot without use of a beamsplitter when an interaxial spacing between the direct view camera and the reflected view camera does not cause an overlap between a direct view active optical area of the beamsplitter that would be used by the direct view lens and a reflected view active optical area of the beamsplitter that would be used by the reflected view lens. A reflective planar mirror can be positioned to (substantially fully) reflect light from a surface of the reflective planar mirror to the reflected view lens while a transparent planar glass is positioned to allow light to pass substantially perpendicularly through the transparent planar glass to the direct view lens, both the transparent planar glass and reflective planar mirror having substantially parallel surfaces and being integral or not. Alternatively, the transparent planar glass can be eliminated and a spacer used to adjust the mounting of the direct view camera to the support so as to restore a sight line for the direct view camera. In still other alternatives, either one or both of the direct and reflected view lenses can be replaced with a pinhole lens.08-04-2011
20120033043METHOD AND APPARATUS FOR PROCESSING AN IMAGE - A method and apparatus for processing an image are provided. The method includes obtaining an image, generating 3-dimensional (3D) disparity information that represents a degree of stereoscopic effects of the image, and outputting the 3D disparity information.02-09-2012
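As an illustration of what "3D disparity information" can amount to in practice, a block-matching disparity map over a rectified left/right pair; OpenCV's StereoBM is used here purely as an example, and the block size and disparity range are assumptions.

import cv2

def disparity_map(left_gray, right_gray, num_disparities=64, block_size=15):
    """Compute a disparity map from a rectified, 8-bit grayscale stereo pair.

    Larger disparities correspond to objects closer to the camera, i.e. a
    stronger stereoscopic effect.
    """
    matcher = cv2.StereoBM_create(numDisparities=num_disparities, blockSize=block_size)
    disp = matcher.compute(left_gray, right_gray)   # fixed-point result, scaled by 16
    return disp.astype('float32') / 16.0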
20090207235Method for Determining Scattered Disparity Fields in Stereo Vision - In a system for stereo vision including two cameras shooting the same scene, a method is performed for determining scattered disparity fields when the epipolar geometry is known, which includes the steps of: capturing, through the two cameras, first and second images of the scene from two different positions; selecting at least one pixel in the first image, the pixel being associated with a point of the scene and the second image containing a point also associated with the above point of the scene; and computing the displacement from the pixel to the point in the second image by minimising a cost function, such cost function including a term which depends on the difference between the first and the second image and a term which depends on the distance of the above point in the second image from an epipolar straight line, followed by a check of whether the point belongs to an allowability area around a subset of the epipolar straight line in which the presence of the point is allowed, in order to take into account errors or uncertainties in calibrating the cameras.08-20-2009
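A small sketch of a cost of the kind described: a photometric term plus a penalty on the distance of the candidate point from its epipolar line, so that small calibration errors are tolerated rather than forcing the match exactly onto the line. The squared-difference photometric term and the weight `lam` are assumptions.

import numpy as np

def matching_cost(patch_left, patch_right, point_right, epiline, lam=10.0):
    """Cost for a candidate correspondence.

    `epiline` is (a, b, c) with a*x + b*y + c = 0 in the second image;
    `point_right` is the candidate (x, y) position; `lam` weights the
    epipolar-distance penalty against the photometric difference.
    """
    photometric = np.sum((patch_left.astype(np.float64) -
                          patch_right.astype(np.float64)) ** 2)
    a, b, c = epiline
    x, y = point_right
    epipolar_dist = abs(a * x + b * y + c) / np.hypot(a, b)
    return photometric + lam * epipolar_dist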
20120287242ADAPTIVE HIGH DYNAMIC RANGE CAMERA - An embodiment of the invention provides a time of flight 3D camera comprising a photosensor having a plurality of pixels that generate and accumulate photoelectrons responsive to incident light, which photosensor is tiled into a plurality of super pixels, each partitioned into a plurality of pixel groups and a controller that provides a measure of an amount of light incident on a super pixel responsive to quantities of photoelectrons from pixel groups in the super pixel that do not saturate a readout pixel comprised in the photosensor.11-15-2012
20120287243STEREOSCOPIC CAMERA USING ANAGLYPHIC DISPLAY DURING CAPTURE - A digital camera for capturing stereoscopic images, comprising: an image sensor; an optical system; a user interface; a color image display; a data processing system; a buffer memory; a storage memory; and a program memory storing instructions configured to implement a method for capturing stereoscopic images. The method includes: capturing a first digital image of a scene in response to user activation of a user interface element; storing the first digital image; displaying a stream of stereoscopic preview images on the color image display, wherein the stereoscopic preview images are anaglyph stereoscopic images formed by combining the stored first digital image with a stream of evaluation digital images of the scene captured using the image sensor; capturing a second digital image of the scene in response to user activation of a user interface element; and storing a stereoscopic image based on the first digital image and the second digital image.11-15-2012
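A minimal sketch of the anaglyph composition step for the preview stream, assuming RGB arrays and the common red/cyan channel split; the abstract itself does not fix a particular channel assignment.

import numpy as np

def anaglyph_preview(first_rgb, live_rgb):
    """Compose a red/cyan anaglyph preview from the stored first capture and a live frame.

    The red channel comes from the stored (first) image, green and blue from
    the live evaluation frame, so the photographer can judge stereo alignment
    on an ordinary color display while framing the second shot.
    """
    preview = np.empty_like(first_rgb)
    preview[..., 0] = first_rgb[..., 0]    # R from the stored first image
    preview[..., 1:] = live_rgb[..., 1:]   # G and B from the live stream
    return preview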
20120287245OCCUPANCY SENSOR AND ASSOCIATED METHODS - A device to detect occupancy of an environment includes a sensor to capture video frames from a location in the environment. The device may compare rules with data using a rules engine. The device may include a microcontroller with a processor and memory to produce results indicative of a condition of the environment. The device may also include an interface through which the data is accessible. The device may generate results respective to the location in the environment. The microcontroller may be in communication with a network. The video frames may be concatenated to create an overview to display the video frames substantially seamlessly respective to the location in which the sensor is positioned. The overview may be viewable using the interface and the results of the analysis performed by the rules engine may be accessible using the interface.11-15-2012
20120287241SYSTEM, METHOD, AND COMPUTER PROGRAM PRODUCT FOR AQUATIC DISPLAY - The present disclosure relates to a computer-implemented method for providing an aquatic display. The method may include capturing real-time live video of aquatic content. The method may further include providing the real-time live video of aquatic content to one or more display devices and displaying the real-time live video of aquatic content as a screensaver. Numerous other features are also within the scope of the present disclosure.11-15-2012
201101699183D IMAGE SENSOR AND STEREOSCOPIC CAMERA HAVING THE SAME - A three-dimensional (3D) image sensor and a stereoscopic camera having the same are provided. The 3D image sensor includes one or more image acquisition regions, each image acquisition region having a plurality of pixels; and an output signal generation controller configured to extract pixel signals from two regions of interest (ROIs) set within the image acquisition regions and output image signals based on the pixel signals, the ROIs being apart from each other. The output signal generation controller may minutely adjust the position of at least one of the ROIs in accordance with a convergence adjustment signal. The output signal generation controller may vertically or horizontally move each of the ROIs in accordance with a camera shake signal and may thus correct camera shake. The output signal generation controller may align the ROIs with left and right lenses in an optical system. The output signal generation controller may vertically or horizontally move each of the ROIs in accordance with an optical axis correction signal and may thus correct an optical axis error resulting from an optical axis misalignment.07-14-2011
20110169916METHOD OF DISPLAYING IMAGE AND DISPLAY APPARATUS FOR PERFORMING THE SAME - A method of producing a three-dimensional or two-dimensional image and a display apparatus for performing the same are presented. First unit pixel data and second unit pixel data are generated from an input image. The first unit pixel data and the second unit pixel data are provided to first and second unit pixels to display first and second images, respectively. The first unit pixel has a wavelength range corresponding to a primary color that is different from a wavelength range of the second unit pixel corresponding to the primary color. When a stereoscopic image is displayed, the first image for a left eye and the second image for a right eye may be selectively provided to the left eye and the right eye of an observer although the first and second images are displayed at the same time. Thus, an afterimage may be prevented.07-14-2011
20100201785Method and system for displaying stereoscopic detail-in-context presentations - A method for generating a stereoscopic presentation of a region-of-interest in a monoscopic information representation. The method includes the steps of: (a) selecting first and second viewpoints for the region-of-interest; (b) creating a lens surface having a predetermined lens surface shape for the region-of-interest, the lens surface having a plurality of polygonal surfaces constructed from a plurality of points sampled from the lens surface shape; (c) creating first and second transformed presentations by overlaying the representation on the lens surface and perspectively projecting the lens surface with the overlaid representation onto a plane spaced from the first and second viewpoints, respectively; and, (d) displaying the first and second transformed presentations on a display screen to generate the stereoscopic presentation.08-12-2010
20100201784METHOD FOR THE MICROSCOPIC THREE-DIMENSIONAL REPRODUCTION OF A SAMPLE - A method for the three-dimensional imaging of a sample in which image information from different depth planes of the sample is stored in a spatially resolved manner, and the three-dimensional image of the sample is subsequently reconstructed from this stored image information is provided. A reference structure is applied to the illumination light, at least one fluorescing reference object is positioned next to or in the sample, images of the reference structure of the illumination light, of the reference object are recorded from at least one detection direction and evaluated. The light sheet is brought into an optimal position based on the results and image information of the reference object and of the sample from a plurality of detection directions is stored. Transformation operators are obtained on the basis of the stored image information and the reconstruction of the three-dimensional image of the sample is based on these transformation operators.08-12-2010
20100201783Stereoscopic Image Generation Apparatus, Stereoscopic Image Generation Method, and Program - Influences of physiological stereoscopic elements are removed by image processing using projection transformation. A horopter-plane image projection unit 08-12-2010
20090295908Method and device for high-resolution three-dimensional imaging which obtains camera pose using defocusing - A method and device for high-resolution three-dimensional (3-D) imaging which obtains camera pose using defocusing is disclosed. The device comprises a lens obstructed by a mask having two sets of apertures. The first set of apertures produces a plurality of defocused images of the object, which are used to obtain camera pose. The second set of apertures produces a plurality of defocused images of a projected pattern of markers on the object. The images produced by the second set of apertures are differentiable from the images used to determine pose, and are used to construct a detailed 3-D image of the object. Using the known change in camera pose between captured images, the 3-D images produced can be overlaid to produce a high-resolution 3-D image of the object.12-03-2009
20090262183Three-dimensional image obtaining device and processing apparatus using the same - An optical flux is radiated, which is focused at a measurement point in a specimen space, and a transmitted light amount is measured. A minute light absorption amount is measured from a transmitted light signal and a reference signal. While three-dimensionally scanning, a three-dimensional map in which light absorption amounts are represented by voxels (volume cells) is obtained. On this three-dimensional map, deconvolution processing with a light intensity distribution image in the vicinity of the measurement point being a convolution kernel is performed, so as to obtain a three-dimensional image of a specimen that is almost transparent in a non-dyed state.10-22-2009
20090262182THREE-DIMENSIONAL IMAGING APPARATUS - A three-dimensional imaging apparatus for imaging a three-dimensional object may include a microlens array, a sensor device, and a telecentric relay system positioned between the microlens array and the sensor device. A telecentric relay system may include a field lens and a macro objective that may include a macro lens and an aperture stop. A method of imaging a three-dimensional object may include providing a three-dimensional imaging apparatus including a microlens array, a sensor device, and a telecentric relay system positioned between the microlens array and the sensor device; and generating a plurality of elemental images on the sensor device, wherein each of the plurality of elemental images has a different perspective of the three-dimensional object.10-22-2009
20090262181REAL-TIME VIDEO SIGNAL INTERWEAVING FOR AUTOSTEREOSCOPIC DISPLAY - A method, apparatus and system for simultaneously capturing a plurality of video signals that carry images of a changing real three-dimensional scene, taken from respective different directions, and for combining them, in real time, into an interwoven video signal, ready to be fed to an active-matrix-based autostereoscopic display device.10-22-2009
20110169920SMALL STEREOSCOPIC IMAGE PHOTOGRAPHING APPARATUS - Disclosed is a compact type 3D image photographing apparatus which adjusts a convergence angle with respect to a subject using two lenses in order to photograph a stereoscopic image. The compact type image photographing apparatus includes a housing; a first actuator having a first lens and disposed in the housing so as to be moved left and right; a second actuator having a second lens, disposed in the housing so as to be moved left and right, and disposed to be spaced apart from the first actuator; a left/right driving part which is disposed at each of the first and second actuators so as to move the first actuator or the second actuator left and right when power is applied; an image sensor which is disposed at each lower side of the first and second actuators so as to photograph a subject through the first and second lenses; and a control part which is disposed at a lower side of the image sensor so as to control power supplied to the left/right driving part, the first actuator and the second actuator.07-14-2011
20110169915Structured light system - A structured light system based on a fast, linear array light modulator and an anamorphic optical system captures three-dimensional shape information at high rates and has strong resistance to interference from ambient light. A structured light system having a modulated light source offers improved signal to noise ratios. A wand permits single point detection of patterns in structured light systems.07-14-2011
20100128109Systems And Methods Of High Resolution Three-Dimensional Imaging - Embodiments of the invention provide systems and methods for three-dimensional imaging with wide field of view and precision timing. In accordance with one aspect, a three-dimensional imaging system includes an illumination subsystem configured to emit a light pulse with a divergence sufficient to irradiate a scene having a wide field of view. A sensor subsystem is configured to receive over a wide field of view portions of the light pulse reflected or scattered by the scene and including: a modulator configured to modulate as a function of time an intensity of the received light pulse portion to form modulated received light pulse portions; and means for generating a first image corresponding to the received light pulse portions and a second image corresponding to the modulated received light pulse portions. A processor subsystem is configured to obtain a three-dimensional image based on the first and second images.05-27-2010
20090102914METHOD AND APPARATUS FOR GENERATING STEREOSCOPIC IMAGES FROM A DVD DISC - A system and method described herein provide stereoscopic video using standard DVD video data combined with enhancement data. In various embodiments, the enhancement data may be stored on the same DVD as the standard video, or provided via downloading and/or streaming to a stereoscopic DVD player. When stored on the DVD, the enhancement data is provided in various forms, including at the MPEG-1 or MPEG-2 program stream level or at the MPEG elementary stream level. In one embodiment, the enhancement data consists of a difference signal between left- and right-eye images taken on a pixel-by-pixel basis.04-23-2009
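The pixel-by-pixel difference-signal idea lends itself to a two-function sketch; 8-bit frames and signed 16-bit storage of the difference are assumptions made here, and container-level packaging of the enhancement data is outside the scope of the sketch.

import numpy as np

def encode_enhancement(left, right):
    """Pixel-by-pixel difference signal between left- and right-eye frames."""
    return right.astype(np.int16) - left.astype(np.int16)

def decode_right_eye(left, enhancement):
    """Rebuild the right-eye frame from the standard (left-eye) frame plus enhancement data."""
    return np.clip(left.astype(np.int16) + enhancement, 0, 255).astype(np.uint8)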
20110169917System And Process For Detecting, Tracking And Counting Human Objects of Interest - A system is disclosed that includes: at least one image capturing device at the entrance to obtain images; a reader device; and a processor for extracting objects of interest from the images and generating tracks for each object of interest, and for matching objects of interest with objects associated with RFID tags, and for counting the number of objects of interest associated with, and not associated with, particular RFID tags.07-14-2011
20110169919FRAME FORMATTING SUPPORTING MIXED TWO AND THREE DIMENSIONAL VIDEO DATA COMMUNICATION - Systems and methods are provided that relate to frame formatting supporting mixed two and three dimensional video data communication. For example, frames in frame sequence(s) may be formatted to indicate that a first screen configuration is to be used for displaying first video content, that a second screen configuration is to be used for displaying second video content, and so on. The screen configurations may be different or the same. In another example, the frames in the frame sequence(s) may be formatted to indicate that the first video content is to be displayed at a first region of a screen, that the second video content is to be displayed at a second region of the screen, and so on. The regions of the screen may partially overlap, fully overlap, not overlap, be configured such that one or more regions are within one or more other regions, etc.07-14-2011
20100277569MOBILE INFORMATION KIOSK WITH A THREE-DIMENSIONAL IMAGING EFFECT - The present invention discloses a mobile information kiosk with a three-dimensional imaging effect, which is primarily applied to a hand-held mobile information kiosk. The kiosk includes a dual-lens photographing device with various light traveling angles, a displayer, a stereoscopic optical element which is provided on the displayer, and a data processing module for three-dimensional display. The displayer displays an interleaved grid-shape pattern which is processed by the data processing module. The grid-shape pattern is deflected leftward and rightward in a longitudinal series through the stereoscopic optical element and is projected respectively to both eyes of a user, such that the user can visually sense a three-dimensional image.11-04-2010
201202936253D-CAMERA AND METHOD FOR THE THREE-DIMENSIONAL MONITORING OF A MONITORING AREA - A 3D-camera (11-22-2012
20120293626THREE-DIMENSIONAL DISTANCE MEASUREMENT SYSTEM FOR RECONSTRUCTING THREE-DIMENSIONAL IMAGE USING CODE LINE - Disclosed herein is a 3D distance measurement system. The 3D distance measurement system includes an image projection device for projecting a pattern image including one or more patterns on a target object, and an image acquisition device for acquiring a projected pattern image, analyzing the projected pattern image using the patterns, and then reconstructing a 3D image. Each of the patterns includes one or more preset identification factors so that the patterns can be uniquely recognized, and each of the identification factors is one of a point, a line, and a surface, or a combination of two or more of a point, a line, and a surface. The 3D distance measurement system is advantageous in that it reconstructs a 3D image using a single pattern image, thus greatly improving processing speed and the utilization of a storage space and enabling a 3D image to be accurately reconstructed.11-22-2012
20120293624SYSTEM AND METHOD OF REVISING DEPTH OF A 3D IMAGE PAIR - A method of revising depth of a three-dimensional (3D) image pair is disclosed. The method comprises the following steps: firstly, at least one initial depth map associated with one image of the 3D image pair is received based on a stereo matching technique, wherein the one image comprises a plurality of pixels and the initial depth map carries an initial depth value of each pixel. Then, the inconsistency among the pixels of the one image of the 3D image pair is detected to estimate a reliable map. Finally, the initial depth value is interpolated according to the reliable map and the proximate pixels, so as to generate a revised depth map by revising the initial depth value.11-22-2012
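One concrete (assumed) realisation of the detect-inconsistency, estimate-reliable-map, interpolate pipeline is a left/right disparity consistency check followed by neighbourhood interpolation of the unreliable depths; the 1-pixel tolerance, the 5x5 neighbourhood, and the median fill are illustrative choices, not details from the application.

import numpy as np

def revise_depth(depth, disparity_lr, disparity_rl, tol=1.0):
    """Revise a per-pixel depth map using a left/right consistency check.

    A pixel is marked unreliable when its left-to-right disparity and the
    right-to-left disparity read back at the matched location disagree by more
    than `tol`; unreliable depths are replaced by the median of reliable
    neighbours within a small window.
    """
    h, w = depth.shape
    xs = np.arange(w)[None, :].repeat(h, axis=0)
    x_in_right = np.clip((xs - np.round(disparity_lr)).astype(int), 0, w - 1)
    back = disparity_rl[np.arange(h)[:, None], x_in_right]
    reliable = np.abs(disparity_lr - back) <= tol       # the "reliable map"

    revised = depth.copy()
    pad = 2
    for y, x in zip(*np.nonzero(~reliable)):
        y0, y1 = max(0, y - pad), min(h, y + pad + 1)
        x0, x1 = max(0, x - pad), min(w, x + pad + 1)
        neigh = depth[y0:y1, x0:x1][reliable[y0:y1, x0:x1]]
        if neigh.size:
            revised[y, x] = np.median(neigh)
    return revised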
20120287240CAMERA CALIBRATION USING AN EASILY PRODUCED 3D CALIBRATION PATTERN - A system for computing one or more calibration parameters of a camera is disclosed. The system comprises a processor and a memory. The processor is configured to provide a first object either marked with or displaying three or more fiducial points. The fiducial points have known 3D positions in a first object reference frame. The processor is further configured to provide a second object either marked with or displaying three or more fiducial points. The fiducial points have known 3D positions in a second object reference frame. The processor is further configured to place the first object and the second object in a fixed position such that the fiducial point positions of the first and second objects are non-planar. The processor is further configured to compute one or more calibration parameters of the second camera using computations based on images taken of the fiducials.11-15-2012
20110205339NONDIFFRACTING BEAM DETECTION DEVICES FOR THREE-DIMENSIONAL IMAGING - Embodiments of the present invention relate to a nondiffracting beam detection module for generating three-dimensional image data that has a surface layer having a first surface and a light transmissive region, a microaxicon, and a light detector. The microaxicon receives light through the light transmissive region from outside the first surface and generates one or more detection nondiffracting beams based on the received light. The light detector receives the nondiffracting beams and generates three-dimensional image data associated with an object located outside the first surface based on the one or more detection nondiffracting beams received. In some cases, the light detector can localize a three-dimensional position on the object associated with each detection nondiffracting beam received. In other cases, the light detector can determine perspective projections based on the detection nondiffracting beams received and generates the three-dimensional image data, using tomography, based on the determined perspective projections.08-25-2011
20110205338Apparatus for estimating position of mobile robot and method thereof - An apparatus and method for estimating the position of a mobile robot capable of reducing the time required to estimate the position is provided. The mobile robot position estimating apparatus includes a range data acquisition unit configured to acquire three-dimensional (3D) point cloud data, a storage unit configured to store a plurality of patches, each including points around a feature point which is extracted from previously acquired 3D point cloud data, and a position estimating unit configured to estimate the position of the mobile robot by tracking the plurality of patches from the acquired 3D point cloud data.08-25-2011
20120194644Mobile Camera Localization Using Depth Maps - Mobile camera localization using depth maps is described for robotics, immersive gaming, augmented reality and other applications. In an embodiment a mobile depth camera is tracked in an environment at the same time as a 3D model of the environment is formed using the sensed depth data. In an embodiment, when camera tracking fails, this is detected and the camera is relocalized either by using previously gathered keyframes or in other ways. In an embodiment, loop closures are detected in which the mobile camera revisits a location, by comparing features of a current depth map with the 3D model in real time. In embodiments the detected loop closures are used to improve the consistency and accuracy of the 3D model of the environment.08-02-2012
20110199461FLOW LINE PRODUCTION SYSTEM, FLOW LINE PRODUCTION DEVICE, AND THREE-DIMENSIONAL FLOW LINE DISPLAY DEVICE - A motion locus creation system which is capable of displaying the trajectory of movement of an object to be tracked in an understandable way even if using no 3D model information. A camera unit forms a detection flag indicating whether or not the object to be tracked has been able to be detected from a captured image. A motion locus-type selection section determines the display type of a motion locus according to the detection flag. A motion locus creation section produces a motion locus according to coordinate data acquired by a tag reader section and a motion locus-type instruction signal selected by the motion locus-type selection section.08-18-2011
20110199460GLASSES FOR VIEWING STEREO IMAGES - Spectacles are disclosed for use in viewing images or videos, and include at least two lenses each having an adjustable optical property and an image capture device associated with the spectacles for receiving a digital image comprising a plurality of digital image channels each containing multiple pixels. The spectacles further include the image capture device for capturing a digital image and a processor for computing a feature vector from pixel values of the digital image channels wherein the feature vector includes information that would indicate whether the image is an anaglyph or a non-anaglyph image.08-18-2011
20110267430Detection Device of Planar Area and Stereo Camera System - An object of the present invention is to provide a detection device of a planar area, which suppresses matching of luminance values in areas other than a planar area, and non-detection and erroneous detection of the planar area arising from an error in projection transform.11-03-2011
20110267431METHOD AND APPARATUS FOR DETERMINING THE 3D COORDINATES OF AN OBJECT - In a method of determining the 3D coordinates of the surface (11-03-2011
20110267428SYSTEM AND METHOD FOR MAPPING A TWO-DIMENSIONAL IMAGE ONTO A THREE-DIMENSIONAL MODEL - In one embodiment, a system includes a turbine comprising multiple components in fluid communication with a working fluid. The system also includes an imaging system in optical communication with at least one component. The imaging system is configured to receive a two-dimensional image of the at least one component during operation of the turbine, and to map the two-dimensional image onto a three-dimensional model of the at least one component to establish a composite model.11-03-2011
20120287246IMAGE PROCESSING APPARATUS CAPABLE OF DISPLAYING IMAGE INDICATIVE OF FACE AREA, METHOD OF CONTROLLING THE IMAGE PROCESSING APPARATUS, AND STORAGE MEDIUM - An image processing apparatus capable of appropriately displaying a face frame in a manner superimposed on a three-dimensional video image. In a three-dimensional photography image pickup apparatus as the image processing apparatus, two video images are acquired by shooting an object, and a face area is detected in each of the two video images. The face area detected in one of the two video images and the face area detected in the other video image are associated with each other. The three-dimensional photography image pickup apparatus generates face area-related information including positions on a display panel where face area images are to be displayed. The face area images are generated according to the face area-related information. The two video images are combined with the respective face area images, and the combined video images are output to the display panel.11-15-2012
20080278570Single-lens, single-sensor 3-D imaging device with a central aperture for obtaining camera position - A device and method for three-dimensional (3-D) imaging using a defocusing technique is disclosed. The device comprises a lens, a central aperture located along an optical axis for projecting an entire image of a target object, at least one defocusing aperture located off of the optical axis, a sensor operable for capturing electromagnetic radiation transmitted from an object through the lens and the central aperture and the at least one defocusing aperture, and a processor communicatively connected with the sensor for processing the sensor information and producing a 3-D image of the object. Different optical filters can be used for the central aperture and the defocusing apertures respectively, whereby a background image produced by the central aperture can be easily distinguished from defocused images produced by the defocusing apertures.11-13-2008
201202936273D IMAGE INTERPOLATION DEVICE, 3D IMAGING APPARATUS, AND 3D IMAGE INTERPOLATION METHOD - A 3D image interpolation device performs frame interpolation on 3D video. The 3D image interpolation device includes: a range image interpolation unit that generates at least one interpolation range image to be interpolated between a first range image indicating a depth of a first image included in the 3D video and a second range image indicating a depth of a second image included in the 3D video; an image interpolation unit that generates at least one interpolation image to be interpolated between the first image and the second image; and an interpolation parallax image generation unit that generates, based on the interpolation image, at least one pair of interpolation parallax images having parallax according to a depth indicated by the interpolation range image.11-22-2012
20120293623METHOD AND SYSTEM FOR INSPECTING SMALL MANUFACTURED OBJECTS AT A PLURALITY OF INSPECTION STATIONS AND SORTING THE INSPECTED OBJECTS - A method and system for inspecting small, manufactured objects at a plurality of inspection stations and sorting the inspected objects are provided. Coins, coin blanks, tablets or pills are fed from a centrifugal feeder and conveyed or transferred by a transfer subsystem. The objects are spaced at equal intervals during conveyance to provide a “metering effect” which allows the proper spacing between objects for inspection and rejection of defects. The inspection stations may include imaging assemblies in the form of conventional cameras and/or three-dimensional sensors such as triangulation or confocal sensors. The inspection stations may include a circumference vision station and/or an eddy current station. Circumferential defects (like in edge lettering) on coins or rim defects on pills can be detected at the circumference vision station by another imaging assembly. Metal chips, foreign metallic debris, etc. in or on the tablets/pills can be detected at the eddy current station.11-22-2012
20120293629IRIS SCANNING APPARATUS EMPLOYING WIDE-ANGLE CAMERA, FOR IDENTIFYING SUBJECT, AND METHOD THEREOF - Embodiments provide an iris scanning apparatus for identifying a subject, employing a wide-angle image collector, and a method thereof. A wide angle camera is employed in the iris scanning apparatus to allow a user to easily locate a small eye region of a subject without having to check back and forth between an image display and the subject's face. The apparatus and method are also capable of measuring the distance to the subject's eye and displaying the distance information on the image display, and informing the user as to whether the eye of the subject is within operating range of the iris scanning apparatus. Also, iris scanning is automatically performed without the user's input when an eye is positioned within operating range, and is not performed if an image captured by the iris scanning apparatus does not contain an eye region, in order to prevent erroneous operation.11-22-2012
20080309754Systems and Methods for Displaying Three-Dimensional Images - Systems and methods for displaying three-dimensional (3D) images are described. In particular, the systems can include a display block made from a transparent material with optical elements three-dimensionally disposed therein. Each optical element becomes luminous when illuminated by a light ray. The systems can also include a computing device configured to generate two-dimensional (2D) images formatted to create 3D images when projected on the display block, by a video projector coupled to the computing device. The video projector is configured to project the 2D images on the block to create the 3D images by causing a set of the passive optical elements to become luminous. Various other systems and methods are described for displaying 3D images.12-18-2008
20120033046IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, AND PROGRAM - Provided is an image processing apparatus including: an acquisition unit which acquires stereoscopic image information used for displaying a stereoscopic image on a display unit; a controller which performs one of a first control of allowing the stereoscopic image to be displayed as a planar image on the display unit and a second control of performing an image process on the stereoscopic image so that a parallax direction of the stereoscopic image and a parallax direction of the display unit are coincident with each other and allowing the stereoscopic image, which is subject to the image process, to be displayed on the display unit in the case where a parallax direction of the stereoscopic image displayed on the display unit and a parallax direction of the display unit are not coincident with each other based on the stereoscopic image information.02-09-2012
20100265316THREE-DIMENSIONAL MAPPING AND IMAGING - Imaging apparatus includes an illumination subassembly, which is configured to project onto an object a pattern of monochromatic optical radiation in a given wavelength band. An imaging subassembly includes an image sensor, which is configured both to capture a first, monochromatic image of the pattern on the object by receiving the monochromatic optical radiation reflected from the object and to capture a second, color image of the object by receiving polychromatic optical radiation, and to output first and second image signals responsively to the first and second images, respectively. A processor is configured to process the first and second signals so as to generate and output a depth map of the object in registration with the color image.10-21-2010
20120293628CAMERA INSTALLATION POSITION EVALUATING METHOD AND SYSTEM - A camera installation position evaluating system includes a processor, the processor executing a process including setting a virtual plane orthogonal to the optic axis of a camera mounted on a camera mounted object, generating virtually a camera image to be captured by the camera, with use of data about a three-dimensional model of the camera mounted object, data about the virtual plane set by the setting and parameters of the camera, and computing a boundary between an area of the three-dimensional model of the camera mounted object and an area of the virtual plane set by the setting, on the camera image generated by the generating. Accordingly, the camera installation position evaluating system is able to quantitatively obtain the camera's view range at the present camera installation position based on the computed boundary.11-22-2012
20130021444CAMERA SYSTEM WITH COLOR DISPLAY AND PROCESSOR FOR REED-SOLOMON DECODING - A handheld digital camera device including: a digital camera unit having a first image sensor for capturing images and a color display for displaying captured images to a user; an integral processor configured for: controlling operation of the first image sensor and color display; decoding an imaged coding pattern printed on a substrate, the printed coding pattern employing Reed-Solomon encoding; and performing an action in the handheld digital camera device based on the decoded coding pattern. The decoding includes the steps of: detecting target structures defining an extent of a data area; determining the data area using the detected target structures; and Reed-Solomon decoding the coding pattern contained in the determined data area.01-24-2013
20080239063STEREOSCOPIC OBSERVATION SYSTEM - A stereoscopic observation system includes a stereoscopic image pick up apparatus to pick up left and right images at an inward angle, a stereoscopic image display apparatus to transmit the left and right images picked up by the stereoscopic image pick up apparatus to an observer so that the images are stereoscopically observed at a convergence angle, a convergence angle change portion provided in the stereoscopic image display apparatus and to change the convergence angle, a recognition portion to recognize the stereoscopic image pick up apparatus, and a control portion to control the convergence angle change portion on the basis of the result of the recognition in the recognition portion so that the convergence angle is substantially equal to the inward angle.10-02-2008
20080284842STEREO VIDEO SHOOTING AND VIEWING DEVICE - A stereo video shooting and viewing device includes: a body, having two groups of eyepieces spaced apart from each other by a certain distance corresponding to a distance between two human eyes; two micro display screens, disposed on front ends of the eyepieces; two digital camera lenses, disposed on an outer side of the body, spaced apart from each other by a certain distance corresponding to the distance between two human eyes, and used for synchronously capturing images with a visual angle difference corresponding to that of the human eyes; and a main control unit (MCU), connected to the two micro display screens and the two digital camera lenses, and used for processing the images synchronously captured by the two digital camera lenses and image signals received from the exterior, and displaying the images on the two micro display screens separately. In this embodiment, the images with the visual angle difference corresponding to that of the human eyes captured by the camera lenses are separately displayed on the micro display screens, so as to form a stereo image when viewed by human eyes, thereby having the advantages of simple structure and vivid stereoscopic effects.11-20-2008
20110007136Image signal processing apparatus and image display - An image signal processing apparatus and an image display which are capable of achieving stereoscopic image display with a more natural sense of depth are provided. The image signal processing apparatus includes: a first motion vector detection section and an information obtaining section. The first motion vector detection section detects one or more two-dimensional motion vectors as motion vectors along an X-Y plane of an image, from an image-for-left-eye and an image-for-right-eye which have a parallax therebetween. The information obtaining section obtains, based on the detected two-dimensional motion vectors, information pertaining to a Z-axis direction. The Z-axis direction is a depth direction in a stereoscopic image formed with the image-for-left-eye and the image-for-right-eye.01-13-2011
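A sketch of the step from two-dimensional left/right motion vectors to depth, under the usual rectified-stereo assumption that the horizontal component of the left-to-right vector is the disparity; the focal length and baseline parameters are placeholders, not values from the application.

import numpy as np

def depth_from_lr_motion(motion_vectors, focal_px, baseline_m):
    """Derive depth (Z-axis information) from per-pixel (dx, dy) L-to-R motion vectors.

    `motion_vectors` is an (H, W, 2) array; for a rectified pair the horizontal
    component is the disparity, and Z = f * B / disparity.
    """
    disparity = np.abs(motion_vectors[..., 0])
    disparity = np.where(disparity < 1e-3, np.nan, disparity)   # avoid divide-by-zero
    return focal_px * baseline_m / disparity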
20100141739STEREO VIDEO MICROSCOPE SYSTEM - A stereo video microscope system (06-10-2010
20120092456VIDEO SIGNAL PROCESSING DEVICE, VIDEO SIGNAL PROCESSING METHOD, AND COMPUTER PROGRAM - A video signal processing device which includes: a stereoscopic image input unit which alternately inputs a video frame for a left eye and a video frame for a right eye, in a time sharing manner; a plane memory which maintains graphic data which overlaps with the video frame; a read phase addition unit which gives a phase difference when reading the graphic data from the plane memory at the time of displaying the video frame for the left eye and the video frame for the right eye; and a video overlapping unit which overlaps each graphic data of which a read phase is provided with a difference, with each of the video frame for the left eye and the video frame for the right eye.04-19-2012
20130215231LIGHT FIELD IMAGING DEVICE AND IMAGE PROCESSING DEVICE - This image capture device includes: an image sensor 08-22-2013
20130215228METHOD AND APPARATUS FOR ROBUSTLY COLLECTING FACIAL, OCULAR, AND IRIS IMAGES USING A SINGLE SENSOR - The present invention relates to a method and apparatus for standoff facial and ocular acquisition. Embodiments of the invention address the problems of atmospheric turbulence, defocus, and field of view in a way that minimizes the need for additional hardware. One embodiment of a system for acquiring an image of a facial feature of a subject includes a single wide field of view sensor configured to acquire a plurality of images over a large depth of field containing the subject and a post processor coupled to the single sensor and configured to synthesize the image of the facial feature from the plurality of images.08-22-2013
20130120534DISPLAY DEVICE, IMAGE PICKUP DEVICE, AND VIDEO DISPLAY SYSTEM - A display device comprising a storage unit that stores a video signal of a subject, image capturing position information, and shooting direction information, a display unit, a display posture information acquiring unit, a calculating unit that calculates a relative positional relation of a plurality of image capturing positions in a stereoscopic space based on the image capturing position information stored in the storage unit, and calculates a relative directional relation of a plurality of shooting directions in the display direction of the display unit based on the shooting direction information stored in the storage unit and the display direction information acquired by the display posture information acquiring unit, and a control unit that controls the display unit to display a rotation video obtained by rotating the video of the subject corresponding to the video signal stored in the storage unit, according to the relative directional relation calculated by the calculating unit.05-16-2013
20130120535THREE-DIMENSIONAL IMAGE PROCESSING APPARATUS AND ELECTRIC POWER CONTROL METHOD OF THE SAME - A three-dimensional image processing apparatus and a method of controlling power of same are provided. The three-dimensional image processing apparatus may include a display, a three-dimensional image filter disposed a prescribed distance from the display to adjust optical paths of the displayed view images, a camera configured to capture an image of a user, an ambient light sensor, and a controller configured to control the view images, the three-dimensional image filter, or the camera. The controller may determine a position of the user based on the captured image and adjust a perceived three-dimensional view of the view images based on the determined position of the user. Moreover, the controller may control an operational state of the camera and the at least one process based on the determined position of the user or a detected amount of ambient light.05-16-2013
20130120536Optical Self-Diagnosis of a Stereoscopic Camera System - The present invention relates to a method for the optical self-diagnosis of a camera system and to a camera system for carrying out the method. The method comprises recording stereo images obtained from in each case at least two partial images (05-16-2013
201102053403D TIME-OF-FLIGHT CAMERA SYSTEM AND POSITION/ORIENTATION CALIBRATION METHOD THEREFOR - A camera system comprises a 3D TOF camera for acquiring a camera-perspective range image of a scene and an image processor for processing the range image. The image processor contains a position and orientation calibration routine implemented therein in hardware and/or software, which position and orientation calibration routine, when executed by the image processor, detects one or more planes within a range image acquired by the 3D TOF camera, selects a reference plane among the at least one or more planes detected and computes position and orientation parameters of the 3D TOF camera with respect to the reference plane, such as, e.g., elevation above the reference plane and/or camera roll angle and/or camera pitch angle.08-25-2011
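A hedged sketch of the plane-based calibration idea: fit a dominant plane to the ToF point cloud with RANSAC, take it as the reference (ground) plane, and read elevation, pitch, and roll off the plane parameters. The RANSAC settings and the sign conventions for the camera axes (x right, y down, z forward) are assumptions and would need adapting to the actual coordinate frame.

import numpy as np

def calibrate_from_ground_plane(points, iters=200, inlier_tol=0.02, rng=None):
    """Estimate camera elevation, pitch and roll from an (N, 3) ToF point cloud."""
    if rng is None:
        rng = np.random.default_rng(0)

    best_inliers, best_model = 0, None
    for _ in range(iters):
        sample = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        if np.linalg.norm(n) < 1e-9:
            continue                                  # degenerate (collinear) sample
        n = n / np.linalg.norm(n)
        d = -n @ sample[0]
        inliers = np.abs(points @ n + d) < inlier_tol
        if inliers.sum() > best_inliers:
            best_inliers, best_model = inliers.sum(), (n, d)
    if best_model is None:
        raise ValueError("no plane found")

    n, d = best_model
    if n[1] > 0:                                      # orient the normal towards the camera (up)
        n, d = -n, -d
    elevation = abs(d)                                # camera height above the reference plane
    pitch = np.arctan2(-n[2], -n[1])                  # tilt of the optical axis w.r.t. the plane
    roll = np.arctan2(n[0], -n[1])                    # sideways tilt; signs depend on axis convention
    return elevation, pitch, roll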
20120140039RUNNING-ENVIRONMENT RECOGNITION APPARATUS - A running-environment recognition apparatus includes an information recognition section mounted in a vehicle and configured to recognize an information of at least a frontward region of the vehicle relative to a traveling direction of the vehicle; and a road-surface calculating section configured to calculate a road surface of a traveling road and a portion lower than the road surface in the frontward region of the vehicle, from the information recognized by the information recognition section.06-07-2012
20120069150IMAGE PROJECTION KIT AND METHOD AND SYSTEM OF DISTRIBUTING IMAGE CONTENT FOR USE WITH THE SAME - An image projection kit and an imagery content distribution system and method. In one aspect, the invention is a method of distributing projection clip files and/or displaying imagery associated with projection clip files on an architecture comprising: a) storing a plurality of projection clip files on a server that is accessible via a wide area network; b) authenticating a user's identity prior to allowing downloading of projection clip files stored on the server; c) identifying the projection clip files downloaded by the authenticated user; and d) charging the user a fee for the projection clip files downloaded by the user.03-22-2012
20120069149PHOTOGRAPHING DEVICE AND CONTROLLING METHOD THEREOF, AND THREE-DIMENSIONAL INFORMATION MEASURING DEVICE - To make it easy to recognize that pixel resolving power is changed during photography operation.03-22-2012
20120069148IMAGE PRODUCTION DEVICE, IMAGE PRODUCTION METHOD, PROGRAM, AND STORAGE MEDIUM STORING PROGRAM - The image production device includes a deviation detecting device and an information production section. The deviation detecting device is configured to calculate the amount of relative deviation between the left-eye image data and the right-eye image data included in the input image data. The information production section is configured to produce evaluation information related to the suitability of three-dimensional imaging, based on reference information produced by the deviation detecting device from the calculated relative deviation amount.03-22-2012
20090185029CAMERA, IMAGE DISPLAY DEVICE, AND IMAGE STORAGE DEVICE - Control information distinguishing whether an image is 2D or 3D is added to the image and then stored in a storage part. An output controlling part is provided which outputs the control information added to the image as a control signal when reading the image from the storage part and outputting it. As a result, when 2D images and 3D images which are stored in a mixed manner are replayed, it can be determined whether an output image is a 2D image or a 3D image according to the control signal, so that a suitable image can be displayed according to an output end device.07-23-2009
20120287239Robot Vision With Three Dimensional Thermal Imaging - A robot vision system provides images to a remote robot viewing station, using a single long wave infrared camera on-board the robot. An optical system, also on-board the robot, has mirrors that divide the camera's field of view so that the camera receives a stereoscopic image pair. An image processing unit at the viewing station receives image data from the camera and processes the image data to provide a stereoscopic image.11-15-2012
20110141238STEREO IMAGE DATA TRANSMITTING APPARATUS, STEREO IMAGE DATA TRANSMITTING METHOD, STEREO IMAGE DATA RECEIVING APPARATUS, AND STEREO IMAGE DATA RECEIVING METHOD - [Object] To maintain perspective consistency among individual objects in an image in display of superimposition information.06-16-2011
20090086015SITUATIONAL AWARENESS OBSERVATION APPARATUS - A positionable sensor assembly for a real-time remote situation awareness apparatus includes a camera for capturing an image of a scene, a plurality of first acoustic transducers for capturing an audio input signal from an environment including the scene, at least one second acoustic transducer excitable to emit an audio output signal, a support structure for supporting the camera, the plurality of first acoustic transducers and the at least one second acoustic transducer, the support structure connected to a base, moveably at least about an axis of rotation relative to the base by a remote controllable support structure positioning actuator, and a transmission unit adapted to transfer in real-time between the transducer assembly and a remote location a captured image of the scene, a captured audio input signal from the environment, an excitation signal to the second acoustic transducer, and a control signal to the support structure positioning actuator.04-02-2009
20110228051Stereoscopic Viewing Comfort Through Gaze Estimation - A method of improving stereo video viewing comfort is provided that includes capturing a video sequence of eyes of an observer viewing a stereo video sequence on a stereoscopic display, estimating gaze direction of the eyes from the video sequence, and manipulating stereo images in the stereo video sequence based on the estimated gaze direction, whereby viewing comfort of the observer is improved.09-22-2011
20130215232STEREO IMAGE PROCESSING DEVICE AND STEREO IMAGE PROCESSING METHOD - An image segmenting unit (08-22-2013
20130215230Augmented Reality System Using a Portable Device - A system and a method are disclosed for capturing real world objects and reconstructing a three-dimensional representation of real world objects. The position of the viewing system relative to the three-dimensional representation is calculated using information from a camera and an inertial motion unit. The position of the viewing system and the three-dimensional representation allow the viewing system to move relative to the real objects and enables virtual content to be shown with collision and occlusion with real world objects.08-22-2013
201002653183D Biplane Microscopy - A microscopy system is configured for creating 3D images from individually localized probe molecules. The microscopy system includes a sample stage, an activation light source, a readout light source, a beam splitting device, at least one camera, and a controller. The activation light source activates probes of at least one probe subset of photo-sensitive luminescent probes, and the readout light source causes luminescence light from the activated probes. The beam splitting device splits the luminescence light into at least two paths to create at least two detection planes that correspond to the same or different number of object planes of the sample. The camera detects simultaneously the at least two detection planes, the number of object planes being represented in the camera by the same number of recorded regions of interest. The controller is programmable to combine a signal from the regions of interest into a 3D data.10-21-2010
20090015663METHOD AND SYSTEM FOR CONFIGURING A MONITORING DEVICE FOR MONITORING A SPATIAL AREA - A monitoring device for monitoring a spatial area comprises at least one image recording unit. A three-dimensional image of the spatial area is recorded and displayed in order to configure the monitoring device. A configuration plane is defined using a plurality of spatial points which have been determined within the three-dimensional image. Subsequently, at least one variable geometry element is defined relative to the configuration plane. A data record which represents a transformation of the geometry element into the spatial area is generated and transferred to the monitoring device.01-15-2009
20080316299VIRTUAL STEREOSCOPIC CAMERA - The subject matter relates to a virtual stereoscopic camera for displaying 3D images. In one implementation, left and right perspectives of a source are captured by image capturing portions. The image capturing portions include an array of image capturing elements that are interspersed with an array of display elements in a display area. The image capturing elements are confined within limited portions of the display area and are separated by an offset distance. The captured left and right perspectives are synthesized so as to generate an image that is capable of being viewed in 3D.12-25-2008
20110141237DEPTH MAP GENERATION FOR A VIDEO CONVERSION SYSTEM - In accordance with at least some embodiments of the present disclosure, a process for generating a depth map for converting a two-dimensional (2D) image to a three-dimensional (3D) image is described. The process may include generating a depth gradient map from the 2D image, wherein the depth gradient map is configured to associate one or more edge counts with one or more depth values, extracting an image component from the 2D image, wherein the image component is associated with a color component in a color space, determining a set of gains to adjust the depth gradient map based on the image component, and generating the depth map by performing depth fusion based on the depth gradient map and the set of gains.06-16-2011
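One way to read the depth-map pipeline of 20110141237 above is as a row-wise prior built from accumulated edge counts, blended with a gain derived from a single colour component. The numpy sketch below follows that reading; the threshold, the accumulation direction and the blending weight are assumptions, not values from the application.

```python
import numpy as np

def depth_gradient_map(gray, edge_threshold=12.0):
    """Row-wise depth prior for a (H, W) luminance image: larger accumulated
    edge counts map to larger depth values, mirroring the abstract's
    edge-count / depth-value association."""
    g = gray.astype(float)
    edges = np.abs(np.diff(g, axis=0, prepend=g[:1])) > edge_threshold
    profile = np.cumsum(edges.sum(axis=1))          # accumulated edge count per row
    profile = profile / (profile[-1] + 1e-6)        # normalise to [0, 1]
    return np.repeat(profile[:, None], gray.shape[1], axis=1)

def fuse_depth(gradient_map, color_component, gain=0.3):
    """Depth fusion: blend the row-wise prior with a per-pixel gain taken from
    one colour component (e.g. Cr or R), both scaled to [0, 1]."""
    component = color_component.astype(float) / 255.0
    return np.clip((1.0 - gain) * gradient_map + gain * component, 0.0, 1.0)
```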
20120105591METHOD AND APPARATUS FOR HIGH-SPEED CALIBRATION AND RECTIFICATION OF A STEREO CAMERA - A calibration and rectification method includes arranging a monitor vertically relative to the optical axis of the stereo camera; displaying 3D patterns, similar to patterns obtained by projecting pattern images of various postures produced by a panel virtually located in front of the stereo camera, onto the monitor through a 3D graphical technique; and the stereo camera acquiring the 3D patterns displayed on the monitor to perform calibration and rectification, thereby correcting images of the stereo camera.05-03-2012
20090244261METHOD FOR THE THREE-DIMENSIONAL MEASUREMENT OF FAST-MOVING OBJECTS - Light sectioning or fringe projection methods are used to measure the surface of objects (10-01-2009
20120194646Method of Enhancing 3D Image Information Density - A method of enhancing 3D image information density, comprising providing a confocal fluorescent microscope and a rotational stage. 3D image samples at different angles are collected. A deconvolution process of the 3D image samples by a processing unit is performed. A registration process of the deconvoluted 3D image samples by the processing unit is performed. An interpolation process of the registered 3D image samples by the processing unit is performed to output a 3D image in high resolution.08-02-2012
20120169847ELECTRONIC DEVICE AND METHOD FOR PERFORMING SCENE DESIGN SIMULATION - A method performs scene design simulation using an electronic device. The method obtains a scene image of a specified scene, determines edge pixels of the scene image, fits the edge pixels to a plurality of feature lines, and determines a part of the feature lines to obtain an outline of the scene image. The method further determines a vanishing point and a plurality of sight lines of the specified scene to create a 3D model of the specified scene, and adjusts a display status of a received virtual 3D image in the 3D model of the specified scene according to the vanishing point, the sight lines, and an actual size of the specified scene.07-05-2012
20120169846METHOD FOR CAPTURING THREE DIMENSIONAL IMAGE - A method for capturing a three dimensional (3D) image by using a single-lens camera is provided. First, a first image is captured. According to a focus distance of the single-lens camera in capturing the first image and an average distance between two human eyes, an overlap width between the first image and a second image required for capturing the second image of the 3D image is calculated. Then, the first image and a real-time image captured by the single-lens camera are displayed, and an overlap area is marked on the first image according to the calculated overlap width. A horizontal shift of the single-lens camera is adjusted, to locate the real-time image in the overlap area. Finally, the real-time image is captured as the second image, and the first and second images are output as the 3D image.07-05-2012
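The overlap-width computation in 20120169846 above lends itself to a simple geometric reading: the scene width covered at the focus distance follows from the horizontal field of view, and shifting the camera sideways by the average eye separation removes roughly that much of it from the shared region. The sketch below implements that reading; the field of view, eye separation and the exact formula are illustrative assumptions, not the application's calculation.

```python
import math

def overlap_width_pixels(focus_distance_m, image_width_px,
                         horizontal_fov_deg=60.0, eye_distance_m=0.065):
    """Pixel width of the region the first image must share with the second image."""
    # Scene width covered at the focus distance for the given field of view.
    scene_width = 2.0 * focus_distance_m * math.tan(math.radians(horizontal_fov_deg) / 2.0)
    # Shifting the camera sideways by the eye distance removes that much of the
    # scene from the shared region; the rest must overlap between the two shots.
    overlap_fraction = max(0.0, (scene_width - eye_distance_m) / scene_width)
    return int(round(image_width_px * overlap_fraction))

# Subject focused at 1.2 m, 4000 px wide frame: almost the whole frame must overlap.
print(overlap_width_pixels(1.2, 4000))   # -> 3812
```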
20100149315Apparatus and method of optical imaging for medical diagnosis - Described herein is a novel 3-D optical imaging system based on active stereo vision and motion tracking for tracking the motion of a patient and for registering the time-sequenced images of suspicious lesions recorded during endoscopic or colposcopic examinations. The system quantifies the acetic acid induced optical signals associated with early cancer development. The system includes at least one illuminating light source for generating light illuminating a portion of an object, at least one structured light source for projecting a structured light pattern on the portion of the object, at least one camera for imaging the portion of the object and the structured light pattern, and means for generating a quantitative measurement of an acetic acid-induced change of the portion of the object.06-17-2010
20100149316System for accurately repositioning imaging devices - The invention teaches a method of automatically creating a 3D model of a scene of interest from an acquired image, and the use of such a 3D model for enabling a user to determine real world distances from a displayed image of the scene of interest.06-17-2010
20110228050System for Positioning Micro Tool of Micro Machine and Method Thereof - The present invention discloses a system for positioning a micro tool of a micro machine, wherein the system comprises a stereo-photographing device, an image analysis system, a PC-based controller, and a micro machine. The image analysis system analyzes the positions of the micro tool of the micro machine and the work piece by using an algorithm, and the micro tool is then positioned to a pre-determined location. A method for positioning the micro tool of the micro machine is also provided in this invention.09-22-2011
20100177166Creating and Viewing Three Dimensional Virtual Slides - Systems and methods for creating and viewing three dimensional virtual slides are provided. One or more microscope slides are positioned in an image acquisition device that scans the specimens on the slides and makes two dimensional images at a medium or high resolution. These two dimensional images are provided to an image viewing workstation where they are viewed by an operator who pans and zooms the two dimensional image and selects an area of interest for scanning at multiple depth levels (Z-planes). The image acquisition device receives a set of parameters for the multiple depth level scan, including a location and a depth. The image acquisition device then scans the specimen at the location in a series of Z-plane images, where each Z-plane image corresponds to a depth level portion of the specimen within the depth parameter.07-15-2010
20100177164Method and System for Object Reconstruction - A system and method are presented for use in the object reconstruction. The system comprises an illuminating unit, and an imaging unit (see FIG. 07-15-2010
20100157019CAMERA FOR RECORDING SURFACE STRUCTURES, SUCH AS FOR DENTAL PURPOSES - A 3-D camera for obtaining an image of at least one surface of at least one object. The camera comprises a light source, arranged to illuminate the object, wherein a light beam emitted from the light source defines a projection optical path. The camera also includes at least one first aperture having a first predetermined size, interposed in the projection optical path such that the light beam passes through it. An image sensor receives light back-scattered by the object, the back-scattered light defining an observation optical path. At least one second aperture having a second predetermined size, is interposed in the observation optical path such that the back-scattered light passes through it. In one example embodiment of the invention, the first predetermined size is greater than the second predetermined size, and at least one optic is arranged in both the projection and observation optical paths.06-24-2010
20120033044VIDEO DISPLAY SYSTEM, DISPLAY DEVICE AND SOURCE DEVICE - A video display system includes a source device for reproducing and outputting contents; and a display device for displaying contents which is output from the source device. Upon receiving a message for requesting display of a 3D video from the source device in a state of unreadiness to display the 3D video, the display device transmits a message for stopping reproduction of 3D contents to the source device. Upon receiving the message for stopping reproduction of 3D contents, the source device stops reproduction of the 3D contents. Upon completing preparations for displaying the 3D video, the display device transmits a message for reproducing the 3D contents to the source device. Upon receiving the message for reproducing the 3D contents, the source device reproduces and outputs the 3D contents.02-09-2012
20100188483Single camera device and method for 3D video imaging using a refracting lens - An example embodiment of the present invention may include an apparatus that captures 3D images having a lens barrel, including a lens disposed at a first end of the lens barrel, an image capture element at the second end of the lens barrel, and a refracting lens positioned along the optical axis of the lens barrel. The image capture device may have an adjustable active region, the adjustable active region being a region capable of capturing an image that is smaller than the total image capture area of the image capture element. The image capture element may capture images continuously at a predetermined frame rate. The image capture element may change the adjustable active region and the set of positioning elements may be adapted to continuously change the position of the refracting lens among a series of predetermined positions at a rate corresponding to the predetermined frame rate.07-29-2010
20100259599Display system and camera system - A display apparatus and an imaging apparatus constructed such that a high-resolution clear three-dimensional video image can be viewed from any direction. The display apparatus projects frame images, projected from a projector such as an electronic projector, to a video image projection surface of a three-dimensional screen through a polygonal mirror provided around the three-dimensional screen, thereby providing a polyhedral video image such as a three-dimensional image to a person viewing from around the video image projection surface. The three-dimensional screen has a view field angle limiting filter and a directional reflection screen. The view field angle filter limits the angle of a view field in the left/right direction, the angle being the angle of the projection on the video image projection surface (10-14-2010
20130215229REAL-TIME COMPOSITING OF LIVE RECORDING-BASED AND COMPUTER GRAPHICS-BASED MEDIA STREAMS - The present disclosure relates to providing composite media streams based on a live recording of a real scene and a synchronous virtual scene. A method for providing composite media streams comprises recording a real scene using a real camera; creating a three-dimensional representation of the real scene based on the recording; synchronizing a virtual camera inside a virtual scene with the real camera; generating a representation of the virtual scene using the virtual camera; compositing at least the three-dimensional representation of the real scene with the representation of the virtual scene to generate a media stream; and providing the media stream at the real camera. A camera device, a processing pipeline for compositing media streams, and a system are defined.08-22-2013
20100188484IMAGE DATA OBTAINING METHOD AND APPARATUS THEREFOR - An image data obtaining method and apparatus therefor, where the image data obtaining method involves determining an image-capturing mode from among a first image-capturing mode for capturing an image of a target subject by using a filter having a first area for transmitting light and a second area for blocking light, and a second image-capturing mode for capturing the image of the target subject without using the filter; capturing the image of the target subject by selectively using the filter according to the determined image-capturing mode; and processing captured image data.07-29-2010
20100188485LIVE CONCERT/EVENT VIDEO SYSTEM AND METHOD - One aspect of the invention is a method of providing video to attendees of a live concert. Video of different views of the live concert is captured. A plurality of video streams are provided to attendees of the live concert while the live concert is occurring. The plurality of digital video streams enable an attendee of the live concert to select which of the plurality of digital video streams to view using a portable digital device associated with that attendee such that the attendee may choose from among the different views of the live concert.07-29-2010
20100259597FACE DETECTION APPARATUS AND DISTANCE MEASUREMENT METHOD USING THE SAME - Provided are a face detection apparatus and a distance measurement method using the same. The face detection apparatus detects a face using left and right images which are acquired from a stereo camera. The face detection apparatus measures distance from the stereo camera to the face using an image frame which is provided from the stereo camera without a stereo matching process. Accordingly, the face detection apparatus simultaneously performs face detection and distance measurement even in a low-performance system.10-14-2010
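The distance measurement in 20100259597 above needs only the horizontal disparity of one detected face (for example from a Haar-cascade detector run on both views), not dense stereo matching. A minimal sketch of that depth-from-disparity step, with invented parameter names, is given below.

```python
def face_distance_m(face_center_left_x, face_center_right_x,
                    focal_length_px, baseline_m):
    """Depth of a face from the horizontal disparity of its detected centre
    in the left and right images, with no dense stereo matching."""
    disparity_px = float(face_center_left_x - face_center_right_x)
    if disparity_px <= 0.0:
        raise ValueError("the face should appear further right in the left image")
    return focal_length_px * baseline_m / disparity_px

# Face centre at x = 652 px (left) and x = 610 px (right), f = 800 px, 6 cm baseline:
print(face_distance_m(652, 610, 800, 0.06))   # ~1.14 m
```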
20100238271Sensing Apparatus and Method for Detecting a Three-Dimensional Physical Shape of a Body - A sensing device having a sensing arrangement with a sensing end (09-23-2010
20100225743Three-Dimensional (3D) Imaging Based on Motion Parallax - Techniques and technologies are described herein for motion parallax three-dimensional (3D) imaging. Such techniques and technologies do not require special glasses, virtual reality helmets, or other user-attachable devices. More particularly, some of the described motion parallax 3D imaging techniques and technologies generate sequential images, including motion parallax depictions of various scenes derived from clues in views obtained of or created for the displayed scene.09-09-2010
20110058022APPARATUS AND METHOD FOR PROCESSING IMAGE DATA IN PORTABLE TERMINAL - An apparatus and a method for processing image data in a portable terminal are provided. More specifically, an apparatus and a method are provided for reducing computation in similar block estimation of left and right image data when an image is compressed or a stereoscopic image is processed. The apparatus includes a camera for obtaining two or more image data; and a similar block detector for converting the obtained image data to binary data and estimating block similarity by comparing the converted binary data.03-10-2011
20110058021RENDERING MULTIVIEW CONTENT IN A 3D VIDEO SYSTEM - Disclosed are methods and apparatuses for multi-view video encoding, decoding and display. A depth map is provided for each of the available views. The depth maps of the available views are used to synthesize a target view for rendering an image from the perspective of the target view based on images of the available views.03-10-2011
20120242794PRODUCING 3D IMAGES FROM CAPTURED 2D VIDEO - A method of producing a stereo image from a temporal sequence of digital images, comprising: receiving a temporal sequence of digital images; analyzing pairs of digital images to produce corresponding stereo suitability scores, wherein the stereo suitability score for a particular pair of images is determined responsive to the relative positions of corresponding features in the particular pair of digital images; selecting a pair of digital images including a first image and a second image based on the stereo suitability scores; using a processor to analyze the selected pair of digital images to produce a motion consistency map indicating regions of consistent motion, the motion consistency map having an array of pixels; producing a stereo image pair including a left view image and a right view image by combining the first image and the second image responsive to the motion consistency map; and storing the stereo image pair in a processor-accessible memory.09-27-2012
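A plausible, much-simplified version of the stereo suitability score in 20120242794 above is sketched below: frame pairs whose matched features moved mostly horizontally, by a moderate and consistent amount, look like a sideways-translated camera and score highly. The weighting terms and constants are assumptions, not the scoring used in the application.

```python
import numpy as np

def stereo_suitability(pts_a, pts_b, target_disparity_px=20.0):
    """Score a candidate frame pair from matched feature positions (N x 2, pixels).

    High scores go to pairs whose matches moved mostly horizontally, by a
    moderate and consistent amount, i.e. something like a sideways baseline.
    """
    flow = np.asarray(pts_b, float) - np.asarray(pts_a, float)
    horizontal, vertical = flow[:, 0], flow[:, 1]
    disparity_term = np.exp(-((np.median(np.abs(horizontal)) - target_disparity_px) ** 2)
                            / (2.0 * 10.0 ** 2))
    vertical_penalty = np.exp(-np.mean(np.abs(vertical)) / 2.0)
    consistency = np.exp(-np.std(horizontal) / 10.0)
    return disparity_term * vertical_penalty * consistency

def select_best_pair(candidates):
    """candidates: iterable of (index_a, index_b, pts_a, pts_b); returns the best pair."""
    return max(candidates, key=lambda c: stereo_suitability(c[2], c[3]))
```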
20120242793DISPLAY DEVICE AND METHOD OF CONTROLLING THE SAME - Disclosed are a display device and a method of controlling the same. The display device and the method of controlling the same include a camera capturing a gesture made by a user, a display displaying a stereoscopic image, and a controller controlling presentation of the stereoscopic image in response to a distance between the gesture and the stereoscopic image on a virtual space and an approach direction of the gesture with respect to the stereoscopic image. Accordingly, the presentation of the stereoscopic image can be controlled in response to a distance and an approach direction with respect to the stereoscopic image.09-27-2012
20120242795DIGITAL 3D CAMERA USING PERIODIC ILLUMINATION - A method of operating a digital camera, includes providing a digital camera, the digital camera including a capture lens, an image sensor, a projector and a processor; using the projector to illuminate one or more objects with a sequence of patterns; and capturing a first sequence of digital images of the illuminated objects including the reflected patterns that have depth information. The method further includes using the processor to analyze the first sequence of digital images including the depth information to construct a second, 3D digital image of the objects; capturing a second 2D digital image of the objects and the remainder of the scene without the reflected patterns, and using the processor to combine the 2D and 3D digital images to produce a modified digital image of the illuminated objects and the remainder of the scene.09-27-2012
20120194647THREE-DIMENSIONAL MEASURING APPARATUS - A three-dimensional measuring apparatus includes: a projecting unit that projects a stripe to a projectable region which is a peripheral region of an intersection point on a measurement object when a vertical line is drawn down toward the measurement object; an imaging unit that includes a plurality of imaging regions at which the measurement object to which the stripe is projected is imaged within the projectable region; and a control unit that performs a process of three-dimensionally measuring the measurement object based on an image imaged by the imaging unit.08-02-2012
20120194645LIVING ROOM MOVIE CREATION - A system and method are disclosed for living room movie creation. Movies can be directed, captured, and edited using a system that includes a depth camera. A virtual movie set can be created by using ordinary objects in the living room as virtual props. The system is able to capture motions of actors using the depth camera and to generate a movie based thereon. Therefore, there is no need for the actors to wear any special markers to detect their motion. A director may view scenes from the perspective of a “virtual camera” and record those scenes for later editing.08-02-2012
20090322859Method and System for 3D Imaging Using a Spacetime Coded Laser Projection System - A desktop three-dimensional imaging system and method projects a modulated plane of light that sweeps across a target object while a camera is set to collect an entire pass of the modulated plane of light over the object in one image to create a line stripe pattern. A spacetime coding scheme is applied to the modulation controller whereby a plurality of images of line stripe patterns can be analyzed and decoded to yield a three-dimensional image of the target object in a reduced scan time and with better accuracy than existing close range scanners.12-31-2009
20130127996METHOD OF RECOGNIZING STAIRS IN THREE DIMENSIONAL DATA IMAGE - A method of recognizing stairs in a 3D data image includes an image acquirer that acquires a 3D data image of a space in which stairs are located. An image processor calculates a riser height between two consecutive treads of the stairs in the 3D data image, identifies points located between the two consecutive treads according to the calculated riser height, and detects a riser located between the two consecutive treads through the points located between the two consecutive treads. Then, the image processor calculates a tread depth between two consecutive risers of the stairs in the 3D data image, identifies points located between the two consecutive risers according to the calculated tread depth, and detects a tread located between the two consecutive risers through the points located between the two consecutive risers.05-23-2013
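To make the stair-recognition steps of 20130127996 above concrete, the sketch below clusters point heights into tread levels, takes consecutive level spacings as riser heights, and selects the points lying between two levels as riser candidates. It assumes the height axis is z and uses a crude histogram in place of a proper clustering; both are illustrative simplifications, not the application's procedure.

```python
import numpy as np

def tread_levels(points, bin_size_m=0.02, min_points=200):
    """Cluster point heights (z) into horizontal tread levels with a crude histogram;
    a real implementation would merge neighbouring bins properly."""
    z = points[:, 2]
    edges = np.arange(z.min(), z.max() + bin_size_m, bin_size_m)
    counts, edges = np.histogram(z, bins=edges)
    return np.array([0.5 * (edges[i] + edges[i + 1])
                     for i, c in enumerate(counts) if c >= min_points])

def riser_heights(levels):
    """Riser heights are the spacings between consecutive tread levels."""
    return np.diff(np.sort(levels))

def riser_points(points, lower_level_z, upper_level_z, margin_m=0.01):
    """Points lying strictly between two consecutive tread levels are riser candidates."""
    z = points[:, 2]
    mask = (z > lower_level_z + margin_m) & (z < upper_level_z - margin_m)
    return points[mask]
```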
20110032336BODY SCAN - A depth image of a scene may be received, observed, or captured by a device. The depth image may then be analyzed to determine whether the depth image includes a human target. For example, the depth image may include one or more targets including a human target and non-human targets. Each of the targets may be flood filled and compared to a pattern to determine whether the target may be a human target. If one or more of the targets in the depth image includes a human target, the human target may be scanned. A skeletal model of the human target may then be generated based on the scan.02-10-2011
20110032334PREPARING VIDEO DATA IN ACCORDANCE WITH A WIRELESS DISPLAY PROTOCOL - In general, techniques are described for preparing video data in accordance with a wireless display protocol. For example, a portable device comprising a module to store video data, a wireless display host module and a wireless interface may implement the techniques of this disclosure. The wireless display host module determines one or more display parameters of a three-dimensional (3D) display device external from the portable device and prepares the video data to generate 3D video data based on the determined display parameters. The wireless interface then wirelessly transmits the 3D video data to the external 3D display device. In this way, a portable device implements the techniques to prepare video data in accordance with a wireless display protocol.02-10-2011
20120033047IMAGING DEVICE - An imaging device is provided that comprises a movement detection component configured to detect movement of the imaging device based on a force imparted to the imaging device, an imaging component configured to produce image data by capturing a subject image, a movement vector detection component configured to detect a movement vector based on a plurality of sets of image data produced by the imaging component, and a three-dimensional NR component configured to reduce a noise included in first image data produced by the imaging component, based on second image data produced earlier than the first image data, wherein the three-dimensional NR component is configured to decide whether to correct the second image data in response to the detection result of the movement vector detection component, based on both the detection result of the movement detection component and the detection result of the movement vector detection component.02-09-2012
20120033045Multi-Path Compensation Using Multiple Modulation Frequencies in Time of Flight Sensor - A method to compensate for multi-path in time-of-flight (TOF) three dimensional (3D) cameras applies different modulation frequencies in order to calculate/estimate the error vector. Multi-path in 3D TOF cameras might be caused by one of the two following sources: stray light artifacts in the TOF camera systems and multiple reflections in the scene. The proposed method compensates for the errors caused by both sources by implementing multiple modulation frequencies.02-09-2012
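The key observation behind 20120033045 above is that the multi-path error vector rotates differently at different modulation frequencies, so two frequencies that should report the same range will disagree where multi-path is present. The sketch below shows only that per-pixel consistency check (phase to range at two assumed frequencies), not the full error-vector estimation.

```python
import numpy as np

C = 299_792_458.0   # speed of light, m/s

def phase_to_range(phase_rad, mod_freq_hz):
    """Single-frequency TOF range (ambiguity wrapping ignored)."""
    return C * phase_rad / (4.0 * np.pi * mod_freq_hz)

def two_frequency_check(phase_f1, phase_f2, f1_hz=20e6, f2_hz=30e6):
    """Ranges from two modulation frequencies, plus their disagreement, which grows
    with the multi-path error vector because it rotates differently at f1 and f2."""
    d1 = phase_to_range(phase_f1, f1_hz)
    d2 = phase_to_range(phase_f2, f2_hz)
    return d1, d2, abs(d1 - d2)

# A pixel whose two-frequency ranges disagree by centimetres is a multi-path suspect:
print(two_frequency_check(phase_f1=1.20, phase_f2=1.83))   # ~ (1.43 m, 1.46 m, 0.02 m)
```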
20110128352FAST 3D-2D IMAGE REGISTRATION METHOD WITH APPLICATION TO CONTINUOUSLY GUIDED ENDOSCOPY - A novel framework for fast and continuous registration between two imaging modalities is disclosed. The approach makes it possible to completely determine the rigid transformation between multiple sources at real-time or near real-time frame-rates in order to localize the cameras and register the two sources. A disclosed example includes computing or capturing a set of reference images within a known environment, complete with corresponding depth maps and image gradients. The collection of these images and depth maps constitutes the reference source. The second source is a real-time or near-real time source which may include a live video feed. Given one frame from this video feed, and starting from an initial guess of viewpoint, the real-time video frame is warped to the nearest viewing site of the reference source. An image difference is computed between the warped video frame and the reference image. The viewpoint is updated via a Gauss-Newton parameter update and certain of the steps are repeated for each frame until the viewpoint converges or the next video frame becomes available. The final viewpoint gives an estimate of the relative rotation and translation between the camera at that particular video frame and the reference source. The invention has far-reaching applications, particularly in the field of assisted endoscopy, including bronchoscopy and colonoscopy. Other applications include aerial and ground-based navigation.06-02-2011
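The registration loop in 20110128352 above alternates warping the live frame towards a reference viewpoint, differencing, and a Gauss-Newton parameter update. The sketch below keeps only that loop structure, with a plain 2-D translation as the warp and a finite-difference Jacobian; the patented method instead updates a rigid 3-D viewpoint using reference depth maps, so treat this purely as a structural illustration.

```python
import numpy as np
from scipy.ndimage import shift as nd_shift

def gauss_newton(residual_fn, p0, iterations=15, eps=0.25):
    """Generic Gauss-Newton loop with a finite-difference Jacobian."""
    p = np.array(p0, dtype=float)
    for _ in range(iterations):
        r = residual_fn(p)
        J = np.column_stack([(residual_fn(p + eps * e) - r) / eps for e in np.eye(len(p))])
        delta, *_ = np.linalg.lstsq(J, -r, rcond=None)   # solve J.delta ~ -r
        p += delta
        if np.linalg.norm(delta) < 1e-3:
            break
    return p

def register_translation(reference, frame):
    """Estimate the (dy, dx) shift warping `frame` onto `reference` by iterating
    warp -> image difference -> Gauss-Newton update, as in the abstract's loop."""
    ref = reference.astype(float)
    frm = frame.astype(float)
    def residual(p):
        return (ref - nd_shift(frm, p, order=1)).ravel()   # bilinear warp, then difference
    return gauss_newton(residual, p0=[0.0, 0.0])
```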
20110001794SYSTEM AND METHOD FOR SHAPE CAPTURING AS USED IN PROSTHETICS, ORTHOTICS AND PEDORTHICS - Systems and methods are provided to capture a digital 3D image of a portion of a subject's body. The systems and methods may be effective under static or dynamic conditions, either under the weight of a load or under non-weighted conditions. The system includes a grid of intersecting, flexible fibers arranged so as to achieve a variable surface contour. The surface contour of the grid conforms to and matches the surface contour of the subject when the grid covers a portion of the subject (i.e. residual limb, or deformity to correct). The coordinates of each point of intersection of two or more flexible fibers of the grid is recorded and produces a signal that generates a digital 3D image corresponding to the surface contour of the subject. The method includes covering a portion of the subject (i.e. residual limb or deformity to correct) with a grid of intersecting, flexible fibers and generating a digital three dimensional image. Optionally, the method may further include using the digital 3D image to create a prosthesis, orthosis, or foot orthosis having a surface contour matching the surface contour of the subject.01-06-2011
20110001793THREE-DIMENSIONAL SHAPE MEASURING APPARATUS, INTEGRATED CIRCUIT, AND THREE-DIMENSIONAL SHAPE MEASURING METHOD - It is possible to perform three-dimensional shape measurement with easy processing, regardless of whether an object is moving or not. An image capturing unit (01-06-2011
201102422813D DENTAL CAMERA FOR RECORDING SURFACE STRUCTURES OF AN OBJECT TO BE MEASURED BY MEANS OF TRIANGULATION - The invention relates to a 3D dental camera for recording surface structures of a measuring object (10-06-2011
20110025823Three-dimensional measuring apparatus - A three-dimensional measuring apparatus includes a measurement stage on which an object is placed, a reference scale member having a plurality of reference points, an imaging unit, a driving mechanism, a high brightness detecting unit, and a three-dimensional measuring unit. The imaging unit captures an optical image of the object and the optical images of the plurality of reference points in the same field of view. The high brightness detecting unit detects the brightest portion of the object at each of N relative movement positions of the imaging unit and detects a reference point indicating the maximum brightness among the plurality of reference points, from a plurality of images that is continuously captured by the imaging unit. The three-dimensional measuring unit sets the height of the brightest portion at each of the relative movement positions to a height associated with the detected reference point.02-03-2011
20110018967RECORDING OF 3D IMAGES OF A SCENE - A method of recording 3D images of a scene based on the time-of-flight principle comprises illuminating a scene by emitting light carrying an intensity modulation, imaging the scene onto a pixel array using an optical system, detecting, in each pixel, intensity-modulated light reflected from the scene onto the pixel and determining, for each pixel, a distance value based on the phase of light detected in the pixel. The determination of the distance values comprises a phase-sensitive de-convolution of the scene imaged onto the pixel array such that phase errors induced by light spreading in the optical system are compensated for.01-27-2011
20110025824MULTIPLE EYE PHOTOGRAPHY METHOD AND APPARATUS, AND PROGRAM - First macro photography is performed with each of the imaging systems being focused on a main subject to obtain first images, second photography is performed with one of the plurality of imaging systems being focused on a position farther away than the main subject to obtain a second image, processing is performed on each of the first images to transparentize an area other than the main subject, and each of the transparentized first images and an area other than the main subject of the second image are combined to generate a combined image corresponding to each of the imaging systems.02-03-2011
20120200670Method and apparatus for a disparity limit indicator - In accordance with an example embodiment of the present invention, an apparatus is disclosed. The apparatus includes a stereoscopic camera system, a user interface, and a disparity range system. The user interface includes a display screen. The user interface is configured to display on the display screen an image formed by the stereoscopic camera system. The image corresponds to a scene viewable by the stereoscopic camera system. The disparity range system is configured to detect a disparity for the scene. The disparity range system is configured to provide an indication on the display screen in response to the detected disparity.08-09-2012
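A toy version of the disparity-limit indication in 20120200670 above is sketched below: estimate the scene's disparity range from matched feature x-coordinates and raise a warning flag when it exceeds a comfort budget. The budget (a fixed fraction of screen width) and the feature-based estimate are assumptions, not the apparatus's actual rule.

```python
import numpy as np

def disparity_range_px(matched_left_x, matched_right_x):
    """Signed disparities (left x minus right x) of matched feature points."""
    d = np.asarray(matched_left_x, float) - np.asarray(matched_right_x, float)
    return float(d.min()), float(d.max())

def disparity_warning(matched_left_x, matched_right_x,
                      screen_width_px, comfort_fraction=0.03):
    """True when the detected disparity exceeds the comfort budget, i.e. when the
    user interface should show the disparity-limit indication."""
    lo, hi = disparity_range_px(matched_left_x, matched_right_x)
    return max(abs(lo), abs(hi)) > comfort_fraction * screen_width_px
```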
20110109724BODY SCAN - A depth image of a scene may be received, observed, or captured by a device. The depth image may then be analyzed to determine whether the depth image includes a human target. For example, the depth image may include one or more targets including a human target and non-human targets. Each of the targets may be flood filled and compared to a pattern to determine whether the target may be a human target. If one or more of the targets in the depth image includes a human target, the human target may be scanned. A skeletal model of the human target may then be generated based on the scan.05-12-2011
20110115883Method And System For Adaptive Viewport For A Mobile Device Based On Viewing Angle - A 2D and/or 3D video processing device comprising a camera and a display captures images of a viewer as the viewer observes displayed 2D and/or 3D video content in a viewport. Face and/or eye tracking of viewer images is utilized to generate a different viewport. Current and different viewports may comprise 2D and/or 3D video content from a single source or from different sources. The sources of 2D and/or 3D content may be scrolled, zoomed and/or navigated through for generating the different viewport. Content for the different viewport may be processed. Images of a viewer's positions, angles and/or movements of face, facial expression, eyes and/or physical gestures are captured by the camera and interpreted by face and/or eye tracking. The different viewport may be generated for navigating through 3D content and/or for rotating a 3D object. The 2D and/or 3D video processing device communicates via wire, wireless and/or optical interfaces.05-19-2011
20110115884INFORMATION PROCESSING APPARATUS, METHOD, PROGRAM AND RECORDING MEDIUM - A data structure, recording medium, playing device, playing method, program, and program storing medium, which enable providing of a video format for 3D display, suitable for 3D display of captions and menu buttons. Caption data used for 2D display of caption and menu data used for 2D display of menu buttons are recorded in a disc as is a database of offset information, in which is described offset information, made up of offset direction representing the direction of shifting of an image for the left eye and an image for the right eye used for 3D display as to images for 2D display regarding the caption data and menu data, and an offset value representing the amount of shifting, correlated with the playing point-in-time of caption data and menu data, respectively.05-19-2011
20110074926SYSTEM AND METHOD FOR CREATING 3D VIDEO - A system and method for generating 3D video from a plurality of 2D video streams is provided. A video capture device for capturing video to be transformed into 3D video includes a camera module for capturing a two-dimensional (2D) video stream, a location module for determining a location of the video capture device, an orientation module for determining an orientation of the video capture device, and a processing module for associating additional information with the 2D video stream captured by the camera module, the additional information including the orientation of the video capture device and the location of the video capture device.03-31-2011
20100013907Separation Type Unit Pixel Of 3-Dimensional Image Sensor and Manufacturing Method Thereof - A separation type unit pixel of an image sensor, which can control light that is incident onto a photodiode at various angles, and be suitable for a zoom function in a compact camera module by securing an incident angle margin, and a manufacturing method thereof are provided. The unit pixel of an image sensor includes: a first wafer including a photodiode containing impurities having an impurity type opposite to that of a semiconductor material and a pad for transmitting photoelectric charge of the photodiode to outside; a second wafer including a pixel array region in which transistors except the photodiode are arranged regularly, a peripheral circuit region having an image sensor structure except the pixel array, and a pad for connecting pixels with one another; and a connecting means connecting the pad of the first wafer and the pad of the second wafer. Accordingly, manufacturing processes can be simplified by constructing the upper wafer using only a photodiode and the lower wafer using the pixel array region except the photodiode, and costs are reduced since transistors are not included in the upper wafer portion, which in turn cannot affect the interaction with light.01-21-2010
20100039501IMAGE RECORDING DEVICE AND IMAGE RECORDING METHOD - According to an image recording device and an image recording method according to the present invention, images can be recorded in such a manner that even an image processing apparatus not having a function that reads a plurality of image data from an extended image file storing the plurality of image data and reproduces or edits them can read representative image data in an extended image file. Furthermore, if a basic file has been deleted or altered, the basic file can be restored using the representative image data in the extended image file, so it is possible to provide another image processing apparatus with the representative image data before the alteration any time.02-18-2010
20120242802STEREOSCOPIC IMAGE DATA TRANSMISSION DEVICE, STEREOSCOPIC IMAGE DATA TRANSMISSION METHOD, STEREOSCOPIC IMAGE DATA RECEPTION DEVICE, AND STEREOSCOPIC IMAGE DATA RECEPTION METHOD - [Object] To realize facilitation of processing on the reception side.09-27-2012
20120242801Vision Enhancement for a Vision Impaired User - This invention concerns a vision enhancement apparatus that improves vision for a vision-impaired user of interface equipment. Interface equipment stimulates the user's cortex, directly or indirectly, to provide artificial vision. It may include a passive sensor to acquire real-time high resolution video data representing the vicinity of the user, and a sight processor to receive the acquired high resolution data and automatically: analyse the high resolution data to extract depth of field information concerning objects of interest; extract lower resolution data representing the vicinity of the user; and provide both the depth of field information concerning objects of interest and the lower resolution data representing the vicinity of the user to the interface equipment to stimulate artificial vision for the user.09-27-2012
20120242800APPARATUS AND SYSTEM FOR INTERFACING WITH COMPUTERS AND OTHER ELECTRONIC DEVICES THROUGH GESTURES BY USING DEPTH SENSING AND METHODS OF USE - Disclosed herein are systems and methods for gesture capturing, detection, recognition, and mapping them into commands which allow one or many users to interact with electronic games or any electronic device interfaces. Gesture recognition methods, apparatus and system are disclosed from which application developers can incorporate gesture-to-character inputs into their gaming, learning or the like applications. Also herein are systems and methods for receiving 3D data reflecting hand, fingers or other body parts movements of a user, and determining from that data whether the user has performed gesture commands for controlling electronic devices, or computer applications such as games or others.09-27-2012
20110249093THREE-DIMENSIONAL VIDEO IMAGING DEVICE - A three-dimensional (3D) video imaging device includes a liquid crystal layer, a color filter plate, a lens array, light shielding elements and an optical sheet, and the lens array includes lens elements installed onto a surface of the color filter plate, and the light shielding elements are installed onto a surface of the color filter plate or lens element or formed directly in the color filter plate, and the light shielding elements are arranged with an interval apart from each other and corresponding to the intervals among the lens elements, and a combination of the liquid crystal layer, color filter plate and optical sheet constitutes an LCD panel structure for installing the lens array and the light shielding elements into the LCD panel directly to reduce the thickness and simplify the manufacturing process, while preventing stray lights, improving the clarity of 3D images, and maintaining a high-brightness display effect.10-13-2011
20100053307COMMUNICATION TERMINAL AND INFORMATION SYSTEM - A communication terminal includes: an image or video collecting and generating module, adapted to collect and generate 3D image or video data; a communication module, adapted to send or receive image or video and generated 3D image or video data; and an image or video display module, adapted to display the 3D image or video according to the collected or received and generated 3D image or video data. An information system includes a 3D image or video information center. The 3D image or video information center further includes: a 3D image or video information relay, adapted to interact with the communication terminal; and a 3D image or video information server, adapted to store 3D image or video information and convert between 3D image or video information and SMS. The communication terminal features simple structure and good performance. The information system supports communication through 3D image or video information.03-04-2010
20110175983APPARATUS AND METHOD FOR OBTAINING THREE-DIMENSIONAL (3D) IMAGE - An apparatus and method for obtaining a three-dimensional image. A first multi-view image may be generated using patterned light of infrared light, and a second multi-view image may be generated using non-patterned light of visible light. A first depth image may be obtained from the first multi-view image, and a second depth image may be obtained from the second multi-view image. Then, stereo matching may be performed on the first depth image and the second depth image to generate a final depth image.07-21-2011
20100134594Displaying Objects with Certain Visual Effects - Embodiments of the invention provide methods, systems, and articles for displaying objects in images, videos, or a series of images with WYSIWYG (what you see is what you get) effects, for calibrating and storing dimensional information of the display elements in a display system, and for constructing 3-dimensional features and size measurement information using one camera. Displaying merchandise with WYSIWYG effects allows online retailers to post vivid pictures of their sales items on the Internet to attract online customers. The processes of calibrating a display system and the processes of constructing 3-dimensional features and size measurement information using one camera are applications of the invention designed to achieve desired WYSIWYG effects.06-03-2010
201001345953-D Optical Microscope - A 3-D optical microscope, a method of turning a conventional optical microscope into a 3-D optical microscope, and a method of creating a 3-D image on an optical microscope are described. The 3-D optical microscope includes a processor, at least one objective lens, an optical sensor capable of acquiring an image of a sample, a mechanism for adjusting focus position of the sample relative to the objective lens, and a mechanism for illuminating the sample and for projecting a pattern onto and removing the pattern from the focal plane of the objective lens. The 3-D image creation method includes taking two sets of images, one with and another without the presence of the projected pattern, and using a software algorithm to analyze the two image sets to generate a 3-D image of the sample. The 3-D image creation method enables reliable and accurate 3-D imaging on almost any sample regardless of its image contrast.06-03-2010
20110074925METHOD AND SYSTEM FOR UTILIZING PRE-EXISTING IMAGE LAYERS OF A TWO-DIMENSIONAL IMAGE TO CREATE A STEREOSCOPIC IMAGE - Implementations of the present invention involve methods and systems for converting a 2-D multimedia image to a 3-D multimedia image by utilizing a plurality of layers of the 2-D image. The layers may comprise one or more portions of the 2-D image and may be digitized and stored in a computer-readable database. The layers may be reproduced as a corresponding left eye and right eye version of the layer, including a pixel offset corresponding to a desired 3-D effect for each layer of the image. The combined left eye layers and right eye layers may form the composite right eye and composite left eye images for a single 3-D multimedia image. Further, this process may be applied to each frame of an animated feature film to convert the film from 2-D to 3-D.03-31-2011
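The layer-offset idea in 20110074925 above can be sketched directly: each pre-existing layer is shifted one way for the left eye and the other way for the right eye by its own pixel offset, then the shifted layers are alpha-composited back to front into the composite left-eye and right-eye images. The sketch below assumes RGBA layers and simple "over" compositing; border handling after the shift is left naive.

```python
import numpy as np

def layers_to_stereo(layers, offsets, background=128):
    """Build composite left/right images from back-to-front RGBA layers.

    layers:  list of (H, W, 4) uint8 arrays, ordered back to front.
    offsets: per-layer horizontal pixel offsets; larger offsets read as closer.
    """
    h, w = layers[0].shape[:2]
    left = np.full((h, w, 3), float(background))
    right = np.full((h, w, 3), float(background))
    for layer, off in zip(layers, offsets):
        for view, sign in ((left, +1), (right, -1)):
            shifted = np.roll(layer, sign * int(off), axis=1)   # naive at the borders
            alpha = shifted[..., 3:4] / 255.0
            view[:] = alpha * shifted[..., :3] + (1.0 - alpha) * view   # "over" compositing
    return left.astype(np.uint8), right.astype(np.uint8)
```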
20110249097DEVICE FOR RECORDING, REMOTELY TRANSMITTING AND REPRODUCING THREE-DIMENSIONAL IMAGES - The invention relates to an image recording device, an image reproduction device and a system with such a recording and reproduction device. The recording device comprises an optical axis (10-13-2011
20110249095IMAGE COMPOSITION APPARATUS AND METHOD THEREOF - An image composition apparatus includes a synchronization unit for synchronizing a motion capture equipment and a camera; a three-dimensional (3D) restoration unit for restoring 3D motion capture data of markers attached for motion capture; a 2D detection unit for detecting 2D position data of the markers from a video image captured by the camera; and a tracking unit for tracking external and internal factors of the camera for all frames of the video image based on the restored 3D motion capture data and the detected 2D position data. Further, the image composition apparatus includes a calibration unit for calibrating the tracked external and internal factors upon completion of tracking in all the frames; and a combination unit for combining a preset computer-generated (CG) image with the video image by using the calibrated external and internal factors.10-13-2011
20100171813GATED 3D CAMERA - A camera for determining distances to a scene, the camera comprising: a light source comprising a VCSEL controllable to illuminate the scene with a train of pulses of light having a characteristic spectrum; a photosurface; optics for imaging light reflected from the light pulses by the scene on the photosurface; and a shutter operable to gate the photosurface selectively on and off for light in the spectrum.07-08-2010
20120120200COMBINING 3D VIDEO AND AUXILIARY DATA - A three dimensional [3D] video signal (05-17-2012
20120120198THREE-DIMENSIONAL SIZE MEASURING SYSTEM AND THREE-DIMENSIONAL SIZE MEASURING METHOD - A system for measuring a three-dimensional (3D) size of an object in a space according to an indicating mark is provided, wherein the indicating mark is used to point to one of a plurality of measuring points on the object. The system includes an image capturing module, a spatial vector calculating module, and a measuring module. The image capturing module captures an image of the space. The spatial vector calculating module respectively calculates a spatial vector corresponding to the indicating mark when the indicating mark is used to point to each of the measuring points on the object in the image. The measuring module calculates spatial coordinates of the measuring points according to the spatial vectors so as to obtain the 3D size of the object.05-17-2012
20120120197APPARATUS AND METHOD FOR SHARING HARDWARE BETWEEN GRAPHICS AND LENS DISTORTION OPERATION TO GENERATE PSEUDO 3D DISPLAY - A system, method, and computer program product for providing pseudo 3D user interface effects in a digital camera with existing lens distortion correction hardware. A distortion map normally used to correct captured images instead alters a displayed user interface object image to support production of a “pseudo 3D” version of the object image via production of at least one modified image. A blending map also selectively mixes the modified image with a second image to produce a distorted blended image. A set or series of such images may be produced automatically or at user direction to generate static or animated effects in-camera without a graphics accelerator, resulting in hardware cost savings and extended battery life.05-17-2012
201201201953D CONTENT ADJUSTMENT SYSTEM - A 3D content adjustment system includes a processor. A camera is coupled to the processor. A non-transitory, computer-readable medium is coupled to the processor and the camera. The computer-readable medium includes a content adjustment engine including instructions that when executed by the processor receive viewer information from the camera, modify a plurality of original stereoscopic images using the viewer information to create a plurality of modified stereoscopic images, and output the plurality of modified stereoscopic images.05-17-2012
20110074927METHOD FOR DETERMINING EGO-MOTION OF MOVING PLATFORM AND DETECTION SYSTEM - A method for determining ego-motion of a moving platform and a system thereof are provided. The method includes: using a first lens to capture a first and a second left image at a first and a second time, and using a second lens to capture a first and a second right image; segmenting the images into first left image areas, first right image areas, second left image areas, and second right image areas; comparing the first left image areas and the first right image areas, the second left image areas and the second right image areas, and the first right image areas and the second right image areas, so as to find plural common areas; selecting N feature points in the common areas to calculate depth information at the first and the second time, and determining the ego-motion of the moving platform between the first time and the second time.03-31-2011
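Once matched feature points have been triangulated from the stereo pairs at the two times, the ego-motion step in 20110074927 above reduces to a rigid alignment of two 3-D point sets. The sketch below uses the standard SVD-based (Kabsch) alignment for that step and a pinhole back-projection for triangulation; it is a generic sketch, not the application's specific segmentation-and-matching pipeline.

```python
import numpy as np

def back_project(x_px, y_px, disparity_px, focal_px, baseline_m, cx, cy):
    """Pinhole back-projection of a pixel with known stereo disparity."""
    z = focal_px * baseline_m / disparity_px
    return np.array([(x_px - cx) * z / focal_px, (y_px - cy) * z / focal_px, z])

def ego_motion(points_t1, points_t2):
    """Rigid motion (R, t) with R @ p1 + t ~ p2 from N matched 3-D points,
    via the SVD-based Kabsch / orthogonal Procrustes alignment."""
    c1, c2 = points_t1.mean(axis=0), points_t2.mean(axis=0)
    H = (points_t1 - c1).T @ (points_t2 - c2)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))       # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = c2 - R @ c1
    return R, t
```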
20110058023SYSTEM AND METHOD FOR STRUCTURED LIGHT ILLUMINATION WITH FRAME SUBWINDOWS - A structured light imaging (SLI) system includes a projection system, image sensor system and processing module. The projection system is operable to project a single SLI pattern into an imaging area. The image sensor system is operable to capture images of a 3D object moving through the imaging area. The image sensor system outputs a subset of camera pixels corresponding to a subwindow of a camera frame to generate subwindow images. The processing module is operable to create a 3D surface map of the 3D object based on the subwindow images.03-10-2011
20110149041APPARATUS AND METHOD FOR CAMERA PARAMETER CALIBRATION - The present invention provides an apparatus and method for camera parameter calibration which is capable of easily and simply setting physical and optical characteristic parameters of a camera in order to acquire information on actual measurement of an image provided through the camera with high accuracy. The camera parameter calibration apparatus and method have the advantage of accurate image analysis: the accuracy of measurement information obtained through an image can be increased with only an intuitive interface manipulation, without a time-consuming and error-prone actual measurement procedure. This is achieved by displaying a 3D space model corresponding to the real space of the image on the image, changing and adjusting the viewpoint parameters until the 3D space model matches the displayed image, thereby determining the parameters of the space model corresponding to the image, and regarding the determined parameters of the space model as the camera parameters of the image.06-23-2011
20110254922IMAGING SYSTEM USING MARKERS - A system for detecting a position of an object such as a surgical tool in an image guidance system includes a camera system with a detection array for detecting visible light and a processor arranged to analyze the output from the array. Each object to be detected carries a single marker with a pattern of contrasted areas of light and dark intersecting at a specific single feature point thereon. The pattern includes components arranged in an array around the specific location such that the processor is able to detect an angle of rotation of the pattern around the location, and the components differ from those of other markers of the system such that the processor is able to distinguish each marker from the other markers.10-20-2011
20110249096THREE-DIMENSIONAL MEASURING DEVICE AND BOARD INSPECTION DEVICE - A board inspection device includes an irradiation device for irradiating light on a printed circuit board and a CCD camera for imaging the irradiated part of the circuit board. First image processing is performed for a first exposure time such that an inspection target region is free of brightness saturation, and second image processing is performed using a second exposure time corresponding to the insufficiency of the first exposure time relative to a certain exposure time appropriate for measurement of a measurement standard region. Thereafter, image data for three-dimensional measurement is prepared for the inspection target region using the value of image data obtained by the first image processing, and image data for three-dimensional measurement is prepared for the measurement standard region using a value obtained by summing the image data value acquired by the second image processing and the image data value acquired by the first image processing.10-13-2011
20100295924INFORMATION PROCESSING APPARATUS AND CALIBRATION PROCESSING METHOD - An information processing apparatus, which provides images for stereoscopic viewing by synthesizing images obtained by capturing an image of real space by a main image sensing device and sub image sensing device to a virtual image, measures the position and orientation of the main image sensing device, calculates the position and orientation of the sub image sensing device based on inter-image capturing device position and orientation held in a holding unit and the measured position and orientation of the main image sensing device. Then the information processing apparatus calculates an error using the measured position and orientation of the main image sensing device, the calculated position and orientation of the sub image sensing device, and held intrinsic parameters of the main image sensing device and sub image sensing device. The information processing apparatus calibrates the held information based on the calculated error.11-25-2010
20100259598APPARATUS FOR DETECTING THREE-DIMENSIONAL DISTANCE - A three-dimensional distance detecting apparatus includes a stereo vision camera configured to detect a parallax of a selected point to be detected; and a pattern generating device configured to generate a pattern and project the pattern on the selected point when the selected point is a subject which does not induce a parallax.10-14-2010
20110249094Method and System for Providing Three Dimensional Stereo Image - The present invention provides a method for providing 3D stereo image. The method comprises: accepting a request submitted from a client system by an intermediate server system; selecting an image server based on the request and responding to the client system from the image server through a processor in the intermediate server system; requesting at least one 3D stereo image by the client system from the image server according to the response; and providing the at least one 3D stereo image to the client system by the image server system. The present invention also provides a system for providing 3D stereo image.10-13-2011
20110149040METHOD AND SYSTEM FOR INTERLACING 3D VIDEO - A video processing device may generate and/or capture a plurality of view sequences of video frames, decimate at least some of the plurality of view sequences, and may generate a three-dimensional (3D) video stream comprising the plurality of view sequences based on that decimation. The decimation may be achieved by converting one or more of the plurality of view sequences from progressive to interlaced video. The interlacing may be performed by removing top or bottom fields in each frame of those one or more view sequences during the conversion to interlaced video. The removed fields may be selected based on corresponding conversion to interlaced video of one or more corresponding view sequences. The video processing device may determine bandwidth limitations existing during direct and/or indirect transfer or communication of the generated 3D video stream. The decimation may be performed based on this determination of bandwidth limitations.06-23-2011
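A minimal sketch of the field-dropping decimation described above, assuming frames are numpy arrays and that the two views keep complementary fields (all names are illustrative):

```python
import numpy as np

def decimate_to_fields(frames, keep="top"):
    """Convert progressive frames (each an HxW array) into interlaced fields by
    keeping only the top (even-row) or bottom (odd-row) field of each frame."""
    start = 0 if keep == "top" else 1
    return [frame[start::2, :] for frame in frames]

# Complementary fields for the two views of a stereo pair:
left_frames = [np.zeros((480, 640), dtype=np.uint8) for _ in range(3)]
right_frames = [np.zeros((480, 640), dtype=np.uint8) for _ in range(3)]
left_fields = decimate_to_fields(left_frames, keep="top")
right_fields = decimate_to_fields(right_frames, keep="bottom")
```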
20080246836SYSTEM AND METHOD FOR PROCESSING VIDEO IMAGES FOR CAMERA RECREATION - Embodiments use point clouds to recreate a camera. The point cloud of the object may be formed from analysis of two dimensional images taken by the camera. Once the virtual camera has been formed, the camera may be used in the process of generating stereoscopic three dimensional images of the scene within the 10-09-2008
20120200671Apparatus And Method For Three-Dimensional Image Capture With Extended Depth Of Field - An optical system for capturing three-dimensional images of a three-dimensional object is provided. The optical system includes a projector for structured illumination of the object. The projector includes a light source, a grid mask positioned between the light source and the object for structured illumination of the object, and a first Wavefront Coding (WFC) element having a phase modulating mask positioned between the grid mask and the object to receive patterned light from the light source through the grid mask. The first WFC element is constructed and arranged such that a point spread function of the projector is substantially invariant over a wider range of depth of field of the grid mask than a point spread function of the projector without the first WFC element.08-09-2012
20120169845METHOD AND APPARATUS FOR ADAPTIVE SAMPLING VIDEO CONTENT - In a method of encoding video, the video is analyzed to determine a sampling format for the video from a plurality of sampling formats. The video is sampled using the determined sampling format to produce a video portion having a subset of information of the video. The video portion is encoded to form an output bit stream.07-05-2012
201200330483D IMAGE DISPLAY APPARATUS, 3D IMAGE PLAYBACK APPARATUS, AND 3D IMAGE VIEWING SYSTEM - A 3D image display apparatus comprises a transmission-reception device and a control signal output device. The transmission-reception device receives video data, including a plurality of pieces of image information which are the base data of 3D images, from a 3D image playback apparatus through a transmission cable and thereby generates an image signal. The control signal output device transmits a control signal for controlling light penetration states of penetration units for right and left eyes to shutter glasses. The transmission-reception device receives the video data from the 3D image playback apparatus through the transmission cable and thereby generates the image signal and a synchronizing signal. The synchronizing signal indicates which of the plurality of pieces of image information is included in the image signal currently being output. The control signal output device generates the control signal based on the synchronizing signal.02-09-2012
20100277570IMAGE PROCESSING SYSTEM - An image processing system includes a photographing apparatus and a processing apparatus. The photographing apparatus includes LEDs for emitting light with characteristics of spectroscopic distributions varied in a visible light area, an image pick-up device unit which picks-up a subject image that is illuminated by the LEDs and is formed by an image pick-up optical system and which outputs an image signal, and a control unit. The control unit sequentially lights-on the LEDs upon an instruction for photographing a subject spectroscopic image being input from an operating switch, picks-up the image, and thus controls the operation for capturing subject spectroscopic images. The processing apparatus includes a calculating device which performs a desired image calculation based on the image signal.11-04-2010
20110254928Time of Flight Camera Unit and Optical Surveillance System - A time of flight, TOF, camera unit for an optical surveillance system and an optical surveillance system comprising such a TOF camera is disclosed. The TOF camera unit comprises a radiation emitting unit for illuminating a surveillance area defined by a first plane, a radiation detecting unit for receiving radiation reflected from said surveillance area and for generating a three-dimensional image from said detected radiation, and at least one mirror for at least partly deflecting said emitted radiation into at least one second plane extending across to said first plane and for deflecting the radiation reflected from said second plane to the radiation detecting unit. The TOF camera and the at least one mirror may be arranged on a common carrier element.10-20-2011
20110254926Data Structure, Image Processing Apparatus, Image Processing Method, and Program - A reproduction apparatus reproduces image data of a left eye image and a right eye image of a 3D content recorded in a recording medium. The recording medium stores information about black border widths according to each parallax amount in periphery of right and left image frames between the left eye image and the right eye image. A post processing unit generates and outputs a border-attached left eye image and a border-attached right eye image by inserting an image having the obtained black border width according to the parallax amount in the periphery of the right image frame and an image having the obtained black border width according to the parallax amount in the periphery of the left image frame into the left eye image and the right eye image. The present invention can be applied to an image processing apparatus for processing image data of 3D images.10-20-2011
20110254923Image processing apparatus, method and computer-readable medium - Provided is an image processing apparatus, method and computer-readable medium. The image processing apparatus may perform modeling of a function that enables correction of a systematic error of a depth camera, using a single depth camera and a single calibration reference image. Additionally, the image processing apparatus may calculate a depth error or a distance error of an input image, and may correct a measured depth of the input image using a modeled function.10-20-2011
20120200674System and Method for Determining Whether to Operate a Robot in Conjunction with a Rotary Parlor - In certain embodiments, a system includes a three-dimensional camera and a processor communicatively coupled to the three-dimensional camera. The processor is operable to determine a first hind location of a first hind leg of a dairy livestock based at least in part on visual data captured by the three-dimensional camera and determine a second hind location of a second hind leg of the dairy livestock based at least in part on the visual data. The processor is further operable to determine a measurement, wherein the measurement is the distance between the first hind location and the second hind location. Additionally, the processor is operable to determine whether the measurement exceeds a minimum threshold.08-09-2012
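The measurement test in the entry above reduces to a Euclidean distance check between the two hind locations; a trivial sketch, with hypothetical names and units:

```python
import math

def spacing_exceeds_threshold(first_hind, second_hind, min_threshold_m):
    """first_hind / second_hind are (x, y, z) locations derived from the 3D
    camera's visual data; returns True when their separation exceeds the
    minimum threshold."""
    return math.dist(first_hind, second_hind) > min_threshold_m

ok = spacing_exceeds_threshold((0.10, 0.0, 1.80), (0.42, 0.0, 1.79), min_threshold_m=0.25)
```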
201202006723D IMAGING DEVICE - A 3D imaging device includes: a first imaging unit having a first variable power lens and a first driving unit that drives the first variable power lens along an optical axis; a second imaging unit having a second variable power lens and a second driving unit that drives the second variable power lens along an optical axis; a storage unit that temporarily stores the first photographic image and the second photographic image; a parallax determining unit that determines a parallax in a horizontal direction between the first photographic image and the second photographic image; a parallax adjusting unit that generates a third photographic image excluding a first parallax adjusting image from the first photographic image and a fourth photographic image excluding a second parallax adjusting image from the second photographic image; and a photographic information recording unit that records information about a magnification of the first variable power lens.08-09-2012
20120200673IMAGING APPARATUS AND IMAGING METHOD - The present invention provides an imaging apparatus which generates, based on a captured image, a depth map of an object with a high degree of precision.08-09-2012
20110261163Image Recognition - A subject (10-27-2011
20110164114THREE-DIMENSIONAL MEASUREMENT APPARATUS AND CONTROL METHOD THEREFOR - A three-dimensional measurement apparatus generates patterns to be projected onto the measurement object, images the measurement object using an imaging unit after projecting a plurality of types of generated patterns onto the measurement object using a projection unit, and computes the coordinate values of patterns on a captured image acquired by the imaging unit, based on the projected patterns, a geometric model of the measurement object, and information indicating the coarse position and orientation of the measurement object. Captured patterns on the captured image are corresponded with the patterns projected by the projection unit using the computed coordinate values, and the distances between the imaging unit and the patterns projected onto the measurement object are derived. The position and orientation of the measurement object are estimated using the derived distances and the geometric model of the measurement object, and the information on the coarse position and orientation is updated.07-07-2011
20110254925IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, AND PROGRAM - An image processing apparatus includes a first acquiring unit, a second acquiring unit, and a correction processor. The first acquiring unit acquires an intended viewing environment parameter, which is a parameter of an intended viewing environment, together with image data of a three-dimensional picture. The second acquiring unit acquires an actual viewing environment parameter, which is a parameter of an actual viewing environment for a user viewing the three-dimensional picture. The correction processor corrects the three-dimensional picture in accordance with a difference between the acquired intended viewing environment parameter and the acquired actual viewing environment parameter.10-20-2011
20110164115TRANSCODER SUPPORTING SELECTIVE DELIVERY OF 2D, STEREOSCOPIC 3D, AND MULTI-VIEW 3D CONTENT FROM SOURCE VIDEO - Transcoders are provided for transcoding three-dimensional content to two-dimensional content, and for transcoding three-dimensional content of a first type to three-dimensional content of another type. Transcoding of content may be performed due to user preference, display device capability, bandwidth constraints, user payment/subscription constraints, device loading, and/or for other reason. Transcoders may be implemented in a content communication network in a media source, a display device, and/or in any device/node in between.07-07-2011
20110134223METHOD AND A SYSTEM FOR REDUCING ARTIFACTS - A method for preparing an article of lenticular imaging. The method comprises receiving a plurality of source images, superimposing at least one deghosting element on the plurality of source images, the deghosting element being formed to reduce an estimated ghosting artifact from the article, interlacing the plurality of processed source images so as to form a spatially multiplexed image, and preparing the article by attaching an optical element to the spatially multiplexed image.06-09-2011
20120062703INFORMATION DISPLAY DEVICE, REPRODUCTION DEVICE, AND STEREOSCOPIC IMAGE DISPLAY DEVICE - To provide an information display device to be included in a device outputting stereoscopic images in which images for the right and left eyes are arranged alternately in chronological order. The information display device includes a display unit which has light emitting units and which displays information other than the stereoscopic images using the light emitting units each of which alternates between a light-up state and a light-out state, the light-up state lasting for a predetermined period from a start of a light emission of the light emitting unit to an end of the light emission. Each of the light emitting units emits light so that the light-up state is included at least once during a shutter open period from when a shutter of eyeglasses used for viewing the stereoscopic images is opened to when the shutter is closed, the shutter being for one of the right and left eyes.03-15-2012
20110025825METHODS, SYSTEMS, AND COMPUTER-READABLE STORAGE MEDIA FOR CREATING THREE-DIMENSIONAL (3D) IMAGES OF A SCENE - Methods, systems, and computer program products for generating three-dimensional images of a scene are disclosed herein. According to one aspect, a method includes receiving a plurality of images of a scene. The method also includes determining attributes of the plurality of images. Further, the method includes determining, based on the attributes, a pair of images from among the plurality of images for use in creating a three-dimensional image. A user can, by use of the subject matter disclosed herein, use an image capture device for capturing a plurality of different images of the same scene and for converting the images into a three-dimensional, or stereoscopic, image of the scene. The subject matter disclosed herein includes a three-dimensional conversion process. The conversion process can include identification of suitable pairs of images, registration, rectification, color correction, transformation, depth adjustment, and motion detection and removal.02-03-2011
20100283833DIGITAL IMAGE CAPTURING DEVICE WITH STEREO IMAGE DISPLAY AND TOUCH FUNCTIONS - A digital image capturing device with stereo image display and touch functions is provided. The digital image capturing device includes an image capturing module, a central processing unit (CPU), and a touch display module. The CPU transmits a stereo image to the touch display module. The touch display module converts the stereo image into a multi-image, and then the multi-image is synthesized into the stereo image after being perceived by eyes. When a touch body performs a touch operation on the touch display module, for example, a contact/non-contact touch, the touch display module produces a first or second motion track, and the stereo image on the touch display module changes in real time along with the first or second motion track, so as to achieve an interactive effect of a virtual stereo image during the touch operation.11-11-2010
20100283834DEVICE FOR 3D IMAGING - A 3-dimensional (3D) imaging device is described. The device emits a laser pulse towards a scene. Radiation reflected by the scene includes information relating to the range between objects in the scene. A detector detects the reflected radiation pulses and outputs signals characteristic of the scene to an imaging device or camera. Two image frames will be produced per radiation pulse, one frame being representative of the ‘close’ object and the second frame being representative of the ‘far’ object. The ratio of these frames may be processed by suitable means to produce a 3D image of the scene.11-11-2010
20100283832Miniaturized GPS/MEMS IMU integrated board - This invention documents research and development efforts on a miniaturized GPS/MEMS IMU integrated navigation system. The miniaturized GPS/MEMS IMU integrated navigation system is presented, and a Laser Dynamic Range Imager (LDRI) based alignment algorithm for space applications is discussed. Two navigation cameras are also included to measure the range and range rate, which can be integrated into the GPS/MEMS IMU system to enhance the navigation solution.11-11-2010
20110069154HIGH SPEED, HIGH RESOLUTION, THREE DIMENSIONAL SOLAR CELL INSPECTION SYSTEM - An optical inspection system and method are provided. A workpiece transport moves a workpiece in a nonstop manner. An illuminator includes a light pipe and is configured to provide a first and second strobed illumination field types. First and second arrays of cameras are arranged to provide stereoscopic imaging of the workpiece. The first array of cameras is configured to generate a first plurality of images of the workpiece with the first illumination field and a second plurality of images of the feature with the second illumination field. The second array of cameras is configured to generate a third plurality of images of the workpiece with the first illumination field and a fourth plurality of images of the feature with the second illumination field. A processing device stores at least some of the first, second, third, and fourth pluralities of images and provides the images to an other device.03-24-2011
20110134222Rolling Camera System - A 3D imaging system comprising: first and second rolling shutter photosurfaces having pixels; a first shutter operable to gate on and off the first photosurface; a light source controllable to transmit a train of light pulses to illuminate a scene; a controller that controls the first shutter to gate the first photosurface on and off responsive to transmission times of the light pulses and opens and closes bands of pixels in the photosurfaces to register light reflected from the light pulses by the scene that reach the 3D imaging system during periods when the first photosurface is gated on; and a processor that determines distances to the scene responsive to amounts of light registered by pixels in the photosurfaces.06-09-2011
20110134221OBJECT RECOGNITION SYSTEM USING LEFT AND RIGHT IMAGES AND METHOD - Disclosed herein are an object recognition system and method which extract left and right feature vectors from a stereo image of an object, find feature vectors which are present in both the extracted left and right feature vectors, compare information about the left and right feature vectors and the feature vectors present in both the extracted left and right feature vectors with information stored in a database, extract information of the object, and recognize the object.06-09-2011
20110261164OPTICAL SECTIONING OF A SAMPLE AND DETECTION OF PARTICLES IN A SAMPLE - The invention relates to an apparatus, a method and a system for obtaining a plurality of images of a sample arranged in relation to a sample device. The apparatus comprises at least a first optical detection assembly having an optical axis and at least one translation unit arranged to move the sample device and the first optical detection assembly relative to each other. The movement of the sample device and the first optical detection assembly relative to each other is along a scanning path, which defines an angle theta relative to the optical axis, wherein theta is larger than zero.10-27-2011
20110254927IMAGE PROCESSING APPARATUS AND METHOD - An image processing apparatus executes a distortion correction on coordinates of a target pixel in a virtual viewpoint image based on distortion characteristics of a virtual camera and calculates coordinates in the virtual viewpoint image after the distortion correction. The image processing apparatus calculates ideal coordinates in a captured image from the coordinates in the virtual viewpoint image after the distortion correction and calculates real coordinates in the captured image from the ideal coordinates in the captured image based on distortion characteristics of an imaging unit. The image processing apparatus calculates a pixel value corresponding to the real coordinates from image data of the virtual viewpoint image and corrects the pixel value corresponding to the real coordinates based on ambient light amount decrease characteristics of the imaging unit and ambient light amount decrease characteristics of the virtual camera.10-20-2011
20120147144TRAJECTORY PROCESSING APPARATUS AND METHOD - A trajectory processing apparatus comprises a trajectory database configured to store a position coordinate of a movable body detected from a camera image in association with data that specifies the camera image from which the movable body is detected, and a camera image database configured to store the camera image. A control section fetches the position coordinate of the movable body and the specifying data for the camera image from which the movable body is detected from the trajectory database. Further, the position coordinate of the movable body fetched from the trajectory database is displayed in a display section as a trajectory of the movable body. Furthermore, the control section acquires from the camera image database the camera image specified by the specifying data fetched from the trajectory database. Moreover, this camera image is displayed in the display section.06-14-2012
20120147143OPTICAL SYSTEM HAVING INTEGRATED ILLUMINATION AND IMAGING OPTICAL SYSTEMS, AND 3D IMAGE ACQUISITION APPARATUS INCLUDING THE OPTICAL SYSTEM - An optical system including integrated illumination and imaging optical systems, and a 3-dimensional (3D) image acquisition apparatus including the optical system. In the optical system of the 3D image acquisition apparatus, the illumination optical system and the imaging optical system are integrated to have a coaxial optical path. Accordingly, there is no parallax between the illumination optical system and the imaging optical system, so that depth information about an object acquired using illumination light may reflect actual distances between the object and the 3D image acquisition apparatus. Consequently, the depth information about the object may be more precise. The zero parallax between the illumination optical system and the imaging optical system may improve utilization efficiency of the illumination light. As a result, a greater amount of light may be incident on the 3D image acquisition apparatus, which makes it possible to acquire even more precise depth information about the object.06-14-2012
20120147142IMAGE TRANSMITTING APPARATUS AND CONTROL METHOD THEREOF, AND IMAGE RECEIVING APPARATUS AND CONTROL METHOD THEREOF - Provided are an image transmitting apparatus and method, and an image receiving apparatus and method. The image transmitting apparatus includes: a video processor which converts a first video signal, including a left-eye image and a right-eye image corresponding to a frame of a three-dimensional (3D) image, into a second video signal by increasing a number of data bits per pixel of a first video signal and merging two pieces of pixel information respectively corresponding to the left-eye image and the right-eye image into one piece of pixel information; and a video output unit which transmits the second video signal.06-14-2012
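One simple way to read the "increase the bits per pixel and merge two pieces of pixel information" step above is byte packing; the sketch below assumes 8-bit inputs and a 16-bit merged signal, which the application does not specify:

```python
import numpy as np

def merge_views(left8, right8):
    """Pack an 8-bit left-eye value and an 8-bit right-eye value into one
    16-bit pixel (left eye in the high byte)."""
    return (left8.astype(np.uint16) << 8) | right8.astype(np.uint16)

def split_views(merged16):
    """Receiver-side inverse: recover the left-eye and right-eye values."""
    return (merged16 >> 8).astype(np.uint8), (merged16 & 0xFF).astype(np.uint8)
```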
20110080470VIDEO REPRODUCTION APPARATUS AND VIDEO REPRODUCTION METHOD - According to one embodiment, a video reproduction apparatus includes an image generator, a motion recognizer, a marker generator and an image synthesizer. The image generator is configured to generate a first pair of images with a difference in visual field for an operational object. The motion recognizer is configured to recognize three-dimensional gesture of a user. The marker generator is configured to identify three-dimensional designated coordinates based on the gesture recognized by the motion recognizer, and generate a second pair of images with a difference in visual field for a marker corresponding to the designated coordinates. The image synthesizer is configured to synthesize the first pair of images with the second pair of images to generate a third pair of images.04-07-2011
20100110164THREE-DIMENSIONAL IMAGE COMMUNICATION TERMINAL - A three-dimensional image communication terminal enables communication with a sense of presence and a sense of reality by using a natural, highly robust three-dimensional image. The three-dimensional image communication terminal includes a three-dimensional image input section, a transmitting section that transmits an input image to a communication partner after image processing, a three-dimensional image display section which monitor-displays a captured human image or object image, and a telephone calling section which receives three-dimensional image information from a partner and communicates with the other end by voice. The three-dimensional image display section includes an integral photography type horizontal/vertical parallax display device. In the three-dimensional image input section, cameras 05-06-2010
20100182406SYSTEM AND METHOD FOR THREE-DIMENSIONAL OBJECT RECONSTRUCTION FROM TWO-DIMENSIONAL IMAGES - A system and method for three-dimensional acquisition and modeling of a scene using two-dimensional images are provided. The present disclosure provides a system and method for selecting and combining the three-dimensional acquisition techniques that best fit the capture environment and conditions under consideration, and hence produce more accurate three-dimensional models. The system and method provide for acquiring at least two two-dimensional images of a scene, applying a first depth acquisition function to the at least two two-dimensional images, applying a second depth acquisition function to the at least two two-dimensional images, combining an output of the first depth acquisition function with an output of the second depth acquisition function, and generating a disparity or depth map from the combined output. The system and method also provide for reconstructing a three-dimensional model of the scene from the generated disparity or depth map.07-22-2010
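The disclosure above leaves the combination rule open; as one hedged illustration, the two depth outputs could be fused by per-pixel confidence weighting:

```python
import numpy as np

def fuse_depth_maps(depth_a, depth_b, conf_a, conf_b):
    """Blend the outputs of two depth acquisition functions per pixel,
    weighting each by its (assumed) confidence map."""
    weight_a = conf_a / np.maximum(conf_a + conf_b, 1e-6)
    return weight_a * depth_a + (1.0 - weight_a) * depth_b
```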
20100177165METHOD OF CONDUCTING PRECONDITIONED RELIABILITY TEST OF SEMICONDUCTOR PACKAGE USING CONVECTION AND 3-D IMAGING - A precondition reliability test of a semiconductor package, to determine a propensity of the package to delaminate, includes a baking test of drying the package, a moisture soaking test of moisturizing the dried package, a reflow test of heat-treating the moisturized package using hot air convection, and a three-dimensional imaging of the package to acquire a 3-D image of a surface of the package. The three-dimensional imaging is preferably carried out using a Moire interferometry technique during the course of the reflow test. Therefore, the delamination of the package can be observed in real time so that data on the start and rapid development of the delamination can be produced. The method also allows data which can be ordered as a Weibull Plot to be produced, thereby enabling a quantitative analysis of the reliability test results.07-15-2010
20090322860SYSTEM AND METHOD FOR MODEL FITTING AND REGISTRATION OF OBJECTS FOR 2D-TO-3D CONVERSION - A system and method are provided for model fitting and registration of objects for 2D-to-3D conversion of images to create stereoscopic images. The system and method of the present disclosure provide for acquiring at least one two-dimensional (2D) image, identifying at least one object of the at least one 2D image, selecting at least one 3D model from a plurality of predetermined 3D models, the selected 3D model relating to the identified at least one object, registering the selected 3D model to the identified at least one object, and creating a complementary image by projecting the selected 3D model onto an image plane different than the image plane of the at least one 2D image. The registering process can be implemented using geometric approaches or photometric approaches.12-31-2009
20100118124Method Of Forming Virtual Endoscope Image Of Uterus - A method of forming a virtual endoscope image of a uterus is disclosed. The virtual image showing an inner wall of a uterus is formed from a three-dimensional ultrasound uterus image obtained by hysterosalpingography with a solution. An inner wall of the uterus in the 3D virtual image is inspected and a virtual endoscope image of the uterus is formed by reflecting the inspection result on the 3D virtual uterus image. Also, the virtual endoscope images in every aspect are provided according to the positions of a view point or virtual light source. Thus, the inner wall of the uterus can be inspected more easily.05-13-2010
20100066812IMAGE PICKUP APPARATUS AND IMAGE PICKUP METHOD - An image pickup apparatus having a simple configuration and being capable of performing switching between an image pickup mode based on a light field photography technique and a normal high-resolution image pickup mode is provided. The image pickup apparatus includes an image pickup lens 03-18-2010
20100194858Intermediate image generation apparatus and method - An intermediate image generation apparatus and method are described. An intermediate image may be generated from any one image of a left image and a right image, and the intermediate image may be interpolated by referring to the other image of the left image and the right image.08-05-2010
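A bare-bones illustration of intermediate-view synthesis: warp one image by a fraction of its per-pixel disparity toward the other view. This naive forward warp leaves holes that a real implementation would fill from the other image; names and conventions are assumptions.

```python
import numpy as np

def intermediate_view(left, disparity, alpha=0.5):
    """Shift each pixel of the left image by alpha times its disparity to
    approximate a view between the left and right cameras (nearest-pixel
    forward warp; unfilled pixels stay zero)."""
    h, w = left.shape[:2]
    out = np.zeros_like(left)
    xs = np.arange(w)
    for y in range(h):
        new_x = np.clip(np.round(xs - alpha * disparity[y]).astype(int), 0, w - 1)
        out[y, new_x] = left[y, xs]
    return out
```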
20100194859CONFIGURATION MODULE FOR A VIDEO SURVEILLANCE SYSTEM, SURVEILLANCE SYSTEM COMPRISING THE CONFIGURATION MODULE, METHOD FOR CONFIGURING A VIDEO SURVEILLANCE SYSTEM, AND COMPUTER PROGRAM - Video surveillance systems typically comprise a plurality of video cameras that are distributed in a surveillance region at different locations. The image data recorded by the surveillance cameras are collected in a surveillance center and evaluated automatically or by surveillance personnel. In automated surveillance, it is known that certain image regions of a surveillance camera are selected and continuously monitored by means of digital image processing. A configuration module 08-05-2010
20110187826FAST GATING PHOTOSURFACE - An embodiment of the invention provides a camera comprising a photosurface that is electronically turned on and off to respectively initiate and terminate an exposure period of the camera at a frequency sufficiently high so that the camera can be used to determine distances to a scene that it images without use of an external fast shutter. In an embodiment, the photosurface comprises pixels formed on a substrate and the photosurface is turned on and turned off by controlling voltage to the substrate. In an embodiment, the substrate pixels comprise light sensitive pixels, also referred to as “photopixels”, in which light incident on the photosurface generates photocharge, and storage pixels that are insensitive to light that generates photocharge in the photopixels. In an embodiment, the photosurface is controlled so that the storage pixels accumulate and store photocharge substantially upon its generation during an exposure period of the photosurface.08-04-2011
20110187828APPARATUS AND METHOD FOR OBTAINING 3D LOCATION INFORMATION - An apparatus to obtain 3D location information from an image using a single camera or sensor includes a first table, in which the numbers of pixels are recorded according to the distance of a reference object. Using the prepared first table and a determined focal distance, a second table is generated in which the number of pixels is recorded according to the distance of a target object. Distance information is then calculated according to the detected number of pixels with reference to the second table. A method for obtaining 3D location information includes detecting a number of pixels of a target object from a first image, generating tables including numbers of pixels according to distance, detecting a central pixel and a number of pixels of the target object from a second image, and estimating two-dimensional location information and the one-dimensional distance of the target object from the tables and the pixel information.08-04-2011
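The first-table/second-table idea above can be pictured with a pinhole model in which apparent size in pixels is inversely proportional to distance; the sketch below builds a distance-to-pixel-count table from one reference measurement and looks up the nearest entry (the names and the inverse-proportional model are assumptions):

```python
def build_distance_table(ref_distance_m, ref_pixel_width, distances_m):
    """Map each candidate distance to the pixel width a target of the reference
    size would appear to have there, assuming width ~ 1 / distance."""
    return {d: ref_pixel_width * ref_distance_m / d for d in distances_m}

def estimate_distance(pixel_width, table):
    """Pick the table entry whose expected pixel count is closest to the
    measured one."""
    return min(table, key=lambda d: abs(table[d] - pixel_width))

table = build_distance_table(1.0, 200.0, [0.5, 1.0, 1.5, 2.0, 3.0])
print(estimate_distance(132.0, table))  # -> 1.5 (meters)
```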
20110187827METHOD AND APPARATUS FOR CREATING A STEREOSCOPIC IMAGE - A method of creating a stereoscopic image for display comprising the steps of: receiving a first image and a second image of the same scene captured from the same location, the second image being displaced from the first image by an amount; and transforming the second image such that at least some of the second image is displaced from the first image by a further amount; and outputting the first image and the transformed second image for stereoscopic display is disclosed. A corresponding apparatus is also disclosed.08-04-2011
20110187825SYSTEMS AND METHODS FOR PRESENTING THREE-DIMENSIONAL CONTENT USING APERTURES - Systems and methods are presented for presenting three-dimensional video content to one or more viewers. In an exemplary embodiment, a system comprises a display comprising a plurality of pixels, an opaque material interposed in a line-of-sight between the display and the viewer, and a processor coupled to the display. The opaque material comprises a plurality of apertures formed therein. The processor and the display are cooperatively configured to display right channel content on a first subset of the plurality of pixels that are viewable by a right eye of the viewer through the apertures and display left channel content on a second subset of the plurality of pixels that are viewable by a left eye of the viewer through the apertures.08-04-2011
20110187832NAKED EYE THREE-DIMENSIONAL VIDEO IMAGE DISPLAY SYSTEM, NAKED EYE THREE-DIMENSIONAL VIDEO IMAGE DISPLAY DEVICE, AMUSEMENT GAME MACHINE AND PARALLAX BARRIER SHEET - The present invention realizes a naked eye three-dimensional video image display device that alleviates a jump point. In the naked eye three-dimensional video image display device of the invention, a slit of a parallax barrier is arranged in a zigzag or curved shape and the edge of the slit has a shape of an elliptic arc, so that a moderate view mix is generated to alleviate the jump point. Since a perforated parallax barrier is designed after an area to be viewed on a pixel arrangement surface is determined, the parallax barrier can be appropriately provided with an effect of the view mix.08-04-2011
20110187831APPARATUS AND METHOD FOR DISPLAYING THREE-DIMENSIONAL IMAGES - According to the present disclosure, there is disclosed a method and device for displaying a 3-dimensional image, which may provide an improved depth perception. The method according to present invention comprises: forming parallax images for left eye and right eye, each of the parallax images including a plurality of images corresponding to images at different depths for a same object; controlling a brightness of the images of each of the parallax images for the left eye and the right eye; and displaying the parallax images for the left eye and the right eye.08-04-2011
20110187830METHOD AND APPARATUS FOR 3-DIMENSIONAL IMAGE PROCESSING IN COMMUNICATION DEVICE - An apparatus and a method for 3-Dimensional (3D) image processing in a communication device are provided. The method includes obtaining an original image for providing a 3D image digital multimedia broadcasting service, setting the original image provided from the controller into a right-side image and generating a left-side image which differs from the right-side image, converting the left-side image and the right-side image into a side-by-side format image by combining the left-side image and the right-side image, dividing each of the combined two images into a plurality of blocks, determining a search region for each of the divided blocks within an image, and estimating a motion vector of each block based on the search region.08-04-2011
20110187829IMAGE CAPTURE APPARATUS, IMAGE CAPTURE METHOD AND COMPUTER READABLE MEDIUM - There is provided an image capture apparatus. The apparatus includes: an image capture section configured to capture an image of a subject; a focal point distance detector configured to detect a focal point distance from a main point of the image capture section to a focal point of the image capture section on the subject; an image acquisition section configured to acquire first and second images of the subject; an image position detector configured to detect a first image position and a second image position, wherein the first image position represents a position of a certain point on the subject in the first image, and the second image position represents a position of the certain point on the subject in the second image; a 3D image generator configured to generate a 3D image of the subject based on a difference between the first image position and the second image position; a parallelism computation section configured to compute parallelism based on the first and second image positions and the focal point distance; and a display section configured to display the parallelism.08-04-2011
20110149043DEVICE AND METHOD FOR DISPLAYING THREE-DIMENSIONAL IMAGES USING HEAD TRACKING - Disclosed herein are a device and method for displaying 3D images. The device includes an image processing unit for calculating the location of a user relative to a reference point and outputting a 3D image which is obtained by performing image processing on 3D content sent by a server based on the calculated location of the user, the image processing corresponding to a viewpoint of the user, and a display unit for displaying the 3D image output by the image processing unit to the user. The method includes calculating the location of a user relative to a reference point, performing image processing on 3D content sent by a server from a viewpoint of the user based on the calculated location of the user, and outputting a 3D image which is obtained by the image processing, and displaying the 3D image output by the image processing unit to the user.06-23-2011
20100079582Method and System for Capturing and Using Automatic Focus Information - Methods and digital image capture devices are provided for capturing and using automatic focus information. Methods include building a three dimension (3D) focus map for a digital image on a digital image capture device, using the 3D focus map in processing the digital image, and storing the digital image. Digital image capture devices include a processor, a lens, a display operatively connected to the processor, means for automatic focus operatively connected to the processor and the lens, and a memory storing software instructions, wherein when executed by the processor, the software instructions cause the digital image capture device to initiate capture of a digital image, build a three dimension (3D) focus map for the digital image using the means for automatic focus, and complete capture of the digital image.04-01-2010
20100026784METHOD AND SYSTEM TO CONVERT 2D VIDEO INTO 3D VIDEO - 2D/3D video conversion using a method for providing an estimation of visual depth for a video sequence, the method comprises an audio scene classification (02-04-2010
20090174765CAMERA DEVICE, LIQUID LENS, AND IMAGE PICKUP METHOD - A camera device is provided which has a compound eye structure only by a liquid lens unit and a control unit without requiring a plurality of lenses to be mounted in advance and is capable of taking a three-dimensional stereoscopic video image. In addition, a compact and lightweight three-dimensional stereoscopic camera is provided which can be switched to take a two-dimensional planar image or to take a three-dimensional stereoscopic image only by an electronic control with no need for a movable mechanism and can reduce the power consumption and improve the reliability. A camera device comprises a liquid lens (07-09-2009
20120062705OVERLAPPING CHARGE ACCUMULATION DEPTH SENSORS AND METHODS OF OPERATING THE SAME - One embodiment includes sequentially resetting rows and applying a gating signal to the rows sequentially in order in which the rows are reset; accumulating at each of the rows photocharge generated in response to an optical signal reflected from an object and the gating signal for an integration time; and reading a result of photocharge accumulation from each of the rows. A phase of the gating signal applied to a row with respect to which the reading has been completed, may be changed. A period of photocharge accumulation based on the gating signal having a changed phase in at least one row, which has been subjected to the reading and then reset, may overlap a period of photocharge accumulation in at least one row in which photocharge accumulation based on the gating signal having a phase before being changed is being carried out.03-15-2012
20120307011IMAGE PROCESSING APPARATUS AND IMAGE PROCESSING METHOD FOR DISPLAYING VIDEO IMAGE CAPABLE OF ACHIEVING IMPROVED OPERABILITY AND REALISM, AND NON-TRANSITORY STORAGE MEDIUM ENCODED WITH COMPUTER READABLE PROGRAM FOR CONTROLLING IMAGE PROCESSING APPARATUS - An exemplary embodiment provides an image processing apparatus. The image processing apparatus includes a first object control unit for changing an orientation or a direction of movement of a first object in a three-dimensional space, a virtual camera control unit for determining a direction of shooting of the virtual camera in the three-dimensional space, and a display data generation unit for generating display data based on the determined direction of shooting of the virtual camera. The virtual camera control unit includes a detection unit for detecting whether the first object is hidden by another object when viewed from the virtual camera and a first following change unit for increasing a degree of causing the direction of shooting of the virtual camera to follow the orientation or the direction of movement of the first object based on a result of detection.12-06-2012
20120307012ELECTRONIC DEVICE MOTION DETECTION AND RELATED METHODS - An electronic device may include an optical source generating an optical output, a lens cooperating with the optical source and projecting a grid optical pattern from the optical output, and a video sensor detecting changes in the grid optical pattern caused by movement of an object.12-06-2012
20120307010OBJECT DIGITIZATION - Digitizing objects in a picture is discussed herein. A user presents the object to a camera, which captures the image comprising color and depth data for the front and back of the object. For both front and back images, the closest point to the camera is determined by analyzing the depth data. From the closest points, edges of the object are found by noting large differences in depth data. The depth data is also used to construct point cloud constructions of the front and back of the object. Various techniques are applied to extrapolate edges, remove seams, extend color intelligently, filter noise, apply skeletal structure to the object, and optimize the digitization further. Eventually, a digital representation is presented to the user and potentially used in different applications (e.g., games, Web, etc.).12-06-2012
20120307009METHOD AND APPARATUS FOR GENERATING IMAGE WITH SHALLOW DEPTH OF FIELD - A method and an apparatus for generating an image with shallow depth of field are provided. The present method includes following steps. A subject is photographed according to a first aperture value, so as to generate a first aperture value image. The subject is photographed according to a second aperture value, so as to generate a second aperture value image, wherein the second aperture value is greater than the first aperture value. The first aperture value image and the second aperture value image are analyzed to generate an image difference value. If the image difference value is greater than a threshold, an image processing is performed on the first aperture value image to obtain the image with shallow depth of field.12-06-2012
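As a rough per-pixel reading of the comparison step above (the application thresholds a single whole-image difference), regions where the wide-aperture and narrow-aperture shots disagree can be treated as background and blurred. Grayscale inputs and all names are assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def shallow_dof(img_small_f, img_large_f, threshold=12.0, blur_sigma=5.0):
    """img_small_f: shot at the smaller aperture value (wider opening, shallow
    depth of field); img_large_f: shot at the larger aperture value. Pixels
    whose difference exceeds the threshold are taken to lie off the focal
    plane and are replaced by a blurred version of the sharper image."""
    diff = np.abs(img_small_f.astype(float) - img_large_f.astype(float))
    background = diff > threshold
    blurred = gaussian_filter(img_large_f.astype(float), sigma=blur_sigma)
    return np.where(background, blurred, img_large_f).astype(img_large_f.dtype)
```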
20120307008PORTABLE ELECTRONIC DEVICE WITH RECORDING FUNCTION - A portable electronic device with recording function includes a main body, a hinge member, a covering body, a first image capturing module and a second image capturing module. One side of the hinge member is connected to the main body and another side is connected to the covering body so that the covering body has at least a rotary axial direction of movement. The first image capturing module is disposed on the main body and includes a zoom lens and a first image-sensing element. The second image capturing module is disposed on the covering body and includes a stereoscopic image recording unit and at least a second image-sensing element.12-06-2012
20110304696Time-of-flight imager - An improved solution for generating depth maps using time-of-flight measurements is described, more specifically a time-of-flight imager and a time-of-flight imaging method with an improved accuracy. A depth correction profile is applied to the measured depth maps, which takes into account propagation delays within an array of pixels of a sensor of the time-of-flight imager.12-15-2011
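Applying a depth correction profile as described above amounts to subtracting, per pixel, the range error introduced by the extra in-sensor propagation delay: measured time maps to depth as c·t/2, so a delay dt inflates the apparent depth by c·dt/2. A minimal, assumption-laden sketch:

```python
import numpy as np

C = 299_792_458.0  # speed of light in m/s

def apply_depth_correction(depth_map_m, pixel_delay_s):
    """Subtract the per-pixel apparent range offset caused by extra propagation
    delay inside the pixel array; the delay profile would come from sensor
    calibration (names are illustrative)."""
    return depth_map_m - C * np.asarray(pixel_delay_s) / 2.0
```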
20110050854IMAGE PROCESSING DEVICE AND PSEUDO-3D IMAGE CREATION DEVICE - The present invention provides an apparatus that includes a color and polarization image capturing section 03-03-2011
20110304695MOBILE TERMINAL AND CONTROLLING METHOD THEREOF - A mobile terminal and controlling method thereof are disclosed, by which a focal position of a 3D image is controlled in accordance with a viewer's position and by which guide information on the focal position is provided to the viewer. The present invention includes a display unit configured to display a 3D image, a sensing unit configured to detect position information of a viewer, the sensing unit comprising at least one selected from the group consisting of at least one proximity sensor, at least one distance sensor and at least one camera, and a controller receiving the position information of the user from the sensing unit, the controller controlling the mobile terminal to help the viewer find a focal position of the 3D image based on the position information of the user, or the controller controlling the mobile terminal to vary the focal position of the 3D image in accordance with a position of the viewer.12-15-2011
20110304693FORMING VIDEO WITH PERCEIVED DEPTH - A method for providing a video with perceived depth comprising: capturing a sequence of video images of a scene with a single perspective image capture device; determining a relative position of the image capture device for each of the video images in the sequence of video images; selecting stereo pairs of video images responsive to the determined relative position of the image capture device; and forming a video with perceived depth based on the selected stereo pairs of video images.12-15-2011
201200387452D to 3D User Interface Content Data Conversion - A method of two dimensional (2D) content data conversion to three dimensional (3D) content data in a 3D television involves receiving 3D video content and 2D user interface content data via a 2D to 3D content conversion module. A displacement represented by disparity data that defines a separation of left eye and right eye data for 3D rendering of the 2D user interface content data is determined. The 3D video content is displayed on a display of the 3D television. 3D user interface content data is generated at a 3D depth on the display based upon the received 2D user interface content data and the determined displacement. This abstract is not to be considered limiting, since other embodiments may deviate from the features described in this abstract.02-16-2012
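One common way to realize the determined displacement above is to render two copies of the flat UI content shifted horizontally by plus and minus half the disparity; the sketch below uses a wrap-around shift purely for brevity, and the sign convention is an assumption:

```python
import numpy as np

def render_ui_at_depth(ui_rgba, disparity_px):
    """Produce left-eye and right-eye copies of 2D user interface content
    separated by the given disparity so that, composited over the 3D video,
    the UI appears at a chosen depth."""
    half = int(round(disparity_px / 2))
    left_eye = np.roll(ui_rgba, -half, axis=1)
    right_eye = np.roll(ui_rgba, half, axis=1)
    return left_eye, right_eye
```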
20110080471Hybrid method for 3D shape measurement - A method for three-dimensional shape measurement provides for generating sinusoidal fringe patterns by defocusing binary patterns. A method for three-dimensional shape measurement may include (a) projecting a plurality of binary patterns onto at least one object; (b) projecting three phase-shifted fringe patterns onto the at least one object; (c) capturing images of the at least one object with the binary patterns and the phase-shifted fringe patterns; (d) obtaining codewords from the binary patterns; (e) calculating a wrapped phase map from the phase-shifted fringe patterns; (f) applying the codewords to the wrapped phase map to produce an unwrapped phase map; and (g) computing coordinates using the unwrapped phase map for use in the three-dimensional shape measurement of the at least one object. A system for performing the method is also provided. The high-speed real-time 3D shape measurement may be used in numerous applications including medical science, biometrics, and entertainment.04-07-2011
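The wrapped-phase computation for three fringe images shifted by -120°, 0° and +120° is the standard three-step phase-shifting formula; the codewords decoded from the binary patterns then supply the fringe order used for unwrapping. A compact sketch:

```python
import numpy as np

def wrapped_phase(i1, i2, i3):
    """Three-step phase shifting: for fringe images at phase shifts of -120,
    0 and +120 degrees, the wrapped phase is
    atan2(sqrt(3) * (I1 - I3), 2 * I2 - I1 - I3), in (-pi, pi]."""
    i1, i2, i3 = (np.asarray(x, dtype=float) for x in (i1, i2, i3))
    return np.arctan2(np.sqrt(3.0) * (i1 - i3), 2.0 * i2 - i1 - i3)

def unwrap_with_codewords(phase, fringe_order):
    """Remove the 2*pi ambiguity using the fringe order decoded from the
    binary patterns."""
    return phase + 2.0 * np.pi * fringe_order
```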
201000795813D CAMERA USING FLASH WITH STRUCTURED LIGHT - An imaging device capable of capturing depth information or surface profiles of objects is disclosed herein. The imaging device uses an enclosed flashing unit to project a sequence of structured light patterns onto an object and captures the light patterns reflected from the surfaces of the object by using an image sensor that is enclosed in the imaging device. The imaging device is capable of capturing an image of an object such that the captured image is comprised of one or more color components of a two-dimensional image of the object and a depth component that specifies the depth information of the object.04-01-2010
20130010069METHOD, SYSTEM AND COMPUTER PROGRAM PRODUCT FOR WIRELESSLY CONNECTING A DEVICE TO A NETWORK - From a bit stream, at least the following are decoded: a stereoscopic image of first and second views; a maximum positive disparity between the first and second views; and a minimum negative disparity between the first and second views. In response to the maximum positive disparity violating a limit on positive disparity, a convergence plane of the stereoscopic image is adjusted to comply with the limit on positive disparity. In response to the minimum negative disparity violating a limit on negative disparity, the convergence plane is adjusted to comply with the limit on negative disparity.01-10-2013
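Adjusting the convergence plane, as described above, is equivalent to shifting one view horizontally, which changes every disparity by the same constant; a toy sketch of picking that shift follows (the policy when both limits are violated is not specified, so only one correction is applied here):

```python
def convergence_shift(max_pos_disp, min_neg_disp, pos_limit, neg_limit):
    """Return a horizontal shift, in pixels, to apply between the two views so
    that the violated disparity limit is met; 0.0 when both limits hold."""
    if max_pos_disp > pos_limit:
        return pos_limit - max_pos_disp   # pull distant content toward the screen plane
    if min_neg_disp < neg_limit:
        return neg_limit - min_neg_disp   # push pop-out content back toward the screen
    return 0.0

# e.g. convergence_shift(45, -12, pos_limit=30, neg_limit=-25) -> -15 pixels
```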
20130010068AUGMENTED REALITY SYSTEM - Methods and systems for providing an augmented reality system are disclosed. In one instance, an augmented reality system may: identify a feature within a three-dimensional environment; project information into the three-dimensional environment; collect an image of the three-dimensional environment and the projected information; determine at least one of distance and orientation of the feature from the projected information; identify an object within the three-dimensional environment; and perform markerless tracking of the object.01-10-2013
201100634163D IMAGING DEVICE AND METHOD FOR MANUFACTURING SAME - The invention concerns a 3D imaging device comprising a photodetector (03-17-2011
20120176477Methods, Systems, Devices and Associated Processing Logic for Generating Stereoscopic Images and Video - The present invention includes methods, systems, devices and associated processing logic for generating stereoscopic 3-Dimensional images and/or video from 2-Dimensional images or video. There may be provided a stereoscopic 3D generating system to extrapolate and render 2D complementary images and/or video from a first 2D image and/or video. The complementary images and/or video, when combined with the first image or video, or a second complementary image or video, form a stereoscopic image of the scene captured in the first image or video. The stereoscopic 3D generation system may generate a complementary image or images, such that when a viewer views the first image or a second complementary image (shifted in the other direction from the first complementary image) with one eye and the complementary image with the other eye, an illusion of depth in the image is created (e.g. a stereoscopic 3D image).07-12-2012
201201764763D TIME-OF-FLIGHT CAMERA AND METHOD - A 3D time-of-flight camera and a corresponding method for acquiring information about a scene are disclosed. To increase the frame rate, the proposed camera comprises a radiation source, a radiation detector comprising one or more pixels, wherein a pixel comprises two or more detection units each detecting samples of a sample set of two or more samples, and an evaluation unit that evaluates said sample sets of said two or more detection units and generates scene-related information from said sample sets. Said evaluation unit comprises a rectification unit that rectifies a subset of samples of said sample sets by use of a predetermined rectification operator defining a correlation between samples detected by two different detection units of a particular pixel, and an information value calculator that determines an information value of said scene-related information from said subset of rectified samples and the remaining samples of the sample sets.07-12-2012
20110316974METHOD AND SYSTEM FOR REDUCING GHOST IMAGES OF THREE-DIMENSIONAL IMAGES - A method and system for reducing ghost images in three-dimensional (3D) images are disclosed. The method comprises: calculating a brightness difference distribution between a left-eye image and a right-eye image; determining a space factor indicating a brightness change resulting from the brightness difference distribution on the left-eye image or the right-eye image, the space factor being determined according to two-dimensional (12-29-2011
20110316979Method and Apparatus For Vehicle Service System Optical Target Assembly - A machine vision vehicle wheel alignment system for acquiring measurements associated with a vehicle. The system includes at least one imaging sensor having a field of view and at least one optical target secured to a wheel assembly on a vehicle within the field of view of the imaging sensor. The optical target includes a plurality of visible target elements disposed on at least two surfaces in a determinable geometric and spatial configuration. A processing unit in the system is configured to receive at least two sets of image data from the imaging sensor, with each set of image data acquired at a different rotational position of the wheel assembly around an axis of rotation and representative of at least one visible target element on each of the two surfaces, from which the processing unit is configured to identify said axis of rotation of the wheel assembly.12-29-2011
20110316978INTENSITY AND COLOR DISPLAY FOR A THREE-DIMENSIONAL METROLOGY SYSTEM - Described are a method and apparatus for generating a display of a three-dimensional (“3D”) metrology surface. The method includes determining a 3D point cloud representation of a surface of an object in a point cloud coordinate space. An image of the object is acquired in a camera coordinate space and then transformed from the camera coordinate space to the point cloud coordinate space. The transformed image is mapped onto the 3D point cloud representation to generate a realistic display of the surface of the object. In one embodiment, a metrology camera used to acquire images for determination of the 3D point cloud is also used to acquire the image of the object so that the transformation between coordinate spaces is not performed. The display includes a grayscale or color shading for the pixels or surface elements in the representation.12-29-2011
20110316977METHOD OF CNC PROFILE CUTTING PROGRAM MANIPULATION - A method of CNC program file cutting manipulation. The CNC imaging system comprises a capture device and a bed located below the capture device, wherein the bed has at least two reference points affixed thereto. A cutting head is mounted above the bed, and a controller controls movement of the cutting head. An image processing device communicates with the capture device and with the controller.12-29-2011
20110316976IMAGING APPARATUS CAPABLE OF GENERATING THREE-DIMENSIONAL IMAGES, THREE-DIMENSIONAL IMAGE GENERATING METHOD, AND RECORDING MEDIUM - An imaging apparatus generates a 3D model using a photographed image of a subject and generates a 3D image based on the 3D model. When a corresponding point corresponding to a point forming the 3D model does not form a 3D model generated using a photographed image photographed at a different photographing position, the imaging apparatus determines that the point is noise, and removes the point determined as noise from the 3D model. The imaging apparatus generates a 3D image based on the 3D model from which the point determined as noise is removed.12-29-2011
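The noise test described above, keeping only points that are supported by a 3D model built from a different photographing position, can be approximated by a nearest-neighbour check. This sketch assumes both models are already expressed in a common coordinate frame; the tolerance value and the brute-force search are arbitrary illustrative choices, not taken from the patent.

import numpy as np

def remove_unsupported_points(model_a, model_b, tol=0.01):
    """Keep only points of model_a that have a corresponding point in model_b
    within distance 'tol'; points without such support are treated as noise.
    model_a, model_b: (N, 3) and (M, 3) arrays of 3D points."""
    kept = []
    for p in model_a:
        d = np.linalg.norm(model_b - p, axis=1)   # distance to every point of model_b
        if d.min() <= tol:
            kept.append(p)
    return np.array(kept)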
20110316975STEREO IMAGING APPARATUS AND METHOD - A stereo imaging apparatus comprises two groups of optical imaging lens (12-29-2011
20110157316IMAGE MANAGEMENT METHOD - A method for managing an image photographed by two or more image pickup devices corresponding to two or more viewpoints comprises: storing a 2D image photographed by the two or more image pickup devices, with an identifier indicating that the image is two-dimensional; and storing a 3D image photographed by the two or more image pickup devices, with an identifier indicating that the image is three-dimensional. Hence, a target 2D or 3D image can be searched for and displayed quickly by accessing images on a per-folder basis.06-30-2011
20110157315INTERPOLATION OF THREE-DIMENSIONAL VIDEO CONTENT - Techniques are described herein for interpolating three-dimensional video content. Three-dimensional video content is video content that includes portions representing respective frame sequences that provide respective perspective views of a given subject matter over the same period of time. For example, the three-dimensional video content may be analyzed to identify one or more interpolation opportunities. If an interpolation opportunity is identified, frame data that is associated with the interpolation opportunity may be replaced with an interpolation marker. In another example, a frame that is not directly represented by data in the three-dimensional video content may be identified. For instance, the frame may be represented by an interpolation marker or corrupted data. The interpolation marker or corrupted data may be replaced with an interpolated representation of the frame.06-30-2011
20110157313Stereo Image Server and Method of Transmitting Stereo Image - The present invention discloses a method of transmitting three dimensional information. The method includes providing a remote server having a three dimensional image database, and a local terminal device coupled to the remote server via network, wherein the local terminal device includes a stereo image display. The local terminal device transmits a request command for three dimensional images to the remote server through the network, followed by sending the desired three dimensional images to the local terminal device through the network.06-30-2011
20110157312IMAGE PROCESSING APPARATUS AND METHOD - An image processing apparatus includes: a creating means that creates identification information identifying a type of an image of image data among a 2D image for planar viewing, a 3D image for stereoscopic viewing including left and right images, and a duplex image which is capable of both stereoscopic viewing including the left and right images and planar viewing with disparity lower than that of the 3D image; and a transmitting means that transmits the identification information created by the creating means and the image data.06-30-2011
20120002013VIDEO PROCESSING APPARATUS AND CONTROL METHOD THEREOF - There is provided a video processing apparatus comprising: a video data obtainment unit configured to obtain arbitrary viewpoint video data having an arbitrary viewpoint specified by a user, the arbitrary viewpoint video data being generated based on a plurality of video data having different viewpoints; an output unit configured to output the arbitrary viewpoint video data obtained by the video data obtainment unit to a display unit; and a notification unit configured to notify the user based on a degree of matching between first viewpoint information corresponding to the plurality of video data having different viewpoints and second viewpoint information corresponding to the arbitrary viewpoint video data.01-05-2012
20110096148IMAGING INSPECTION DEVICE - In a method for recording a thermographic inspection image (04-28-2011
20120007953MOBILE TERMINAL AND 3D IMAGE CONTROLLING METHOD THEREIN - A mobile terminal and 3D image controlling method therein are provided, by which a user can be informed of a state of a 3D effect on one or more objects in a 3D image. The mobile terminal includes a first camera configured to capture a left-eye image for generating a 3D image, a second camera configured to capture a right-eye image for generating the 3D image, a display unit configured to display the 3D image generated based on the left-eye image and the right-eye image, and a controller configured to determine an extent of a 3D effect on at least one object included in the 3D image, and to control the display unit to display information indicating the determined extent of the 3D effect.01-12-2012
201200569883-D CAMERA - A 3-D camera is disclosed. The 3-D camera includes an optical system, a front-end block, and a processor. The front-end block further includes a combined image sensor to generate an image, which includes color information and near infra-red information of a captured object and a near infra-red projector to generate one or more patterns. The processor is to generate a color image and a near infra-red image from the image and then generate a depth map using the near infra-red image and the one or more patterns from a near infra-red projector. The processor is to further generate a full three dimensional color model based on the color image and the depth map, which may be aligned with each other.03-08-2012
20120206576STEREOSCOPIC IMAGING METHOD AND SYSTEM THAT DIVIDES A PIXEL MATRIX INTO SUBGROUPS - A stereoscopic imaging method where a pixel matrix is divided into groups such that parallax information is received by one pixel group and original information is received by another pixel group. The parallax information may, specifically, be based on polarized information received by subgroups of the one pixel group, and by processing all of the received information, multiple images are rendered by the method.08-16-2012
20120206575Method and Device for Calibrating a 3D TOF Camera System - A method for calibrating a three dimensional time-of-flight camera system mounted on a device, includes determining at a first instant a direction vector relating to an object; determining an expected direction vector and expected angle for the object to be measured at a second instant with reference to the device's assumed trajectory and optical axis of the camera; determining a current direction vector and current angle at the second instant; determining an error represented by a difference between the current direction vector and the expected direction vector; and using the error to correct the assumed direction of the main optical axis of the camera system such that said error is substantially eliminated.08-16-2012
20120206573METHOD AND APPARATUS FOR DETERMINING DISPARITY OF TEXTURE - A method and system to determine the disparity associated with one or more textured regions of a plurality of images is presented. The method comprises the steps of breaking up the texture into its color primitives, further segmenting the textured object into any number of objects comprising such primitives, and then calculating a disparity of these objects. The textured objects emerge in the disparity domain, after having their disparity calculated. Accordingly, the method is further comprised of defining one or more textured regions in a first of a plurality of images, determining a corresponding one or more textured regions in a second of the plurality of images, segmenting the textured regions into their color primitives, and calculating a disparity between the first and second of the plurality of images in accordance with the segmented color primitives.08-16-2012
20120007954METHOD AND APPARATUS FOR A DISPARITY-BASED IMPROVEMENT OF STEREO CAMERA CALIBRATION - A method and apparatus for camera calibration. The method is for disparity estimation of the camera calibration and includes collecting statistical information from at least one disparity image, inferring sub-pixel misalignment between a left view and a right view of the camera, and utilizing the collected statistical information and the inferred sub-pixel misalignment for calibration refinement.01-12-2012
20120044326Laser Scanner Device and Method for Three-Dimensional Contactless Recording of the Surrounding Area by Means of a Laser Scanner Device - The invention relates to a laser scanner device (02-23-2012
201200569873D CAMERA SYSTEM AND METHOD - A system and method for generating 3D images comprising a plurality of fully-adjustable optical elements arranged in pyramidical configurations on parallel planes such that the cameras have different convergent points and focal points.03-08-2012
20120013710SYSTEM AND METHOD FOR GEOMETRIC MODELING USING MULTIPLE DATA ACQUISITION MEANS - A system and a method for modeling a predefined space including at least one three-dimensional physical surface, referred to hereinafter as a “measuring space”. The system and method use a scanning system enabling acquisition of three-dimensional (3D) data of the measuring space and at least one two-dimensional (2D) sensor enabling acquisition of 2D data of the measuring space. The system and method may enable generation of combined compound reconstructed data (CRD), which is a 3D geometrical model of the measuring space, by combining the acquired 2D data with the acquired 3D data and reconstructing additional 3D points from the combined 3D and 2D data, thereby generating the CRD model. The generated CRD model includes a point cloud with a substantially higher density of points than that of the corresponding acquired 3D data point cloud from which the CRD was generated.01-19-2012
20120013712SYSTEM AND ASSOCIATED METHODS OF CALIBRATION AND USE FOR AN INTERACTIVE IMAGING ENVIRONMENT - In various embodiments, the present invention provides a system and associated methods of calibration and use for an interactive imaging environment based on the optimization of parameters used in various segmentation algorithm techniques. These methods address the challenge of automatically calibrating an interactive imaging system, so that it is capable of aligning human body motion, or the like, to a visual display. As such, the present invention provides a system and method of automatically and rapidly aligning the motion of an object to a visual display.01-19-2012
20120013711METHOD AND SYSTEM FOR CREATING THREE-DIMENSIONAL VIEWABLE VIDEO FROM A SINGLE VIDEO STREAM - A method is provided for generating a 3D representation of a scene, initially represented by a first video stream captured by a certain camera at a first set of viewing configurations. The method includes providing video streams compatible with capturing the scene by cameras, and generating an integrated video stream enabling three-dimensional display of the scene by integration of two video streams. The method includes calculating parameters characterizing a viewing configuration by analysis of elements having known geometrical parameters. The scene may be a sport scene that includes a playing field, a group of on-field objects and a group of background objects. The method includes segmenting a frame into those portions, separately associating each portion with a different viewing configuration, and merging them into a single frame. Also, the method may include calculating on-field footing locations of on-field objects, computing new locations in a new frame, and transforming the on-field objects to the respective frame as 2D objects. Furthermore, the method may include synthesizing on-field objects by segmenting portions of the object from respective frames of the first video stream, stitching the portions together and rendering the stitched object within a synthesized frame.01-19-2012
20120056992IMAGE GENERATION SYSTEM, IMAGE GENERATION METHOD, AND INFORMATION STORAGE MEDIUM - An image generation system includes a captured image acquisition section that acquires a captured image captured by an imaging section, a depth information acquisition section that acquires depth information about a photographic object observed within the captured image, an object processing section that performs a process that determines a positional relationship between the photographic object and a virtual object in a depth direction based on the acquired depth information, and synthesizes the virtual object with the captured image, and an image generation section that generates an image in which the virtual object is synthesized with the captured image.03-08-2012
20120056991IMAGE SENSOR, VIDEO CAMERA, AND MICROSCOPE - An image sensor (03-08-2012
20120056990IMAGE REPRODUCTION APPARATUS AND CONTROL METHOD THEREFOR - An image reproduction apparatus includes: a reproduction unit that reproduces a stereo image; a mode setting unit that sets one mode from a plurality of modes which include a first mode and a second mode; an adjustment unit that adjusts a maximum value of a disparity between an image for left eye and an image for right eye of the stereo image according to the mode that has been set by the mode setting unit; and a generation unit that generates a stereo image from the image for left eye and the image for right eye for which the disparity has been adjusted by the adjustment unit and outputs the generated stereo image, wherein the adjustment unit makes a maximum value of the disparity in the second mode less than a maximum value of the disparity in the first mode.03-08-2012
20120056989IMAGE RECOGNITION APPARATUS, OPERATION DETERMINING METHOD AND PROGRAM - An object is to enable accurate determination of an operation. A virtual operation screen is determined from an image or position of an operator photographed by a video camera; the relative relation between the operator and this virtual operation screen is used to determine that an operation starts when a part of the operator crosses to the camera side of the operation screen, and the configuration or movement of each portion of the operator is then matched against operations estimated in advance to determine which of those operations the configuration or movement corresponds to.03-08-2012
20090135247STEREOSCOPIC CAMERA FOR RECORDING THE SURROUNDINGS - A stereoscopic camera for recording the surroundings is provided with a right and a left image sensor having one lens each to display the surroundings on the image sensors, with the image sensors and the lenses being held by a carrier side-by-side and at a distance in reference to each other. The stereoscopic camera is additionally provided with a circuit board arranged on the carrier and comprising at least the signal and the supply lines of both image sensors. The image sensors are each mounted on a carrier substrate; the carrier substrates, similar to the lenses, are arranged on the carrier, are distanced in reference to the circuit board, and have a flexible electric connection to the circuit board.05-28-2009
20120206572METHOD OF CALCULATING 3D OBJECT DATA WITHIN CONTROLLABLE CONSTRAINTS FOR FAST SOFTWARE PROCESSING ON 32 BIT RISC CPUS - Systems and methods are described to allow arbitrary 3D data to be rendered to a 2D viewport on a device with limited processing capabilities. 3D vertex data is received comprising vertices and connections conforming to coordinate processing constraints. A position and orientation of a camera in world co-ordinates is received to render the 3D vertex data from. A processing zone of the plurality of processing zones the position of the camera is in is determined. The vertices of the 3D vertex data assigned to the determined processing zone are transformed based on the position and orientation of the camera for rendering to the viewport.08-16-2012
20120154540THREE DIMENSIONAL MEASUREMENT APPARATUS AND THREE DIMENSIONAL MEASUREMENT METHOD - A three dimensional measurement apparatus includes a projection unit that projects, to a measurement target object, a first pattern light including alternately arranged bright parts and dark parts and a second pattern light in which a phase of the first pattern light is shifted, and an imaging unit that images the measurement target object on which the first or second pattern light is projected. When a period of repetitions of the bright parts and the dark parts of the pattern light is one period, a range of imaging on the measurement target object by one pixel included in the imaging unit is an image distance, and the length of one period of the projected pattern light on the measurement target object surface is M times the image distance, the projection unit and the imaging unit are arranged to satisfy “2×N−0.2≦M≦2×N+0.2 (where N is not less than 2)”.06-21-2012
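The arrangement condition quoted above (2×N−0.2 ≦ M ≦ 2×N+0.2 with N not less than 2) is easy to check numerically. The helper below only evaluates that inequality; the function and parameter names are hypothetical.

def satisfies_arrangement(M, tol=0.2, n_min=2):
    """Check whether the period length M (in multiples of the image distance of
    one pixel) satisfies 2*N - tol <= M <= 2*N + tol for some integer N >= n_min."""
    N = round(M / 2)
    return N >= n_min and abs(M - 2 * N) <= tol

print(satisfies_arrangement(4.1))   # True  (N = 2)
print(satisfies_arrangement(5.0))   # False (no integer N within tolerance)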
20120154541APPARATUS AND METHOD FOR PRODUCING 3D IMAGES - A camera module includes a single lens system, a sensor and an image enhancer. The image enhancer is operable to enhance a single image captured by the sensor via the single lens system. The image enhancer performs opto-algorithmic processing to extend the depth of field of the single lens system, a mapping to derive a depth map from the captured single image; and image processing to calculate suitable offsets from the depth map as is required to produce a 3-dimensional image. The calculated offsets are applied to appropriate image channels so as to obtain the 3-dimensional image from the single image capture.06-21-2012
20120154539IMAGE PROCESSING APPARATUS AND IMAGE PROCESSING METHOD - According to one embodiment, an image processing apparatus including a background image generator which generates a background image, a receiver which receives additional information, a depth memory which stores in advance a depth for each of types of the additional information, a depth decide module which determines a type of the additional information received by the receiver, and reads a depth which is associated with the determined type from the depth memory, a three-dimensional image generator which generates an object image based on the additional information, and generates a three-dimensional image based on the object image and the depth which is read by the depth decide module, an image composite module which generates a video signal by displaying the background image and displaying the three-dimensional image in front of the displayed background image, and an output module which outputs the video signal generated by the image composite module.06-21-2012
20120154537IMAGE SENSORS AND METHODS OF OPERATING THE SAME - According to example embodiments, a method of operating a three-dimensional image sensor comprises measuring a distance of an object from the three-dimensional image sensor using light emitted by a light source module, and adjusting an emission angle of the light emitted by the light source module based on the measured distance. The three-dimensional image sensor includes the light source module.06-21-2012
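One plausible reading of the emission-angle adjustment is to widen the illumination cone for near objects and narrow it for distant ones so that a fixed scene width stays covered. The specific mapping below is an assumption made for illustration, not the patent's formula.

import math

def emission_angle_for_distance(distance_m, scene_width_m):
    """Pick a full emission angle (degrees) just wide enough to illuminate a
    scene of the given width at the measured distance: closer objects get a
    wider cone, distant objects a narrower, more concentrated one."""
    half_angle = math.atan((scene_width_m / 2.0) / distance_m)
    return math.degrees(2.0 * half_angle)

print(round(emission_angle_for_distance(1.0, 1.0), 1))   # ~53.1 degrees
print(round(emission_angle_for_distance(4.0, 1.0), 1))   # ~14.3 degrees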
20120154538IMAGE PROCESSING APPARATUS AND IMAGE PROCESSING METHOD - According to one embodiment, an image processing apparatus includes a background image generator which generates a background image, a receiver which receives additional information, a depth calculator which determines a first depth based on the additional information and calculates a second depth based on the first depth, a first three-dimensional image generator which generates a first object image based on the additional information and generates a first three-dimensional image based on the first object image and the first depth, a second three-dimensional image generator which generates a second object image based on the additional information and generates a second three-dimensional image based on the second object image and the second depth, at least part of the second three-dimensional image being displayed in an area overlapping the first three-dimensional image, an image composite module which generates a video signal by displaying the background image, displaying the second three-dimensional image in front of the displayed background image, and displaying the first three-dimensional image in front of the displayed second three-dimensional image, and an output module which outputs the video signal.06-21-2012
20120154535CAPTURING GATED AND UNGATED LIGHT IN THE SAME FRAME ON THE SAME PHOTOSURFACE - A photosensitive surface of an image sensor, hereafter a photosurface, of a gated 06-21-2012
20120154536METHOD AND APPARATUS FOR AUTOMATICALLY ACQUIRING FACIAL, OCULAR, AND IRIS IMAGES FROM MOVING SUBJECTS AT LONG-RANGE - The present invention relates to a method and apparatus for long-range facial and ocular acquisition. One embodiment of a system for acquiring an image of a subject's facial feature(s) includes a steerable telescope configured to acquire the image of the facial feature(s), a first computational imaging element configured to minimize the effect of defocus in the image of the facial feature(s), and a second computational imaging element configured to minimize the effects of motion blur. In one embodiment, the detecting, the acquiring, the minimizing the effect of the motion, and the minimizing the effect of the defocus are performed automatically without a human input.06-21-2012
20120026291IMAGE PROCESSING APPARATUS AND METHOD - Provided is an image processing apparatus and method thereof. The image processing apparatus may extract a three-dimensional (3D) bidirectional flow by analyzing data of an input object. The image processing apparatus may calculate a 3D volumetric center density of the input object based on the 3D bidirectional flow.02-02-2012
201200627043-D IMAGE PICKUP APPARATUS - A 3-D image pickup apparatus includes a lens portion that includes a lens system; and an adjustment ring portion that includes plural coaxially rotatable rings. Each ring adjusts a respective one of plural optical parameters of the lens system.03-15-2012
20120026293METHOD AND MEASURING ASSEMBLY FOR DETERMINING THE WHEEL OR AXLE GEOMETRY OF A VEHICLE - In a method for determining a wheel or axle geometry of a vehicle, the following steps are provided: illuminating a wheel region with structured and with unstructured light during a motion of at least one wheel and/or of the vehicle; acquiring multiple images of the wheel region during the illumination, in order to create a three-dimensional surface model having surface parameters, a texture model having texture parameters, and a motion model having motion parameters of the sensed wheel region; calculating values for the surface parameters, the texture parameters, and the motion parameters using a variation computation as a function of the acquired images, in order to minimize a deviation of the three-dimensional surface model, texture model, and motion model from image data of the acquired images; and determining a rotation axis and/or a rotation center of the wheel as a function of the calculated values of the motion parameters.02-02-2012
20120026292MONITOR COMPUTER AND METHOD FOR MONITORING A SPECIFIED SCENE USING THE SAME - A method for monitoring a specified scene obtains a scene image of the specified scene captured by an image capturing device, determines a first sub-area of the scene image, detects a three dimensional (02-02-2012
20120026294DISTANCE-MEASURING OPTOELECTRONIC SENSOR FOR MOUNTING AT A PASSAGE OPENING - A distance measuring optoelectronic sensor (02-02-2012
20120026295STEREO IMAGE PROCESSOR AND STEREO IMAGE PROCESSING METHOD - A stereo image processor (02-02-2012
20120206574COMPUTER-READABLE STORAGE MEDIUM HAVING STORED THEREIN DISPLAY CONTROL PROGRAM, DISPLAY CONTROL APPARATUS, DISPLAY CONTROL SYSTEM, AND DISPLAY CONTROL METHOD - A virtual object placed in a three-dimensional virtual space and a user interface are stereoscopically displayed on an upper LCD of a game apparatus. An image used for adjusting a display position of the user interface in the depth direction is displayed on a lower LCD. A user adjusts a UI adjustment slider by using a touch pen to adjust the parallax of the user interface. This allows the adjustment of the parallax of the user interface separately from the parallax of the three-dimensional virtual space, and thereby the depth perception of the user interface can be adjusted.08-16-2012
20120105587METHOD AND APPARATUS OF MEASURING DEPTH INFORMATION FOR 3D CAMERA - Provided is a depth information measuring method and apparatus for a three-dimensional (3D) camera. The depth information measuring method and apparatus may output, to an object, an optical pulse of which an intensity is higher than an intensity of an ambient light. The depth information measuring method and apparatus may generate a voltage that is proportional to a log value of an intensity of a light reflected from the object. The depth information measuring method and apparatus may use discharging units, and the discharging units may respectively include dischargers having different discharging speeds or capacitors having different capacities.05-03-2012
20120105590ELECTRONIC EQUIPMENT - Electronic equipment includes a target output image generating portion that generates a target output image by changing a depth of field of a target input image by image processing, a monitor that displays on a display screen a distance histogram indicating a distribution of distance between an object at each position in the target input image and an apparatus that photographed the target input image, and displays on the display screen a selection index that is movable along a distance axis in the distance histogram, and a depth of field setting portion that sets a depth of field of the target output image based on a position of the selection index determined by an operation for moving the selection index along the distance axis.05-03-2012
20120105589REAL TIME THREE-DIMENSIONAL MENU/ICON SHADING - An image display apparatus is disclosed, comprising: a two-dimensional display for displaying three-dimensional object images; an imaging unit for capturing an image of a user who is viewing the display screen; a processing unit for determining a face direction of the user from the captured image; a tilt sensor for determining an angle of the image display apparatus, wherein the processing unit determines a virtual light direction by subtracting the angle of the image display apparatus from the face direction; and a projection image generator for projecting the three-dimensional objects onto the display, wherein lighting and shading are applied to the three-dimensional objects based on the virtual light direction. A method for displaying three-dimensional images and a computer program for implementing the method are also disclosed.05-03-2012
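The shading rule stated in this abstract, taking the virtual light direction as the face direction minus the device angle, reduces to a one-line computation if both quantities are expressed as scalar angles. Treating them as single angles rather than per-axis orientations is a simplification made only for this sketch.

def virtual_light_direction(face_direction_deg, device_tilt_deg):
    """Virtual light direction used for shading, obtained by subtracting the
    device tilt (from the tilt sensor) from the user's face direction
    (estimated from the captured image)."""
    return face_direction_deg - device_tilt_deg

# e.g. face seen 20 degrees to the left while the device is tilted 5 degrees:
print(virtual_light_direction(20.0, 5.0))   # 15.0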
20120105588IMAGE CAPTURE DEVICE - An image capture device, to/from which a piece of equipment for use to shoot video or record audio is attachable and removable, includes: a communications section adapted to get, when such a piece of equipment is attached to the device, property information of the piece of equipment; a processor adapted to determine, by reference to the property information, whether a user interface to control the operation of the piece of equipment needs to be displayed or not; a display section adapted to display the user interface when instructed by the processor to do so; and a touchscreen panel adapted to allow the user to operate the user interface.05-03-2012
20120105586LATENT FINGERPRINT DETECTORS AND FINGERPRINT SCANNERS THEREFROM - An automatic fingerprint system includes an optical sensor having a first light source that provides a collimated beam for interrogating a first sample surface, and a camera including a lens and a photodetector array having a camera field of view (FOV05-03-2012
20120105585IN-HOME DEPTH CAMERA CALIBRATION - A system and method are disclosed for calibrating a depth camera in a natural user interface. The system in general obtains an objective measurement of true distance between a capture device and one or more objects in a scene. The system then compares the true depth measurement to the depth measurement provided by the depth camera at one or more points and determines an error function describing an error in the depth camera measurement. The depth camera may then be recalibrated to correct for the error. The objective measurement of distance to one or more objects in a scene may be accomplished by a variety of systems and methods.05-03-2012
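The calibration step of comparing true distances with the depth camera's measurements and deriving an error function can be sketched with an ordinary least-squares fit. The linear error model below is only one possible form of such a function; the patent does not specify the model, and the sample values are made up.

import numpy as np

def fit_depth_error_model(measured_depths, true_depths):
    """Fit a simple linear error model measured = a * true + b by least squares
    and return a correction function mapping measured depths back to
    (approximately) true depths."""
    a, b = np.polyfit(np.asarray(true_depths), np.asarray(measured_depths), 1)
    def correct(measured):
        return (np.asarray(measured) - b) / a
    return correct

correct = fit_depth_error_model([1.05, 2.08, 3.12], [1.0, 2.0, 3.0])
print(np.round(correct([1.05, 3.12]), 2))   # approximately [1. 3.]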
20120105584CAMERA WITH SENSORS HAVING DIFFERENT COLOR PATTERNS - An image capture device includes a lens arrangement having a first lens associated with a first digital image sensor and a second lens associated with a second digital image sensor; the first digital image sensor having photosites of a first predetermined color pattern for producing a first digital image; the second digital image sensor having photosites of a different second predetermined color pattern for producing a second digital image. The image capture device also includes a device for causing the lens arrangement to capture a first digital image from the first digital image sensor and a second digital image from the second digital image sensor at substantially the same time; and a processor that aligns the first and second digital images and, using values of the second image based on the alignment between the two images, operates on the first digital image to produce an enhanced digital image.05-03-2012
20120062702ONLINE REFERENCE GENERATION AND TRACKING FOR MULTI-USER AUGMENTED REALITY - A multi-user augmented reality (AR) system operates without a previously acquired common reference by generating a reference image on the fly. The reference image is produced by capturing at least two images of a planar object and using the images to determine a pose (position and orientation) of a first mobile platform with respect to the planar object. Based on the orientation of the mobile platform, an image of the planar object, which may be one of the initial images or a subsequently captured image, is warped to produce the reference image of a front view of the planar object. The reference image may be produced by the mobile platform or by, e.g., a server. Other mobile platforms may determine their pose with respect to the planar object using the reference image to perform a multi-user augmented reality application.03-15-2012
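Producing the front-view reference image amounts to warping one of the captured images with a homography derived from the estimated pose. The sketch below assumes that homography has already been computed and simply applies it with OpenCV; the matrix shown is synthetic and the function name is hypothetical.

import cv2
import numpy as np

def make_reference_image(captured, H_front_from_view, size):
    """Warp a captured image of the planar object into a fronto-parallel
    reference image, given the 3x3 homography mapping the captured view onto
    the frontal view (estimated from the mobile platform's pose)."""
    return cv2.warpPerspective(captured, H_front_from_view, size)

# Example with a synthetic image and a mildly perspective homography:
img = np.zeros((240, 320, 3), dtype=np.uint8)
H = np.array([[1.0, 0.05, 0.0],
              [0.0, 1.0,  0.0],
              [0.0, 1e-4, 1.0]])
ref = make_reference_image(img, H, (320, 240))
print(ref.shape)   # (240, 320, 3)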
20110043610THREE-DIMENSIONAL FACE CAPTURING APPARATUS AND METHOD AND COMPUTER-READABLE MEDIUM THEREOF - Disclosed is a 3D face capturing apparatus, method and computer-readable medium. As an example, the 3D face capturing method includes obtaining a face color image, obtaining a face depth image, aligning, by a computer, the face color image and the face depth image, obtaining, by the computer, a 3D face model by 2D modeling of the face color image and covering a modeled 2D face area on an image output by an image alignment module, removing by the computer, depth noise of the 3D face model, and obtaining, by the computer, an accurate 3D face model by aligning the 3D face model and a 3D face template, and removing residual noise based on a registration between the 3D face model and the 3D face template.02-24-2011
20100118122METHOD AND APPARATUS FOR COMBINING RANGE INFORMATION WITH AN OPTICAL IMAGE - A method for combining range information with an optical image is provided. The method includes capturing a first optical image of a scene with an optical camera, wherein the first optical image comprises a plurality of pixels. Additionally, range information of the scene is captured with a ranging device. Range values are then determined for at least a portion of the plurality of pixels of the first optical image based on the range information. The range values and the optical image are combined to produce a 3-dimensional (3D) point cloud. A second optical image of the scene from a different perspective than the first optical image is produced based on the 3D point cloud.05-13-2010
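Combining range values with the optical image into a 3D point cloud, and then rendering the scene from a different perspective, can be outlined with a pinhole camera model. This sketch treats the range as depth along the optical axis and omits colour attachment and resampling into the second image; the intrinsic parameters and function names are placeholders.

import numpy as np

def image_and_range_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project every pixel with a range value into a 3D point using a
    pinhole model; 'depth' is an (H, W) array of range values along the optical axis."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.stack([x, y, depth], axis=-1).reshape(-1, 3)

def reproject(points, R, t, fx, fy, cx, cy):
    """Project the 3D point cloud into a second, rotated/translated camera to
    obtain pixel coordinates for a new-perspective image."""
    p = points @ R.T + t
    u = fx * p[:, 0] / p[:, 2] + cx
    v = fy * p[:, 1] / p[:, 2] + cy
    return np.stack([u, v], axis=-1)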
20110037831METHOD FOR DETERMINING A THREE-DIMENSIONAL REPRESENTATION OF AN OBJECT USING POINTS, AND CORRESPONDING COMPUTER PROGRAM AND IMAGING SYSTEM - The method of the invention includes: determining a set of points of a space and a value of each of these points at a given moment, the set of points including the points of the object in the position thereof at the given moment; selecting a three-dimensional representation function that can be parameterized with parameters and an operation that gives, using the three-dimensional representation function, a function for estimating the value of each point in the space; and determining parameters, such that, for each point in the set, the estimation of the value of the point substantially gives the value of the point.02-17-2011
20110090316METHOD AND APPARATUS FOR 3-D IMAGING OF INTERNAL LIGHT SOURCES - The present invention provides systems and methods for obtaining a three-dimensional (3D) representation of one or more light sources inside a sample, such as a mammal. Mammalian tissue is a turbid medium, meaning that photons are both absorbed and scattered as they propagate through tissue. In the case where scattering is large compared with absorption, such as red to near-infrared light passing through tissue, the transport of light within the sample is described by diffusion theory. Using imaging data and computer-implemented photon diffusion models, embodiments of the present invention produce a 3D representation of the light sources inside a sample, such as a 3D location, size, and brightness of such light sources.04-21-2011
20110090315Capturing device, image processing method, and program - A capturing device includes a display section that changes between and displays a three-dimensional (3D) image and a two-dimensional (2D) image, and a controller that performs an image display control for the display section, wherein the controller changes a display mode of an image displayed on the display section, from a 3D image display to a 2D image display, in accordance with preset setting information, at the time of performing a focus control process.04-21-2011
20110090314METHOD AND APPARATUS FOR GENERATING STREAM AND METHOD AND APPARATUS FOR PROCESSING STREAM - Provided are a method and apparatus for generating a stream, and a method and apparatus for processing of the stream. The method of generating the stream includes: generating an elementary stream including three-dimensional (3D) image data providing a 3D image, and 3D detail information for reproducing the 3D image; generating a section including 3D summary information representing that a transport stream to be generated from the elementary stream provides the 3D image; and generating the transport stream with respect to the section and the elementary stream.04-21-2011
20110090313MULTI-EYE CAMERA AND METHOD FOR DISTINGUISHING THREE-DIMENSIONAL OBJECT - A stereo camera captures a pair of R and L viewpoint images. Upon a half press of a shutter release button, a preliminary photographing procedure is carried out. A binary image generator applies binary processing to each image, and a shadow extracting section extracts a shadow of a main subject from each binary image. A size calculating section calculates a size of each shadow, and a difference calculating section calculates a difference in size of the shadow between the images. If an absolute value of the difference is a size difference threshold value or more, the main subject is distinguished as a three-dimensional object suited to a 3D picture mode. Otherwise, the main subject is distinguished as a printed sheet suited to a 2D picture mode. Upon a full press of the shutter release button, an actual photographing procedure is carried out in the established 3D or 2D picture mode.04-21-2011
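The 2D-versus-3D decision described above, based on whether the extracted shadow differs in size between the two viewpoint images, reduces to a few lines once the shadow has been segmented. In this sketch the shadow is taken to be simply the dark pixels after intensity thresholding, and both threshold values are arbitrary stand-ins for the patent's extraction step and size difference threshold.

import numpy as np

def is_three_dimensional(left_img, right_img, bin_thresh=60, size_diff_thresh=500):
    """Binarize both grayscale viewpoint images (NumPy arrays), take the dark
    region as the subject's shadow, and compare shadow sizes: a real 3D object
    casts shadows whose size differs noticeably between viewpoints, while a
    printed sheet does not."""
    shadow_l = (left_img < bin_thresh).sum()
    shadow_r = (right_img < bin_thresh).sum()
    return abs(int(shadow_l) - int(shadow_r)) >= size_diff_thresh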
20110102551IMAGE GENERATION DEVICE AND IMAGE GENERATION METHOD - Provided is an image generation device generating a high-quality image of an object under a pseudo light source at any desired position, based on geometric parameters generated from a low-quality image of the object. The image generation device includes: a geometric parameter calculation unit (05-05-2011
201101025503D IMAGING SYSTEM - An apparatus and method for computing a three dimensional model of a surface of an object are disclosed. At least one directional energy source (05-05-2011
20110102549THREE-DIMENSIONAL DIGITAL MAGNIFIER OPERATION SUPPORTING SYSTEM - Provided is a system that represents, by simulation, the effect on three-dimensional computer graphics composed and fixed to a subject of changes in the state of the subject in real space and in the state of the image-taking space. A surface polygon model similar to the subject is selected, according to its shape pattern, from surface polygon models, and the subject image existing in the same space is measured in a three-dimensional way; a tracking process is performed on the computer graphics, following the relative position changes of the subject and the camera caused in real three-dimensional space; and the subject in the visual field of the camera and the virtual three-dimensional computer graphics image are unified and displayed by displaying a computer graphics image having the same relative position change on the image.05-05-2011
20110102547Three-Dimensional Image Sensors and Methods of Manufacturing the Same - Image sensors include three-dimensional (3D) color image sensors having an array of sensor pixels therein. A 3-D color image sensor may include a 3-D image sensor pixel having a plurality of color sensors and a depth sensor therein. The plurality of color sensors may include red, green and blue sensors extending adjacent the depth sensor. A rejection filter is also provided. This rejection filter, which extends opposite a light receiving surface of the 3-D image sensor pixel, is configured to be selectively transparent to visible and near-infrared light relative to far-infrared light. The depth sensor may also include an infrared filter that is selectively transparent to near-infrared light having wavelengths greater than about 700 nm relative to visible light.05-05-2011
20110102546DISPERSED STORAGE CAMERA DEVICE AND METHOD OF OPERATION - A distributed storage network contains a user device that has a computing core, a DSN interface and either an integrated or an externally-connected camera or sensor. The camera or sensor collects data from its surrounding environment and processes the data at least partially through error coding dispersal storage functions that include slicing the data into a plurality of error-coded data slices. The error-coded data slices are output by the user device for one or more of storage within a dispersed storage network (DSN) memory, playing on a destination player, or broadcast consumption over a network. A storage integrity unit manages the storage capacity, use, and/or throughput of the system to ensure that data is processing in a real time or near real time so that data can be accurately processed and perceived by targeted end users.05-05-2011
20110102545UNCERTAINTY ESTIMATION OF PLANAR FEATURES - In one embodiment, a method comprises generating three-dimensional (3D) imaging data for an environment using an imaging sensor, extracting an extracted plane from the 3D imaging data, and estimating an uncertainty of an attribute associated with the extracted plane. The method further comprises generating a navigation solution using the attribute associated with the extracted plane and the estimate of the uncertainty of the attribute associated with the extracted plane.05-05-2011
20110102548MOBILE TERMINAL AND METHOD FOR CONTROLLING OPERATION OF THE MOBILE TERMINAL - A mobile terminal and a method for controlling the operation of the same are provided. In the method, a screen including a preview image of a camera is displayed on a display module. Then, a preview window is set in a region of the screen and a predictive image of a three-dimensional (3D) stereoscopic image, which can be generated using images of a subject corresponding to the preview image, is displayed on the preview window. Thus, when images are captured to obtain a 3D stereoscopic image, the user can easily operate the camera using such a 3D stereoscopic image preview function.05-05-2011
20110032335VIDEO STEREOMICROSCOPE - A video stereomicroscope includes a main objective (02-10-2011
20120120199METHOD FOR DETERMINING THE POSE OF A CAMERA WITH RESPECT TO AT LEAST ONE REAL OBJECT - A method for determining the pose of a camera with respect to at least one real object, the method comprises the following steps: operating the camera (05-17-2012
20120120196IMAGE COUNTING METHOD AND APPARATUS - The image counting method includes the steps of: acquiring 3D images from the region by a 3D camera, wherein the 3D images include a plurality of pixels, and the pixels have x, y and z coordinate values and pixel data; mapping the x, y and z coordinate values and the pixel data of the pixels into a plurality of correlative coordinate values of a spatial correlative coordinate represented as (x, z, t), wherein t is the number of pixels whose pixel data are lower than a threshold in y direction with the same x and z coordinate values; grouping the correlative coordinate values into a plurality of groups according to a correlation between each of the correlative coordinate values in x-z plane; and comparing the correlative coordinate values of each of groups with the correlative coordinate values of the 3D images of the specific objects to determine the number of specific objects in the region.05-17-2012
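The mapping into (x, z, t) correlative coordinates described above, with t counting the pixels below a threshold along y, followed by grouping in the x-z plane, can be sketched with NumPy and SciPy. Comparing each group against stored signatures of specific objects, as the abstract describes, is replaced here by a simple minimum-size filter, and the array layout is an assumption.

import numpy as np
from scipy import ndimage

def count_objects(data_xyz, threshold, min_group_size=20):
    """Map a 3D image to an (x, z) grid where t counts, for each (x, z) column,
    the pixels whose value lies below 'threshold' along y, then group occupied
    cells by connectivity in the x-z plane and count the sufficiently large groups.
    data_xyz: (X, Y, Z) array of pixel data indexed by the x, y, z coordinates."""
    t = (data_xyz < threshold).sum(axis=1)           # (X, Z) correlative values
    labels, n = ndimage.label(t > 0)                 # connected groups in the x-z plane
    sizes = ndimage.sum(t > 0, labels, range(1, n + 1))
    return int((np.asarray(sizes) >= min_group_size).sum())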
20120162377ILLUMINATION/IMAGE-PICKUP SYSTEM FOR SURFACE INSPECTION AND DATA STRUCTURE - In order to commonly perform image processing at an image processing device, even if inspection contents or valid inspection regions are changed for every image or if an illumination/image-pickup system for surface inspection per se is changed, a data structure for image processing is constructed in a manner that valid image-pickup region data indicative of a valid image-pickup region that is valid for inspection in the captured image, and image processing specifying data for specifying contents of the image processing performed on the valid image-pickup region are associated with the captured image data indicative of the image captured by the image-pickup device.06-28-2012
20120162378METHOD AND SYSTEM FOR VISION-BASED INTERACTION IN A VIRTUAL ENVIRONMENT - Method, computer program and system for tracking movement of a subject. The method includes receiving data from a plurality of fixed position sensors comprising a distributed network of time of flight camera sensors to generate a volumetric three-dimensional representation of the subject, identifying a plurality of clusters within the volumetric three-dimensional representation that correspond to features indicative of motion of the subject relative to the fixed position sensors and one or more other portions of the subject, and presenting one or more objects on one or more three dimensional display screens. The plurality of fixed position sensors are used to track motion of the features of the subject to manipulate the volumetric three-dimensional representation to determine interaction of one or more of the features of the subject and one or more of the one or more objects on one or more of the one or more three dimensional display screens.06-28-2012
20120162375STEREOSCOPIC IMAGE CAPTURING METHOD, SYSTEM AND CAMERA - A camera and camera system is provided with an optical device (06-28-2012
20120162373DYNAMIC RANGE THREE-DIMENSIONAL IMAGE SYSTEM - Disclosed is a system of a dynamic range three-dimensional image, including: an optical detector including a gain control terminal capable of controlling an optical amplification gain; a pixel detecting module for detecting a pixel signal for configuring an image by receiving an output of the optical detector; a high dynamic range (HDR) generating module for acquiring a dynamic range image by generating a signal indicating a saturation degree of the pixel signal and combining the pixel signal based on the pixel signal detected by the pixel detecting module; and a gain control signal generating module generating an output signal for supplying required voltage to the gain control terminal of the optical detector based on the magnitude of the signal indicating the saturation degree of the pixel signal.06-28-2012
20100289879Remote Contactless Stereoscopic Mass Estimation System - A contactless system and method for estimating the mass or weight of a target object is provided. The target object is imaged and a spatial representation of the target object is derived from the images. A virtual spatial model is provided of a characteristic object of a class of object to which the target object belongs. The virtual spatial model is reshaped to optimally fit the spatial representation of the individual object. Finally, the mass or weight of the target object is estimated as a function of shape variables characterizing the reshaped virtual object.11-18-2010
20100289878IMAGE PROCESSING APPARATUS, METHOD AND COMPUTER PROGRAM FOR GENERATING NORMAL INFORMATION, AND VIEWPOINT-CONVERTED IMAGE GENERATING APPARATUS - High-precision normal information on the surface of a subject is generated by capturing an image of the subject. A normal information generating device captures the image of the subject and thereby passively generates normal information on the surface of the subject. The normal information generating device includes: a stereo polarization image capturing section for receiving a plurality of polarized light beams of different polarization directions at different viewpoint positions and obtaining a plurality of polarization images of different viewpoint positions; and a normal information generating section for estimating a normal direction vector of the subject based on the plurality of polarization images of different viewpoint positions.11-18-2010
20100289877METHOD AND EQUIPMENT FOR PRODUCING AND DISPLAYING STEREOSCOPIC IMAGES WITH COLOURED FILTERS - A method for viewing a sequence of images producing a relief sensation is provided.11-18-2010
20100245544IMAGING APPARATUS, IMAGING CONTROL METHOD, AND RECORDING MEDIUM - An imaging apparatus includes: a first imaging controller configured to control imaging by an imaging unit; a movement distance acquirer configured to acquire movement distance of the imaging unit required to generate a three-dimensional image of the imaged subject after imaging by the first imaging controller; a first determining unit configured to determine whether or not the imaging unit has moved the movement distance acquired by the movement distance acquirer; a second imaging controller configured to control imaging with respect to the imaging unit in the case where it is determined by the first determining unit that the imaging unit has moved the movement distance; and a three-dimensional image generator configured to generate a three-dimensional image from the image acquired by the first imaging controller and the image acquired by the second imaging controller.09-30-2010
20100245543MRI COMPATIBLE CAMERA THAT INCLUDES A LIGHT EMITTING DIODE FOR ILLUMINATING A SITE - Systems and methods of using MR-compatible cameras to view magnetic resonance imaging procedures. The MR-compatible camera systems may include a casing with at least two openings, including one oriented to permit a camera to view a site, and another opening oriented to permit a light source to illuminate a portion of the site. The camera systems may be used with either closed bore or open bore MRI systems.09-30-2010
20120300037THREE-DIMENSIONAL IMAGING SYSTEM USING A SINGLE LENS SYSTEM - The passive imaging system of the present application includes first and second input polarizers on the light receiving side of a light receiving lens. A first half of the split polarizer performs vertical polarization of incoming light while the second half of the split polarizer performs horizontal polarization of the incoming light. The input polarizing structure provides parallax to accomplish 3D imaging. A third or interleaving polarizer is provided between the lens and an imaging device and is adjacent to and closely spaced from (<10 microns) the image plane of the device. The interleaving polarizer is sectional so that alternating sections, along the direction of parallax created by the input polarizer(s), pass vertically and horizontally polarized light. The resulting image frame formed at the image plane of the imager is similarly sectional so that sections of the image alternate between vertically polarized light and horizontally polarized light; for example, the odd sections of the image are images of vertically polarized light (received from the left side) and the even sections are images of horizontally polarized light (received from the right side). Once an image frame has been captured, it is divided into two parallactic image frames, one of vertically polarized light imaged from the left side and one of horizontally polarized light imaged from the right side. The two resulting frames are combined to form a 3D image.11-29-2012
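Separating the captured frame into its two parallactic frames is essentially a de-interleaving of the alternating polarized sections. In the sketch below the section height, and which polarization corresponds to which side, are assumptions; recombining the two frames into the final 3D image is not shown.

import numpy as np

def split_parallactic_frames(frame, section_height=8):
    """Separate a captured frame into two parallactic frames: the odd horizontal
    sections (here taken as vertically polarized, imaged from one side) and the
    even sections (horizontally polarized, imaged from the other side). Each
    output keeps only its own sections, at roughly half the original height."""
    h = frame.shape[0]
    keep_a, keep_b = [], []
    for start in range(0, h, section_height):
        band = frame[start:start + section_height]
        (keep_a if (start // section_height) % 2 == 0 else keep_b).append(band)
    return np.concatenate(keep_a, axis=0), np.concatenate(keep_b, axis=0)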
20120300036Optimizing Stereo Video Display - System and method for video processing. First video levels for pixels for a left image of a stereo image pair are received from a GPU. Gamma corrected video levels (g-levels) are generated via a gamma look-up table (LUT) based on the first video levels. Outputs of the gamma LUT are constrained by minimum and/or maximum values, thereby excluding values for which corresponding post-OD display luminance values differ from static display luminance values by more than a specified error. Overdriven video levels are generated via a left OD LUT based on the g-levels. The overdriven video levels correspond to display luminance values that differ from corresponding static display luminance values by less than the error threshold, and are provided to a display device for display of the left image. This process is repeated for second video levels for a right image of the stereo image pair, using a right OD LUT.11-29-2012
20120300035ELECTRONIC CAMERA - An electronic camera includes an imager. The imager captures a scene through an optical system. A distance adjuster adjusts an object distance to a designated distance. A depth adjuster adjusts a depth of field to a predetermined depth upon completion of the adjustment by the distance adjuster. An acceptor accepts a changing operation for changing a length of the designated distance. A changer changes the depth of field to an enlarged depth greater than the predetermined depth, in response to the changing operation.11-29-2012
20120300034INTERACTIVE USER INTERFACE FOR STEREOSCOPIC EFFECT ADJUSTMENT - Present embodiments contemplate systems, apparatus, and methods to determine a user's preference for depicting a stereoscopic effect. Particularly, certain of the embodiments contemplate receiving user input while displaying a stereoscopic video sequence. The user's preferences may be determined based upon the input. These preferences may then be applied to future stereoscopic depictions.11-29-2012
20120127271STEREO VIDEO CAPTURE SYSTEM AND METHOD - A method is provided for a stereo video capture system. The stereo video system includes a stereo video monitor, a control platform, and a three-dimensional (3D) capture imaging device. The method includes capturing at least a first image and a second image, with a parallax between the first image and the second image based on a first parallax configuration. The method also includes receiving the first and second images; and calculating a value of at least one parallax setting parameter associated with the first and second images and corresponding to the first parallax configuration. Further, the method includes determining whether the value is within a pre-configured range. When the value is out of the pre-configured range, the method includes converting the first parallax configuration into a second parallax configuration. The method also includes sending, the second parallax configuration to the 3D imaging capture device, and adopting the second parallax configuration in operation.05-24-2012
20120127274APPARATUS AND METHOD FOR PROVIDING IMAGES IN WIRELESS COMMUNICATION SYSTEM AND PORTABLE DISPLAY APPARATUS AND METHOD FOR DISPLAYING IMAGES - A mobile communication system for displaying a three-dimensional (3D) image is provided. The mobile communication system includes an image providing apparatus to generate a first two-dimensional (2D) image Transport Stream (TS) and a second 2D image TS by capturing the same target in different directions, a Multicast Broadcast Service (MBS) server to control at least two base stations included in an MBS area to individually transmit the first 2D image TS and the second 2D image TS, and a portable display apparatus to receive the first 2D image TS and the second 2D image TS, to divide the first 2D image TS and the second 2D image TS into first 2D image data and second 2D image data, respectively, and to display a 2D image or a 3D image based on an image quality of each of the first 2D image data and the second 2D image data.05-24-2012
20120127273IMAGE PROCESSING APPARATUS AND CONTROL METHOD THEREOF - An image processing apparatus includes a depth map generator which generates a depth map of a predetermined image which includes at least one object; a disparity estimator which estimates a reference disparity of a left eye image and a right eye image at a predetermined distance from the object based on the generated depth map; a disparity calculator which calculates a changed disparity of the left eye image and the right eye image at a changed distance by using the estimated reference disparity if the predetermined distance is changed; and a three-dimensional (3D) image generator which generates a 3D image which moves horizontally from the left eye image and the right eye image corresponding to the changed disparity.05-24-2012
20120127275IMAGE PROCESSING METHOD FOR DETERMINING DEPTH INFORMATION FROM AT LEAST TWO INPUT IMAGES RECORDED WITH THE AID OF A STEREO CAMERA SYSTEM - An image processing method is described for determining depth information from at least two input images recorded by a stereo camera system, the depth information being determined from a disparity map taking into account geometric properties of the stereo camera system, characterized by the following method steps for ascertaining the disparity map: transforming the input images into signature images with the aid of a predefined operator, calculating costs based on the signature images with the aid of a parameter-free statistical rank correlation measure for ascertaining a cost range for predefined disparity levels in relation to at least one of the at least two input images, performing a correspondence analysis for each point of the cost range for the predefined disparity levels, the disparity to be determined corresponding to the lowest costs, and ascertaining the disparity map from the previously determined disparities.05-24-2012
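The signature-image and rank-correlation matching described above can be illustrated, under assumptions, with a census transform as the signature operator and a Hamming-distance cost with winner-takes-all selection standing in for the rank-based correspondence analysis; this is a generic stereo sketch, not the patented method.

```python
import numpy as np

def census_transform(img, radius=2):
    """5x5 census signature per pixel: one bit per neighbour comparing it to
    the centre pixel (borders wrap, which is acceptable for a sketch)."""
    h, w = img.shape
    sig = np.zeros((h, w), dtype=np.uint32)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            if dy == 0 and dx == 0:
                continue
            shifted = np.roll(np.roll(img, dy, axis=0), dx, axis=1)
            sig = (sig << 1) | (shifted < img).astype(np.uint32)
    return sig

def disparity_map(left, right, max_disp=32):
    """Winner-takes-all disparity from Hamming costs between census signatures
    of two rectified grayscale images (2-D arrays of equal shape)."""
    sl, sr = census_transform(left), census_transform(right)
    h, w = left.shape
    costs = np.full((max_disp, h, w), np.inf)
    for d in range(max_disp):
        diff = sl[:, d:] ^ sr[:, :w - d]
        # Hamming distance = number of set bits in the XOR of the signatures.
        ham = np.unpackbits(diff.view(np.uint8), axis=-1).reshape(h, w - d, -1).sum(-1)
        costs[d, :, d:] = ham
    return costs.argmin(axis=0)
```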
201201272723D IMAGE CAPTURING DEVICE AND METHOD - A 3D image capturing device includes a first light incident hole, a second light incident hole, a photosensitive element, and a processing unit. The photosensitive element is located at an intersection point between a first light path formed by the first light incident hole and a second light path formed by the second light incident hole. The processing unit includes a 3D image sensing module and a 3D image synthesizing module. The 3D image sensing module enables the photosensitive element to alternately sense at least one first image and at least one second image. The first image is sensed by the photosensitive element through the first light path and the second image is sensed by the photosensitive element through the second light path. The 3D image synthesizing module synthesizes alternate first image and second image to generate at least one 3D image.05-24-2012
20120162372APPARATUS AND METHOD FOR CONVERGING REALITY AND VIRTUALITY IN A MOBILE ENVIRONMENT - Disclosed herein are an apparatus and a method for converging reality and virtuality in a mobile environment. The apparatus includes an image processing unit, a real environment virtualization unit, and a reality and virtuality convergence unit. The image processing unit corrects real environment image data captured by at least one camera included in a mobile terminal. The real environment virtualization unit generates real object virtualization data virtualized by analyzing each real object of the corrected real environment image data in a three-dimensional (3D) fashion. The reality and virtuality convergence unit generates a convergent image, in which the real object virtualization data and at least one virtual object of previously stored virtual environment data are converged by associating the real object virtualization data with the virtual environment data, with reference to location and direction data of the mobile terminal.06-28-2012
20100208038METHOD AND SYSTEM FOR GESTURE RECOGNITION - A method of image acquisition and data pre-processing includes obtaining from a sensor an image of a subject making a movement. The sensor may be a depth camera. The method also includes selecting a plurality of features of interest from the image, sampling a plurality of depth values corresponding to the plurality of features of interest, projecting the plurality of features of interest onto a model utilizing the plurality of depth values, and constraining the projecting of the plurality of features of interest onto the model utilizing a constraint system. The constraint system may comprise an inverse kinematics solver.08-19-2010
20100208035VOLUME RECOGNITION METHOD AND SYSTEM - The present invention relates to a volume recognition method comprising the steps of:08-19-2010
20100208037IMAGE DISPLAYING SYSTEM AND IMAGE CAPTURING AND DISPLAYING SYSTEM - An image displaying system includes an image processor and a display unit. The image processor moves and/or deforms a first radiographic image and a second radiographic image that has been captured after the first radiographic image such that the first radiographic image and the second radiographic image are aligned. The display unit displays the first radiographic image and the second radiographic image that have been aligned, allows one of the first and second radiographic images to be viewed by the right eye, and allows the other of the first and second radiographic images to be viewed by the left eye.08-19-2010
20100208036SECURITY ELEMENT - The present invention relates to a security element for security papers, value documents and the like, having a microoptical moiré magnification arrangement (08-19-2010
20100208034METHOD AND SYSTEM FOR THE DYNAMIC CALIBRATION OF STEREOVISION CAMERAS - The present invention generally provides a method of performing dynamic calibration of a stereo vision system using a specific stereo disparity algorithm adapted to provide for the determination of disparity in two dimensions, X and Y. In one embodiment of the present invention, an X/Y disparity map may be calculated using this algorithm without having to perform pre-warping or first finding the epipolar directions. Thus information related to camera misalignment and/or distortion can be preserved in the resulting X/Y disparity map and later extracted.08-19-2010
20100208033Personal Media Landscapes in Mixed Reality - An exemplary method includes accessing geometrically located data that represent one or more virtual items with respect to a three-dimensional coordinate system; generating a three-dimensional map based at least in part on real image data of a three-dimensional space as acquired by a camera; rendering to a physical display a mixed reality scene that includes the one or more virtual items at respective three-dimensional positions in a real image of the three-dimensional space acquired by the camera; and re-rendering to the physical display the mixed reality scene upon a change in the field of view of the camera. Other methods, devices, systems, etc., are also disclosed.08-19-2010
20110181703INFORMATION STORAGE MEDIUM, GAME SYSTEM, AND DISPLAY IMAGE GENERATION METHOD - A game system acquires an input image from an input section that applies light to a body and receives reflected light from the body. The game system controls the size of an object in a virtual space based on a distance between the input section and the body, the distance being determined based on the input image. The game system generates a display image including the object.07-28-2011
20110181702METHOD AND SYSTEM FOR GENERATING A REPRESENTATION OF AN OCT DATA SET - A method of generating a representation of an OCT data set includes obtaining the OCT data set, representing a plurality of tuples, each of which comprises values of three spatial coordinates and a value of a scattering intensity, obtaining a color image data set representing a plurality of tuples, each of which comprises values of two spatial coordinates and a color value, and generating an image data set, representing a plurality of tuples, each of which comprises values of three spatial coordinates and a color value. Generating the image data set is performed depending on an analysis of the OCT data set and an analysis of the color image data set.07-28-2011
20110181701Image Data Processing - A method for processing image data of a sample is disclosed. The method comprises registering first and second images of at least partially overlapping spatial regions of the sample and processing data from the registered images to obtain integrated image data comprising information about the sample, said information being additional to that available from said first and second images.07-28-2011
20120212583Charged Particle Radiation Apparatus, and Method for Displaying Three-Dimensional Information in Charged Particle Radiation Apparatus - Disclosed is a charged particle radiation apparatus capable of capturing a change in a sample due to gaseous atmosphere, light irradiation, heating or the like without exposing the sample to atmosphere. The present invention relates to a sample holder provided with a sample stage that is rotatable around a rotation axis perpendicular to an electron beam irradiation direction, the sample holder being capable of forming an airtight chamber around the sample stage. A sample is allowed to chemically react in any atmosphere, and three-dimensional analysis on the reaction is enabled. A sample liable to change in atmosphere can be three-dimensionally analyzed without exposing the sample to the atmosphere.08-23-2012
201101759813D COLOR IMAGE SENSOR - A 3D color image sensor and a 3D optical imaging system including the 3D color image sensor are provided. The 3D color image sensor includes a semiconductor substrate, having a plurality of first photodiodes and a plurality of second photodiodes, and a wiring layer formed under the first photodiodes and the second photodiodes. A light filter array layer is disposed on the first and the second photodiodes, having a plurality of color filter patterns and infrared (IR) light filter patterns, wherein each of the IR light filter patterns receives depth information of a 3D color image of an object and corresponds to a first photodiode, and each of the color filter patterns receives color image information of the 3D color image of the object and corresponds to a second photodiode.07-21-2011
20120133742GENERATING A TOTAL DATA SET - The invention relates to generating a total data set of at least one segment of an object for determining at least one characteristic by merging individual data sets determined by means of an optical sensor moving relative to the object and of an image processor, wherein individual data sets of sequential images of the object contain redundant data that are matched for merging the individual data sets. So that the data obtained by scanning the object are sufficient for an optimal analysis without producing too great an amount of data to process, the invention proposes that the number of individual data sets determined per unit of time be varied as a function of the relative motion between the optical sensor and the object.05-31-2012
20120133739IMAGE PROCESSING APPARATUS - An image processing apparatus which employs a basic configuration including an image processing controller which processes images captured by a stereo camera and a recognition processing controller which recognizes an object based on information from the image processing controller includes a target object area specifying unit, a feature amount extracting unit and a smoke determining unit as functions enabling recognition of a smoky object. The target object area specifying unit specifies the area of an object which is a detection target by canceling the influence of the background, the feature amount extracting unit extracts an image feature amount for recognizing a smoky object in the target object area, and the smoke determining unit decides whether the object in the target object area is a smoky object or an object other than a smoky object, based on the extracted image feature amount.05-31-2012
20120133741CAMERA CHIP, CAMERA AND METHOD FOR IMAGE RECORDING - The invention relates to a camera chip (C) for image acquisition. It is characterized in that pixel groups (P05-31-2012
20120133738Data Processing System and Method for Providing at Least One Driver Assistance Function - The invention relates to a data processing system and a method for providing at least one driver assistance function. A stationary receiving unit (05-31-2012
20120133737IMAGE SENSOR FOR SIMULTANEOUSLY OBTAINING COLOR IMAGE AND DEPTH IMAGE, METHOD OF OPERATING THE IMAGE SENSOR, AND IMAGE PROCESSING SYSTEM INCLUDING THE IMAGE SENSOR - An image sensor includes a light source that emits modulated light such as visible light, white light, or white light-emitting diode (LED) light to a target object, a plurality of pixels, and an image processing unit. The pixels include at least one pixel for outputting pixel signals according to light reflected by the target object. The image processing unit simultaneously generates a color image and a depth image from the pixel signals of the at least one pixel.05-31-2012
20120133743THREE-DIMENSIONAL IMAGE PICKUP DEVICE - The 3D image capture device of this invention includes: a light-transmitting section 05-31-2012
20120314033APPARATUS AND METHOD FOR GENERATING 3D IMAGE DATA IN A PORTABLE TERMINAL - The present invention relates to an apparatus and a method for processing three-dimensional (3D) image data in a portable terminal, and particularly to an apparatus and a method for enabling content sharing and reproduction (playback) between various 3D devices by using a file structure that effectively stores a 3D image (for example, a stereo image) obtained using a plurality of cameras together with stored 3D-related parameters.12-13-2012
20120314031INVARIANT FEATURES FOR COMPUTER VISION - Technology is described for determining and using invariant features for computer vision. A local orientation may be determined for each depth pixel in a subset of the depth pixels in a depth map. The local orientation may be an in-plane orientation, an out-of-plane orientation, or both. A local coordinate system is determined for each of the depth pixels in the subset based on the local orientation of the corresponding depth pixel. A feature region is defined relative to the local coordinate system for each of the depth pixels in the subset. The feature region for each of the depth pixels in the subset is transformed from the local coordinate system to an image coordinate system of the depth map. The transformed feature regions are used to process the depth map.12-13-2012
20120212581IMAGE CAPTURE APPARATUS AND IMAGE SIGNAL PROCESSING APPARATUS - An image capture apparatus includes an image capture unit that has a plurality of unit pixels each including a plurality of photo-electric conversion units per condenser unit, and a recording unit that records captured image signals, which are captured by the image capture unit and are respectively read out from the plurality of photo-electric conversion units, and the recording unit records identification information, which allows identification of the photo-electric conversion unit used to obtain each captured image signal, in association with that captured image signal.08-23-2012
20090058992THREE DIMENSIONAL PHOTOGRAPHIC LENS SYSTEM - Provided is a three-dimensional image capturing lens system having a structure in which left and right image sensing lenses are provided, and light is synthesized to form an image on a single CCD (charge-coupled device) in order to prevent loss of light intensity.03-05-2009
20120169848Image Processing Systems - An image processing system includes a calculation unit, a reconstruction unit, a confidence map estimation unit and an up-sampling unit. The up-sampling unit is configured to perform a joint bilateral up-sampling on depth information of a first input image based on a confidence map of the first input image and a second input image with respect to an object and increase a first resolution of the first input image to a second resolution to provide an output image with the second resolution.07-05-2012
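As a rough illustration of confidence-guided joint bilateral up-sampling of a low-resolution depth map, the unoptimized sketch below weights each low-resolution depth sample by spatial distance, guide-image similarity, and its confidence value; the Gaussian kernels, parameter names, and nearest-pixel guide lookup are assumptions, not the claimed implementation.

```python
import numpy as np

def joint_bilateral_upsample(depth_lo, conf_lo, guide_hi, scale,
                             sigma_s=1.0, sigma_r=0.1, radius=2):
    """Upsample a low-resolution depth map to the resolution of a grayscale
    guide image (scale is the integer upsampling factor), weighting each
    low-res sample by spatial distance, guide similarity and confidence."""
    hh, wh = guide_hi.shape
    hl, wl = depth_lo.shape
    out = np.zeros((hh, wh))
    for y in range(hh):
        for x in range(wh):
            yl, xl = y / scale, x / scale          # position on the low-res grid
            num = den = 0.0
            for j in range(int(yl) - radius, int(yl) + radius + 1):
                for i in range(int(xl) - radius, int(xl) + radius + 1):
                    if not (0 <= j < hl and 0 <= i < wl):
                        continue
                    # Guide intensity at the high-res pixel nearest to (j, i).
                    g = guide_hi[min(j * scale, hh - 1), min(i * scale, wh - 1)]
                    ws = np.exp(-((j - yl) ** 2 + (i - xl) ** 2) / (2 * sigma_s ** 2))
                    wr = np.exp(-(guide_hi[y, x] - g) ** 2 / (2 * sigma_r ** 2))
                    w = ws * wr * conf_lo[j, i]    # confidence-weighted kernel
                    num += w * depth_lo[j, i]
                    den += w
            out[y, x] = num / den if den > 0 else 0.0
    return out
```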
20120169849AUTOMATED EXTENDED DEPTH OF FIELD IMAGING APPARATUS AND METHOD - An imaging apparatus and method enables an automated extended depth of field capability that automates and simplifies the process of creating extended depth of field images. An embodiment automates the acquisition of an image “stack” or sequence and stores metadata at the time of image acquisition that facilitates production of a composite image having an extended depth of field from at least a portion of the images in the acquired sequence. An embodiment allows a user to specify, either at the time of image capture or at the time the composite image is created, a range of distances that the user wishes to have in focus within the composite image. An embodiment provides an on-board capability to produce a composite, extended depth of field image from the image stack. One embodiment allows the user to import the image stack into an image-processing software application that produces the composite image.07-05-2012
201201764753D Microscope Including Insertable Components To Provide Multiple Imaging And Measurement Capabilities - A three-dimensional (3D) microscope includes various insertable components that facilitate multiple imaging and measurement capabilities. These capabilities include Nomarski imaging, polarized light imaging, quantitative differential interference contrast (q-DIC) imaging, motorized polarized light imaging, phase-shifting interferometry (PSI), and vertical-scanning interferometry (VSI).07-12-2012
20120176473DYNAMIC ADJUSTMENT OF PREDETERMINED THREE-DIMENSIONAL VIDEO SETTINGS BASED ON SCENE CONTENT - Predetermined three-dimensional video parameter settings may be dynamically adjusted based on scene content. One or more three-dimensional characteristics associated with a given scene may be determined. One or more scale factors may be determined from the three-dimensional characteristics. The predetermined three-dimensional video parameter settings can be adjusted by applying the scale factors to the predetermined three-dimensional video parameter settings. The scene may be displayed on a three-dimensional display using the resulting adjusted set of predetermined three-dimensional video parameters.07-12-2012
20120176474ROTATIONAL ADJUSTMENT FOR STEREO VIEWING - An apparatus for viewing a three-dimensional image of a scene includes a monocular device worn by a viewer over one eye for displaying first two-dimensional images of the scene, and a mechanism for rotating the first display in response to perceived rotational misalignments with a second display that displays second two-dimensional images in stereo image pairs. The second display displays the second two-dimensional images of the scene. A means of determining lateral, longitudinal and rotational misalignments of the first images relative to the second images is provided. A controller provides lateral and longitudinal shifts of the first images on the first display, and rotational movements of the mechanism, to align the first and second images so that the viewer perceives a three-dimensional image of the scene.07-12-2012
20120314032METHOD FOR PILOT ASSISTANCE FOR THE LANDING OF AN AIRCRAFT IN RESTRICTED VISIBILITY - Method for pilot assistance in landing an aircraft with restricted visibility, in which the position of a landing point is defined by at least one of a motion compensated, aircraft based helmet sight system and a remotely controlled camera during a landing approach, and the landing point is displayed on a ground surface in the at least one of the helmet sight system and the remotely controlled camera by production of symbols that conform with the outside view. The method includes one of producing or calculating, during an approach, a ground surface based on measurement data from an aircraft based 3D sensor, and providing both the 3D measurement data of the ground surface and a definition of the landing point with reference to a same aircraft fixed coordinate system.12-13-2012
20120075426IMAGE PICKUP SYSTEM - An image pickup system capable of shortening the time lag between reading of signals from an image sensor and displaying of the signals when a 3D image signal from a camera having a single image sensor is displayed in real time with a time-division system. A solid state image pickup device has pixels that are arranged in two dimensions and are divided into image pickup areas. A reading unit reads signals from the image pickup areas. A mode setting unit sets either of a first shooting mode and a second shooting mode. A control unit controls the reading unit to read signals from all the image pickup areas as a single frame when the mode setting unit sets the first shooting mode, and reads the signals from the image pickup areas as different frames, respectively, when the mode setting unit sets the second shooting mode.03-29-2012
20120075425HANDHELD DENTAL CAMERA AND METHOD FOR CARRYING OUT OPTICAL 3D MEASUREMENT - A handheld dental camera performs three-dimensional, optical measurements. The camera includes a light source that emits an illuminating beam, a scanning unit, a color sensor, and a deflector. The scanning unit focuses the illuminating beam onto a surface of an object to be measured. The surface of the object reflects the illuminating beam and forms a monitoring beam, which is detected by the color sensor. Focal points of wavelengths of the illuminating beam form chromatic depth measurement ranges. The scanning unit stepwise displaces the chromatic depth measurement ranges by a step width smaller than or equal to a length of each chromatic depth measurement range, so that a first chromatic depth measurement range in a first end position of the scanning unit and a second chromatic depth measurement range in a second end position are precisely adjoined in a direction of a measurement depth, or are partially overlapped.03-29-2012
20120075423Methods and Apparatus for Transient Light Imaging - In illustrative implementations of this invention, multi-path analysis of transient illumination is used to reconstruct scene geometry, even of objects that are occluded from the camera. An ultrafast camera system is used. It comprises a photo-sensor (e.g., accurate in the picosecond range), a pulsed illumination source (e.g. a femtosecond laser) and a processor. The camera emits a very brief light pulse that strikes a surface and bounces. Depending on the path taken, part of the light may return to the camera after one, two, three or more bounces. The photo-sensor captures the returning light bounces in a three-dimensional time image I(x,y,t) for each pixel. The camera takes different angular samples from the same viewpoint, recording a five-dimensional STIR (Space Time Impulse Response). A processor analyzes onset information in the STIR to estimate pairwise distances between patches in the scene, and then employs isometric embedding to estimate patch coordinates.03-29-2012
201200754223D INFORMATION GENERATOR FOR USE IN INTERACTIVE INTERFACE AND METHOD FOR 3D INFORMATION GENERATION - The present invention discloses a 3D information generator for use in an interactive interface. The 3D information generator includes: a MEMS light beam generator having at least one light source for providing a dot light beam and a MEMS mirror for projecting a movable scanning light beam according to the dot light beam to an object; an image sensor for sensing an image of the object to generate 2D image information; and a processor for generating distance information by a triangulation method according to a reflection result of the scanning light beam scanning on the object, wherein the distance information is combined with the 2D image information to generate 3D information.03-29-2012
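The triangulation step can be illustrated with one common single-spot laser-triangulation geometry: an emitter displaced by a baseline b along the camera's x axis, a beam tilted by angle θ from the optical axis toward the camera, and the dot observed at pixel offset u, giving Z = f·b / (u + f·tan θ). The geometry and symbols below are assumptions for illustration only, not the patent's configuration.

```python
import math

def triangulated_depth(u_px, focal_px, baseline_m, scan_angle_rad):
    """Depth of a projected laser dot seen at horizontal pixel offset u_px
    (relative to the principal point), for an emitter displaced baseline_m
    along the camera's x axis and a beam tilted scan_angle_rad toward the
    camera from the optical axis: Z = f*b / (u + f*tan(theta))."""
    return focal_px * baseline_m / (u_px + focal_px * math.tan(scan_angle_rad))

# Example: f = 800 px, baseline 5 cm, beam tilted 10 degrees, dot observed
# 60 px to the right of the image centre -> roughly 0.2 m.
print(triangulated_depth(60.0, 800.0, 0.05, math.radians(10.0)))
```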
20120249739METHOD AND SYSTEM FOR STEREOSCOPIC SCANNING - Provided is a system and method for scanning a target area, including capturing images from onboard a platform for use in producing one or more stereoscopic views. A first set of at least two image sequences of at least two images each, covering the target area or a subsection thereof is captured. As the platform continues to move forward, at least one other set of images covering the same target area or subsection thereof is captured. At least one captured image from each of at least two of the sets may be used in producing a stereoscopic view.10-04-2012
20120249738LEARNING FROM HIGH QUALITY DEPTH MEASUREMENTS - A depth camera computing device is provided, including a depth camera and a data-holding subsystem holding instructions executable by a logic subsystem. The instructions are configured to receive a raw image from the depth camera, convert the raw image into a processed image according to a weighting function, and output the processed image. The weighting function is configured to vary test light intensity information generated by the depth camera from a native image collected by the depth camera from a calibration scene toward calibration light intensity information of a reference image collected by a high-precision test source from the calibration scene.10-04-2012
20120249740THREE-DIMENSIONAL IMAGE SENSORS, CAMERAS, AND IMAGING SYSTEMS - A three-dimensional image sensor may include a light source module configured to emit at least one light to an object, a sensing circuit configured to polarize a received light that represents the at least one light reflected from the object and configured to convert the polarized light to electrical signals, and a control unit configured to control the light source module and sensing circuit. A camera may include a receiving lens; a sensor module configured to generate depth data, the depth data including depth information of objects based on a received light from the objects; an engine unit configured to generate a depth map of the objects based on the depth data, configured to segment the objects in the depth map, and configured to generate a control signal for controlling the receiving lens based on the segmented objects; and a motor unit configured to control focusing of the receiving lens.10-04-2012
20120249741ANCHORING VIRTUAL IMAGES TO REAL WORLD SURFACES IN AUGMENTED REALITY SYSTEMS - A head mounted device provides an immersive virtual or augmented reality experience for viewing data and enabling collaboration among multiple users. Rendering images in a virtual or augmented reality system may include capturing an image and spatial data with a body mounted camera and sensor array, receiving an input indicating a first anchor surface, calculating parameters with respect to the body mounted camera and displaying a virtual object such that the virtual object appears anchored to the selected first anchor surface. Further operations may include receiving a second input indicating a second anchor surface within the captured image that is different from the first anchor surface, calculating parameters with respect to the second anchor surface and displaying the virtual object such that the virtual object appears anchored to the selected second anchor surface and moved from the first anchor surface.10-04-2012
20120249744Multi-Zone Imaging Sensor and Lens Array - An imaging module includes a matrix of detector elements formed on a single semiconductor substrate and configured to output electrical signals in response to optical radiation that is incident on the detector elements. A filter layer is disposed over the detector elements and includes multiple filter zones overlying different, respective, convex regions of the matrix and having different, respective passbands.10-04-2012
20120249742METHOD FOR VISUALIZING FREEFORM SURFACES BY MEANS OF RAY TRACING - The invention relates to visualizing freeform surfaces, such as NURBS surfaces, from three-dimensional construction data via ray tracing. Virtual beams from a virtual camera are sent out of a virtual image plane into a scene having at least one object and at least one freeform surface. Lighting values are calculated for each point where a beam intersects the freeform surface. The lighting values are then attributed to the pixels associated with the different points of intersection. The freeform surface is defined by two parameters (u, v), and related equations define all points of the freeform surface. The subdivision of the freeform surface for determining the intersections with the beams, based on the two parameters (u, v), is regular, so that the surface fragments form meshes of a two-dimensional grid of the freeform surface in parameter space.10-04-2012
20120257016THREE-DIMENSIONAL MODELING APPARATUS, THREE-DIMENSIONAL MODELING METHOD AND COMPUTER-READABLE RECORDING MEDIUM STORING THREE-DIMENSIONAL MODELING PROGRAM - In a three-dimensional modeling apparatus, an image obtaining section obtains image sets picked up by a stereoscopic camera. A generating section generates three-dimensional models. A three-dimensional model selecting section selects, from among the generated three-dimensional models, a first three-dimensional model and a second three-dimensional model to be superimposed on the first three-dimensional model. An extracting section extracts first and second feature points from the selected first and second three-dimensional models. A feature-point selecting section selects, from the extracted first and second feature points, feature points having a closer distance to the stereoscopic camera. A parameter obtaining section obtains a transformation parameter for transforming a coordinate of the second three-dimensional model into a coordinate system of the first three-dimensional model. A transforming section transforms the coordinate of the second three-dimensional model into the coordinate system of the first three-dimensional model. A superimposing section then superimposes the second three-dimensional model on the first three-dimensional model.10-11-2012
20120224028METHOD OF FABRICATING MICROLENS, AND DEPTH SENSOR INCLUDING MICROLENS - A method of fabricating a microlens includes forming a layer of photoresist on a substrate, patterning the layer of photoresist, and then reflowing the photoresist pattern. The layer of photoresist is formed by coating the substrate with liquid photoresist whose viscosity is 150 to 250 cp. A depth sensor includes a substrate and photoelectric conversion elements at an upper portion of the substrate, a metal wiring section disposed on the substrate, and an array of microlenses for focusing incident light as beams onto the photoelectric conversion elements, which beams avoid the wirings of the metal wiring section. The depth sensor also includes a layer presenting a flat upper surface on which the microlenses are formed. The layer may be a dedicated planarization layer or an IR filter, interposed between the microlenses and the metal wiring section.09-06-2012
20120188343IMAGING APPARATUS - An imaging apparatus includes an imaging unit configured to capture a subject to generate an image, a detector configured to detect tilt of the imaging apparatus, a storage unit configured to store the images generated by the imaging unit with the images being related to the detection results of the detector, and a controller configured to select, from the plurality of images stored in the storage unit, at least two images for generating a three-dimensional image based on the detection results related to the images.07-26-2012
20120188342USING OCCLUSIONS TO DETECT AND TRACK THREE-DIMENSIONAL OBJECTS - A mobile platform detects and tracks a three-dimensional (3D) object using occlusions of a two-dimensional (2D) surface. To detect and track the 3D object, an image of the 2D surface with the 3D object is captured and displayed and the 2D surface is detected and tracked. Occlusion of a region assigned as an area of interest on the 2D surface is detected. The shape of the 3D object is determined based on a predefined shape or by using the shape of the area of the 2D surface that is occluded along with the position of the camera with respect to the 2D surface to calculate the shape. Any desired action with respect to the position of the 3D object on the 2D surface may be performed, such as rendering and displaying a graphical object on or near the displayed 3D object.07-26-2012
20120257020RASTER SCANNING FOR DEPTH DETECTION - Techniques are provided for determining distance to an object in a depth camera's field of view. The techniques may include raster scanning light over the object and detecting reflected light from the object. One or more distances to the object may be determined based on the reflected image. A 3D mapping of the object may be generated. The distance(s) to the object may be determined based on times-of-flight between transmitting the light from a light source in the camera to receiving the reflected image from the object. Raster scanning the light may include raster scanning a pattern into the field of view. Determining the distance(s) to the object may include determining spatial differences between a reflected image of the pattern that is received at the camera and a reference pattern.10-11-2012
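The time-of-flight relation mentioned above reduces, in the standard formulations, to distance = c·t/2 for pulsed measurement and distance = c·Δφ/(4π·f_mod) for phase-based (continuous-wave) measurement; the sketch below shows both, without claiming either is the patent's exact computation.

```python
import math

C = 299_792_458.0  # speed of light in m/s

def distance_from_pulse(round_trip_s):
    """Pulsed time-of-flight: the light travels to the object and back,
    so the distance is half the round-trip path."""
    return C * round_trip_s / 2.0

def distance_from_phase(phase_shift_rad, mod_freq_hz):
    """Continuous-wave time-of-flight: distance from the phase shift of the
    reflected modulated light; unambiguous only within c / (2 * f_mod)."""
    return C * phase_shift_rad / (4.0 * math.pi * mod_freq_hz)

print(distance_from_pulse(20e-9))              # ~3.0 m
print(distance_from_phase(math.pi / 2, 20e6))  # ~1.9 m
```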
20120257019STEREO IMAGE DATA TRANSMITTING APPARATUS, STEREO IMAGE DATA TRANSMITTING METHOD, STEREO IMAGE DATA RECEIVING APPARATUS, AND STEREO IMAGE DATA RECEIVING METHOD - [Object] To maintain perspective consistency with individual objects in an image when displaying captions (caption units) based on an ARIB method in a superimposed manner.10-11-2012
20120257017METHOD AND SURVEYING SYSTEM FOR NONCONTACT COORDINATE MEASUREMENT ON AN OBJECT SURFACE - Noncontact coordinate measurement. With a 3D image recording unit, a first three-dimensional image of a first area section of the object surface is electronically recorded in a first position and first orientation, the first three-dimensional image being composed of a multiplicity of first pixels, with which in each case a piece of depth information is coordinated. First 3D image coordinates in an image coordinate system are coordinated with the first pixels. The first position and first orientation of the 3D image recording unit in the object coordinate system are determined by a measuring apparatus coupled to the object coordinate system by means of an optical reference stereocamera measuring system. First 3D object coordinates in the object coordinate system are coordinated with the first pixels from the knowledge of the first 3D image coordinates and of the first position and first orientation of the 3D image recording unit.10-11-2012
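A minimal sketch of coordinating pixels with object coordinates in the spirit of this abstract: back-project a pixel and its depth into the camera frame using intrinsics, then apply the recording unit's measured position and orientation. The intrinsic matrix K and the camera-to-object pose convention (R, t) are illustrative assumptions.

```python
import numpy as np

def pixel_to_object_coords(u, v, depth, K, R, t):
    """Back-project pixel (u, v) with depth (metres along the optical axis)
    into the camera frame using intrinsics K, then map the point into the
    object coordinate system with the recording unit's pose (R, t)."""
    ray = np.linalg.inv(K) @ np.array([u, v, 1.0])
    p_cam = depth * ray / ray[2]          # point in camera coordinates
    return R @ p_cam + t

K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
# Identity pose: object coordinates equal camera coordinates.
print(pixel_to_object_coords(400, 300, 2.0, K, np.eye(3), np.zeros(3)))
```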
20120257018STEREOSCOPIC DISPLAY DEVICE, METHOD FOR GENERATING IMAGE DATA FOR STEREOSCOPIC DISPLAY, AND PROGRAM THEREFOR - Provided is a stereoscopic display device provided with a stereoscopic display panel and a display controller, the stereoscopic display panel including a lenticular lens, a color filter substrate, a TFT substrate, etc. Unit pixels arranged in a horizontal direction parallel to the direction in which both eyes of the viewer are arranged are alternately used as left-eye pixels and right-eye pixels. The display controller determines, according to temperature information from a temperature sensor, the contraction/expansion of the lens by means of a stereoscopic image generating module and generates 3D image data for driving the display panel in which the amount of disparity in a specific disparity direction is corrected on the basis of parameter information, defined by an effective linear expansion coefficient inherent in the stereoscopic display panel or the like, and the magnitude of the temperature, to thereby ensure a predetermined stereoscopic visual recognition range even when the lens is contracted/expanded.10-11-2012
20120081518METHOD FOR 3-DIMENSIONAL MICROSCOPIC VISUALIZATION OF THICK BIOLOGICAL TISSUES - The present invention discloses a method of visualizing the 3-dimensional microstructure of a thick biological tissue. This method includes a process of immersing thick, opaque biological tissues in an optical-clearing solution, for example FocusClear (U.S. Pat. No. 6,472,216), and utilizing an optical scanning microscope and a cutter. In microscopy, the cutter removes a portion of the tissue after each round of optical scanning. Each round of optical scanning follows the principle that the depth of the removal plane is less than the depth of the boundary plane derived from the scanning. This method acquires an image stack to provide information on the thick biological tissue's 3-dimensional microstructure with minimal interference from the tissue removal.04-05-2012
20120081517Image Processing Apparatus and Image Processing Method - According to one embodiment, an image processing apparatus includes a limiter, an input module, and a converter. The limiter is configured to limit an input image format according to an instruction for a 3D conversion for converting an input 2D image into a 3D image. The input module is configured to input a 2D image corresponding to an input image format based on the limitation. The converter is configured to convert the input 2D image into a 3D image.04-05-2012
20120262550SIX DEGREE-OF-FREEDOM LASER TRACKER THAT COOPERATES WITH A REMOTE STRUCTURED-LIGHT SCANNER - Measuring three surface sets on an object surface with a measurement device and scanner, each surface set being 3D coordinates of a point on the object surface. The method includes: the device sending a first light beam to the first retroreflector and receiving a second light beam from the first retroreflector, the second light beam being a portion of the first light beam, a scanner processor and a device processor jointly configured to determine the surface sets; selecting the source light pattern and projecting it onto the object to produce the object light pattern; imaging the object light pattern onto a photosensitive array to obtain the image light pattern; obtaining the pixel digital values for the image light pattern; measuring the translational and orientational sets with the device; determining the surface sets corresponding to three non-collinear pattern elements; and saving the surface sets.10-18-2012
20120262551THREE DIMENSIONAL IMAGING DEVICE AND IMAGE PROCESSING DEVICE - The 3D image capture device includes a light-transmitting section with n transmitting areas (where n is an integer and n≧2) that have different transmission wavelength ranges and each of which transmits a light ray falling within a first wavelength range, a solid-state image sensor that includes a photosensitive cell array having a number of unit blocks, and a signal processing section that processes the output signal of the image sensor. Each unit block includes n photosensitive cells including a first photosensitive cell that outputs a signal representing the quantity of the light ray falling within the first wavelength range. The signal processing section generates at least two image data with parallax by using a signal obtained by multiplying a signal supplied from the first photosensitive cell by a first coefficient, which is a real number that is equal to or greater than zero but less than one.10-18-2012
20120262549Full Reference System For Predicting Subjective Quality Of Three-Dimensional Video - A method of generating a predictive picture quality rating is provided. In general, a disparity measurement is made of a three-dimensional image by comparing left and right sub-components of the three-dimensional image. Then the left and right sub-components of the three-dimensional image are combined (fused) into a two-dimensional image, using data from the disparity measurement for the combination. A predictive quality measurement is then generated based on the two-dimensional image, and further including quality information about the comparison of the original three-dimensional image.10-18-2012
20120229609THREE-DIMENSIONAL VIDEO CREATING DEVICE AND THREE-DIMENSIONAL VIDEO CREATING METHOD - A three-dimensional video creating device (09-13-2012
20120229608IMAGE PROCESSING DEVICE, IMAGING DEVICE, AND IMAGE PROCESSING METHOD - Stereoscopic tracking during a zooming period is facilitated to alleviate eye fatigue. An image processing device includes an imaging unit 09-13-2012
20120229606DEVICE AND METHOD FOR OBTAINING THREE-DIMENSIONAL OBJECT SURFACE DATA - The concept includes projecting at the object surface, along a first optical axis, two or more two-dimensional (2D) images containing together one or more distinct wavelength bands. The wavelength bands vary in intensity along a first image axis, forming a pattern, within at least one of the projected images. Each projected image generates a reflected image along a second optical axis. The 3D surface data is obtained by comparing the object data with calibration data, which calibration data was obtained by projecting the same images at a calibration reference surface, for instance a planar surface, for a plurality of known positions along the z-axis. Provided that the z-axis is not orthogonal to the second optical axis, the z-axis coordinate at each location on the object surface can be found if the light intensity combinations of all predefined light intensity patterns are linearly independent along the corresponding z-axis.09-13-2012
20120229605OPTICAL OBSERVATION INSTRUMENT WITH AT LEAST TWO OPTICAL TRANSMISSION CHANNELS THAT RESPECTIVELY HAVE ONE PARTIAL RAY PATH - An optical observation instrument has two optical transmission channels for transmitting two partial ray bundles (09-13-2012
20110122228THREE-DIMENSIONAL VISUAL SENSOR - A perspective transformation is performed on a three-dimensional model and a model coordinate system indicating a reference attitude of the three-dimensional model to produce a projection image expressing a relationship between the model and the model coordinate system, and a work screen is started up. A coordinate of an origin in the projection image and rotation angles of an X-axis, a Y-axis, and a Z-axis are displayed in work areas on the screen to accept a manipulation to change the coordinate and the rotation angles. The display of the projection image is changed by a manipulation. When an OK button is pressed, the coordinate and rotation angles are fixed, and the model coordinate system is changed based on the coordinate and rotation angles. A coordinate of each constituent point of the three-dimensional model is transformed into a coordinate of the post-change model coordinate system.05-26-2011
20100328434Microscope apparatus and cell culture apparatus - An imaging section of a microscope apparatus captures a plurality of microscope images, each at a different focal position in the same field, with a light flux having passed through a microscope optical system. A region separating section separates a cellular region from a non-cellular region by using the plurality of microscope images. A focusing position calculating section finds a focusing position for a target pixel included in the cellular region based on the brightness change at the same position across the plurality of microscope images. A three dimensional information generating section generates three dimensional information of a cultured cell based on the position of the cellular region and the focusing position of the target pixel.12-30-2010
20100328433Method and Devices for 3-D Display Based on Random Constructive Interference - The present invention relates to a method and an apparatus for 3-D display based on random constructive interference. It produces a number of discrete secondary light sources by using an amplitude-phase-modulator-array, which helps to create 3-D images by means of constructive interference. Next it employs a random-secondary-light-source-generator-array to shift the position of each secondary light source to a random place, eliminating multiple images due to high order diffraction. It could be constructed with low resolution liquid crystal screens to realize large size real-time color 3-D display, which could widely be applied to 3-D computer or TV screens, 3-D human-machine interaction, machine vision, and so on.12-30-2010
20100328431Rendering method and apparatus using sensor in portable terminal - A method and an apparatus detect motion, rotation, and tilt for rendering using a sensor in a portable terminal. The rendering method using the sensor in the portable terminal includes pre-rendering a region of a size corresponding to a screen of the terminal and a surrounding region. A preset region of the pre-rendered regions is displayed. A motion of the terminal is detected using a sensor, and a region to display in the pre-rendered regions is changed according to the motion.12-30-2010
20100328429STEREOSCOPIC IMAGE INTENSITY BALANCING IN LIGHT PROJECTOR - In a light projection system, potentially hierarchical levels of light intensity control ensure proper laser-light output intensity, color channel intensity, white point, left/right image intensity balancing, or combinations thereof. The light projection system can include a light intensity sensor in an image path, in a light-source subsystem light-dump path, in a light-modulation subsystem light-dump path, in a position to measure light leaked from optical components, or combinations thereof.12-30-2010
20100328428Optimized stereoscopic visualization - The present invention discloses a method comprising: calculating an X separation distance between a left eye and a right eye, said X separation distance corresponding to an interpupillary distance in a horizontal direction; and transforming geometry and texture only once for said left eye and said right eye.12-30-2010
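One way such a horizontal X separation can be applied is to derive both eye view matrices from a single head pose by offsetting half the interpupillary distance along the camera's x axis, so geometry and texture need only be prepared once; the 4x4 view-matrix convention and the 63 mm default below are assumptions for illustration.

```python
import numpy as np

def eye_view_matrices(head_view, ipd_m=0.063):
    """Left/right eye view matrices from one head (centre-eye) view matrix.
    Shifting the eye by +dx along the camera's x axis is equivalent to
    translating the already-transformed scene by -dx."""
    def shifted(dx):
        offset = np.eye(4)
        offset[0, 3] = -dx
        return offset @ head_view
    return shifted(-ipd_m / 2.0), shifted(+ipd_m / 2.0)

left, right = eye_view_matrices(np.eye(4))
print(left[0, 3], right[0, 3])   # +0.0315 and -0.0315
```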
20120320160DEVICE FOR ESTIMATING THE DEPTH OF ELEMENTS OF A 3D SCENE - Device comprising:12-20-2012
20120320159Apparatus And Method To Automatically Distinguish Between Contamination And Degradation Of An Article - An inspection apparatus includes an imaging unit producing image signals; a processing unit for receiving the image signal; the imaging unit producing a stack of images of an article at different focal lengths in response to the processing unit; the processing unit generating a depth map from the stack of images; the processing unit analyzing the depth map to derive a depth profile of an object of interest; the processing unit determining a surface mean for the article from the stack of images; and the processing unit characterizing the article as degraded or contaminated in response to the depth profile and the surface mean.12-20-2012
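A depth map can be derived from a focal-length stack with a generic depth-from-focus step: compute a local focus measure per slice and take, per pixel, the index of the sharpest slice. The smoothed squared-Laplacian measure below is an assumed stand-in, not necessarily the apparatus's own method.

```python
import numpy as np
from scipy.ndimage import laplace, uniform_filter

def depth_index_map(stack, window=9):
    """stack: array of shape (N, H, W), one grayscale image per focal length.
    Returns, per pixel, the index of the focal slice with the highest local
    focus measure (locally averaged squared Laplacian) as a coarse depth map."""
    focus = np.stack([uniform_filter(laplace(img.astype(float)) ** 2, size=window)
                      for img in stack])
    return focus.argmax(axis=0)
```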
20120320158INTERACTIVE AND SHARED SURFACES - The interactive and shared surface technique described herein employs hardware that can project on any surface, capture color video of that surface, and get depth information of and above the surface while preventing visual feedback (also known as video feedback, video echo, or visual echo). The technique provides N-way sharing of a surface using video compositing. It also provides for automatic calibration of hardware components, including calibration of any projector, RGB camera, depth camera and any microphones employed by the technique. The technique provides object manipulation with physical, visual, audio, and hover gestures and interaction between digital objects displayed on the surface and physical objects placed on or above the surface. It can capture and scan the surface in a manner that captures or scans exactly what the user sees, which includes both local and remote objects, drawings, annotations, hands, and so forth.12-20-2012
20120320156Image recording apparatus - An image recording apparatus includes an image-sensing unit, a video-processing unit and a power module. The image-sensing unit further includes a frame, a first connector, and a first image-sensing module for capturing an image. The video-processing unit further includes a body, a second connector, a video-processing circuit, a video-output module, and a memory module. Through an electric engagement of the first connector and the second connector, image data captured by the first image-sensing module can be transmitted to the video-processing unit to be processed by the video-processing circuit into an electronic image file readable by and storable on an ordinary computer apparatus. The electronic image file can be stored in the memory module. The video-processing unit can also forward the electronic image file to an external video-displaying apparatus through the video-output module. The power module provides electricity to the image-sensing unit and the video-processing unit.12-20-2012
20110037833METHOD AND APPARATUS FOR PROCESSING SIGNAL FOR THREE-DIMENSIONAL REPRODUCTION OF ADDITIONAL DATA - A method and apparatus for processing a signal, including: extracting three-dimensional (3D) reproduction information for reproducing a subtitle, which is reproduced with a video image, in 3D, from additional data for generating the subtitle; and reproducing the subtitle in 3D by using the additional data and the 3D reproduction information.02-17-2011
20110037832Defocusing Feature Matching System to Measure Camera Pose with Interchangeable Lens Cameras - A lens and aperture device for determining 3D information. An SLR camera has a lens and aperture that allows the SLR camera to determine defocused information.02-17-2011
20120268565Operating Assembly Including Observation Apparatus, The Use of Such an Operating Assembly, and an Operating Facility - An operating assembly, in particular for a medical operating field, comprises an operating lighting apparatus having a lighting support arm mounted on a support to be independently pivotally movable, and an observation apparatus comprising a camera support arm mounted on the support to be independently pivotally movable, a camera module with a camera support mounted to be pivotally movable relative to the camera support arm and drive means for driving the camera support, and control means for controlling the drive means. The camera module comprises two observation cameras, each of which has a vision axis and a vision direction, each end of the camera support carrying a camera so that the cameras are disposed on either side of an axis of symmetry passing through the middle of the camera support.10-25-2012
20120268563AUGMENTED AUDITORY PERCEPTION FOR THE VISUALLY IMPAIRED - A person is provided with the ability to auditorily determine the spatial geometry of his current physical environment. A spatial map of the current physical environment of the person is generated. The spatial map is then used to generate a spatialized audio representation of the environment. The spatialized audio representation is then output to a stereo listening device which is being worn by the person.10-25-2012
20120268566THREE-DIMENSIONAL COLOR IMAGE SENSORS HAVING SPACED-APART MULTI-PIXEL COLOR REGIONS THEREIN - A three-dimensional color image sensor includes color pixels and depth pixels therein. A semiconductor substrate is provided with a depth region therein, which extends adjacent a surface of the semiconductor substrate. A two-dimensional array of spaced-apart color regions is provided within the depth region. Each of the color regions includes a plurality of different color pixels therein (e.g., red, blue and green pixels), and each of the color pixels within each of the spaced-apart color regions is spaced apart from all other color pixels within other color regions.10-25-2012
20120268567THREE-DIMENSIONAL MEASUREMENT APPARATUS, PROCESSING METHOD, AND NON-TRANSITORY COMPUTER-READABLE STORAGE MEDIUM - A three-dimensional measurement apparatus selects points corresponding to geometric features of a three-dimensional shape model of a target object, projects a plurality of selected points corresponding to the geometric features onto a range image based on approximate values indicating the position and orientation of the target object and imaging parameters at the time of imaging of the range image, searches regions of predetermined ranges respectively from the plurality of projected points for geometric features on the range image which correspond to the geometric features of the three-dimensional shape model, and associates these geometric features with each other. The apparatus then calculates the position and orientation of the target object using differences of distances on a three-dimensional space between the geometric features of the three-dimensional shape model and those on the range image, which are associated with each other.10-25-2012
20110043609APPARATUS AND METHOD FOR PROCESSING A 3D IMAGE - Provided are an apparatus and a method for processing a three-dimensional image. The apparatus for processing the three-dimensional image includes a lattice pattern projection unit configured to project a reference lattice pattern onto a photographic object; a photographing unit configured to generate image information by photographing the photographic object onto which the reference lattice pattern is projected; a depth information extraction unit configured to extract depth information of the photographic object based on a comparison between the reference lattice pattern projected onto the photographic object and a modified lattice pattern included in the image information; and a left and right eye information generation unit configured to generate left and right eye information in correspondence with the depth information, the left and right eye information containing three-dimensional information of the photographic object.02-24-2011
20110211044Non-Uniform Spatial Resource Allocation for Depth Mapping - A method for depth mapping includes providing depth mapping resources including an illumination module, which is configured to project patterned optical radiation into a volume of interest containing the object, and an image capture module, which is configured to capture an image of the pattern reflected from the object. A depth map of the object is generated using the resources while applying at least one of the resources non-uniformly over the volume of interest.09-01-2011
20100245542DEVICE FOR COMPUTING THE EXCAVATED SOIL VOLUME USING STRUCTURED LIGHT VISION SYSTEM AND METHOD THEREOF - A device for computing an excavated soil volume using structured light is disclosed. A control sensor unit is provided at the hinge points of an excavator arm, and is configured to detect and output the location and the bent angle of the excavator arm. A microcontroller is configured to output a control signal so as to capture the images of a work area of a bucket, provided at one end of the excavator arm, using the output of the control sensor unit, convert the captured images into 3-Dimensional (3D) images, and compute an excavated soil volume. An illumination module is configured to include at least one light source that is controlled by the control signal and radiates light onto the work area. A structured light module is configured to capture the work area in response to the control signal.09-30-2010
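Assuming the captured 3D images of the work area are converted into before/after height maps on a common grid (an assumption about the representation, not stated in the abstract), the excavated soil volume can be approximated by summing per-cell height loss times cell area, as sketched below.

```python
import numpy as np

def excavated_volume(height_before, height_after, cell_area_m2):
    """Excavated soil volume from two gridded height maps of the work area
    (same grid, heights in metres): sum of per-cell height loss times the
    ground area represented by each grid cell."""
    removed = np.clip(height_before - height_after, 0.0, None)
    return float(removed.sum() * cell_area_m2)

before = np.array([[1.0, 1.0], [1.0, 1.0]])
after = np.array([[0.5, 0.8], [1.0, 0.9]])
print(excavated_volume(before, after, cell_area_m2=0.25))   # 0.2 m^3
```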
20120274745THREE-DIMENSIONAL IMAGER AND PROJECTION DEVICE - The systems and methods described herein include a device that can scan the surrounding environment and construct a 3D image, map, or representation of the surrounding environment using, for example, invisible light projected into the environment. In some implementations, the device can also project into the surrounding environment one or more visible radiation patterns (e.g., a virtual object, text, graphics, images, symbols, color patterns, etc.) that are based at least in part on the 3D map of the surrounding environment.11-01-2012
20120327192METHOD AND DEVICE FOR OPTICAL SCANNING OF THREE-DIMENSIONAL OBJECTS BY MEANS OF A DENTAL 3D CAMERA USING A TRIANGULATION METHOD - A dental 3D camera for optically scanning a three-dimensional object, and a method for operating a dental 3D camera. The camera operates in accordance with a triangulation procedure to acquire a plurality of images of the object. The method comprises forming at least one comparative signal based on at least two images of the object acquired by the camera while at least one pattern is projected on the object, and determining at least one camera shake index based on the at least one comparative signal.12-27-2012
20120327193VOICE-BODY IDENTITY CORRELATION - A system and method are disclosed for tracking image and audio data over time to automatically identify a person based on a correlation of their voice with their body in a multi-user game or multimedia setting.12-27-2012
201203271913D IMAGING DEVICE AND 3D IMAGING METHOD - A 3D imaging device obtains a 3D image (3D video) that achieves an appropriate 3D effect and/or intended placement. The 3D imaging device (12-27-2012
20120327188Vehicle-Mounted Environment Recognition Apparatus and Vehicle-Mounted Environment Recognition System - A vehicle-mounted environment recognition apparatus including a simple pattern matching unit which extracts an object candidate from an image acquired from a vehicle-mounted image capturing apparatus by using a pattern shape stored in advance and outputs a position of the object candidate, an area change amount prediction unit which calculates a change amount prediction of the extracted object candidate on the basis of an object change amount prediction calculation method set differently for each area of a plurality of areas obtained by dividing the acquired image, detected vehicle behavior information, and an inputted position of the object candidate, and outputs a predicted position of an object, and a tracking unit which tracks the object on the basis of an inputted predicted position of the object.12-27-2012
20120327190MONITORING SYSTEM - A monitoring system includes at least one three-dimensional (3D) time-of-flight (TOF) camera configured to monitor a safety-critical area. An evaluation unit is configured to activate a safety function upon an entrance of at least one of an object and a person into the monitored area and to suppress the activation of the safety function where at least one clearance element is recognized as being present on the at least one of the object and the person.12-27-2012
20120327189Stereo Camera Apparatus - A stereo camera apparatus which carries out distance measuring stably and with high accuracy by making measuring distance resolution variable according to a distance to an object is provided. A stereo camera apparatus 12-27-2012
20120327187ADVANCED REMOTE NONDESTRUCTIVE INSPECTION SYSTEM AND PROCESS - A system for inspecting a test article incorporates a diagnostic imaging system for the test article. A command controller receives two dimensional (2D) images from the diagnostic imaging system. A three dimensional (3D) computer aided design (CAD) model visualization system and an alignment system for determining local 3D coordinates are connected to the command controller. Computer software modules incorporated in the command controller are employed in aligning the 2D images and the 3D CAD model responsive to the local 3D coordinates. The 2D images and 3D CAD model are displayed with reciprocal registration. The alignment system is then directed to selected coordinates in the 2D images or 3D CAD model.12-27-2012
20120092461FOCUS SCANNING APPARATUS - Disclosed is a handheld scanner for obtaining and/or measuring the 3D geometry of at least a part of the surface of an object using confocal pattern projection techniques. Specific embodiments are given for intraoral scanning and scanning of the interior part of a human ear.04-19-2012
20120092460System And Method For Alerting Visually Impaired Users Of Nearby Objects - A system and method for assisting a visually impaired user including a time of flight camera, a processing unit for receiving images from the time of flight camera and converting the images into signals for use by one or more controllers, and one or more vibro-tactile devices, wherein the one or more controllers activates one or more of the vibro-tactile devices in response to the signals received from the processing unit. The system preferably includes a lanyard means on which the one or more vibro-tactile devices are mounted. The vibro-tactile devices are activated depending on the determined position of an object in front of the user and the distance from the user to the object.04-19-2012
20120092459METHOD AND DEVICE FOR RECONSTRUCTION OF A THREE-DIMENSIONAL IMAGE FROM TWO-DIMENSIONAL IMAGES - The disclosure relates to a method for reconstruction of a three-dimensional image of an object. A first image is acquired of the object lit by a luminous flux having, in a region including the object, a luminous intensity dependent on the distance, with a light source emitting the luminous flux. A second image is acquired of the object lit by a luminous flux having, in a region including the object, a constant luminous intensity. For each pixel of a three-dimensional image, a relative distance of a point of the object is determined as a function of the intensity of a pixel corresponding to the point of the object in each of the acquired images.04-19-2012
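As an illustrative aside (not part of the patent text), the following is a minimal sketch of the intensity-ratio idea described above, assuming the distance-dependent illumination follows an inverse-square falloff and that dividing by the constant-illumination image cancels surface reflectance; the function and parameter names are hypothetical.

```python
import numpy as np

def relative_depth_from_two_exposures(img_falloff, img_constant, eps=1e-6):
    """Estimate a relative distance per pixel from two images of the same scene.

    img_falloff  -- image lit by a source whose intensity falls off with distance
                    (assumed here to follow an inverse-square law)
    img_constant -- image lit with (approximately) constant intensity over the scene

    Dividing the two cancels the surface-reflectance term, leaving a quantity
    that depends mainly on the falloff, i.e. on distance.
    """
    ratio = img_falloff.astype(np.float64) / (img_constant.astype(np.float64) + eps)
    # Under the inverse-square assumption, ratio ~ 1/d^2, so d ~ 1/sqrt(ratio).
    rel_distance = 1.0 / np.sqrt(np.clip(ratio, eps, None))
    # Normalise to [0, 1], since only relative (not absolute) distances are recovered.
    rel_distance -= rel_distance.min()
    rel_distance /= max(rel_distance.max(), eps)
    return rel_distance
```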
20120092458Method and Apparatus for Depth-Fill Algorithm for Low-Complexity Stereo Vision - A method and apparatus for depth-fill algorithm for low-complexity stereo vision. The method includes utilizing right and left images of a stereo camera to estimate depth of the scene, wherein the estimated depth relates to each pixel of the image, and updating a depth model with the current depth utilizing the estimated depth of the scene.04-19-2012
20120092457STEREOSCOPIC IMAGE DISPLAY APPARATUS - A display apparatus 04-19-2012
20120133740FLUORESCENT NANOSCOPY METHOD - A method for analysis of an object dyed with fluorescent coloring agents. Separately fluorescing visible molecules or nanoparticles are periodically formed in different object parts, the laser produces the oscillation thereof which is sufficient for recording the non-overlapping images of the molecules or nanoparticles and for decoloring already recorded fluorescent molecules, wherein tens of thousands of pictures of recorded individual molecule or nanoparticle images, in the form of stains having a diameter on the order of a fluorescent light wavelength multiplied by a microscope amplification, are processed by a computer for searching the coordinates of the stain centers and building the object image according to millions of calculated stain center co-ordinates corresponding to the co-ordinates of the individual fluorescent molecules or nanoparticles. Two-dimensional and three-dimensional images are provided for proteins, nucleic acids and lipids with different coloring agents.05-31-2012
20120140042WARNING A USER ABOUT ADVERSE BEHAVIORS OF OTHERS WITHIN AN ENVIRONMENT BASED ON A 3D CAPTURED IMAGE STREAM - A computer-implemented method, system, and program includes a behavior processing system for capturing a three-dimensional movement of a monitored user within a particular environment monitored by a supervising user, wherein the three-dimensional movement is determined by using at least one image capture device aimed at the monitored user. The behavior processing system identifies a three-dimensional object properties stream using the captured movement. The behavior processing system identifies a particular defined adverse behavior of the monitored user represented by the three-dimensional object properties stream by comparing the identified three-dimensional object properties stream with multiple adverse behavior definitions. In response to identifying the particular defined adverse behavior from among the multiple adverse behavior definitions, the behavior processing system triggers a warning system to notify the supervising user about the particular defined adverse behavior of the monitored user through an output interface only detectable by the supervising user.06-07-2012
20120140040INFORMATION DISPLAY SYSTEM, INFORMATION DISPLAY APPARATUS, INFORMATION PROVISION APPARATUS AND NON-TRANSITORY STORAGE MEDIUM - The utility of a service using AR is improved. The information display apparatus which applies an information display according to augmented reality (AR) acquires, from an information provision apparatus, a reference image for matching with a subject in a captured image and scale information which shows a scale of the subject in the reference image. Then, an inputter-outputter of the information display apparatus displays the captured image as well as a distribution of the reference images by scale of the subject. On this distribution display, a guide display according to the scale of the subject in the captured image is performed. This guide display moves on the distribution display corresponding to a change of the angle of view. Accordingly, the angle of view at which matching using the acquired reference image can be performed appropriately is easily set, and convenience improves.06-07-2012
20120140038ZERO DISPARITY PLANE FOR FEEDBACK-BASED THREE-DIMENSIONAL VIDEO - The techniques of this disclosure are directed to the feedback-based stereoscopic display of three-dimensional images, such as may be used for video telephony (VT) and human-machine interface (HMI) applications. According to one example, a region of interest (ROI) of stereoscopically captured images may be automatically determined based on disparity determined for at least one pixel of the captured images. According to another example, a zero disparity plane (ZDP) for the presentation of a 3D representation of stereoscopically captured images may be determined based on an identified ROI. According to this example, the ROI may be automatically identified, or identified based on receipt of user input identifying the ROI.06-07-2012
20130010078STEREOSCOPIC IMAGE TAKING APPARATUS - A stereoscopic image taking apparatus (01-10-2013
20130010072SENSOR, DATA PROCESSING SYSTEM, AND OPERATING METHOD - An image sensor includes a unit pixel including a plurality of color pixels with a depth pixel. A first signal line group of first signal lines is used to supply first control signals that control operation of the plurality of color pixels, and a separate second signal line group of second signal lines is used to supply second control signals that control operation of the depth pixel.01-10-2013
20130010077THREE-DIMENSIONAL IMAGE CAPTURING APPARATUS AND THREE-DIMENSIONAL IMAGE CAPTURING METHOD - A three-dimensional image capturing apparatus generates depth information to be used for generating a three-dimensional image from an input image, and includes: a capturing unit obtaining the input image in capturing; an object designating unit designating an object in the input image; a resolution setting unit setting depth values, each representing a different depth position, so that in a direction parallel to a depth direction of the input image, depth resolution near the object is higher than depth resolution positioned apart from the object, the object being designated by the object designating unit; and a depth map generating unit generating two-dimensional depth information corresponding to the input image by determining, for each of regions in the input image, a depth value, from among the depth values set by the resolution setting unit, indicating a depth position corresponding to one of the regions.01-10-2013
20130010071METHODS AND SYSTEMS FOR MAPPING POINTING DEVICE ON DEPTH MAP - Disclosed are methods for determining and tracking a current location of a handheld pointing device, such as a remote control for an entertainment system, on a depth map generated by a gesture recognition control system. The methods disclosed herein enable identifying a user's hand gesture, and generating corresponding motion data. Further, the handheld pointing device may send motion data, such as acceleration or velocity, and/or orientation data such as pitch, roll, and yaw angles. The motion data of the user's hand gesture and the motion (orientation) data received from the handheld pointing device are then compared, and if they correspond to each other, it is determined that the handheld pointing device is in active use by the user and is held by that particular hand. Accordingly, a location of the handheld pointing device on the depth map can be determined.01-10-2013
20130010076MULTI-CORE PROCESSOR FOR PORTABLE DEVICE WITH DUAL IMAGE SENSORS - A multi-core processor is used in a portable device that has first and second image sensors spaced from each other for capturing images of a scene from slightly different perspectives. The multi-core processor has a first image sensor interface for receiving data from the first image sensor, a second image sensor interface for receiving data from the second image sensor, and multiple processing units, with the processing units and the first and second sensor interfaces integrated onto a single chip. The processing units are configured to simultaneously process the data from the first and second image sensor interfaces to generate stereoscopic image data.01-10-2013
20130010070INFORMATION PROCESSING APPARATUS AND INFORMATION PROCESSING METHOD - An information processing apparatus configured to estimate a position and orientation of a measuring object using an imaging apparatus includes an approximate position and orientation input unit configured to input a relative approximate position and orientation between the imaging apparatus and the measuring object, a first position and orientation updating unit configured to update the approximate position and orientation by matching a three-dimensional shape model to a captured image, a position and orientation difference information input unit configured to calculate and acquire a position and orientation difference amount of the imaging apparatus relative to the measuring object having moved after the imaging apparatus has captured an image of the measuring object or after last position and orientation difference information has been acquired, and a second position and orientation updating unit configured to update the approximate position and orientation based on the position and orientation difference amount.01-10-2013
20130010075CAMERA WITH SENSORS HAVING DIFFERENT COLOR PATTERNS - An image capture device includes a lens arrangement having a first lens associated with a first digital image sensor and a second lens associated with a second digital image sensor; the first digital image sensor having photosites of a first predetermined color pattern for producing a first digital image; the second digital image sensor having photosites of a different second predetermined color pattern for producing a second digital image. The image capture device also includes a device for causing the lens arrangement to capture a first digital image from the first digital image sensor and a second digital image from the second digital image sensor at substantially the same time; a processor aligning the first and second digital images; and the processor, using values of the second image based on the alignment between the first and second images, operates on the first digital image to produce an enhanced digital image.01-10-2013
20130010074MEASUREMENT APPARATUS, MEASUREMENT METHOD, AND FEATURE IDENTIFICATION APPARATUS - It is an object to measure a position of a feature around a road. An image memory unit stores images in which neighborhood of the road is captured. Further, a three-dimensional point cloud model memory unit 01-10-2013
20130010073SYSTEM AND METHOD FOR GENERATING A DEPTH MAP AND FUSING IMAGES FROM A CAMERA ARRAY - A method, apparatus, system, and computer program product for digital imaging. Multiple cameras comprising lenses and digital image sensors are used to capture multiple images of the same subject and process the multiple images using difference information (e.g., an image disparity map, an image depth map, etc.). The processing commences by receiving a plurality of image pixels from at least one first image sensor, wherein the first image sensor captures a first image of a first color, receives a stereo image of the first color, and also receives other images of other colors. Given the stereo imagery, a disparity map and an associated depth map are constructed by searching for pixel correspondences between the first image and the stereo image. Using the constructed disparity map, captured images are converted into converted images, which are then combined with the first image, resulting in a fused multi-channel color image.01-10-2013
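For illustration only, a minimal NumPy sketch of the pixel-correspondence search the abstract describes: naive block matching along scanlines to build a disparity map, plus the standard pinhole relation depth = f·B/disparity. The function names, window size, and SAD cost are assumptions, not taken from the patent.

```python
import numpy as np

def block_matching_disparity(left, right, max_disp=64, block=7):
    """Naive block matching: for each pixel in the left image, search along the
    same scanline of the right image for the best SAD match.
    left, right -- grayscale images of identical shape (float arrays)."""
    h, w = left.shape
    half = block // 2
    disp = np.zeros((h, w), dtype=np.float64)
    for y in range(half, h - half):
        for x in range(half, w - half):
            patch = left[y - half:y + half + 1, x - half:x + half + 1]
            best_cost, best_d = np.inf, 0
            for d in range(0, min(max_disp, x - half) + 1):
                cand = right[y - half:y + half + 1, x - d - half:x - d + half + 1]
                cost = np.abs(patch - cand).sum()  # sum of absolute differences
                if cost < best_cost:
                    best_cost, best_d = cost, d
            disp[y, x] = best_d
    return disp

def depth_from_disparity(disp, focal_px, baseline_m, eps=1e-6):
    """Standard pinhole relation: depth = focal_length * baseline / disparity."""
    return focal_px * baseline_m / np.maximum(disp, eps)
```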
20130010067Camera and Method for Focus Based Depth Reconstruction of Dynamic Scenes - A dynamic scene is reconstructed as depths and an extended depth of field video by first acquiring, with a camera including a lens and sensor, a focal stack of the dynamic scene while changing a focal depth. An optical flow between the frames of the focal stack is determined, and the frames are warped according to the optical flow to align the frames and to generate a virtual static focal stack. Finally, a depth map and a texture map for each virtual static focal stack is generated using a depth from defocus, wherein the texture map corresponds to an EDOF image.01-10-2013
20130010066NIGHT VISION - A robot is provided that includes a processor executing instructions that generate an image. The robot also includes a depth sensor that captures depth data about an environment of the robot. Additionally, the robot includes a software component executed by the processor configured to generate a depth map of the environment based on the depth data. The software component is also configured to generate the image based on the depth map and red-green-blue (RGB) data about the environment.01-10-2013
20130016185LOW-COST IMAGE-GUIDED NAVIGATION AND INTERVENTION SYSTEMS USING COOPERATIVE SETS OF LOCAL SENSORS - An augmentation device for an imaging system has a bracket structured to be attachable to an imaging component, and a projector attached to the bracket. The projector is arranged and configured to project an image onto a surface in conjunction with imaging by the imaging system. A system for image-guided surgery has an imaging system, and a projector configured to project an image or pattern onto a region of interest during imaging by the imaging system. A capsule imaging device has an imaging system, and a local sensor system. The local sensor system provides information to reconstruct positions of the capsule endoscope free from external monitoring equipment.01-17-2013
20130016183Dual Mode User Interface System and Method for 3D Video - A system is provided for use with a video input signal and a video unit. The video input signal can be one of a two dimensional video signal and a three dimensional video signal. The video unit can display a three dimensional video and a two dimensional video. The system includes a receiver portion, a processing portion, a switching portion and an output portion. The receiver portion can receive the video input signal. The processing portion can output a first signal in a first mode of operation and can output a second signal in a second mode of operation, wherein the first signal is based on the video input signal and the second signal is based on the video input signal. The switching portion can switch the processing portion from the first mode of operation to the second mode of operation. The output portion can provide an output signal to the video unit, wherein the output signal is based on the first signal when the processing portion operates in the first mode of operation and wherein the output signal is based on the second signal when the processing portion operates in the second mode of operation. The first signal includes a two dimensional video signal, whereas the second signal includes a three dimensional video signal.01-17-2013
20130016184SYSTEM AND METHOD FOR LOCATING AND DISPLAYING AIRCRAFT INFORMATION - A system and method for locating and displaying aircraft information, such as three-dimensional models and various information about an aircraft component. The system may include a portable display, a remote processor, and one or more location and/or orientation-determining components. The models and other various information displayed on the portable display may correspond with a location and orientation of the portable display relative to the aircraft component. The location and/or orientation-determining components may include one or more infrared cameras for communicating with the remote processor and a plurality of infrared targets or infrared markers. The remote processor may be configured to filter the information provided to the portable display based on the portable display's location and orientation relative to the aircraft component, geographic location, user input, or other various parameters.01-17-2013
20130016182COMMUNICATING AND PROCESSING 3D VIDEO - There is a communicating of 3D video having associated metadata. The communicating may involve devices and includes receiving a first video bitstream with the 3D video encoded in a first format, receiving the associated metadata. The communicating also includes forming a protocol message, utilizing a processor, including the associated metadata. The communicating also includes transmitting a second video bitstream with the 3D video encoded in a second format and transmitting the protocol message separate from the second video bitstream. The protocol message includes data fields holding processing data, based on the associated metadata, describing the rendering of the 3D video. The processing data is operable to be read and utilized to direct a client device to process the 3D video extracted from a video bitstream and present the 3D video on a display. Also, there is a processing of the 3D video. The processing utilizes the protocol message.01-17-2013
20110149042METHOD AND APPARATUS FOR GENERATING A STEREOSCOPIC IMAGE - Provided is a method for generating a stereoscopic image fed back through interaction with the real world. Whereas related-art methods either have the user interact with a virtual object, or form the stereoscopic image without interaction with the objects in the user space by controlling the virtual object with a separate apparatus, the present invention feeds back the interaction among all objects in the user space, including the objects and the users in the virtual space, to the video reproducing system, implementing a system that re-processes and reproduces the stereoscopic image and thereby makes it possible to produce a realistic stereoscopic image.06-23-2011
20110157314Image Processing Apparatus, Image Processing Method and Recording Medium - An image processing apparatus includes: a photographing unit that includes a fisheye lens and that captures an image through the fisheye lens; a memory unit that stores three-dimensional (3D) model information for defining a 3D space; a light source number calculation unit that calculates a number of light sources that irradiate light onto the image captured by the photographing unit and that calculates light source coordinate information that indicates positions of the light sources corresponding to the number of light sources in the image; a light source information calculation unit that calculates parameters regarding the light source in a real space as light source information about parameters in the 3D space, based on the light source coordinate information; a 3D image writing unit that writes a 3D image, based on the 3D model information and the light source information; and a display unit that displays the 3D image, thereby obtaining information about a light source that exists in a real space and displaying an image that reflects information about the light source.06-30-2011
20110157311Method and System for Rendering Multi-View Image - A method and a system for rendering a multi-view image are provided. The method for rendering the multi-view image includes the following steps. An image capturing unit provides an original image and depth information thereof. Multiple threads of one processing unit perform a pixel rendering process and a hole filling process on at least one row of pixels of the original image according to the depth information by way of parallel processing to render at least one new-view image. View-angles of the at least one new-view image and the original image are different. Each of the threads performs a view interlacing process on at least one pixel of the original image and the at least one new-view image by way of parallel processing to render the multi-view image.06-30-2011
20130021442ELECTRONIC CAMERA - An electronic camera includes an imager. The imager repeatedly outputs an image indicating a space captured on an imaging surface. A displayer displays the image outputted from the imager. A superimposer superimposes an index indicating a position of at least a focal point onto the image displayed by the displayer. A position changer changes a position of the index superimposed by the superimposer according to a focus adjusting operation. A setting changer changes a focusing setting in association with the process of the position changer.01-24-2013
20130021445Camera Projection Meshes - A 3D rendering method is proposed to increase the performance when projecting and compositing multiple images or video sequences from real-world cameras on top of a precise 3D model of the real world. Unlike previous methods that relied on shadow-mapping and that were limited in performance due to the need to re-render the complex scene multiple times per frame, the proposed method uses one Camera Projection Mesh (“CPM”) of fixed and limited complexity per camera. The CPM that surrounds each camera is effectively molded over the surrounding 3D world surfaces or areas visible from the video camera. Rendering and compositing of the CPMs may be entirely performed on the Graphic Processing Unit (“GPU”) using custom shaders for optimal performance. The method also enables improved view-shed analysis and fast visualization of the coverage of multiple cameras.01-24-2013
20130021443CAMERA SYSTEM WITH COLOR DISPLAY AND PROCESSOR FOR REED-SOLOMON DECODING - A camera system including: a substrate having a coding pattern printed thereon and01-24-2013
20130021441METHOD AND IMAGE SENSOR HAVING PIXEL STRUCTURE FOR CAPTURING DEPTH IMAGE AND COLOR IMAGE - An image sensor having a pixel structure for capturing a depth image and a color image. The image sensor has a pixel structure that shares a floating diffusion (FD) node and a readout node, and operates with different pixel structures, according to a depth mode and a color mode.01-24-2013
20120242803STEREO IMAGE CAPTURING DEVICE, STEREO IMAGE CAPTURING METHOD, STEREO IMAGE DISPLAY DEVICE, AND PROGRAM - A conventional device adjusts the convergence angle of its imaging unit in a manner that the left-and-right face detection areas are at the same coordinates and thus forms a stereo image that will be placed on a display screen during display, but fails to place a subject at an intended stereoscopic position. A stereo image capturing device (09-27-2012
20120242799VEHICLE EXTERIOR MONITORING DEVICE AND VEHICLE EXTERIOR MONITORING METHOD - A vehicle exterior monitoring device obtains position information of a three-dimensional object present in a detected region, divides the detected region with respect to a horizontal direction into plural first divided regions, derives a first representative distance corresponding to a peak in distance distribution of each first divided region based on the position information, groups the first divided regions based on the first representative distance to generate one or more first divided region groups, divides the first divided region group with respect to a vertical direction into plural second divided regions, groups second divided regions having relative distances close to the first representative distance to generate a second divided region group, and limits a target range for which the first representative distance is derived within the first divided region group in which the second divided region group is generated to a vertical range corresponding to the second divided region group.09-27-2012
20120242798SYSTEM AND METHOD FOR SHARING VIRTUAL AND AUGMENTED REALITY SCENES BETWEEN USERS AND VIEWERS - A preferred method for sharing user-generated virtual and augmented reality scenes can include receiving at a server a virtual and/or augmented reality (VAR) scene generated by a user mobile device. Preferably, the VAR scene includes visual data and orientation data, which includes a real orientation of the user mobile device relative to a projection matrix. The preferred method can also include compositing the visual data and the orientation data into a viewable VAR scene; locally storing the viewable VAR scene at the server; and in response to a request received at the server, distributing the processed VAR scene to a viewer mobile device.09-27-2012
20120242797VIDEO DISPLAYING APPARATUS AND VIDEO DISPLAYING METHOD - According to one embodiment, a video displaying apparatus includes a separator, a generator and a controller. The separator is configured to separate a video signal for 3D video display into first and second video signals. The generator is configured to generate a first video frame in which a frame of the first video signal is displayed in a first area on a screen, to generate a second video frame in which a frame of the first or second video signal is displayed in a second area different from the first area, to generate a third video frame similar to the first video frame, and to generate a fourth video frame in which a frame of the second or first video signal is displayed in the second area. The controller is configured to sequentially display the first to fourth video frames in this order.09-27-2012
20120242796AUTOMATIC SETTING OF ZOOM, APERTURE AND SHUTTER SPEED BASED ON SCENE DEPTH MAP - A Depth Map (DM) can be utilized for many parameter settings involving cameras, camcorders and other devices. The parameters set on the imaging device include the zoom setting, aperture setting and shutter speed setting.09-27-2012
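The abstract gives no formulas; as a hedged illustration of how a depth map's near/far range could drive an aperture choice, here is a sketch built on the standard thin-lens depth-of-field approximations (hyperfocal distance, near/far limits). The f-number list, circle of confusion, and harmonic-mean focus heuristic are assumptions, not the patent's method.

```python
import math

def dof_limits(focal_mm, f_number, focus_dist_mm, coc_mm=0.03):
    """Near/far depth-of-field limits from the standard thin-lens approximations."""
    hyperfocal = focal_mm ** 2 / (f_number * coc_mm) + focal_mm
    near = (hyperfocal * focus_dist_mm) / (hyperfocal + (focus_dist_mm - focal_mm))
    if focus_dist_mm >= hyperfocal:
        far = math.inf
    else:
        far = (hyperfocal * focus_dist_mm) / (hyperfocal - (focus_dist_mm - focal_mm))
    return near, far

def pick_aperture(depth_near_mm, depth_far_mm, focal_mm,
                  stops=(1.8, 2.8, 4.0, 5.6, 8.0, 11.0, 16.0, 22.0)):
    """Choose the widest aperture whose depth of field still covers the nearest
    and farthest scene depths reported by the depth map."""
    # Focus at the harmonic mean of the near/far depths (a common heuristic).
    focus = 2 * depth_near_mm * depth_far_mm / (depth_near_mm + depth_far_mm)
    for n in stops:  # stops ordered from widest to narrowest
        near, far = dof_limits(focal_mm, n, focus)
        if near <= depth_near_mm and far >= depth_far_mm:
            return n, focus
    return stops[-1], focus
```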
20100039500Self-Contained 3D Vision System Utilizing Stereo Camera and Patterned Illuminator - A self-contained hardware and software system that allows reliable stereo vision to be performed. The vision hardware for the system, which includes a stereo camera and at least one illumination source that projects a pattern into the camera's field of view, may be contained in a single box. This box may contain mechanisms to allow the box to remain secure and stay in place on a surface such as the top of a display. The vision hardware may contain a physical mechanism that allows the box, and thus the camera's field of view, to be tilted upward or downward in order to ensure that the camera can see what it needs to see.02-18-2010
20110261162Method for Automatically Generating a Three-Dimensional Reference Model as Terrain Information for an Imaging Device - A method for automatically generating a three-dimensional reference model as terrain information for a seeker head of an unmanned missile. A three-dimensional terrain model formed from model elements obtained with the aid of satellite and/or aerial reconnaissance is provided. Position data of the imaging device at least at one planned position and a direction vector from the planned position of the imaging device to a predetermined target point in the three-dimensional terrain model are provided. A three-dimensional reference model of the three dimensional terrain model is generated that incorporates only those model elements and sections of model elements from the terrain model, which in the viewing direction of the direction vector from the planned position of the imaging device, are not covered by other model elements and/or are not located outside the field of view of the imaging device.10-27-2011
20130169756DEPTH SENSOR, METHOD OF CALCULATING DEPTH IN THE SAME - A depth calculation method of a depth sensor includes outputting a modulated light to a target object, detecting four pixel signals from a depth pixel based on a reflected light reflected by the target object, determining whether each of the four pixel signals is saturated based on results of comparing a magnitude of each of the four pixel signals with a threshold value, and calculating depth to the target object based on the determination result.07-04-2013
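As a rough sketch of the kind of calculation described above, the following shows the conventional four-phase time-of-flight depth formula with a simple per-sample saturation test; the phase equation and threshold handling are the textbook versions and are not necessarily those claimed in the patent.

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def tof_depth_four_phase(a0, a1, a2, a3, mod_freq_hz, sat_threshold):
    """Depth from the conventional four-phase time-of-flight formula.

    a0..a3 -- per-pixel signal arrays sampled at 0, 90, 180 and 270 degrees.
    Pixels where any sample reaches sat_threshold are marked invalid (NaN),
    mirroring the saturation comparison described in the abstract."""
    samples = np.stack([a0, a1, a2, a3]).astype(np.float64)
    saturated = (samples >= sat_threshold).any(axis=0)

    # Conventional phase recovery from the four samples.
    phase = np.arctan2(samples[3] - samples[1], samples[0] - samples[2])
    phase = np.mod(phase, 2 * np.pi)                 # fold into [0, 2*pi)
    depth = C * phase / (4 * np.pi * mod_freq_hz)    # half the round-trip distance

    depth[saturated] = np.nan
    return depth
```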
20130176396BROADBAND IMAGER - A broadband imager, which is able to image both IR and visible light, is disclosed. In one embodiment, an IR sensitive region of an IR pixel underlies the R, G, B sensitive regions of R, G, and B visible pixels. Therefore, the IR pixel receives IR light through the same surface area of the photosensor through which the R, G, and B pixels receive visible light. However, the IR light generates electron-hole pairs deeper below the common surface area shared by the RGB and IR pixels than visible light does. The photosensor also has a charge accumulation region for accumulating charges generated in the IR sensitive region and an electrode above the charge accumulation region for providing a voltage to accumulate the charges generated in the IR pixel.07-11-2013
20130176397Optimized Stereoscopic Camera for Real-Time Applications - A method is provided for an optimized stereoscopic camera with low processing overhead, especially suitable for real-time applications. By constructing a viewer-centric and scene-centric model, the mapping of scene depth to perceived depth may be defined as an optimization problem, for which a solution is analytically derived based on constraints to stereoscopic camera parameters including interaxial separation and convergence distance. The camera parameters may thus be constrained prior to rendering to maintain a desired perceived depth volume around a stereoscopic display, for example to ensure user comfort or provide artistic effects. To compensate for sudden scene depth changes due to unpredictable camera or object movements, as may occur with real-time applications such as video games, the constraints may also be temporally interpolated to maintain a linearly corrected and approximately constant perceived depth range over time.07-11-2013
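For context, a minimal sketch of the standard viewer-centric relation between on-screen disparity and perceived depth, which is the kind of mapping typically used when constraining stereoscopic camera parameters; the formula and the default eye separation and viewing distance are common textbook values, not taken from this filing.

```python
def perceived_depth(disparity_m, eye_sep_m=0.065, view_dist_m=0.6):
    """Viewer-centric model: map on-screen disparity (metres, positive = uncrossed)
    to perceived distance from the viewer: Z = e * V / (e - d).
    As d approaches e, the eyes diverge and perceived depth goes to infinity."""
    e, V, d = eye_sep_m, view_dist_m, disparity_m
    if d >= e:
        return float("inf")
    return e * V / (e - d)

def disparity_for_depth(target_depth_m, eye_sep_m=0.065, view_dist_m=0.6):
    """Inverse mapping: disparity needed to place a point at a target perceived
    depth -- useful when clamping camera parameters to a comfort range."""
    e, V, Z = eye_sep_m, view_dist_m, target_depth_m
    return e * (1.0 - V / Z)
```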
20130176398Display Shelf Modules With Projectors For Displaying Product Information and Modular Shelving Systems Comprising the Same - Modular shelving systems and display shelves for modular shelving systems are disclosed. In one embodiment, a modular shelving system includes a shelf support frame comprising a back plane portion and a base portion. At least one display shelf module is removably coupled to the back plane portion of the shelf support frame such that the display shelf module is vertically and horizontally positionable on the back plane portion of the shelf support frame. The display shelf module may include a top and bottom panels, and side panels that define an interior volume. A display panel may be affixed to a front of the display shelf module. A projector may be disposed in the interior volume of the display shelf module. The projector projects an optical signal onto a rear surface of the display panel such that image data is visible on a front surface of the display panel.07-11-2013
20120249745METHOD AND DEVICE FOR GENERATING A REPRESENTATION OF SURROUNDINGS - It is proposed that, on the assumption that the surrounding area forms a known topography, a representation is produced from the form of the topography, the camera position relative to the topography, and the image, in the form of a virtual representation of the view from an observation point at a distance from the camera position. This makes it possible to select an advantageous perspective of the objects imaged in the image, so that an operator can easily identify the position of the objects relative to the camera.10-04-2012
20120249743METHOD AND APPARATUS FOR GENERATING IMAGE WITH HIGHLIGHTED DEPTH-OF-FIELD - A method that highlights a depth-of-field (DOF) region of an image and performs additional image processing by using the DOF region. The method includes: obtaining a first pattern image and a second pattern image that are captured by emitting light according to different patterns from an illumination device; detecting a DOF region by using the first pattern image and the second pattern image; determining weights to highlight the DOF region; and generating the highlighted DOF image by applying the weights to a combined image of the first pattern image and the second pattern image.10-04-2012
20100085423Stereoscopic imaging - The invention represents a new form of stereoscopically-rendered three-dimensional model and various methods for constructing, manipulating, and displaying these models. The model consists of one or more stereograms applied to a substrate, where the shape of the substrate has been derived from the imagery or from the object itself, and the stereograms are applied to the substrate in a specific way that eliminates parallax for some points and reduces it in others. The methods offered can be (conservatively) 400 times more efficient at representing complex surfaces than conventional modelling techniques, and also provide for independent control of micro and macro parallaxes in a stereoscopically-viewed scene, whether presented in a VR environment or in stereo film or television.04-08-2010
20130176399SYSTEM AND METHOD FOR CREATING A THREE-DIMENSIONAL IMAGE FILE - A system includes a light source configured to illuminate a subject with a pattern, a first optical sensor configured to capture a first pattern image and a first object image of the subject, and a processing unit configured to determine elevation data of the subject based on the first pattern image and create a three-dimensional image file based on the first object image and the elevation data of the subject.07-11-2013
20130093852PORTABLE ROBOTIC DEVICE - A portable robotic device (PRD) as well as related devices and methods are described herein. The PRD includes a 3-D imaging sensor configured to acquire corresponding intensity data frames and range data frames of the environment. An imaging processing module is configured to identify a matched feature in the intensity data frames, obtain sets of 3-D coordinates representing the matched feature in the range data frames, determine a pose change of the PRD based on the 3-D coordinates, and perform 3-D data segmentation of the range data frames to extract planar surfaces.04-18-2013
20130093853INFORMATION PROCESSING APPARATUS AND INFORMATION PROCESSING METHOD - The present technology relates to an information processing apparatus and an information processing method that can reproduce high-quality stereoscopic images with low delay.04-18-2013
20130093850IMAGE PROCESSING APPARATUS AND METHOD THEREOF - An image processing method is disclosed. A 2D image is virtually divided into a plurality of blocks. With respect to each block, an optimum contrast value and a corresponding focus step are obtained. An object distance for an image in each block is obtained according to the respective focus step of each block. A depth map is obtained from the object distances of the blocks. The 2D image is synthesized to form a 3D image according to the depth map.04-18-2013
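A hedged sketch of the block-wise depth-from-focus idea the abstract outlines: per block, pick the focus step with maximum contrast and assign that step's calibrated object distance. The contrast measure (variance), block size, and calibration input are assumptions for illustration.

```python
import numpy as np

def contrast(block):
    """Simple contrast measure: variance of the block's intensities."""
    return float(np.var(block))

def depth_map_from_focal_stack(stack, focus_distances_m, block=16):
    """stack -- list of grayscale frames, one per focus step, all the same shape.
    focus_distances_m -- object distance at which each focus step is in focus
    (assumed known from a lens calibration). For every block, pick the focus
    step with maximum contrast and assign that step's distance to the block."""
    h, w = stack[0].shape
    depth = np.zeros((h, w), dtype=np.float64)
    for y in range(0, h, block):
        for x in range(0, w, block):
            scores = [contrast(frame[y:y + block, x:x + block]) for frame in stack]
            depth[y:y + block, x:x + block] = focus_distances_m[int(np.argmax(scores))]
    return depth
```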
20130093851IMAGE GENERATOR - An image generator is provided which generates a monitor display image that facilitates easy recognition of a three-dimensional object in an overhead view image. An image generator includes: an overhead view image generation section for generating an overhead view image by performing a projective transformation, with a virtual viewpoint above a vehicle, of an image captured by an on-board camera for capturing an image of a surrounding region of the vehicle; a three-dimensional object detection section for recognizing a three-dimensional object present in the surrounding region and outputting three-dimensional object attribute information showing an attribute of the three-dimensional object; and an image composition section for generating a monitor display image for vehicle driving assistance by performing image composition of a grounding plane mark showing a grounding location of the three-dimensional object with a portion at the grounding location in the overhead view image, based on the three-dimensional object attribute information.04-18-2013
20130100252OBJECT REGION EXTRACTION SYSTEM, METHOD AND PROGRAM - Provided is an object region extraction system, an object region extraction method and an object region extraction program, which can reduce failures to extract an object region in three-dimensional space caused by errors in extracting the object region from an image, and also reduce incorrect extraction of an object region in three-dimensional space.04-25-2013
20130100249STEREO CAMERA DEVICE - Degradation of a three-dimensional measurement accuracy of a stereo camera device that takes an image of an object in a wide wavelength band is suppressed. In order to achieve the above object, a stereo camera device includes: a stereo image acquiring part for taking an image of light from the object to acquire a stereo image; a corresponding point searching part for performing a corresponding point search between images constituting the stereo image; a wavelength acquiring part for acquiring a representative wavelength of a wavelength component of the light; a parameter acquiring part for acquiring each parameter value corresponding to the representative wavelength with respect to at least one of camera parameters of the stereo image acquiring part in which the parameter value fluctuates according to the wavelength component of the light; and a three-dimensional information acquiring part for acquiring three-dimensional information on the object from a result of the corresponding point search using the each parameter value.04-25-2013
20130100250Methods and apparatus for imaging of occluded objects from scattered light - In exemplary implementations of this invention, a 3D range camera “looks around a corner” to image a hidden object, using light that has bounced (reflected) off of a diffuse reflector. The camera can recover the 3D structure of the hidden object.04-25-2013
20130100251IMAGE CAPTURING DEVICE AND IMAGE CAPTURING METHOD - The present invention provides an imaging element that includes a first pixel group and a second pixel group, a pickup execution control unit that performs pixel addition by exposing the first pixel group and the second pixel group of the imaging element during the same exposure time in a case of pickup in an SN mode and performs pixel addition by exposing the first pixel group and the second pixel group of the imaging element during different exposure times in a case of pickup in a DR mode, a diaphragm that is arranged in a light path through which the light fluxes which are incident to the imaging element pass, and a diaphragm control unit that, in the case of pickup in the DR mode, sets the diaphragm value of the diaphragm to a value greater than that in the case of pickup in the SN mode.04-25-2013
20130113888DEVICE, METHOD AND PROGRAM FOR DETERMINING OBSTACLE WITHIN IMAGING RANGE DURING IMAGING FOR STEREOSCOPIC DISPLAY - An obstacle determining unit obtains predetermined index values for each of subranges of each imaging range of each imaging unit, compares the index values of the subranges at mutually corresponding positions in the imaging ranges of the different imaging units, and if a difference between the index values in the imaging ranges of the different imaging units is large enough to satisfy a predetermined criterion, determines that the imaging range of at least one of the imaging units contains an obstacle that is close to the imaging optical system of the at least one of the imaging units.05-09-2013
20130113887APPARATUS AND METHOD FOR MEASURING 3-DIMENSIONAL INTEROCULAR CROSSTALK - An apparatus and method for measuring 3-dimensional (3D) interocular crosstalk is disclosed. A light sensor detects luminance of a stereoscopic image displayed in a display and outputs a luminance value indicating the detected luminance. A controller calculates 3D interocular crosstalk based on a gray difference and a residual luminance ratio.05-09-2013
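The abstract does not spell out its formula; shown below, for orientation only, is the commonly used interocular-crosstalk ratio for stereoscopic displays, which may differ from the patent's "gray difference and residual luminance ratio" computation.

```python
def interocular_crosstalk(l_unintended, l_intended, l_black):
    """Commonly used crosstalk ratio for stereoscopic displays (percent):

        crosstalk = (L_leak - L_black) / (L_signal - L_black) * 100

    l_unintended -- luminance measured at the eye that should see black while
                    the other eye's view shows the test gray/white level
    l_intended   -- luminance measured when the same eye shows that level itself
    l_black      -- luminance with both views black (residual/black level)"""
    return (l_unintended - l_black) / (l_intended - l_black) * 100.0
```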
201301138863D Image Photographing Apparatus and Method - A three-dimensional (3D) image photographing apparatus including: an image sensor; a 3D shutter which is disposed on the light path of the image light and comprises a first opening and a second opening, and which sequentially executes a first operation of passing a first image light by opening only the first opening, a second operation of blocking both the first opening and the second opening, and a third operation of passing a second image light by opening only the second opening; and a control unit which is electrically connected to the image sensor and the 3D shutter and which synchronizes a vertical synchronization signal which is applied to the image sensor with a starting time of the second operation by controlling the image sensor and the 3D shutter.05-09-2013
20130113885THREE-DIMENSIONAL IMAGING USING A SINGLE CAMERA - The attenuation and other optical properties of a medium are exploited to measure a thickness of the medium between a sensor and a target surface. Disclosed herein are various mediums, arrangements of hardware, and processing techniques that can be used to capture these thickness measurements and obtain three-dimensional images of the target surface in a variety of imaging contexts. This includes general techniques for imaging interior/concave surfaces as well as exterior/convex surfaces, as well as specific adaptations of these techniques to imaging ear canals, human dentition, and so forth.05-09-2013
20130127998MEASUREMENT APPARATUS, INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, AND STORAGE MEDIUM - A measurement apparatus includes a projection unit configured to project pattern light onto an object to be measured, an imaging unit configured to capture an image of the object to be measured on which the pattern light is projected to acquire a captured image of the object to be measured, a measurement unit configured to measure a position and/or orientation of the object to be measured on the basis of the captured image, a position and orientation of the projection unit, and a position and orientation of the imaging unit, a setting unit configured to set identification resolution of the pattern light using a range of variation in the position and/or orientation of the object to be measured; and a change unit configured to change a pattern shape of the pattern light in accordance with the identification resolution.05-23-2013
20130127995PREPROCESSING APPARATUS IN STEREO MATCHING SYSTEM - A preprocessing apparatus in a stereo matching system is provided. In the preprocessing apparatus, coordinate information of a stereo camera is stored, a new address of each pixel is specified using the coordinate information, and left and right images received from the stereo camera are rectified using the new pixel addresses.05-23-2013
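As an illustration of rectification by precomputed per-pixel addresses, a small NumPy sketch that remaps an image through lookup tables derived from stored camera calibration; nearest-neighbour sampling and the map layout are simplifying assumptions.

```python
import numpy as np

def rectify_with_lookup(image, map_x, map_y):
    """Rectify an image using precomputed per-pixel source addresses.

    map_x, map_y -- arrays of the destination image's shape giving, for each
    output pixel, the (x, y) coordinate to fetch from the raw camera image
    (nearest-neighbour sampling for simplicity). Such maps would be derived
    from the stored stereo-camera coordinate information."""
    src_x = np.clip(np.round(map_x).astype(int), 0, image.shape[1] - 1)
    src_y = np.clip(np.round(map_y).astype(int), 0, image.shape[0] - 1)
    return image[src_y, src_x]
```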
20130127994VIDEO COMPRESSION USING VIRTUAL SKELETON - Optical sensor information captured via one or more optical sensors imaging a scene that includes a human subject is received by a computing device. The optical sensor information is processed by the computing device to model the human subject with a virtual skeleton, and to obtain surface information representing the human subject. The virtual skeleton is transmitted by the computing device to a remote computing device at a higher frame rate than the surface information. Virtual skeleton frames are used by the remote computing device to estimate surface information for frames that have not been transmitted by the computing device.05-23-2013
20130169757IMAGE PICKUP APPARATUS THAT DETERMINES SHOOTING COMPOSITION, METHOD OF CONTROLLING THE SAME, AND STORAGE MEDIUM - An image pickup apparatus capable of generating image signals for viewing images shot in a composition (vertical or horizontal) intended by a photographer as a three-dimensional image. The apparatus has an image pickup device for converting an optical image to a picked-up image signal as an electric signal. The device includes a plurality of unit pixels, each of which has a plurality of photo diodes for converting the optical image to the picked-up image signal. When an image pickup operation is performed, a posture of the image pickup apparatus is determined, and the plurality of photo diodes in each unit pixel are grouped into a plurality of photo diode groups according to a result of the determination. A plurality of image signals are generated from picked-up image signals output from the photo diode groups, respectively.07-04-2013
20130169755SIGNAL PROCESSING DEVICE FOR PROCESSING PLURALITY OF 3D CONTENT, DISPLAY DEVICE FOR DISPLAYING THE CONTENT, AND METHODS THEREOF - A display device includes a plurality of reception units receiving a plurality of content, a storage unit, a plurality of scaler units reducing data sizes of the plurality of content, storing the respective content with the reduced data sizes in the storage unit, and reading the respective content stored in the storage unit according to an output timing, a plurality of frame rate conversion units converting frame rates of the respective read content, and a video output unit combining and displaying the respective content output from the plurality of frame rate conversion units. Accordingly, the resources can be minimized.07-04-2013
20130169754AUTOMATIC INTELLIGENT FOCUS CONTROL OF VIDEO - The invention is directed to systems, methods and computer program products for providing focus control for an image-capturing device. An exemplary method includes capturing an image frame using an image-capturing device and recording video tracking data and gaze tracking data for one or more image frames following the captured image frame and/or for one or more images frames prior to the captured image frame. The exemplary method additionally includes calculating a focus distance and a depth of field based at least partially on the recorded video tracking data and recorded gaze tracking data. The exemplary method additionally includes displaying the captured image frame based on the calculated focus distance and the calculated depth of field.07-04-2013
201301279973D IMAGE PICKUP OPTICAL APPARATUS AND 3D IMAGE PICKUP APPARATUS - An optical apparatus used for a 3D image pickup apparatus for taking two subject images having a disparity by using two lens apparatuses, each of which is directly connectable to an image pickup apparatus, and one image pickup apparatus, the optical apparatus including: a first attaching unit for detachably attaching a first lens apparatus; a second attaching unit for detachably attaching a second lens apparatus; a camera attaching unit for detachably attaching the image pickup apparatus, the image pickup apparatus including an image pickup portion; and a switch unit for alternately switching light rays from the first and second lens apparatuses to guide the light ray to the image pickup apparatus in a state that the first and second lens apparatuses and the image pickup apparatus are connected to the optical apparatus. Intermediate images are formed in the optical apparatus by the first and second lens apparatuses.05-23-2013
20130127993METHOD FOR STABILIZING A DIGITAL VIDEO - A method for stabilizing an input digital video. Input camera positions are determined for each of the input video frames, and an input camera path is determined representing input camera position as a function of time. A smoothing operation is applied to the input camera path to determine a smoothed camera path, and a corresponding sequence of smoothed camera positions. A stabilized video frame is determined corresponding to each of the smoothed camera positions by: selecting an input video frame having a camera position near to the smoothed camera position; warping the selected input video frame responsive to the input camera position; warping a set of complementary video frames captured from different camera positions than the selected input video frame; and combining the warped input video frame and the warped complementary video frames to form the stabilized video frame.05-23-2013
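A minimal sketch of the path-smoothing step described above: moving-average smoothing of per-frame camera positions and the per-frame correction needed to move each frame onto the smoothed path. It treats stabilization as pure translation and omits the warping and complementary-frame combination stages, so it is only a simplified illustration with assumed function names.

```python
import numpy as np

def smooth_camera_path(positions, window=15):
    """Moving-average smoothing of a camera path.
    positions -- array of shape (n_frames, dims), e.g. (x, y) translation per frame."""
    positions = np.asarray(positions, dtype=np.float64)
    kernel = np.ones(window) / window
    smoothed = np.vstack([
        np.convolve(positions[:, d], kernel, mode="same")
        for d in range(positions.shape[1])
    ]).T
    return smoothed

def stabilising_shifts(input_path, smoothed_path):
    """Per-frame correction that would move each input frame onto the smoothed
    path; each frame would then be warped (here, simply translated) by this amount."""
    return smoothed_path - np.asarray(input_path, dtype=np.float64)
```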
20130135440Aerial Photograph Image Pickup Method And Aerial Photograph Image Pickup Apparatus - The aerial photograph image pickup method comprises a first step of acquiring still images along an outward route and a return route respectively, a second step of preparing a stereo-image with regard to three images adjacent to each other in the advancing direction and of preparing another stereo-image by relative orientation on one more set of adjacent images, thereby preparing two sets of stereo-images, a third step of connecting the two sets of stereo-images by using feature points extracted from a portion of an image common to the two sets of stereo-images, a step of connecting all stereo-images in the outward route direction and in the return route direction according to images acquired in the first step by repeating the second and third steps, and a step of selecting common tie points from the images adjacent to each other in the adjacent course and connecting the adjacent stereo-images in the course.05-30-2013
20130135438GATE CONTROL SYSTEM AND METHOD - A gate control system includes a system control unit, a gate device, and an image obtaining unit. The system control unit further includes a gate control unit and a determination unit. The image obtaining unit captures images of a scene in an operating area of the gate device and obtains distance information. The gate control unit generates three-dimensional data according to the captured images and the distance information, and the determination unit determines, according to the three-dimensional data, whether a person has passed through the gate device. The system control unit controls the gate device according to the determination. The disclosure further provides a gate control method.05-30-2013
20130141542THREE-DIMENSIONAL OBJECT DETECTION DEVICE AND THREE-DIMENSIONAL OBJECT DETECTION METHOD - A three-dimensional object detection device 06-06-2013
20130141541COMPACT CAMERA ACTUATOR AND COMPACT STEREO-SCOPIC IMAGE PHOTOGRAPHING DEVICE - The purpose of the present invention is to provide a compact three-dimensional image photographing device capable of adjusting the angle of view on an object being picked up on the image sensor by moving the two lenses horizontally, left and right, to adjust the space between them. The compact three-dimensional image photographing device of the present invention comprises a case; a first holder and a second holder mounted spaced apart from each other on the left and right sides in the case so that the holders can move in the left and right directions, each of the holders having a compact camera actuator therein; a guide shaft, passing through the first and second holders and thus mounted on the case, for guiding the left and right movements of the first holder and the second holder; and left and right driving portions, mounted respectively on the first holder and the second holder, for moving the first holder and the second holder left and right.05-30-2013
20130141540TARGET LOCATING METHOD AND A TARGET LOCATING SYSTEM - A target locating method and a target locating system. Images of a target area are recorded utilizing recording devices carried by a vehicle. The recorded images of the target area are matched with a corresponding three dimensional area of a three dimensional map, including transferring a target indicator from the recorded images of the target area to the three dimensional map of the corresponding target area. The coordinates of the target indicator position are read in the three dimensional map. The read coordinates of the target indicator position are made available to position-requiring equipment.06-06-2013
20130141539MONOCULAR STEREOSCOPIC IMAGING DEVICE - The monocular stereoscopic imaging device according to one aspect of the presently disclosed subject matter includes: an imaging optical system including a zoom lens and a diaphragm; a pupil dividing unit configured to divide a light flux having passed through a imaging optical system into multiple light fluxes; an imaging unit configured to receive the multiple light fluxes, so as to continuously acquire a left-eye image and a right-eye image; and a controlling unit configured to control a zoom lens driving unit to move the zoom lens in accordance with an instruction of changing the focus distance, and configured to control the diaphragm driving unit to maintain at a substantially constant level a stereoscopic effect of the left-eye image and the right-eye image three-dimensionally displayed on a display unit before and after the zoom lens is moved.06-06-2013
20130141538SYSTEM AND METHOD FOR DEPTH FROM DEFOCUS IMAGING - An imaging system includes a positionable device configured to axially shift an image plane, wherein the image plane is generated from photons emanating from an object and passing through a lens, a detector plane positioned to receive the photons of the object that pass through the lens, and a computer programmed to characterize the lens as a mathematical function, acquire two or more elemental images of the object with the image plane of each elemental image at different axial positions with respect to the detector plane, determine a focused distance of the object from the lens, based on the characterization of the lens and based on the two or more elemental images acquired, and generate a depth map of the object based on the determined distance.06-06-2013
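For illustration, a coarse two-image variant of the idea described above: per block, compare sharpness between two elemental images taken with the image plane at different axial positions and assign the in-focus object distance given by the thin-lens equation (used here as the lens "characterization"). A real depth-from-defocus system would model the blur and interpolate between positions; this selection-only sketch and its parameter names are assumptions.

```python
import numpy as np

def sharpness(block):
    """Gradient-energy sharpness measure for a block."""
    gy, gx = np.gradient(block.astype(np.float64))
    return float((gx ** 2 + gy ** 2).mean())

def object_distance(image_dist_mm, focal_mm):
    """Thin-lens 'characterization' of the lens: 1/f = 1/s_o + 1/s_i,
    so s_o = s_i * f / (s_i - f) for s_i > f."""
    return image_dist_mm * focal_mm / (image_dist_mm - focal_mm)

def depth_map_two_positions(img_a, img_b, image_dist_a_mm, image_dist_b_mm,
                            focal_mm, block=16):
    """For each block, pick whichever axial image-plane position yields the
    sharper image and assign the corresponding in-focus object distance."""
    h, w = img_a.shape
    depth = np.zeros((h, w), dtype=np.float64)
    d_a = object_distance(image_dist_a_mm, focal_mm)
    d_b = object_distance(image_dist_b_mm, focal_mm)
    for y in range(0, h, block):
        for x in range(0, w, block):
            s_a = sharpness(img_a[y:y + block, x:x + block])
            s_b = sharpness(img_b[y:y + block, x:x + block])
            depth[y:y + block, x:x + block] = d_a if s_a >= s_b else d_b
    return depth
```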
20130141537Methodology For Performing Depth Estimation With Defocused Images Under Extreme Lighting Conditions - A methodology for performing a depth estimation procedure with defocused images under extreme lighting conditions includes a camera device with a sensor for capturing blur images of a photographic target under extreme lighting conditions. The extreme lighting conditions may include over-exposed conditions and/or under-exposed conditions. The camera device also includes a depth generator that performs the depth estimation procedure by utilizing the captured blur images. The depth estimation procedure includes a clipped-pixel substitution procedure to compensate for the extreme lighting conditions.06-06-2013
20110211045METHOD AND SYSTEM FOR PRODUCING MULTI-VIEW 3D VISUAL CONTENTS - A method for producing 3D multi-view visual contents including capturing a visual scene from at least one first point of view for generating a first bidimensional image of the scene and a corresponding first depth map indicative of a distance of different parts of the scene from the first point of view. The method further includes capturing the visual scene from at least one second point of view for generating a second bidimensional image; processing the first bidimensional image to derive at least one predicted second bidimensional image predicting the visual scene captured from the at least one second point of view; deriving at least one predicted second depth map predictive of a distance of different parts of the scene from the at least one second point of view by processing the first depth map, the at least one predicted second bidimensional image and the second bidimensional image.09-01-2011
20110221867method and device for optically aligning the axles of motor vehicles - A method and a device are provided for the optical axle alignment of wheels of a motor vehicle. At the wheels that are to be aligned, targets are mounted, having optically recordable marks, the targets being recordable by measuring units that have stereo camera devices. In a referencing process, using a referencing device that is integrated into the measuring units, a measuring position reference system is established for the measuring units. In a calibration process, in which a local 3D coordinate system is established using at least three marks of the target, the determination of a reference plane is carried out, using a significant mark of the target. Finally, using the reference plane, a vehicle longitudinal center plane is ascertained, while taking into account the measuring position reference system. In a subsequent measuring process, an image is recorded of at least three indeterminate marks taken during the calibration process, and their spatial position is ascertained in the local 3D coordinate system by the evaluation unit.09-15-2011
20110221866COMPUTER-READABLE STORAGE MEDIUM HAVING STORED THEREIN DISPLAY CONTROL PROGRAM, DISPLAY CONTROL APPARATUS, DISPLAY CONTROL SYSTEM, AND DISPLAY CONTROL METHOD - An image display apparatus includes a stereoscopic image display apparatus configured to display a stereoscopically visible image, and a planar image display apparatus configured to display a planar image. An adjustment section of the image display apparatus adjusts relative positions, relative sizes, and relative rotations of a left-eye image taken by a left-eye image imaging section and a right-eye image taken by a right-eye image imaging section. The adjusted left-eye image and the adjusted right-eye image are viewed by the left eye and the right eye of the user, respectively, thereby displaying the stereoscopic image on the stereoscopic image display apparatus. The adjusted left-eye image and the adjusted right-eye image are made semi-transparent and superimposed one on the other, and thus a resulting superimposed planar image is displayed on the planar image display apparatus.09-15-2011
20110254924OPTRONIC SYSTEM AND METHOD DEDICATED TO IDENTIFICATION FOR FORMULATING THREE-DIMENSIONAL IMAGES - The invention relates to an optronic system for identifying an object including a photosensitive sensor, communication means and a computerized processing means making it possible to reconstruct the object in three dimensions on the basis of the images captured by the sensor and to identify the object on the basis of the reconstruction. The photosensitive sensor is able to record images of the object representing the intensity levels of an electromagnetic radiation reflected by the surface of the object captured from several observation angles around the object and the communication means are able to transmit the said images to the computerized processing means to reconstruct the object in three dimensions by means of a tomography function configured to process the images of the object representing the intensity levels of an electromagnetic radiation reflected by the surface of the object. The invention also relates to a method of computerized processing for object identification by reconstruction of the object in three dimensions. The invention is applied to the field of target detection, to the medical field and also microelectronics, for example.10-20-2011
20130147922STEREO IMAGE PROCESSING APPARATUS AND STEREO IMAGE PROCESSING METHOD - Provided is a stereo image processing apparatus wherein parallax can be calculated with high precision. A window-function shifting unit (06-13-2013
20130147916MICROSCOPY SYSTEM AND METHOD FOR CREATING THREE DIMENSIONAL IMAGES USING PROBE MOLECULES06-13-2013
20130147917COMPUTING DEVICE AND HOUSEHOLD MONITORING METHOD USING THE COMPUTING DEVICE - In a household monitoring method using a computing device, the computing device is connected to one or more depth-sensing cameras and an alarm device. The computing device controls the depth-sensing cameras to capture real-time images of monitored areas in front of the depth-sensing cameras. A presence of a person is detected from the images. If the person is detected to be in exigency, the computing device notifies relevant personnel of the exigency.06-13-2013
20130147918STEREO IMAGE GENERATION APPARATUS AND METHOD - There is provided a stereo image generation apparatus which extracts a plurality of sets of feature points from each image of a subject formed in a left-half area and a right-half area captured using a stereo adapter, such that the feature points in each set correspond to the same point on the subject. The apparatus then calculates a set of correction parameters based on the sets of feature points and an evaluation value indicating the degree of likelihood that a point is a correct feature point with respect to a corresponding extracted feature point. For a given feature point of interest extracted from one of the left-half and right-half areas, the evaluation value is high in a possible shifting area within which image shifting may occur due to distortion caused by the structure of the stereo adapter and the mounting position error of the stereo adapter.06-13-2013
20130147919Multi-View Diffraction Grating Imaging With Two-Dimensional Displacement Measurement For Three-Dimensional Deformation Or Profile Output - Hardware and software methodology is described for three-dimensional imaging in connection with an optical transmission grating used to achieve a plurality of views of an area or the entirety of an object imaged with a single digital camera for recording and subsequent processing. Such processing produces three-dimensional data (in terms of element displacement and/or object profile) using a two-dimensional displacement measurement technique only.06-13-2013
20130147920IMAGING DEVICE - The imaging device includes an optical system, an imaging unit, and a control unit. The optical system is configured to include a focus lens. The imaging unit is configured to capture a left-eye subject and a right-eye subject via the optical system. An image captured by the imaging unit includes a left-eye image for the left-eye subject and a right-eye image for the right-eye subject. The control unit is configured to generate a first AF evaluation value for the left-eye image and a second AF evaluation value for the right-eye image. The control unit generates a third AF evaluation value on the basis of the first AF evaluation value and the second AF evaluation value. The control unit controls the drive of the focus lens on the basis of the third AF evaluation value.06-13-2013
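(Illustrative aside, not the device's actual rule: a minimal Python sketch of contrast-based AF scores for the two eye images and one possible way to combine them into a third value. The gradient-energy score and the min-combination are assumptions.)

```python
# Sketch: per-view contrast AF score plus an example combination rule.
import numpy as np

def af_evaluation(img):
    """Simple contrast measure: sum of squared horizontal gradients."""
    g = np.diff(img.astype(float), axis=1)
    return float((g * g).sum())

def combined_af(left_img, right_img):
    first = af_evaluation(left_img)
    second = af_evaluation(right_img)
    third = min(first, second)   # e.g. drive focus toward the weaker of the two views
    return first, second, third
```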
20130147921Generation of patterned radiation - Imaging apparatus includes an illumination assembly, including a plurality of radiation sources and projection optics, which are configured to project radiation from the radiation sources onto different, respective regions of a scene. An imaging assembly includes an image sensor and objective optics configured to form an optical image of the scene on the image sensor, which includes an array of sensor elements arranged in multiple groups, which are triggered by a rolling shutter to capture the radiation from the scene in successive, respective exposure periods from different, respective areas of the scene so as to form an electronic image of the scene. A controller is coupled to actuate the radiation sources sequentially in a pulsed mode so that the illumination assembly illuminates the different, respective areas of the scene in synchronization with the rolling shutter.06-13-2013
20120274746METHOD FOR DETERMINING A SET OF OPTICAL IMAGING FUNCTIONS FOR THREE-DIMENSIONAL FLOW MEASUREMENT - The invention relates to a method for determining a set of optical imaging functions that describe the imaging of a measuring volume onto each of a plurality of detector surfaces on which the measuring volume can be imaged, in each case at a different observation angle, by means of detection optics. In addition to the assignment of in each case one image position (x, y) to each volume position (X, Y, Z), the method according to the invention envisages that the shape of the image of a punctiform particle in the measuring volume be described by shape parameter values (a, b, φ, I) and that the corresponding set of shape parameter values be assigned to each volume position (X, Y, Z) for each detector surface.11-01-2012
20120274744STRUCTURED LIGHT IMAGING SYSTEM - Structured light imaging method and systems are described. An imaging method generates a stream of light pulses, converts the stream after reflection by a scene to charge, stores charge converted during the light pulses to a first storage element, and stores charge converted between light pulses to a second storage element. A structured light image system includes an illumination source that generates a stream of light pulses and an image sensor. The image sensor includes a photodiode, first and second storage elements, first and second switches, and a controller that synchronizes the image sensor to the illumination source and actuates the first and second switches to couple the first storage element to the photodiode to store charge converted during the light pulses and to couple the second storage element to the photodiode to store charge converted between the light pulses.11-01-2012
20100309290SYSTEM FOR CAPTURE AND DISPLAY OF STEREOSCOPIC CONTENT - A system for providing capture and display of stereoscopic content with interchangeable portable devices, the system including a portable transmitter/receiver device configured to communicate stereoscopic content between a first device where an image originates and a second device through which a user may view the stereoscopic content, wherein the portable transmitter/receiver device is configured to communicate through wired and/or wireless communication.12-09-2010
201003092893D Positioning Apparatus and Method - A 3D positioning apparatus is used for an object that includes feature points and a reference point. The object undergoes movement from a first to a second position. The 3D positioning apparatus includes: an image sensor for capturing images of the object; and a processor for calculating, based on the captured images, initial coordinates of each feature point when the object is in the first position, initial coordinates of the reference point, final coordinates of the reference point when the object is in the second position, and final coordinates of each feature point. The processor calculates 3D translational information of the feature points using the initial and final coordinates of the reference point, and 3D rotational information of the feature points using the initial and final coordinates of each feature point. A 3D positioning method is also disclosed.12-09-2010
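(Illustrative aside, not the patented computation: a short Python sketch of recovering translation from the reference point and rotation from the feature points with a standard least-squares (Kabsch/SVD) fit. The function name and the use of the Kabsch method are assumptions.)

```python
# Translation from the reference point; rotation fitted to the feature-point sets.
import numpy as np

def positioning(ref_init, ref_final, feats_init, feats_final):
    """ref_*: (3,) reference-point coordinates; feats_*: (N, 3) feature-point coordinates."""
    translation = np.asarray(ref_final, dtype=float) - np.asarray(ref_init, dtype=float)
    p = np.array(feats_init, dtype=float)
    q = np.array(feats_final, dtype=float)
    p -= p.mean(axis=0)
    q -= q.mean(axis=0)
    u, _, vt = np.linalg.svd(p.T @ q)                    # Kabsch: H = P^T Q
    d = np.sign(np.linalg.det(vt.T @ u.T))               # guard against reflections
    rotation = vt.T @ np.diag([1.0, 1.0, d]) @ u.T
    return translation, rotation
```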
20110242284VIDEO PLAYBACK DEVICE - An image reproducing apparatus capable of outputting a 3D image signal or a non-3D image signal which can display a stereoscopic or a non-stereoscopic image to an image display apparatus, including: an AV processing unit operable to input data of contents and generate the 3D or non-3D image signal from the contents data; an output unit operable to output the 3D or non-3D image signal generated by the AV processing unit to the display apparatus in accordance with a 3D image output format being a format for outputting an image signal for stereoscopic display; and a receiving unit operable to receive an instruction inputted by a user. In a case where the receiving unit receives the instruction to display the contents in non-3D images when the output unit outputs the 3D image signal in accordance with the 3D image output format, the AV processing unit generates the non-3D image signal from the contents data and the output unit outputs the non-3D image signal to the display apparatus in accordance with the 3D image output format.10-06-2011
20110242283Method and Apparatus for Generating Texture in a Three-Dimensional Scene - In one aspect of the teachings herein, a 3D imaging apparatus uses a “texture tool” or facilitates the use of such a tool by an operator, to add artificial texture to a scene being imaged, for 3D imaging of surfaces or background regions in the scene that otherwise lack sufficient texture for determining sufficiently dense 3D range data. In at least one embodiment, the 3D imaging apparatus provides for iterative or interactive imaging, such as by dynamically indicating that one or more regions of the scene require artificial texturing via the texture tool for accurate 3D imaging, and then indicating whether or not the addition of artificial texture cured the texture deficiency and/or whether any other regions within the image require additional texture.10-06-2011
20110242282Signal processing device, signal processing method, display device, and program product - A signal processing device includes a synchronization separation unit that separates horizontal and vertical synchronization signals from image signals, a dot counter which counts the number of dots of the image signals, a line counter which counts the number of lines of the image signals, a determination unit which determines the number of pixels in an image display area based on the number of dots and the number of lines, a control unit which controls the timing for shifting and outputting either of the left or the right image signal so that a left or a right image is displayed side by side in a display area in a size where a user can recognize the left or the right image among display areas in a display unit, and a first image signal shift unit which outputs the left or the right image signal to the display unit.10-06-2011
20110242280PARALLAX IMAGE GENERATING APPARATUS AND METHOD - According to one embodiment, a parallax image generating apparatus is for generating, using a first image, parallax images with a parallax therebetween. The apparatus includes the following units. The first estimation unit estimates distribution information items indicating distributions of first depths in the first image by using first methods. The distribution information items fall within a depth range to be reproduced. The first combination unit combines the distribution information items to generate first depth information. The second calculation unit calculates second depth information indicating relative unevenness of an object in the first image. The third combination unit combines the first depth information and the second depth information by using a method different from the first methods, to generate third depth information. The generation unit generates the parallax images based on the third depth information and the first image.10-06-2011
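(Illustrative aside, not the apparatus's actual combination rule: a Python sketch of blending global depth estimates with a relative-unevenness term and synthesizing one parallax view by a depth-dependent horizontal shift. The weights, the additive gain and the shift rule are assumptions.)

```python
# Blend depth estimates, add relative detail, then shift pixels horizontally.
import numpy as np

def combine_depths(global_depths, unevenness, weights=None, gain=0.2):
    weights = weights or [1.0 / len(global_depths)] * len(global_depths)
    blended = sum(w * d for w, d in zip(weights, global_depths))   # first depth information
    return blended + gain * unevenness                             # add second depth information

def synthesize_view(image, depth, max_shift=8):
    h, w = depth.shape
    out = np.zeros_like(image)
    shift = np.rint(depth / max(float(depth.max()), 1e-6) * max_shift).astype(int)
    for y in range(h):
        for x in range(w):
            nx = x - shift[y, x]
            if 0 <= nx < w:
                out[y, nx] = image[y, x]      # holes are left at disoccluded pixels
    return out
```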
20110234761THREE-DIMENSIONAL OBJECT EMERGENCE DETECTION DEVICE - Provided is a three-dimensional object emergence detecting device capable of detecting the emergence of a three-dimensional object rapidly and correctly at low cost.09-29-2011
201102347603D IMAGE SIGNAL TRANSMISSION METHOD, 3D IMAGE DISPLAY APPARATUS AND SIGNAL PROCESSING METHOD THEREIN - A method for transmitting a 3D image signal, an image display device, and an image signal processing method of the device are provided in order to reduce a collision between depth cues, which may occur in the vicinity of the left and right corners in reproducing a 3D image. In the method for processing an image signal, first, an encoded video signal is obtained. Next, the encoded video signal is decoded to restore a plurality of image signals, and floating window information of each floating window is extracted from a picture header area of the encoded video signal. And then, an image at an inner area of the left or right corner is suppressed according to the floating window information with respect to each of the plurality of images corresponding to the plurality of image signals, and the locally suppressed images are displayed in a stereoscopic manner.09-29-2011
201102347593D MODELING APPARATUS, 3D MODELING METHOD, AND COMPUTER READABLE MEDIUM - A 3D modeling apparatus includes: a generator configured to generate 3D models of a subject based on pairs of images; a selector configured to select a first 3D model and a second 3D model from the 3D models, wherein the second 3D model is to be superimposed on the first 3D model; an extracting unit configured to extract first feature points from the first 3D model and extract second feature points from the second 3D model; an acquiring unit configured to acquire coordinate transformation parameters based on the first and second feature points; a transformation unit configured to transform coordinates of the second 3D model into coordinates in a coordinate system of the first 3D model, based on the coordinate transformation parameters; and a combining unit configured to superimpose the second 3D model having the transformed coordinates on the first 3D model.09-29-2011
20110234758ROBOT DEVICE AND METHOD OF CONTROLLING ROBOT DEVICE - There is provided a robot device including an irradiation unit that irradiates pattern light to an external environment, an imaging unit that acquires an image by imaging the external environment, an external environment recognition unit that recognizes the external environment, an irradiation determining unit that controls the irradiation unit to be turned on when it is determined that irradiation of the pattern light is necessary based on an acquisition status of the image, and a light-off determining unit that controls the irradiation unit to be turned off when it is determined that irradiation of the pattern light is unnecessary or that irradiation of the pattern light is necessary to be forcibly stopped, based on the external environment.09-29-2011
20110234757SUPER RESOLUTION OPTOFLUIDIC MICROSCOPES FOR 2D AND 3D IMAGING - A super resolution optofluidic microscope device comprises a body defining a fluid channel having a longitudinal axis and includes a surface layer proximal the fluid channel. The surface layer has a two-dimensional light detector array configured to receive light passing through the fluid channel and sample a sequence of subpixel shifted projection frames as an object moves through the fluid channel. The super resolution optofluidic microscope device further comprises a processor in electronic communication with the two-dimensional light detector array. The processor is configured to generate a high resolution image of the object using a super resolution algorithm, based on the sequence of subpixel shifted projection frames and a motion vector of the object.09-29-2011
20110234756DE-ALIASING DEPTH IMAGES - Techniques are provided for de-aliasing depth images. The depth image may have been generated based on phase differences between a transmitted and received modulated light beam. A method may include accessing a depth image that has a depth value for a plurality of locations in the depth image. Each location has one or more neighbor locations. Potential depth values are determined for each of the plurality of locations based on the depth value in the depth image for the location and potential aliasing in the depth image. A cost function is determined based on differences between the potential depth values of each location and its neighboring locations. Determining the cost function includes assigning a higher cost for greater differences in potential depth values between neighboring locations. The cost function is substantially minimized to select one of the potential depth values for each of the locations.09-29-2011
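(Illustrative aside, not the patented optimization: a Python sketch of phase-unwrapping-style de-aliasing, where each pixel's candidate depths are d + k·R for an ambiguity range R and a neighbor-smoothness cost is reduced with a few iterated-conditional-modes passes. The candidate count, iteration count and the use of ICM are assumptions; np.roll wraps at image borders, which a real implementation would handle.)

```python
# Enumerate wrap candidates per pixel and greedily pick the one that best agrees
# with its four neighbors; repeat for a few passes. Sketch only.
import numpy as np

def dealias(depth, ambiguity_range, num_wraps=3, iters=5):
    base = depth.astype(float)
    candidates = np.stack([base + k * ambiguity_range for k in range(num_wraps)], axis=-1)
    current = candidates[..., 0].copy()                      # start with k = 0 everywhere
    for _ in range(iters):
        cost = np.zeros_like(candidates)
        for axis in (0, 1):
            for step in (1, -1):
                neighbor = np.roll(current, step, axis=axis)
                cost += np.abs(candidates - neighbor[..., None])  # higher cost for bigger jumps
        labels = np.argmin(cost, axis=-1)
        current = np.take_along_axis(candidates, labels[..., None], axis=-1)[..., 0]
    return current
```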
20100315489TRANSPORT OF STEREOSCOPIC IMAGE DATA OVER A DISPLAY INTERFACE - A digital display interface (12-16-2010
20100315488Conversion device and method converting a two dimensional image to a three dimensional image - Disclosed is an image conversion device and method converting a two-dimensional (2D) image into a three-dimensional (3D) image. The image conversion device may selectively adjust illumination within the 2D image, generate a disparity map for the illumination adjusted image, and selectively adjust a depth value of the disparity map based on edge discrimination.12-16-2010
20130155195Method and system for object reconstruction - A system and method are presented for use in the object reconstruction. The system comprises an illuminating unit, and an imaging unit (see FIG. 06-20-2013
20130155196METHOD AND APPARATUS FOR COMMUNICATING USING 3-DIMENSIONAL IMAGE DISPLAY - Provided is a communication method using a three-dimensional (3D) image display device. In the communication method, motion information is determined using a motion image obtained by photographing a user's motion indicating the user's request in relation to an opposite party, distance information indicating the distance between the user who is moving and the 3D image display device is determined, and then, the user's request is determined based on the motion information and the distance information.06-20-2013
20130155190DRIVING ASSISTANCE DEVICE AND METHOD - An exemplary driving assistance method includes obtaining images of a surrounding environment of a vehicle captured by cameras mounted on the vehicle, each of the captured images comprising distance information indicating a distance between the corresponding camera and an object captured by the corresponding camera. Next, the method includes extracting the distance information from the obtained captured images. The method then creates 3D models based on the extracted distance information, coordinates of each pixel of the at least one captured image and a reference point determined according to the captured images. Further, the method includes controlling display devices to display the created 3D models.06-20-2013
20130155191DEVICE FOR MEASURING THREE DIMENSIONAL SHAPE - A device for measuring three dimensional shape includes a first irradiation unit, a first grating control unit, a second irradiation unit, a second grating control unit, an imaging unit, and an image processing unit. After performance of a first imaging operation as imaging processing of a single operation among a multiplicity of imaging operations performed by irradiation of said first light pattern of multiply varied phases, a second imaging operation is performed as imaging processing of a single operation among a multiplicity of imaging operations performed by irradiation of said second light pattern of multiply varied phases. After completion of the first imaging operation and the second imaging operation, shifting or switching operation of the first grating and the second grating is performed simultaneously.06-20-2013
20130155192STEREOSCOPIC IMAGE SHOOTING AND DISPLAY QUALITY EVALUATION SYSTEM AND METHOD APPLICABLE THERETO - A stereoscopic image shooting system including an image shooting module and a score evaluation module is provided. The image shooting module is used for shooting a plurality of multi-view images of an object. The score evaluation module analyzes a plurality of stereoscopic images formed from the multi-view images to calculate a stereoscopic quality score of the stereoscopic images.06-20-2013
20130155193IMAGE QUALITY EVALUATION APPARATUS AND METHOD OF CONTROLLING THE SAME - Autocorrelation coefficients for three dimensions defined by the horizontal direction, the vertical direction, and the time direction of evaluation target moving image data are acquired. A plurality of noise amounts are calculated by executing frequency analysis of the acquired autocorrelation coefficients for the three dimensions and multiplying each frequency analysis result by a visual response function representing the visual characteristic of a spatial frequency or a time frequency. The product of the plurality of calculated noise amounts is calculated as the moving image noise evaluation value of the evaluation target moving image data.06-20-2013
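(Illustrative aside, not the patented evaluation: a one-dimensional Python sketch of weighting the spectrum of an autocorrelation sequence by a contrast-sensitivity-style response curve and summing the result into a noise amount. The Gaussian-shaped response curve and its parameters are placeholders; a real implementation would do this along all three dimensions and multiply the resulting amounts.)

```python
# Frequency-weighted noise amount from a 1D autocorrelation sequence. Sketch only.
import numpy as np

def noise_amount(autocorr, peak_freq=0.2, bandwidth=0.15):
    spectrum = np.abs(np.fft.rfft(autocorr))
    freqs = np.fft.rfftfreq(len(autocorr))
    response = np.exp(-((freqs - peak_freq) ** 2) / (2 * bandwidth ** 2))  # placeholder CSF
    return float(np.sum(spectrum * response))
```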
20130155194ANAGLYPHIC STEREOSCOPIC IMAGE CAPTURE DEVICE - The device comprises an aperture stop disc divided into a plurality of mutually exclusive filtering segments comprising a first, a second and a third filtering segment; the third filtering segment is adapted to pass a third portion of the spectrum which is included in the portion of the spectrum passing the first and second filtering segments.06-20-2013
20130155187MOBILE DEVICE CAPTURE AND DISPLAY OF MULTIPLE-ANGLE IMAGERY OF PHYSICAL OBJECTS - Methods and systems for capturing and displaying multiple-angle imagery of physical objects are presented. With respect to capturing, multiple images of an object are captured from varying angles in response to user input. The images are analyzed to determine whether at least one additional image is desirable to allow generation of a visual presentation of the object. The user is informed to initiate capturing of the at least one more image based on the analysis. The additional image is captured in response to second user input. The presentation is generated based on the multiple images and the additional image. For displaying, a visual presentation of an object is accessed, the presentation having multiple images of the object from varying angles. The presentation is presented to the user of a mobile device according to user movement of the device. The user input determines a presentation speed and order of the images.06-20-2013
20130155188THERMAL IMAGING CAMERA WITH COMPASS CALIBRATION - A thermal imaging camera may include an electronic compass that can be calibrated after assembly of the thermal imaging camera. The electronic compass may include a magnetic sensor configured to sense three orthogonal components of a magnetic field. In some examples, the camera includes a processor configured to receive a plurality of measurements from the magnetic sensor as a physical orientation of the magnetic sensor is changed in a three-dimensional space. The processor may generate a plurality of data points from the plurality of measurements and control a display so as to display a simulated three-dimensional plot of the data points. The processor may control the display so the display updates in substantially real-time as new data points are generated by changing the physical orientation of the magnetic sensor.06-20-2013
20120281071Optical Scanning Device - A device for scanning a body orifice or surface including a light source and a wide angle lens. The light from the light source is projected in a pattern distal or adjacent to the wide angle lens. Preferably, the pattern is within a focal surface of the wide angle lens. The pattern intersects a surface of the body orifice, such as an ear canal, and defines a partial lateral portion of the pattern extending along the surface. A processor is configured to receive an image of the lateral portion from the wide angle lens and determine a position of the lateral portion in a coordinate system using a known focal surface of the wide angle lens. Multiple lateral portions are reconstructed by the processor to build a three-dimensional shape. This three-dimensional shape may be used for purposes such as diagnostic, navigation, or custom-fitting of medical devices, such as hearing aids.11-08-2012
20130155189OBJECT MEASURING APPARATUS AND METHOD - An exemplary object measuring method includes changing a focal length of a zoom lens in response to a user operation and taking images. The method then displays the images or one of them, determines a selected area, and defines the selected area as representing an object in the image. The method further determines virtual X and Y coordinate differences between a center point of an image and the object in the image. Next, the method calculates the actual differences between the testing device and the object. The method then controls the driving unit to drive the testing device to move a determined distance in an X direction and to move a determined distance in a Y direction.06-20-2013
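(Illustrative aside under a pinhole-camera assumption, not the application's actual computation: a Python sketch of scaling pixel offsets from the image centre to metric offsets using the current focal length and an object distance. All parameter names are illustrative.)

```python
# Convert pixel offsets to real-world offsets: scene mm per sensor mm = Z / f.
def pixel_offset_to_metric(dx_px, dy_px, pixel_pitch_mm, focal_length_mm, distance_mm):
    scale = distance_mm / focal_length_mm
    return dx_px * pixel_pitch_mm * scale, dy_px * pixel_pitch_mm * scale
```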
20130182076ELECTRONIC APPARATUS AND CONTROL METHOD THEREOF - According to one embodiment, an apparatus includes an output module configured to output a video signal, a display configured to display a video based on the video signal on a screen, an image capture module configured to capture an image of an observer and to output image data, a recognition module configured to perform facial recognition or recognition of left-eye and right-eye regions from the image data, a presentation module configured to present a left-eye image displayed on the screen to a left eye and to present a right-eye image displayed on the screen to a right eye based on a recognition result of the recognition module, and a controller configured to inhibit the facial recognition by the recognition module when the recognition module fails in the facial recognition.07-18-2013
20130182077ENHANCED CONTRAST FOR OBJECT DETECTION AND CHARACTERIZATION BY OPTICAL IMAGING - Enhanced contrast between an object of interest and background surfaces visible in an image is provided using controlled lighting directed at the object. Exploiting the falloff of light intensity with distance, a light source (or multiple light sources), such as an infrared light source, can be positioned near one or more cameras to shine light onto the object while the camera(s) capture images. The captured images can be analyzed to distinguish object pixels from background pixels.07-18-2013
20130182078STEREOSCOPIC IMAGE DATA CREATING DEVICE, STEREOSCOPIC IMAGE DATA REPRODUCING DEVICE, AND FILE MANAGEMENT METHOD - Conventional methods have been unable to display stereoscopic images that are safe and have a high degree of freedom because only one type of maximum parallax and one type of minimum parallax are transmitted with a 3-dimensional image. This stereoscopic image data creating device, this stereoscopic image data reproducing device, and this file management method are characterized by comprising: multiplexing 3D information that includes a plurality of sets of image data corresponding to each of a plurality of points of view, a first maximum parallax as the maximum value of a parallax determined geometrically from a mechanism of an imaging portion, a first minimum parallax representing a parallax at the position of a subject at the closest distance from the imaging portion as the limit of the suitable parallax range from the mechanism of the imaging portion, a second maximum parallax as the maximum value of the parallax of the actually generated stereoscopic image, and a second minimum parallax as the minimum value of the parallax of the actually generated stereoscopic image; handling the data as a single set of stereoscopic image data; and determining whether the parallax can be adjusted, a stereoscopic image can be displayed, or the like using the 3D information to allow a stereoscopic image to be displayed more safely and in a more agreeable manner.07-18-2013
20130182075GEOSPATIAL AND IMAGE DATA COLLECTION SYSTEM INCLUDING IMAGE SENSOR FOR CAPTURING 3D GEOSPATIAL DATA AND 2D IMAGE DATA AND RELATED METHODS - A geospatial and image data collection system includes a laser source configured to direct laser radiation toward a geospatial area, and an image sensor. The image sensor is configured to be operable in a first sensing mode to sense reflected laser radiation from the geospatial area representative of three dimensional (3D) geospatial data, and a second sensing mode to sense ambient radiation from the geospatial area representative of two dimensional (2D) image data. In addition, a controller is configured to operate the image sensor in the first and second sensing modes to generate the 3D geospatial data and 2D image data registered therewith.07-18-2013
20110279649DIGITAL PHOTOGRAPHING APPARATUS, METHOD OF CONTROLLING THE SAME, AND COMPUTER-READABLE STORAGE MEDIUM - An apparatus, computer readable medium, and a method of controlling a digital photographing apparatus comprising a plurality of optical systems, the method including deriving shake information from the plurality of optical systems; and determining a base optical system from among the plurality of optical systems according to the shake information.11-17-2011
20130120537SINGLE-LENS 2D/3D DIGITAL CAMERA - A single-lens 2D/3D camera has a light valve placed in relationship to a lens module to control the light beam received by the lens module for forming an image on an image sensor. The light valve has a light valve area positioned in a path of the light beam. The light valve has two or more clearable sections such that only one section is made clear to allow part of the light beam to pass through. By separately making clear different sections on the light valve, a number of images as viewed through slightly different angles can be captured. The clearable sections include a right section and a left section so that the captured images can be used to produce 3D pictures or displays. The clearable sections also include a middle section so that the camera can be used as a 2D camera.05-16-2013
20110304694SYSTEM AND METHOD FOR 3D VIDEO STABILIZATION BY FUSING ORIENTATION SENSOR READINGS AND IMAGE ALIGNMENT ESTIMATES - Methods and systems for generating high-accuracy estimates of the 3D orientation of a camera within a global frame of reference. Orientation estimates may be produced from an image-based alignment method. Other orientation estimates may be taken from a camera-mounted orientation sensor. The alignment-derived estimates may be input to a high pass filter. The orientation estimates from the orientation sensor may be processed and input to a low pass filter. The outputs of the high pass and low pass filters are fused, producing a stabilized video sequence.12-15-2011
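(Illustrative aside, not the patented filter design: a minimal complementary-filter sketch in Python of the high-pass/low-pass fusion idea, treating orientation as a scalar angle per axis rather than a full rotation. The filter form, the shared cutoff parameter alpha and the function name are assumptions.)

```python
# High-pass the alignment-derived angle, low-pass the sensor angle, and sum.
def fuse_orientation(alignment_angles, sensor_angles, alpha=0.95):
    fused = []
    prev_align = alignment_angles[0]
    hp = 0.0                      # high-pass state: keeps fast alignment changes
    lp = sensor_angles[0]         # low-pass state: keeps the slow, drift-free sensor trend
    for a, s in zip(alignment_angles, sensor_angles):
        hp = alpha * (hp + a - prev_align)
        lp = alpha * lp + (1.0 - alpha) * s
        prev_align = a
        fused.append(hp + lp)
    return fused
```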
20110310229PROFILE MEASURING DEVICE, PROFILE MEASURING METHOD, AND METHOD OF MANUFACTURING SEMICONDUCTOR PACKAGE - There is provided a profile measuring device. The profile measuring device includes: a projector which projects a certain pattern on an object to be measured using incoherent light having a plurality of wavelength components; a first imaging device which captures a first image of the object on which the certain pattern is projected; a second imaging device which captures a second image of the object on which the certain pattern is projected; and a computing device which measures a profile of the object based on the first image and the second image.12-22-2011
20130188017Instant Calibration of Multi-Sensor 3D Motion Capture System - A method for instantly determining the mutual geometric positions and orientations between a plurality of 3D motion capture sensors has three or more reference markers mounted fixedly relative to each other on substantially one single plane which are sensed by each sensor. Said method enables said sensors to cooperate as a larger sensing system for 3D motion capture applications without requiring said sensors to be mounted rigidly relative to each other.07-25-2013
20130188018SYSTEM & METHOD FOR PROCESSING STEREOSCOPIC VEHICLE INFORMATION - A stereoscopic measurement system determines relative location of a point on an object based on a stereo image pair of the object. The system comprises an image capture device for capturing a stereo image pair of the object, the image pair comprising a first image and a second image of the object. The system comprises a processing system configurable to designate a first point and a second point on the first image, designate the first point and the second point on the second image, define stereo points based on the designated points, and to calculate a distance between the stereo points.07-25-2013
20130188019System and Method for Three Dimensional Imaging - A method of operating a camera with a microfluidic lens to identify a depth of an object in image data generated by the camera has been developed. The camera generates an image with the object in focus, and a second image with the object out of focus. An image processor generates a plurality of blurred images from image data of the focused image, and identifies blur parameters that correspond to the object in the second image. The depth of the object from the camera is identified with reference to the blur parameters.07-25-2013
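(Illustrative aside, not the patented procedure: a Python sketch of blur matching, where the in-focus image is blurred with candidate Gaussian kernels and the sigma that best explains the defocused image is mapped to depth through a supplied calibration table. The Gaussian blur model, the sigma range and the interpolation step are assumptions.)

```python
# Find the blur parameter that relates the focused and defocused images, then
# look up the corresponding depth in a (sigma, depth) calibration. Sketch only.
import numpy as np
from scipy.ndimage import gaussian_filter

def match_blur_sigma(focused, defocused, sigmas=np.linspace(0.5, 6.0, 12)):
    f = focused.astype(float)
    d = defocused.astype(float)
    errors = [np.mean((gaussian_filter(f, s) - d) ** 2) for s in sigmas]
    return float(sigmas[int(np.argmin(errors))])

def sigma_to_depth(sigma, calibration):
    """calibration: iterable of (sigma, depth) pairs measured for the current lens setting."""
    s, depth = zip(*sorted(calibration))
    return float(np.interp(sigma, s, depth))
```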
20130188020METHOD AND DEVICE FOR DETERMINING DISTANCES ON A VEHICLE - A method for determining distances for chassis measurement of a vehicle having a body and at least one wheel includes determining a center of rotation of a wheel of the vehicle by projecting a structured light pattern at least onto the wheel, recording a light pattern reflected by the wheel using a calibrated imaging sensor system, determining a 3D point cloud from the reflected light pattern, and determining the center of rotation of the wheel from the 3D point cloud. The method also includes determining a point on the body by evaluating the previously determined 3D point cloud or by evaluating a plurality of grey-scale images recorded under unstructured illumination. A height level is determined as a vertical distance between the center of rotation of the wheel and the point on the body.07-25-2013
20120287244NON-COHERENT LIGHT MICROSCOPY - An optical microscope (11-15-2012
20120007952RECORDING CONTROL APPARATUS, SEMICONDUCTOR RECORDING APPARATUS, RECORDING SYSTEM, AND NONVOLATILE STORAGE MEDIUM - A recording control apparatus includes an input unit operable to input plural kinds of image data composing a stereoscopic image or a high-definition image, and a recording controller operable to control recording of the plural kinds of image data input by the input unit, to a nonvolatile storage medium. The recording controller controls the recording so that the plural kinds of image data input by the input unit are recorded to different erase blocks of the nonvolatile storage medium in such a manner that different kinds of image data are not mixed in one erase block of the nonvolatile storage medium.01-12-2012
20120013713IMAGE INTEGRATION UNIT AND IMAGE INTEGRATION METHOD - An image integration unit includes: an imaging section which is installed in a moving body and which images a plurality of time-series images at different times; a three-dimensional image information calculating section which calculates three-dimensional image information in each of the time-series images based on the time-series images imaged by the imaging section; a stationary body area extracting section which extracts stationary body areas in each of the time-series images based on the three-dimensional image information; and an integrating section which calculates the corresponding stationary body areas between the time-series images from each of the stationary body areas extracted in each of the time-series images, and matches the corresponding stationary body areas to integrate the time-series images.01-19-2012
20120019622THERMAL POWERLINE RATING AND CLEARANCE ANALYSIS USING THERMAL IMAGING TECHNOLOGY - A method and apparatus are provided to acquire direct thermal measurements, for example, from a LiDAR collecting vehicle or air vessel, of an overhead electrical conductor substantially simultaneous with collection of 3-dimensional location data of the conductor, and utilize temperature information derived from the direct thermal measurements in line modeling, line rating, thermal line analysis, clearance analysis, and/or vegetation management.01-26-2012
20120019621Transmission of 3D models - A method and an apparatus for transmitting a 3D model associated to stereoscopic content are described, and more specifically a method and an apparatus for the progressive transmission of 3D models. Also described are a method and an apparatus for preparing a 3D model associated to a 3D video frame for transmission and a non-transient recording medium comprising such a prepared 3D model. The 3D model is split into one or more components. It is then determined whether a component of the one or more components is hidden by other 3D content. For transmission those components of the 3D model that are not hidden by other 3D content are transmitted first. The remaining components of the 3D model are transmitted subsequently.01-26-2012
20120019620IMAGE CAPTURE DEVICE AND CONTROL METHOD - An image capture device and method creates a first matrix for an image. A pixel value of each point in the first matrix is compared with a pixel value of a corresponding point in a 3D figure template, to detect a three-dimensional (3D) area in the image. A lens of the image capture device is moved and the focus of the lens is adjusted to ensure that the device captures a clear 3D figure image. A second matrix for the clear 3D figure image is created, and a pixel value of each point in the second matrix is compared with a pixel value of a corresponding point in a 3D facial template, to detect a 3D facial area in the clear 3D figure image. The lens is moved and the focus of the lens is adjusted to ensure that the device captures a clear 3D facial image.01-26-2012
20130194387IMAGE PROCESSING METHOD, IMAGE PROCESSING APPARATUS AND IMAGE-PICKUP APPARATUS - The image processing method includes acquiring parallax images produced by image capturing, performing position matching of the parallax images to calculate difference between the parallax images, and deciding, in the difference, an unnecessary image component different from an image component corresponding to the parallax. The method is capable of accurately deciding the unnecessary image component included in a captured image without requiring image capturing multiple times.08-01-2013
20130194388IMAGING UNIT OF A CAMERA FOR RECORDING THE SURROUNDINGS - An imaging unit of a camera for recording the surroundings has an image sensor with a lens for imaging the surroundings onto the image sensor. The image sensor and the lens are held by a carrier. The camera additionally has a circuit board, and at least the signal and the supply lines of the image sensor are arranged on the carrier. The image sensor is mounted on a carrier substrate which, similar to the lens, is arranged on the carrier at a distance from the circuit board, and has a flexible electrical connection to the circuit board.08-01-2013
20120026290MOBILE TERMINAL AND CONTROLLING METHOD THEREOF - A mobile terminal and controlling method thereof are disclosed, by which object information on an object within a 2-dimensional (hereinafter abbreviated 2D) preview image can be provided as 3D object information of a 3-dimensional (hereinafter abbreviated 3D) type or object information on an object within a 3D preview image can be provided as object information of a 3D type. The present invention includes displaying a preview image via at least one camera on a screen of a touchscreen, recognizing a current position of the mobile terminal, searching for an object information on at least one object within the preview image based on the recognized current position, displaying the found object information within the preview image, and converting and displaying a touched specific point to a 3-dimensional (hereinafter abbreviated 3D) shape if the specific point within the preview image is touched. Accordingly, the present invention converts a preview image for augmented reality to a 2D or 3D image and also converts information on an object within the preview image to a 2D or 3D image, thereby providing a user with various images in the augmented reality.02-02-2012
201302012853-D GLASSES WITH ILLUMINATED LIGHT GUIDE - 3-D or other glasses with an illuminated light guide for tracking with a camera are presented. The light guide can be a light guide plate (LGP) or other transparent and/or translucent material that conveys light through its interior. LEDs embedded within the frames illuminate the light guide and optionally can be dimmed or brightened depending on ambient lighting.08-08-2013
20130201286CONFIGURABLE ACCESS CONTROL SENSING DEVICE - An access control device comprises at least one transit authorization request device, such as an ID sensor activated by access card or badge or a biometric sensor (fingerprint, retina), said transit authorization request device to be activated by a person requesting authorization to pass through said passageway or doorway, and a presence detection and tracking device for detecting the presence of a person in the vicinity of said passageway or doorway and for tracking the movement of a person within or through said passageway or doorway. According to the invention, the access control device further comprises a control unit configured for assigning a virtual transit ticket to a person after authorization to pass through said passageway or doorway has been granted to said person, said virtual transit ticket being representative of the transit privileges granted to said person, i.e. the privileges regarding the transit direction through said passageway or doorway, and for controlling said presence detection and tracking device to track the movement of the person with the virtual transit ticket with respect to the granted transit privileges, said control unit comprising a processing module with a configurable decision table for generating an output control signal based on an output signal of said at least one transit authorization request device and an output signal of said presence detection and tracking device, said output control signal to be used for the controlling of the passage of persons through a passageway or a doorway.08-08-2013
20130201287THREE-DIMENSIONAL MEASUREMENT SYSTEM AND METHOD - A three-dimensional measurement system includes a projector 08-08-2013
20130201288HIGH DYNAMIC RANGE & DEPTH OF FIELD DEPTH CAMERA - In order to maximize the dynamic range and depth of field for a depth camera used in a time of flight system, the light source is modulated at a plurality of different frequencies, a plurality of different peak optical powers, a plurality of integration subperiods, a plurality of lens foci, aperture and zoom settings during each camera frame time. The different sets of settings effectively create subrange volumes of interest within a larger aggregate volume of interest, each having their own frequency, peak optical power, lens aperture, lens zoom and lens focus products consistent with the distance, object reflectivity, object motion, field of view, etc. requirements of various ranging applications.08-08-2013
20130201290OCCUPANCY SENSOR AND ASSOCIATED METHODS - A device to detect occupancy of an environment includes a sensor to capture video frames from a location in the environment. The device may include a microcontroller that compares rules with data using a rules engine. The microcontroller may include a processor and memory to produce results indicative of a condition of the environment. The device may also include an interface through which the data is accessible. The device may generate results respective to the location in the environment. The microcontroller may be in communication with a network. The video frames may be concatenated to create an overview to display the video frames substantially seamlessly respective to the location in which the sensor is positioned. The overview may be viewable using the interface and the results of the analysis performed by the rules engine may be accessible using the interface.08-08-2013
20130201289LOW PROFILE DEPTH CAMERA - One or more angled or curved, diverging light pipes or reflectors are placed in a light source's (e.g., a diode's) emission path at appropriate distances, angles and divergences, such that the diode's emission spot size is modified and/or redirected from the diode's natural emission path onto alternative planes at an angle to that path. In this way, a safe diode emission spot size can be achieved on any plane at an angle to the original natural emission path, at minimum distance from the diode's point of emission.08-08-2013
20120075424COMPUTER-READABLE STORAGE MEDIUM HAVING IMAGE PROCESSING PROGRAM STORED THEREIN, IMAGE PROCESSING APPARATUS, IMAGE PROCESSING SYSTEM, AND IMAGE PROCESSING METHOD - When an image of a marker existing in a real space is taken by using an outer camera, an image of a plurality of virtual characters which is taken by a virtual camera is displayed on an upper LCD so as to be superimposed on a taken real image of the real space. The virtual characters are located in a marker coordinate system based on the marker, and when a button operation is performed by a user on a game apparatus, the position and the orientation of each virtual character are changed. Then, when a button operation indicating a photographing instruction is provided by the user, an image being displayed is stored in a storage means.03-29-2012
201200867813D Vision On A Chip - A 3D camera for determining distances to regions in a scene wherein gating or modulating apparatus for the 3D camera is incorporated on a photosurface of the camera on which light detectors of the camera are also situated. Each pixel in the photosurface may include its own pixel circuit for gating the pixel on or off or for modulating the sensitivity of the pixel to incident light. The circuit may comprise at least one amplifier inside the pixel, at least one feedback capacitor separate from the light sensitive element and connected between the input and output of each of the at least one amplifier, and at least one controllable connection through which current flows from the light sensitive element into the input of the at least one amplifier. The 3D camera may further include a light source and a controller.04-12-2012
20120086780Utilizing Depth Information to Create 3D Tripwires in Video - A method of processing a digital video sequence is provided that includes detecting a foreground object in an image captured by a depth camera, determining three-dimensional (3D) coordinates of the foreground object, and comparing the 3D coordinates to a 3D video tripwire to determine if the foreground object has crossed the 3D video tripwire. A method of defining a 3D video tripwire is also provided.04-12-2012
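(Illustrative aside, not the patented method: a small Python sketch of one way to test whether a tracked foreground object's 3D centroid has crossed a tripwire, modelling the tripwire as a plane through an anchor point and flagging a crossing when the signed side changes between frames within a radius of that anchor. The plane representation, the radius check and the function names are assumptions.)

```python
# Flag a crossing when the tracked point changes side of the tripwire plane.
import numpy as np

def side_of_plane(point, anchor, normal):
    return float(np.dot(np.asarray(point, dtype=float) - np.asarray(anchor, dtype=float),
                        np.asarray(normal, dtype=float)))

def crossed_tripwire(prev_xyz, curr_xyz, anchor, normal, max_radius=1.0):
    before = side_of_plane(prev_xyz, anchor, normal)
    after = side_of_plane(curr_xyz, anchor, normal)
    near = np.linalg.norm(np.asarray(curr_xyz, dtype=float) - np.asarray(anchor, dtype=float))
    return before * after < 0 and near <= max_radius
```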
20120086778TIME OF FLIGHT CAMERA AND MOTION TRACKING METHOD - In a motion tracking method using a time of flight (TOF) camera that is installed on a track system, three-dimensional (3D) images of people are captured using the TOF camera, and stored in a storage system to create a 3D image database. Scene images of a monitored area are captured in real-time and analyzed to check for motion. A movement direction of the motion is determined once motion has been detected and the TOF camera is moved along the track system to track the motion using a driving device according to the movement direction.04-12-2012
20120086777SYSTEMS AND METHODS FOR DETECTING AND DISPLAYING THREE-DIMENSIONAL VIDEOS - A video player system includes a three-dimensional field detector and controller module for detecting a three-dimensional field of a video data to generate at least one of a detection signal and a control signal based on the three-dimensional field detected. The video data includes data for at least one image and the three-dimensional field within the image. The system also includes a display recomposition module coupled with the three-dimensional field detector and controller module, and the display recomposition module generates a recomposed three-dimensional field within the at least one image based on the detection signal and at least one of a plurality of display parameters associated with a display panel. The display panel is coupled with the three-dimensional field detector and controller module and the display recomposition module and displays the at least one image with the recomposed three-dimensional field based on at least one of the control signal and the display parameters.04-12-2012
201200867763D display system with active shutter plate - A 3D display system uses a lenticular screen or a parallax barrier, along with a shutter plate, as a light directing device to allow a viewer's right eye to see a right image and the left eye to see a left image on a display panel. The right and left images are alternately displayed. The shutter plate has a plurality of right shutter segments and a plurality of left shutter segments arranged in an interleaving manner. When the right image is displayed, the right shutter segments are open and the left shutter segments are closed. When the left image is displayed, the right shutter segments are closed and the left shutter segments are open. But when the 3D display panel is used as a 2D display panel, both the right and left shutter segments are all open so that both the viewer's eyes see the image simultaneously.04-12-2012
20120086775Method And Apparatus For Converting A Two-Dimensional Image Into A Three-Dimensional Stereoscopic Image - A method and apparatus for converting a two-dimensional image into a stereoscopic three-dimensional image. In one embodiment, a computer implemented method of converting a two-dimensional image into a stereoscopic three-dimensional image including for each pixel within a right eye image, identifying at least one corresponding pixel from a left eye image and determining a depth and an intensity value for the each pixel within the right eye image using the at least one corresponding pixel, wherein the depth value is stored in a right eye depth map and the intensity value is stored in the right eye image and inpainting at least one occluded region within the right eye image using the right eye depth map.04-12-2012
20130208095STEREO CAMERA WITH AUTOMATIC CONTROL OF INTEROCULAR DISTANCE BASED ON LENS SETTINGS - A stereographic camera system and method of operating a stereographic camera system are disclosed. The stereographic camera system may include a left camera and a right camera including respective lenses having a focal length and a focus distance, an interocular distance mechanism to set an interocular distance between the left and right cameras, and a controller. The controller may receive inputs indicating the focal length and the focus distance of the lenses. The controller may control the interocular distance mechanism, based on the focal length and focus distance of the lenses and one or both of a distance to a nearest foreground object and a distance to a furthest background object, to automatically set the interocular distance such that a maximum disparity of a stereographic image captured by the left and right cameras does not exceed a predetermined maximum disparity.08-15-2013
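(Illustrative aside, not the patented control law: a back-of-the-envelope Python sketch for a parallel rig converged at the focus distance, where sensor disparity of a point at distance Z is roughly f·IOD·(1/C − 1/Z). The interocular distance is capped so that neither the nearest nor the farthest object exceeds a disparity budget; the convergence model and parameter names are assumptions.)

```python
# Largest interocular distance that keeps worst-case sensor disparity within budget.
def max_interocular(focal_length_mm, focus_distance_mm, near_mm, far_mm, max_disparity_mm):
    worst = max(abs(1.0 / focus_distance_mm - 1.0 / near_mm),
                abs(1.0 / focus_distance_mm - 1.0 / far_mm))
    return max_disparity_mm / (focal_length_mm * worst) if worst > 0 else float("inf")
```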
20130208091AMBIENT LIGHT ALERT FOR AN IMAGE SENSOR - An image camera component and its method of operation are disclosed. The image camera component detects when ambient light within a field of view of the camera component interferes with operation of the camera component to correctly identify distances to objects within the field of view of the camera component. Upon detecting a problematic ambient light source, the image camera component may cause an alert to be generated so that a user can ameliorate the problematic ambient light source.08-15-2013
20130208093SYSTEM FOR REDUCING DEPTH OF FIELD WITH DIGITAL IMAGE PROCESSING - An electronic device may have a camera module. The camera module may capture images having an initial depth of field. The electronic device may receive user input selecting a focal plane and an effective f-stop for use in producing a modified image with a reduced depth of field. The electronic device may include image processing circuitry that selectively blurs various regions of a captured image, with each region being blurred to an amount that varies with distance to the user selected focal plane and in response to the user selected effective f-stop (e.g., a user selected level of depth of field).08-15-2013
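(Illustrative aside, not the product's actual pipeline: a Python sketch of synthetic shallow depth of field, assuming a thin-lens circle-of-confusion model, a layered Gaussian approximation and a simple f-stop scaling. The normalisation to a pixel-space blur radius and the layer count are arbitrary choices.)

```python
# Blur each depth layer by an amount that grows with distance from the chosen focal
# plane and shrinks with the f-number. Sketch only; ignores proper layer compositing.
import numpy as np
from scipy.ndimage import gaussian_filter

def shallow_dof(image, depth_mm, focal_plane_mm, focal_length_mm, f_stop, layers=8):
    aperture_mm = focal_length_mm / f_stop
    depth = np.maximum(depth_mm.astype(float), 1e-6)
    # Thin-lens circle of confusion on the sensor for each pixel's depth.
    coc = aperture_mm * focal_length_mm * np.abs(depth - focal_plane_mm) / (
        depth * max(focal_plane_mm - focal_length_mm, 1e-6))
    sigma = coc / coc.max() * 5.0 if coc.max() > 0 else coc   # map to a pixel-space blur radius
    out = np.zeros_like(image, dtype=float)
    edges = np.linspace(0.0, float(sigma.max()) + 1e-6, layers + 1)
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (sigma >= lo) & (sigma < hi)
        if mask.any():
            out[mask] = gaussian_filter(image.astype(float), (lo + hi) / 2)[mask]
    return out
```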