Target tracking or detecting

Subclass of:

382 - Image analysis

382100000 - APPLICATIONS

Patent class list (only non-empty classes are listed)

Deeper subclasses:

Entries (document number - title (publication date) - abstract):
20110002508 - DIGITALLY-GENERATED LIGHTING FOR VIDEO CONFERENCING APPLICATIONS (01-06-2011) - A method of improving the lighting conditions of a real scene or video sequence. Digitally generated light is added to a scene for video conferencing over telecommunication networks. A virtual illumination equation takes into account light attenuation, Lambertian reflection, and specular reflection. An image of an object is captured, and a virtual light source illuminates the object within the image. In addition, the object can be the head of the user. The position of the head of the user is dynamically tracked so that a three-dimensional model is generated which is representative of the head of the user. Synthetic light is applied to a position on the model to form an illuminated model.
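The abstract above combines attenuation, Lambertian reflection, and specular reflection in a virtual illumination equation. The patent does not give the equation itself; the sketch below shows a generic Phong-style shading term in that spirit, with all function names and coefficients invented for illustration:

```python
def illuminate(albedo, normal, light_dir, view_dir, light_intensity=1.0,
               k_d=0.8, k_s=0.2, shininess=16, attenuation=1.0):
    """Shade one surface point with attenuated Lambertian + specular light.

    Direction vectors are unit 3-tuples; albedo is in [0, 1].  The
    coefficients are illustrative defaults, not values from the patent.
    """
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    # Lambertian (diffuse) term: cosine of the angle to the light, clamped.
    n_dot_l = max(dot(normal, light_dir), 0.0)
    # Specular term: reflect the light direction about the normal and
    # compare with the viewing direction (Phong-style).
    r = tuple(2.0 * n_dot_l * n - l for n, l in zip(normal, light_dir))
    specular = max(dot(r, view_dir), 0.0) ** shininess
    return attenuation * light_intensity * (k_d * albedo * n_dot_l + k_s * specular)

# A white surface facing the light and viewer head-on is fully lit.
intensity = illuminate(1.0, (0, 0, 1), (0, 0, 1), (0, 0, 1))
```

In a video-conferencing pipeline this term would be evaluated per vertex or per pixel of the tracked head model, with the light position chosen to flatter the face.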
20130044911 - PARTICLE FILTER (02-21-2013) - A particle filter is suitable for performing particle filtering on a frame to track a particular object in the frame. The particle filter includes a frame cache, an observation model generator, and a particle filter controller. The frame cache is connected to a system memory through a system bus, in which the system memory stores all image blocks of the frame; the frame cache obtains at least one image block of the frame from the system memory and stores the obtained image block. The observation model generator reads at least one pixel from the frame cache, and generates an observation model corresponding to the object and the read image block according to the read pixel. The particle filter controller obtains the observation model from the observation model generator, and determines and outputs an object tracking result of the object according to the observation model.
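As a rough illustration of the particle-filtering loop this entry builds in hardware (predict, weight against an observation model, resample), here is a minimal 1-D bootstrap filter. The frame cache, observation model generator, and controller of the application are not modelled; every name and parameter is an invented illustration:

```python
import math
import random

def particle_filter_step(particles, measurement, motion_std=1.0, meas_std=2.0,
                         rng=random.Random(0)):
    """One predict/weight/resample cycle of a bootstrap particle filter
    tracking a 1-D object position."""
    # Predict: diffuse each particle with a Gaussian motion model.
    particles = [p + rng.gauss(0.0, motion_std) for p in particles]
    # Weight: unnormalised Gaussian likelihood of the measurement,
    # standing in for the observation model of the abstract.
    weights = [math.exp(-((p - measurement) ** 2) / (2.0 * meas_std ** 2))
               for p in particles]
    total = sum(weights) or 1.0
    weights = [w / total for w in weights]
    # Resample: draw a new particle set proportional to the weights.
    return rng.choices(particles, weights=weights, k=len(particles))

rng = random.Random(42)
particles = [rng.uniform(0.0, 100.0) for _ in range(500)]
for z in (10.0, 11.0, 12.0, 13.0, 14.0):   # measurements of a drifting object
    particles = particle_filter_step(particles, z)
estimate = sum(particles) / len(particles)
```

After a few measurements the particle cloud collapses from its uniform prior onto the measured trajectory, and the mean of the particles serves as the tracking output.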
20090262983 - Image processing based on object information (10-22-2009) - A CPU divides an image into plural regions and, for each of the regions, generates a histogram and calculates an average brightness Y_ave. The CPU determines a focus location on the image by using focus location information, sets a region at the determined location as an emphasis region, and sets the average brightness Y_ave of the emphasis region as a brightness criterion Y_std. The CPU uses the brightness criterion Y_std to determine non-usable regions. By using the regions not excluded as non-usable regions, the CPU calculates an image quality adjustment average brightness Y'_ave, i.e. the average brightness of the entire image, with a weighting W in accordance with the locations of the regions reflected thereto, and executes a brightness value correction by using the calculated image quality adjustment average brightness Y'_ave.
20110200228 - TARGET TRACKING SYSTEM AND A METHOD FOR TRACKING A TARGET (08-18-2011) - A target tracking system including a tracking module arranged to perform model-based tracking of a target based on received measurements from a sensor. A detector is arranged to detect when a target performs a manoeuvre. An output switching module is arranged to switch from a first output mode, in which model estimations of the tracking module are forwarded, to at least a second output mode, in which only reliable outputs are forwarded, in response to information indicating the detection of a target manoeuvre being received from the detector. Also disclosed are a collision avoidance system, a method for tracking a target, and a computer program product.
20130028476 - POSE TRACKING PIPELINE (01-31-2013) - A method of tracking a target includes receiving from a source a depth image of a scene including a human subject. The depth image includes a depth for each of a plurality of pixels. The method further includes identifying pixels of the depth image that belong to the human subject and deriving from the identified pixels of the depth image one or more machine-readable data structures representing the human subject as a body model including a plurality of shapes.
20120201423 - IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, IMAGE PROCESSING PROGRAM AND RECORDING MEDIUM (08-09-2012) - There are provided an image processing apparatus, an image processing method and an image processing program for transforming a target image having no contour of straight line portions.
20120201421 - System and Method for Automatic Registration Between an Image and a Subject (08-09-2012) - A patient defines a patient space in which an instrument can be tracked and navigated. An image space is defined by image data that can be registered to the patient space. A tracking device can be connected to a member in a known manner that includes imageable portions that generate image points in the image data. Selected image slices or portions can be used to register reconstructed image data to the patient space.
20130028470 - IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, AND COMPUTER READABLE RECORDING DEVICE (01-31-2013) - An image processing apparatus includes a corresponding region connecting unit that connects regions depicting the same target between a series of images captured in time series, thereby setting at least one connected region; a connected region feature data calculation unit that calculates feature data of the connected region; a digest index value calculation unit that calculates a digest index value corresponding to a degree at which the target depicted in the series of images is aggregated in each image of the series, based on the feature data; and a digest image detector that detects a digest image based on the digest index value.
20130028480 - GENERIC SUBSTANCE INFORMATION RETRIEVAL USING MOBILE DEVICE (01-31-2013) - A data processing system configured for computer visualization of drugs for drug interaction information retrieval is disclosed. For each of multiple different substances and using a camera within the mobile or other computing device, imagery of at least one external characteristic of a physical body of the substance is acquired. An identity of each of the multiple different substances is determined based upon the at least one external characteristic from the acquired imagery. Drug interaction data is retrieved for each of the multiple different substances using the determined identities. Drug interaction data for at least one of the multiple different substances is correlated with at least one other of the multiple different substances. At least one generic substance and/or cost information of at least one of the multiple different substances is identified. The correlated drug interaction data, the at least one generic substance, and/or the cost information are displayed.
20130028475 - LIGHT POSITIONING SYSTEM USING DIGITAL PULSE RECOGNITION (01-31-2013) - In one aspect, the present disclosure relates to a method of detecting information transmitted by a light source in a complementary metal-oxide-semiconductor (CMOS) image sensor by detecting a frequency of light pulses produced by the light source. In some embodiments, the method includes capturing on the CMOS image sensor with a rolling shutter an image in which different portions of the CMOS image sensor are exposed at different points in time; detecting visible distortions that include alternating stripes in the image; measuring a width of the alternating stripes present in the image; and selecting a symbol based on the width of the alternating stripes present in the image to recover information encoded in the frequency of light pulses produced by the light source captured in the image.
20130028479 - LANE RECOGNITION DEVICE (01-31-2013) - The lane mark recognition device is equipped with a lane mark detecting unit which executes a lane mark detection process in each predetermined control cycle and adds detection presence/absence data to a ring buffer, a detection presence/absence data addition inhibiting unit which inhibits addition of the detection presence/absence data to the ring buffer when the vehicle is traveling in an intersection, and a lane mark position recognizing unit which recognizes a relative position of the vehicle and the lane mark when the lane mark is detected in a situation where a lane mark detection rate calculated from the data of the ring buffer is higher than a reliability threshold value.
20130028477 - IMAGE PROCESSING METHOD AND THERMAL IMAGING CAMERA (01-31-2013) - For a thermal imaging camera (
20130028473 - SYSTEM AND METHOD FOR PERIODIC LANE MARKER IDENTIFICATION AND TRACKING (01-31-2013) - A system and method for determining the presence and period of dashed line lane markers in a roadway. The system includes an imager configured to capture a plurality of high dynamic range images exterior of the vehicle, and a processor in communication with the at least one imager, such that the processor is configured to process at least one high dynamic range image. The period of the dashed lane markers in the image is calculated for detecting the presence of the dashed lane marker and for tracking the vehicle within the markers. The processor communicates an output for use by the vehicle in lane departure warning (LDW) and/or other driver assist features.
20130028474 - METHOD AND SYSTEM FOR DYNAMIC FEATURE DETECTION (01-31-2013) - Disclosed are methods and systems for dynamic feature detection of physical features of objects in the field of view of a sensor. Dynamic feature detection substantially reduces the effects of accidental alignment of physical features with the pixel grid of a digital image by using the relative motion of objects or material in and/or through the field of view to capture and process a plurality of images that correspond to a plurality of alignments. Estimates of the position, weight, and other attributes of a feature are based on an analysis of the appearance of the feature as it moves in the field of view and appears at a plurality of pixel grid alignments. The resulting reliability and accuracy is superior to prior art static feature detection systems and methods.
20130028471 - IMAGE PROCESSING APPARATUS FOR CONVERTING IMAGE IN CHARACTERISTIC REGION OF ORIGINAL IMAGE INTO IMAGE OF BRUSHSTROKE PATTERNS (01-31-2013) - The importance detection unit 52 detects importance of each pixel composing the original image thus acquired. In addition, the importance map generation unit 52 generates an importance map indicating distribution of the importance detected for each pixel. The characteristic region detection unit 61 detects a characteristic region of the original image, from the original image thus acquired. The determination unit 62 determines a brushstroke pattern that should be applied to the characteristic region thus detected, from at least two types of brushstroke patterns stored in a storage unit. The brushstroke pattern conversion unit 63 converts an image in the characteristic region into an image to which the brushstroke pattern is applied, based on the brushstroke pattern thus determined. The adjustment unit 64 adjusts color of the image of the brushstroke pattern, being the image in the characteristic region, based on the importance map thus generated.
20130028469 - METHOD AND APPARATUS FOR ESTIMATING THREE-DIMENSIONAL POSITION AND ORIENTATION THROUGH SENSOR FUSION (01-31-2013) - An apparatus and method of estimating a three-dimensional (3D) position and orientation based on a sensor fusion process. The method of estimating the 3D position and orientation may include determining a position of a marker in a two-dimensional (2D) image, determining a depth of a position in a depth image corresponding to the position of the marker in the 2D image to be a depth of the marker, estimating a 3D position of the marker calculated based on the depth of the marker as a marker-based position of a remote apparatus, estimating an inertia-based position and an inertia-based orientation by receiving inertial information associated with the remote apparatus, estimating a fused position based on a weighted sum of the marker-based position and the inertia-based position, and outputting the fused position and the inertia-based orientation.
20130028468 - Example-Based Object Retrieval for Video Surveillance (01-31-2013) - Methods and apparatus are provided for example-based object retrieval that can retrieve objects from video images in real time. An object of interest is identified in a sequence of images by obtaining an identification from a user of an example object having at least one attribute of interest; generating a query object based on the identified example object, wherein the query object has a substantially similar viewpoint to objects in the sequence of images and wherein the query object comprises a plurality of attributes that are substantially similar to those of the example object; and processing the sequence of images to identify the object of interest based on a similarity metric to the query object.
20130028467 - SEARCHING RECORDED VIDEO (01-31-2013) - Embodiments of the disclosure provide systems and methods for creating metadata associated with video data. The metadata can include data about objects viewed within a video scene and/or events that occur within the video scene. Some embodiments allow users to search for specific objects and/or events by searching the recorded metadata. In some embodiments, metadata is created by receiving a video frame and developing a background model for the video frame. Foreground object(s) can then be identified in the video frame using the background model. Once these objects are identified, they can be classified and/or an event associated with the foreground object may be detected. The event and the classification of the foreground object can then be recorded as metadata.
20110206236 - NAVIGATION METHOD AND APPARATUS (08-25-2011) - An automated guidance system for a moving frame. The automated guidance system has an imaging system disposed on the frame; a motion sensing system coupled to the frame and configured for sensing movement of the frame; and a processor communicably connected to the imaging system for receiving image data from the imaging system and generating optical flow from image data of the frame's surroundings. The processor is communicably connected to the motion sensing system for receiving motion data of the frame from the motion sensing system. The processor is configured for determining, from kinematically aided dense optical flow, corrections to frame kinematic errors due to errors in motion data from the motion sensing system.
20090202107 - Object detection and recognition system (08-13-2009) - An object recognition system is provided including at least one image capturing device configured to capture at least one image, wherein the image includes a plurality of pixels and is represented in an image data set; an object detection device configured to identify pluralities of pixels corresponding to objects in the at least one image, wherein an object includes a plurality of pixels and is represented in an object data set, the object data set including a set of features corresponding to each pixel in the object; and an image recognition device configured to recognize objects of interest present in an object by image correlation against a set of template images, recognizing an object as one of the templates.
20090324012 - SYSTEM AND METHOD FOR CONTOUR TRACKING IN CARDIAC PHASE CONTRAST FLOW MR IMAGES (12-31-2009) - A method for tracking a contour in cardiac phase contrast flow magnetic resonance (MR) images includes estimating a global translation of a contour in a reference image in a time sequence of cardiac phase contrast flow MR images to a contour in a current image in the time sequence by finding a 2-dimensional translation vector that maximizes a similarity function of the contour in the reference image and the current image, calculated over a bounding rectangle containing the contour in the reference image; estimating an affine transformation of the contour in the reference image to the contour in the current image; and performing a constrained local deformation of the contour in the current image.
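The global-translation step in the entry above (finding the 2-D shift that maximises a similarity function over a bounding rectangle) can be sketched as a brute-force integer search. The abstract does not fix the similarity function, so this toy uses negative mean squared difference as a stand-in; all names and image sizes are invented for illustration:

```python
def best_translation(ref, cur, search=3):
    """Return the integer (dx, dy) shifting `ref` onto `cur` that maximises
    a similarity score (negative mean squared difference over the overlap)."""
    h, w = len(ref), len(ref[0])
    best_score, best_shift = None, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            score, n = 0.0, 0
            for y in range(h):
                for x in range(w):
                    yy, xx = y + dy, x + dx
                    if 0 <= yy < h and 0 <= xx < w:   # compare overlap only
                        score -= (ref[y][x] - cur[yy][xx]) ** 2
                        n += 1
            if n and (best_score is None or score / n > best_score):
                best_score, best_shift = score / n, (dx, dy)
    return best_shift

# A bright square moved one pixel right and two down is recovered exactly.
ref = [[0] * 8 for _ in range(8)]
cur = [[0] * 8 for _ in range(8)]
for y in range(2, 5):
    for x in range(2, 5):
        ref[y][x] = 9
        cur[y + 2][x + 1] = 9
shift = best_translation(ref, cur)
```

In practice the search would run over the contour's bounding rectangle with a normalised correlation measure, but the shape of the loop is the same.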
20110182473 - SYSTEM AND METHOD FOR VIDEO SIGNAL SENSING USING TRAFFIC ENFORCEMENT CAMERAS (07-28-2011) - A system and method for determining the state of a traffic signal light, such as being red, yellow, or green, by employing a plurality of traffic enforcement cameras to be used in determining if a traffic violation has occurred. The system and method automatically predicts, tracks and captures violation events, such as violating a red traffic signal light, to use the video for any number of reasons, particularly for traffic enforcement purposes. There may be provided a tracking camera, a signal camera and an enforcement camera used to capture the video and other pertinent information relating to the event. The signal camera may be operatively connected to a processing unit that runs a video signal sensing (VSS) software unit to determine the active state of the signal. Advantageously, this allows the monitoring of intersections for signal light violations without the need for a connection to the light itself.
20120163658 - Temporal-Correlations-Based Mode Connection (06-28-2012) - Disclosed herein are a system, method, and computer program product for updating a scene model (
20120163669 - Systems and Methods for Detecting a Tilt Angle from a Depth Image (06-28-2012) - A depth image of a scene may be received, observed, or captured by a device. A human target in the depth image may then be scanned for one or more body parts such as shoulders, hips, knees, or the like. A tilt angle may then be calculated based on the body parts. For example, a first portion of pixels associated with an upper body part such as the shoulders and a second portion of pixels associated with a lower body part such as a midpoint between the hips and knees may be selected. The tilt angle may then be calculated using the first and second portions of pixels.
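The tilt-angle idea in this entry can be illustrated by taking centroids of the two pixel portions and measuring how far the segment joining them leans from vertical. This is a simplified 2-D sketch with made-up pixel data (the real method works on a scanned depth image, and the exact angle definition is an assumption here):

```python
import math

def tilt_angle(upper_pixels, lower_pixels):
    """Estimate a body tilt angle in degrees from vertical, given two groups
    of (x, y, depth) pixels: one on the shoulders, one near the hips.
    Assumes image y grows downward."""
    def centroid(pixels):
        n = len(pixels)
        return (sum(p[0] for p in pixels) / n, sum(p[1] for p in pixels) / n)
    ux, uy = centroid(upper_pixels)
    lx, ly = centroid(lower_pixels)
    # atan2 of horizontal offset over vertical extent gives the lean.
    return math.degrees(math.atan2(ux - lx, ly - uy))

shoulders = [(12, 10, 2.1), (14, 10, 2.1), (16, 10, 2.0)]   # upper portion
hips = [(10, 30, 2.2), (12, 30, 2.2), (14, 30, 2.2)]        # lower portion
angle = tilt_angle(shoulders, hips)
```

With the shoulders' centroid two pixels to the right of the hips' centroid over a 20-pixel vertical span, the subject leans roughly 5.7 degrees.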
20120170810 - System and Method for Linking Real-World Objects and Object Representations by Pointing (07-05-2012) - A system and method are described for selecting and identifying a unique object or feature in the system user's three-dimensional ("3-D") environment in a two-dimensional ("2-D") virtual representation of the same object or feature in a virtual environment. The system and method may be incorporated in a mobile device that includes position and orientation sensors to determine the pointing device's position and pointing direction. The mobile device incorporating the present invention may be adapted for wireless communication with a computer-based system that represents static and dynamic objects and features that exist or are present in the system user's 3-D environment. The mobile device incorporating the present invention will also have the capability to process information regarding a system user's environment and to calculate specific measures for pointing accuracy and reliability.
20120170799 - MOVABLE RECOGNITION APPARATUS FOR A MOVABLE TARGET (07-05-2012) - A movable recognition apparatus and a method thereof, which identify an activity configuration of at least one movable target, provide a plurality of distance measuring devices arranged as a two-dimensional matrix on a plane of a specific space to detect and obtain a plurality of vertical distance values between the movable target and the plane. Then, an analyzing device is applied to establish a contour graph corresponding to the movable target by referencing the vertical distance values and to identify the activity configuration in accordance with the shape change of the contour graph. Therefore, the movable recognition apparatus can perform the identification task conveniently while meeting privacy requirements, in addition to ensuring accuracy of the identified activity configuration.
20100021006 - OBJECT TRACKING METHOD AND SYSTEM (01-28-2010) - An object tracking method uses a system having an object identifying device and at least one video tracking device, wherein the object identifying device monitors an area to identify an object entering the area and the video tracking device wired/wirelessly connected to the object identifying device monitors the area monitored by the object identifying device. The method includes: extracting, at the object identifying device, object identification information of the object; providing, at the object identifying device, the object identification information to the video tracking device; tracking, at the video tracking device, the object to extract physical information of the object; mapping, at the video tracking device, the physical information to the object identification information to generate object information of the object; and storing, at the video tracking device, the object information in a memory of the video tracking device.
20090196462 - VIDEO AND AUDIO CONTENT ANALYSIS SYSTEM (08-06-2009) - The present invention is directed to various methods and systems for analysis and processing of video and audio signals from a plurality of sources in real-time or off-line. According to some embodiments of the present invention, analysis and processing applications are dynamically installed in the processing units.
20090296987 - ROAD LANE BOUNDARY DETECTION SYSTEM AND ROAD LANE BOUNDARY DETECTING METHOD (12-03-2009) - A road lane boundary detection system includes a detection region setting unit that sets a certain region in a road image as a target detection region to be searched for detection of a road lane boundary, and a detecting unit that processes image data in the target detection region set by the detection region setting unit so as to detect the road lane boundary. The detection region setting unit sets a first detection region as the target detection region if no road lane boundary is detected, and sets a second detection region as the target detection region if the road lane boundary is detected, such that the first and second detection regions are different in size from each other.
20090310823 - Object tracking method using spatial-color statistical model (12-17-2009) - An object tracking method utilizing spatial-color statistical models is used for tracking an object in different frames. A first object is extracted from a first frame and a second object is extracted from a second frame. The first object is divided into several first blocks and the second object is divided into several second blocks according to pixel parameters of each pixel within the first object and the second object. A comparison between the first blocks and the second blocks is made to find the corresponding relation therebetween. The second object is identified as the first object according to the corresponding relation.
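A toy version of the block-correspondence step in this entry, assuming plain colour histograms stand in for the richer spatial-color statistical model (the function names and the 4-bin quantisation are invented for illustration):

```python
def color_histogram(block, bins=4):
    """Normalised, quantised RGB histogram of a block of (r, g, b) pixels (0-255)."""
    hist = [0] * (bins ** 3)
    step = 256 // bins
    for r, g, b in block:
        hist[(r // step) * bins * bins + (g // step) * bins + (b // step)] += 1
    total = len(block)
    return [h / total for h in hist]

def histogram_intersection(h1, h2):
    """Similarity in [0, 1]; 1 means identical colour distributions."""
    return sum(min(a, b) for a, b in zip(h1, h2))

def match_blocks(first_blocks, second_blocks):
    """Pair each block of the first object with its most similar block
    in the second object, giving the corresponding relation."""
    pairs = []
    for i, b1 in enumerate(first_blocks):
        h1 = color_histogram(b1)
        best = max(range(len(second_blocks)),
                   key=lambda j: histogram_intersection(
                       h1, color_histogram(second_blocks[j])))
        pairs.append((i, best))
    return pairs

# Two single-colour blocks are matched across frames despite reordering.
red = [(250, 10, 10)] * 16
blue = [(10, 10, 250)] * 16
pairs = match_blocks([red, blue], [blue, red])
```

Once every first-frame block has a best second-frame partner, the second object can be identified as the tracked first object from the consistency of those pairs.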
20080260207 - Vehicle environment monitoring apparatus (10-23-2008) - A vehicle environment monitoring apparatus capable of extracting an image of a monitored object in an environment around a vehicle by separating the same from the background image, with a simple configuration having a single camera mounted on the vehicle, is provided. The apparatus includes a first image portion extracting processing unit to extract first image portions (A
20080260206 - IMAGE PROCESSING APPARATUS AND COMPUTER PROGRAM PRODUCT (10-23-2008) - An image processing apparatus includes a feature-quantity calculating unit that calculates feature quantities of target regions each indicating a tracking object in respective target images, the target images being obtained by capturing the tracking object at a plurality of time points; a provisional-tracking processing unit that performs provisional tracking of the target region by associating the target regions of the target images with each other using the calculated feature quantities; and a final-tracking processing unit that acquires a final tracking result of the target region based on a result of the provisional tracking.
20080260205 - Image Processing Device and Method (10-23-2008) - The present invention relates to an image processing device and a corresponding image processing method for processing medical image data showing at least two image objects, including a segmentation unit for detection and/or segmentation of image objects in said image data. To allow a more accurate and better segmentation of target objects which are hard to localize and detect, it is proposed that the segmentation unit comprises: a selection unit (
20110188707 - System and Method for Pleographic Subject Identification, Targeting, and Homing Utilizing Electromagnetic Imaging in at Least One Selected Band (08-04-2011) - The inventive data processing system and method enable automatic recognition of images captured using various electromagnetic (EM) imaging systems and techniques, and more particularly relate to a system and method for applying pleographic processing for subject identification, recognition, matching, targeting, and/or homing, utilizing one or more EM imaging systems or devices in at least one selected EM band.
20090022365 - METHOD AND APPARATUS FOR MEASURING POSITION AND ORIENTATION OF AN OBJECT (01-22-2009) - An information processing method includes acquiring an image of an object captured by an imaging apparatus, acquiring an angle of inclination measured by an inclination sensor mounted on the object or the imaging apparatus, detecting a straight line from the captured image, and calculating a position and orientation of the object or the imaging apparatus, on which the inclination sensor is mounted, based on the angle of inclination, an equation of the detected straight line on the captured image, and an equation of a straight line in a virtual three-dimensional space that corresponds to the detected straight line.
20090290755 - System Having a Layered Architecture For Constructing a Dynamic Social Network From Image Data (11-26-2009) - A system having a layered architecture for constructing a dynamic social network from image data of actors and events. It may have a low layer for capturing raw data and identifying actors and events. The system may have a middle layer that receives actor and event information from the low layer and puts it into a two-dimensional matrix. A high layer of the system may add weighted relationship information to the matrix to form the basis for constructing a social network. The system may have a sliding window, thus making the social network dynamic.
20080240502 - Depth mapping using projected patterns (10-02-2008) - Apparatus for mapping an object includes an illumination assembly, which includes a single transparency containing a fixed pattern of spots. A light source transilluminates the single transparency with optical radiation so as to project the pattern onto the object. An image capture assembly captures an image of the pattern that is projected onto the object using the single transparency. A processor processes the image captured by the image capture assembly so as to reconstruct a three-dimensional (3D) map of the object.
20130028478 - OBJECT INSPECTION WITH REFERENCED VOLUMETRIC ANALYSIS SENSOR (01-31-2013) - A positioning method and system for non-destructive inspection of an object include providing at least one volumetric analysis sensor having sensor reference targets; providing a sensor model of a pattern of at least some of the sensor reference targets; providing object reference targets on at least one of the object and an environment of the object; providing an object model of a pattern of at least some of the object reference targets; providing a photogrammetric system including at least one camera and capturing at least one image in a field of view, at least a portion of the sensor reference and the object reference targets being apparent on the image; determining a sensor spatial relationship and an object spatial relationship; determining a sensor-to-object spatial relationship of the at least one volumetric analysis sensor with respect to the object; and repeating the steps and tracking a displacement of the volumetric analysis sensor and the object.
20110194732 - IMAGE RECOGNITION APPARATUS AND METHOD (08-11-2011) - An image recognition apparatus detects a specific object image from an image to be processed, calculates a coincidence degree between an object recognisability state of the object image and that of an object in registered image information, and calculates a similarity between the image feature of the object image and the image feature in the registered image information. Based on the similarity and coincidence degree, the image recognition apparatus recognizes whether the object of the object image is that of the registered image information. When the similarity is lower than a first threshold and the coincidence degree is equal to or higher than a second threshold, the image recognition apparatus recognizes that the object of the object image is different from that of the registered image information.
20110194731 - METHOD OF DETERMINING REFERENCE FEATURES FOR USE IN AN OPTICAL OBJECT INITIALIZATION TRACKING PROCESS AND OBJECT INITIALIZATION TRACKING METHOD (08-11-2011) - A method of determining reference features for use in an optical object initialization tracking process is disclosed, said method comprising the following steps: a) capturing, with at least one camera, at least one current image of a real environment (or generating one synthetically by rendering a virtual model of a real object to be tracked) and extracting current features from the at least one current image; b) providing reference features adapted for use in an optical object initialization tracking process; c) matching a plurality of the current features with a plurality of the reference features; d) estimating at least one parameter associated with the current image based on the number of current and reference features which were matched, and determining for each of the reference features which were matched with one of the current features whether they were correctly or incorrectly matched; e) processing steps a) to d) iteratively multiple times, wherein in step a) of every respective iterative loop a respective new current image is captured by at least one camera and steps a) to d) are processed with respect to the respective new current image; and f) determining at least one indicator associated with reference features which were correctly matched and/or reference features which were incorrectly matched, wherein the at least one indicator is determined depending on how often the respective reference feature has been correctly or incorrectly matched.
20090190799 - METHOD FOR CHARACTERIZING THE EXHAUST GAS BURN-OFF QUALITY IN COMBUSTION SYSTEMS (07-30-2009) - A method for characterizing a flue gas burnout quality of a combustion process in a combustion system having a gas burnout zone includes optically detecting in a visible wavelength range, in a flow cross section of the gas burnout zone, low-soot combustion regions, regions without combustion, and sooting regions, so as to provide a plurality of successive individual images, the regions without combustion and the sooting regions having different dynamics. The plurality of successive individual images are analyzed so as to distinguish regions of transition, to the low-soot combustion regions, of the regions without combustion and the sooting regions.
20090190798 - SYSTEM AND METHOD FOR REAL-TIME OBJECT RECOGNITION AND POSE ESTIMATION USING IN-SITU MONITORING (07-30-2009) - Provided are a system and method for real-time object recognition and pose estimation using in-situ monitoring. The method includes the steps of: a) receiving 2D and 3D image information, extracting evidences from the received 2D and 3D image information, recognizing an object by comparing the evidences with a model, and expressing locations and poses by probabilistic particles; b) probabilistically fusing various locations and poses and finally determining a location and a pose by filtering out inaccurate information; c) generating an ROI by receiving 2D and 3D image information and the location and pose from step b), and collecting and calculating environmental information; d) selecting an evidence or a set of evidences probabilistically by receiving the information from step c), and proposing a cognitive action of a robot for collecting additional evidence; and e) repeating steps a) and b) and steps c) and d) in parallel until a result of object recognition and pose estimation is probabilistically satisfied.
20090123028Target Position Setting Device And Parking Assist Device With The Same - A target position setting device includes a distance meter, an imager, first and second calculating portions, a determination portion, and a setting portion. The distance meter measures a distance to an object around a vehicle. The imager takes an image of an environment around the vehicle. The first calculating portion calculates a first candidate of a target position of the vehicle according to a measuring result of the distance meter. The second calculating portion calculates a second candidate of the target position of the vehicle according to an imaging result of the imager. The determination portion determines whether a relationship between the first candidate and the second candidate meets a given condition. The setting portion sets the target position according to the second candidate of the target position when the determination portion determines that the relationship between the first candidate and the second candidate meets the given condition.05-14-2009
20100158313COUPLING ALIGNMENT APPARATUS AND METHOD - An apparatus for axially aligning a first coupling member and a second coupling member that can be connected so as to form a rotating assembly. The apparatus includes a measurement arrangement configured to be mounted onto the first coupling member and to be rotated therewith. The measurement arrangement includes an emitter arrangement configured to emit first and second signals in the direction of the second coupling member so as to cause at least a portion of said first and second signals to be reflected by the second coupling member. The measurement apparatus further has a capture arrangement configured to capture at least a portion of the first and second reflected signals. The apparatus includes a control arrangement configured to determine an offset in axial alignment between the first and second coupling member based on at least the first and second reflected signals.06-24-2010
20130028472MULTI-HYPOTHESIS PROJECTION-BASED SHIFT ESTIMATION - A method is provided for determining a shift between two images. The method determines a first correlation in a first direction, the first correlation being derived from first image projection characteristics and second image projection characteristics, and a second correlation in a second direction, the second correlation being derived from the first image projection characteristics and the second image projection characteristics. The method determines a set of hypotheses from a first plurality of local maxima of the first correlation and a second plurality of local maxima of the second correlation. The method then calculates a two-dimensional correlation score between the first image and the second image based on a shift indicated in at least one of the set of hypotheses, and selects one of the set of hypotheses as the shift between the first image and the second image based on the calculated two-dimensional correlation score.01-31-2013
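The projection-correlation idea in the entry above can be illustrated with a small sketch. This is an assumed simplification, not the patented algorithm: each image is collapsed into 1-D row and column projections, and candidate integer shifts are scored by mean absolute difference between the projections.

```python
def projections(img):
    """Return (row_sums, col_sums) for a 2-D list of pixel intensities."""
    rows = [sum(r) for r in img]
    cols = [sum(c) for c in zip(*img)]
    return rows, cols

def best_shift(p1, p2, max_shift=1):
    """Pick the integer shift of p2 minimizing mean absolute difference."""
    def score(s):
        pairs = [(p1[i], p2[i + s]) for i in range(len(p1))
                 if 0 <= i + s < len(p2)]
        if not pairs:
            return float("inf")   # no overlap at this shift
        return sum(abs(a - b) for a, b in pairs) / len(pairs)
    return min(range(-max_shift, max_shift + 1), key=score)

# A bright bar shifted right by one pixel between the two frames.
f1 = [[0, 9, 0, 0], [0, 5, 0, 0], [0, 9, 0, 0]]
f2 = [[0, 0, 9, 0], [0, 0, 5, 0], [0, 0, 9, 0]]
r1, c1 = projections(f1)
r2, c2 = projections(f2)
dx = best_shift(c1, c2)   # horizontal shift hypothesis
dy = best_shift(r1, r2)   # vertical shift hypothesis
```

In the multi-hypothesis method described above, several local maxima of each 1-D correlation would be kept as candidates and disambiguated by a 2-D correlation score rather than picking a single minimum.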
20100074472SYSTEM FOR AUTOMATED SCREENING OF SECURITY CAMERAS - The present invention involves a system for automatically screening closed circuit television (CCTV) cameras for large and small scale security systems, as used for example in parking garages. The system includes six primary software elements, each of which performs a unique function within the operation of the security system to provide intelligent camera selection for operators, resulting in a marked decrease of operator fatigue in a CCTV system. Real-time image analysis of video data is performed wherein a single pass of a video frame produces a terrain map which contains parameters indicating the content of the video. Based on the parameters of the terrain map, the system is able to make decisions about which camera an operator should view based on the presence and activity of vehicles and pedestrians, further discriminating vehicle traffic from pedestrian traffic. The system is compatible with existing CCTV systems and is comprised of modular elements to facilitate integration and upgrades.03-25-2010
20100074470Combination detector and object detection method using the same - Provided are a detector and a method of detecting an object using the detector. The method includes combining a first detector and a second detector in a combination scheme to form a multi-layer combination detector, the second detector being of a type different from that of the first detector, processing a binary classification detection with respect to an inputted sample starting from an uppermost layer detector, allowing a sample of an object detected from a current layer to approach a lower layer, while rejecting a sample of a non-object detected from the current layer whereby the rejected non-object may not approach the lower layer, and outputting a sample passing through all layers as a detected object.03-25-2010
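The multi-layer combination detector above can be sketched as a classifier cascade. This is a hedged toy, not the patented detector: each layer is a binary test, and only samples accepted by the current layer proceed to the next; the specific layer tests are assumptions.

```python
def cascade(sample, layers):
    """Return True only if every layer accepts the sample."""
    for accept in layers:
        if not accept(sample):
            return False   # rejected non-objects never reach lower layers
    return True

# Illustrative layers: a coarse, cheap size gate first, then a finer
# brightness gate applied only to survivors of the first layer.
layers = [
    lambda s: s["width"] * s["height"] >= 16,
    lambda s: s["mean_brightness"] > 50,
]

object_patch = {"width": 8, "height": 8, "mean_brightness": 120}
noise_patch = {"width": 2, "height": 2, "mean_brightness": 200}
```

Ordering cheap, high-rejection layers first is what makes cascades fast: most non-objects are discarded before the expensive lower layers run.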
20100074469Vehicle and road sign recognition device - The present invention includes: image capturing means (03-25-2010
20080240499Jointly Registering Images While Tracking Moving Objects with Moving Cameras - A method tracks a moving object by registering a current image in a sequence of images with a previous image. The sequence of images is acquired of a scene by a moving camera. The registering produces a registration result. The moving object is tracked in the registered image to produce a tracking result. The registered current image is registered with the previous image using the tracking result for all the images in the sequence.10-02-2008
20100046799METHODS AND SYSTEMS FOR DETECTING OBJECTS OF INTEREST IN SPATIO-TEMPORAL SIGNALS - Methods and systems detect objects of interest in a spatio-temporal signal. According to one embodiment, a system processes a digital spatio-temporal input signal containing zero or more foreground objects of interest superimposed on a background. The system comprises a foreground/background separation module, a foreground object grouping module, an object classification module, and a feedback connection. The foreground/background separation module receives the spatio-temporal input signal and, according to one or more adaptable parameters, produces foreground/background labels designating elements of the spatio-temporal input signal as either foreground or background. The foreground object grouping module is connected to the foreground/background separation module and identifies groups of selected foreground-labeled elements as foreground objects. The object classification module is connected to the foreground object grouping module and generates object-level information related to the foreground object. The object-level information adapts the one or more adaptable parameters of the foreground/background separation module, via the feedback connection.02-25-2010
20100046798IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, AND STORAGE MEDIUM - In an image processing apparatus that performs tracking processing based on a correlation between frame images, when an object that is a tracking target is missed and a frame indicating the tracking target is set to a uniform background during tracking processing, a display of the frame may blur. An image processing apparatus is provided which detects a tracking target candidate region which has the highest correlation with a set tracking target region, calculates a difference between an evaluation value acquired in the tracking target candidate region and an evaluation value acquired in a peripheral region of the tracking target candidate region, and stops tracking if the difference is less than a threshold value.02-25-2010
20100046797METHODS AND SYSTEMS FOR AUDIENCE MONITORING - Systems and methods for audience monitoring are provided that include receiving an input including a recording or live feed of an audience composed of several persons, detecting foreground of the input, performing blob segmentation of the input, and analyzing human presence on each segmented blob by identifying at least one person, identifying a spatial distribution of at least one identified person, determining a dwell time of at least one identified person, determining a temporal distribution of at least one identified person, and determining a gaze direction of at least one identified person. Such detecting provides the ability to track individual persons present in the audience, and how long they remain in the audience. The method also provides the ability to determine gaze direction of persons in the audience, and how long one or more persons are gazing in a particular direction.02-25-2010
20100046796 METHOD OF RECOGNIZING A MOTION PATTERN OF AN OBJECT - A method and a motion recognition system are disclosed for recognizing a motion pattern of at least one object by means of determining relative motion blur variations around the at least one object in an image or a sequence of images. Motion blur parameters are extracted from the motion blur in the images, and based thereon the motion blur variations are determined by means of determining variations between the motion blur parameters.02-25-2010
20130034265APPARATUS AND METHOD FOR RECOGNIZING GESTURE, AND NON-TRANSITORY COMPUTER READABLE MEDIUM THEREOF - According to one embodiment, a time series information acquisition unit acquires time series information of a position or a size of a specific part of a user's body. An operation segment detection unit detects a movement direction of the specific part from the time series information, and detects a plurality of operation segments each segmented by two of a start point, a turning point and an end point of the movement direction. A recognition unit specifies a first operation segment to be recognized and a second operation segment following the first operation segment among the plurality of operation segments, and recognizes a motion of the specific part in the first operation segment by using a first feature extracted from the time series information of the first operation segment and a second feature extracted from the time series information of the second operation segment.02-07-2013
20130034269PROCESSING-TARGET IMAGE GENERATION DEVICE, PROCESSING-TARGET IMAGE GENERATION METHOD AND OPERATION SUPPORT SYSTEM - A processing-target image generation device generates a processing-target image which is an object to be subjected to an image conversion process for acquiring an output image based on an input image taken by an image-taking part. A coordinates correspondence part causes input coordinates, spatial coordinates, and projection coordinates to correspond to each other, the input coordinates being on an input image plane on which the input image is located, the spatial coordinates being on a space model on which the input image is projected, the projection coordinates being on a processing-target image plane on which the processing-target image is positioned and the image projected on the space model is re-projected.02-07-2013
20130034264LOCOMOTION ANALYSIS METHOD AND LOCOMOTION ANALYSIS APPARATUS - An exemplary locomotion analysis method includes steps of: acquiring a depth map including an image of a measured object, filtering out a background image of the depth map according to a depth threshold, finding out the image of the measured object from the residual image of the depth map, calculating three-dimensional (3D) coordinates of the measured object according to the image of the measured object that has been found, recording the 3D coordinates to reconstruct a 3D moving track of the measured object, and performing a locomotion analysis of the measured object according to the 3D moving track. Moreover, an exemplary locomotion analysis apparatus applied to the above method is also provided.02-07-2013
20130034268METHOD AND SYSTEM FOR USE IN PERFORMING SECURITY SCREENING - A method and apparatus for screening luggage are provided. X-ray images derived by scanning the luggage with X-rays are received and processed with an automated threat detection (ATD) engine. A determination is then made whether to subject respective ones of the X-ray images to further visual inspection by a human operator at least in part based on results obtained by the ATD engine. In certain cases, visual inspection by a human operator is by-passed and the ATD results are relied upon in order to mark luggage for further inspection or to mark luggage as clear. In another aspect, X-ray images derived by scanning the luggage using two or more X-ray scanning devices are pooled at a centralized location. ATD operations are applied to the X-ray images, which are then provided “on-demand” to a human operator for visual inspection. Results of the visual inspection are entered by the human operator and then conveyed to on-site screening technicians associated with respective X-ray scanning devices.02-07-2013
20130034266METHOD AND SYSTEM FOR DETECTION AND TRACKING EMPLOYING MULTI-VIEW MULTI-SPECTRAL IMAGING - A multi-view multi-spectral detection and tracking system comprising at least one imager, at least one of the at least one imager being a multi-spectral imager, the at least one imager acquiring at least two detection sequences, and at least two tracking sequences, each sequence including at least one image, each acquired image being associated with respective image attributes, an object detector, coupled with the at least one imager, detecting objects of interest in the scene, according to the detection sequence of images and the respective image attributes, an object tracker coupled with the object detector, the object tracker tracking the objects of interest in the scene and determining dynamic spatial characteristics and dynamic spectral characteristics for each object of interest according to the tracking sequences of images and the respective image attributes, and an object classifier, coupled with the object tracker, classifying the objects of interest according to the dynamic spatial characteristics and the dynamic spectral characteristics.02-07-2013
20130034267Data Capture and Identification System and Process - An identification method and process for objects from digitally captured images thereof that uses data characteristics to identify an object from a plurality of objects in a database. The data is broken down into parameters such as a Shape Comparison, Grayscale Comparison, Wavelet Comparison, and Color Cube Comparison with object data in one or more databases to identify the actual object of a digital image.02-07-2013
20130034262Hands-Free Voice/Video Session Initiation Using Face Detection - A communication system includes a telecommunication appliance connected to a communication network, an image acquisition appliance coupled to the telecommunication appliance, software executing on the telecommunication appliance from a non-transitory physical medium, the software providing a first function enabling detecting that an image acquired by the camera comprises a human face in at least a portion of the image, and a second function initiating a communication event directed to a pre-programmed destination, the second function initiated by the first function detecting the human face image portion.02-07-2013
20130034263Adaptive Threshold for Object Detection - Systems and methods for developing and using adaptive threshold values for different input images for object detection are disclosed. In embodiments, detector response histogram-based systems and methods train models for predicting optimal threshold values for different images. In embodiments, when training the model, an optimal threshold value for an image is defined as the value that maximizes the reduction of false positive image patches while preserving as many true positive image patches as possible. Once trained, the model may be used to set different threshold values for different images by inputting a detector response histogram for the image patches of an image into the model to determine a threshold value for detection.02-07-2013
20090175500Object tracking apparatus - An object tracking apparatus tracks an object on image data captured continuously. The object tracking apparatus includes an object color adjusting unit and a particle filter processing unit. The object color adjusting unit calculates tendency of color change in regions on image data and adjusts a color of the object set as an object color based on the tendency of color change to obtain a reference color. The particle filter processing unit estimates a region corresponding to the object on image data based on likelihood of each particle calculated by comparing a color around each particle with the reference color, using particles which move on image data according to a predefined rule.07-09-2009
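The color-likelihood step of the particle filter above can be sketched in a toy 1-D setting. This is an assumed simplification: particles are pixel positions in a 1-D "frame", each weighted by how close its color is to the reference color, and the Gaussian width is an arbitrary assumption.

```python
import math

def likelihood(pixel, reference, sigma=20.0):
    """Gaussian similarity between a pixel color and the reference color."""
    d = float(pixel - reference)
    return math.exp(-(d * d) / (2 * sigma * sigma))

def estimate_position(frame, particles, reference):
    """Weighted mean of particle positions (one filter update step)."""
    weights = [likelihood(frame[p], reference) for p in particles]
    total = sum(weights)
    return sum(p * w for p, w in zip(particles, weights)) / total

frame = [0, 0, 0, 200, 210, 205, 0, 0]   # tracked color near indices 3-5
particles = [2, 3, 4, 5, 6]
estimate = estimate_position(frame, particles, reference=205)
```

A full particle filter would also propagate particles by a motion rule and resample according to the weights; the entry above additionally adapts the reference color itself to track color drift.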
20130077828IMAGE PROCESSING - Apparatus and method for processing a sequence of images of a scene, the method including: tracking a region of interest in the sequence of images (e.g. using a Self Adaptive Discriminant filter), selecting a particular image in the sequence, selecting a set of images from the sequence, the set of images including one or more images that precede the particular image in the sequence of images; and determining a value indicative of the level of change between the region of interest in the particular image and the regions of interest in the images in the set of images (e.g. using a Change Detection Process).03-28-2013
20130077826Method and apparatus for three-dimensional tracking of infra-red beacons - A method for processing data includes identifying a time signature of an infra-red (IR) beacon. Image data associated with the IR beacon is identified using the time signature.03-28-2013
20130077827AUTOMATED CRYSTAL IDENTIFICATION ACHIEVED VIA MODIFIABLE TEMPLATES - A nuclear imaging system (03-28-2013
20130077825IMAGE PROCESSING APPARATUS - There is provided an image processing apparatus. The image processing apparatus includes: a color reproducing unit for reproducing a luminance of a color phase, which is not set to each pixel of a pair of image data composed of a Bayer array, based upon the adjacent pixels; and a matching processing unit for extracting blocks with a predetermined size from the pair of image data whose luminance is reproduced, and executing a matching process so as to specify blocks having high correlation. The color reproducing unit and the matching processing unit respectively execute the luminance reproduction and the matching process with only the color phase with the highest degree of occupation in the Bayer array.03-28-2013
20130077823SYSTEMS AND METHODS FOR NON-CONTACT HEART RATE SENSING - An embodiment generally relates to systems and methods for estimating heart rates of individuals using non-contact imaging. A processing module can process multi-spectral video images of individuals and detect skin blobs within different images of the multi-spectral video images. The skin blobs can be converted into time series signals and processed with a band-pass filter. Further, the time series signals can be processed to separate pulse signals from unnecessary signals. The heart rate of the individual can be estimated according to the resulting time series signal processing.03-28-2013
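The band-pass-then-estimate idea above can be sketched crudely. This is illustrative, not the patented pipeline: a skin-region brightness time series is searched for its dominant frequency within the plausible pulse band; the sample rate and band edges are assumptions.

```python
import math

FS = 30.0  # assumed camera frame rate, frames per second

def dominant_bpm(signal, lo=0.7, hi=3.0):
    """Crude DFT peak search restricted to the 42-180 bpm band."""
    n = len(signal)
    mean = sum(signal) / n
    centered = [s - mean for s in signal]
    best_f, best_mag = 0.0, -1.0
    for k in range(1, n // 2):
        f = k * FS / n
        if not (lo <= f <= hi):
            continue   # acts as the band-pass: skip out-of-band bins
        re = sum(x * math.cos(2 * math.pi * k * i / n)
                 for i, x in enumerate(centered))
        im = sum(x * math.sin(2 * math.pi * k * i / n)
                 for i, x in enumerate(centered))
        mag = re * re + im * im
        if mag > best_mag:
            best_f, best_mag = f, mag
    return best_f * 60.0

# Synthetic 1.2 Hz (72 bpm) pulse plus a slow illumination drift.
sig = [math.sin(2 * math.pi * 1.2 * i / FS) + 0.3 * i / 300
       for i in range(300)]
bpm = dominant_bpm(sig)
```

A real system would additionally separate the pulse from motion and lighting artifacts (the "unnecessary signals" above), for example with blind source separation, before the frequency estimate.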
20130077824HEURISTIC MOTION DETECTION METHODS AND SYSTEMS FOR INTERACTIVE APPLICATIONS - A method is provided for motion detection comprising acquiring a series of images comprising a current image and a previous image, determining a plurality of optical flow vectors, each representing movement of one of a plurality of visual elements from a first location in the previous image to a second location in the current image, storing the optical flow vectors in a current vector map associated with time information, and determining motion by calculating an intensity ratio between the current vector map and at least one prior vector map.03-28-2013
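The intensity-ratio test between vector maps described above can be sketched as follows. This is a hedged toy: the L1 magnitude measure and the ratio test are assumptions, not the patented definitions.

```python
def map_intensity(vector_map):
    """Sum of optical-flow vector magnitudes (L1 norm for simplicity)."""
    return sum(abs(dx) + abs(dy) for dx, dy in vector_map)

def motion_ratio(current_map, prior_map, eps=1e-6):
    """Ratio > 1 means the scene is moving more than in the prior map."""
    return map_intensity(current_map) / (map_intensity(prior_map) + eps)

prior = [(0, 0), (1, 0), (0, 1)]       # little movement
current = [(3, 1), (2, 2), (4, 0)]     # burst of movement
ratio = motion_ratio(current, prior)
```

An interactive application could then trigger on the ratio crossing a threshold, which is cheaper than classifying the raw flow field every frame.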
20130077822METHOD FOR CREATING AN INDEX USING AN ALL-IN-ONE PRINTER AND ADJUSTABLE GROUPING PARAMETERS - A method for indexing and printing images on a printing system, the method includes inputting images with metadata into the printing system; by means of the metadata, selectively grouping the images into a plurality of groups by a controller of the printing system; selecting at least one representative image as an index image from each group; selecting an output format; and using the index images to create an index image file corresponding to the selected output format.03-28-2013
20130077821Enhancing Video Using Super-Resolution - A method and apparatus for processing images. A portion of a selected image in which a moving object is present is identified. The selected image is one of a sequence of images. Pixels in a region of interest are identified in the selected image. First values are identified for a first portion of the pixels using the images and first transformations. The first portion of the pixels corresponds to the background in the selected image. A first transformation is configured to align features of the background between one image in the images and the selected image. Second values are identified for a second portion of the pixels using the images and second transformations. The second portion of the pixels corresponds to the moving object in the selected image. A second transformation is configured to align features of the moving object between one image in the images and the selected image.03-28-2013
20130077820MACHINE LEARNING GESTURE DETECTION - A virtual skeleton includes a plurality of joints and provides a machine readable representation of a human subject observed with a sensor such as a depth camera. A gesture detection module is trained via machine learning to identify one or more features of a virtual skeleton and indicate if the feature(s) collectively indicate a particular gesture.03-28-2013
20130077819BUILDING FOOTPRINT EXTRACTION APPARATUS, METHOD AND COMPUTER PROGRAM PRODUCT - A system, method and computer program product cooperate to extract a building footprint from other data associated with a property. Imagery data of real property is input to a computing device, the imagery data containing a plurality of parcels. A processing circuit detects contrasts of candidate man-made structures on a parcel of the plurality of parcels. The candidate man-made structures are then associated with the parcel. A building footprint is then extracted by distinguishing a man-made structure on said parcel from natural terrain, recognizing that man-made structures when viewed from above generally show a strong contrast from background terrain. Remaining candidate man-made structures are removed by observing that they have features inconsistent with predetermined extraction logic.03-28-2013
20130077818DETECTION METHOD OF OPTICAL NAVIGATION DEVICE - A detection method of an optical navigation device is disclosed. The device is used for determining whether an object is lifted from the optical navigation device or not. The method includes steps of reading the detection image detected by the optical navigation device, calculating the image signal value thereof during non-lift status, and integrating a historical threshold value with the image signal value according to adaptive factors for generating an adjustment threshold value serving as the navigation threshold of the detection image. The historical threshold value is the navigation threshold of a former detection image of the detection image. A step of comparing the adjustment threshold with the image signal value for determining whether the image signal value passes the navigation threshold or not may also be included. If the image signal value does not pass the navigation threshold, the object is determined as in the lift status.03-28-2013
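The adaptive-threshold update described above can be sketched as a simple blend of the historical threshold with the current image signal value. The blend weight standing in for the "adaptive factors" is an assumption.

```python
def update_threshold(historical, signal_value, alpha=0.25):
    """New threshold = blend of the previous threshold and current signal."""
    return (1 - alpha) * historical + alpha * signal_value

def is_lifted(signal_value, threshold):
    """Lift is declared when the signal fails to reach the threshold."""
    return signal_value < threshold

t = 100.0
for s in [100, 98, 102, 101]:   # on-surface frames: threshold tracks signal
    t = update_threshold(t, s)
lifted = is_lifted(30.0, t)     # a weak signal fails the navigation threshold
```

Blending with history keeps the threshold stable against per-frame noise while still adapting to gradual changes in surface reflectivity.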
20090268942Methods and apparatus for detection of motion picture piracy for piracy prevention - A copier's camera or camcorder in a motion-picture audience region is detected by illuminating the audience region with invisible infrared light, and locating any copier's camera or camcorder within the audience region by imaging the audience region with one or more infrared-light-sensitive cameras. The image captured by the infrared-sensitive camera(s) during a performance may be correlated with information about the audience region, such as row and seat numbers. Copiers may be identified by their presence at seats where copying activity is detected, and the infrared images may be preserved as evidence of the piracy.10-29-2009
20120207353System And Method For Detecting And Tracking An Object Of Interest In Spatio-Temporal Space - The present invention provides a system and method for detecting and tracking a moving object. First, robust change detection is applied to find initial candidate regions in consecutive frames. These initial detections in consecutive frames are stacked to produce space-time bands, which are extracted by a Hough transform and an entropy-minimization-based band detection algorithm.08-16-2012
20120207349TARGETED CONTENT ACQUISITION USING IMAGE ANALYSIS - A method is provided in which a tag is affixed to a known individual that is to be identified within a known field of view of an image capture system. The tag is a physical tag comprising at least one known feature. Subsequent to affixing the tag to the known individual, image data is captured within the known field of view of the image capture system, which is then provided to a processor. Image analysis is performed on the captured image data to detect the at least one known feature. In dependence upon detecting the at least one known feature, an occurrence of the known individual within the captured image data is identified.08-16-2012
20130039542SITUATIONAL AWARENESS - Police officers are provided with client devices capable of capturing multimedia and streaming multimedia. The client devices can upload captured multimedia to a central server or share streams in real time. A network operation center can review the multimedia in real time or afterwards. Situational awareness is the provision of multimedia to a police officer as that officer approaches the location of an incident. The multimedia may be real time streams as the officer responds to a particular location, or the multimedia may be historical files as the officer familiarizes himself with incidents as he patrols a new neighborhood. Since the client device also reports real time reporting patterns, police officers can review high resolution and fidelity patrolling and incident reports to analyze the efficacy of patrol coverage. Since the client device may run supplementary applications, example applications are disclosed.02-14-2013
20130039533METHODS AND SYSTEMS FOR IMAGE DETECTION - A method is provided for image detection. The method includes measuring a temperature of an analog-to-digital (A/D) converter of an imaging system during an imaging scan of an object, and correcting a gain of the A/D converter based on the measured temperature of the A/D converter.02-14-2013
20130039543STOCK ANALYTIC MONITORING - In selected embodiments video footage is automatically analyzed to determine whether product stock levels at particular product locations are low. Video analytics may be employed to track product removal from shelves and determine approximate quantities of product remaining on each shelf based on product size and dedicated shelf area. In selected implementations an alarm notification is generated to alert store personnel that restocking is appropriate. Such an alarm notification optionally includes a still image of the area corresponding to the alarm together with data related to the product and projected quantities needed to restock the shelf. In some embodiments the system automatically identifies the store personnel who are currently located in areas near where the alarm event occurred and the notification is wirelessly distributed to their mobile devices.02-14-2013
20130039541ROBOT SYSTEM, ROBOT CONTROL DEVICE AND METHOD FOR CONTROLLING ROBOT - A robot system includes a robot having a movable section, an image capture unit provided on the movable section, an output unit that allows the image capture unit to capture a target object and a reference mark and outputs a captured image in which the reference mark is imaged as a locus image, an extraction unit that extracts the locus image from the captured image, an image acquisition unit that performs image transformation on the basis of the extracted locus image by using the point spread function so as to acquire an image after the transformation from the captured image, a computation unit that computes a position of the target object on the basis of the acquired image, and a control unit that controls the robot so as to move the movable section toward the target object in accordance with the computed position.02-14-2013
20130039540INFORMATION PROVIDING DEVICE, INFORMATION PROVIDING PROCESSING PROGRAM, RECORDING MEDIUM HAVING INFORMATION PROVIDING PROCESSING PROGRAM RECORDED THEREON, AND INFORMATION PROVIDING METHOD - There are provided an information providing device, an information providing processing program, and an information providing method which can efficiently recommend information related to a shooting spot matching a user's preference. An information providing server is configured to decide a coincidence between user object information included in image data registered by a given user and representative object information of a location whose position can be specified, and to notify the user of location information associated with the representative object information based on a decision result of the coincidence.02-14-2013
20130039538BALL TRAJECTORY AND BOUNCE POSITION DETECTION - Disclosed in some examples is a method, system and medium relating to determining a ball trajectory and bounce position on a playing surface. An example method includes recording a first and a second sequence of ball images before and after a ball bounce on the playing surface; constructing a composite image of the trajectory of the ball from the first and second sequences; and determining a bounce position of the ball from the composite image.02-14-2013
20130039539Portable Electronic Device - A portable electronic device includes a light source, which includes at least one luminescence diode and emits light during operation. The portable electronic device also includes a device for detecting an object in the beam path of the light emitted by the light source during operation. The device is designed to reduce the luminous flux of the light emitted by the light source during operation if the object is identified for a minimum duration within a minimum distance from the light source in the beam path.02-14-2013
20130039537IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, AND STORAGE MEDIUM - An image processing apparatus includes a character recognition unit configured to perform character recognition of a character region where characters exist in an image to generate character code, a detection unit configured to detect a region of the image where a feature change in the image is small, and a placement unit configured to place data obtained from the character code in the detected region.02-14-2013
20130039536Method and System for Optoelectronic Detection and Location of Objects - Disclosed are methods and systems for optoelectronic detection and location of moving objects. The disclosed methods and systems capture one-dimensional images of a field of view through which objects may be moving, make measurements in those images, select from among those measurements those that are likely to correspond to objects in the field of view, make decisions responsive to various characteristics of the objects, and produce signals that indicate those decisions. The disclosed methods and systems provide excellent object discrimination, electronic setting of a reference point, no latency, high repeatability, and other advantages that will be apparent to one of ordinary skill in the art.02-14-2013
20130039532PARKING LOT INFORMATION SYSTEM USING IMAGE TECHNOLOGY FOR IDENTIFYING AVAILABLE PARKING SPACES - A parking lot information system comprising a digital camera for obtaining an image of parking spaces in the parking lot where each parking space is marked with a visual identifier, a computer coupled to the digital camera for identifying available parking spaces by recognizing the identifiers marking the available parking spaces, and a display coupled to the computer for displaying information on the available parking spaces.02-14-2013
20130039535METHOD AND APPARATUS FOR REDUCING COMPLEXITY OF A COMPUTER VISION SYSTEM AND APPLYING RELATED COMPUTER VISION APPLICATIONS - A method for reducing complexity of a computer vision system and applying related computer vision applications includes: obtaining instruction information, wherein the instruction information is used for a computer vision application; obtaining image data from a camera module and defining at least one region of recognition corresponding to the image data by user gesture input on a touch-sensitive display; outputting a recognition result of the aforementioned at least one region of recognition; and searching at least one database according to the recognition result. Associated apparatus are also provided. For example, the apparatus includes an instruction information generator, a processing circuit, and a database management module, where the instruction information generator obtains the instruction information, and the processing circuit obtains the image data from the camera module, defines the aforementioned at least one region of recognition and outputs a recognition result of the at least one region of recognition.02-14-2013
20130039534MOTION DETECTION METHOD FOR COMPLEX SCENES - A motion detection method for complex scenes has steps of receiving an image frame including a plurality of pixels, each of the pixels including first pixel information; performing a multi-background generation module based on the plurality of pixels; generating a plurality of background pixels based on the multi-background generation module; performing a moving object detection module; and deriving the background pixel based on the moving object detection module.02-14-2013
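The core of any such detector is a per-pixel comparison between the current frame and a background estimate. A minimal frame-differencing sketch is shown below; it is a generic illustration, not the patented multi-background method, and the threshold and images are illustrative.

```python
# Generic frame-differencing sketch (illustrative, not the patented
# multi-background method): a pixel is marked as moving when its absolute
# difference from the background estimate exceeds a threshold.

def detect_motion(background, frame, threshold=30):
    """Return a binary mask: 1 where the frame differs from the background."""
    return [[1 if abs(f - b) > threshold else 0
             for f, b in zip(frow, brow)]
            for frow, brow in zip(frame, background)]

# A 3x3 grayscale example: one bright pixel enters the scene.
background = [[10, 10, 10], [10, 10, 10], [10, 10, 10]]
frame      = [[10, 10, 10], [10, 200, 10], [10, 10, 10]]
mask = detect_motion(background, frame)
```

A multi-background scheme would maintain several such background candidates per pixel and compare the frame against each.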
20130039531METHOD AND APPARATUS FOR CONTROLLING MULTI-EXPERIENCE TRANSLATION OF MEDIA CONTENT - A method or apparatus for controlling a media device using gestures may include, for example, modifying media content to generate first updated media content according to a comparison of first information descriptive of a first environment of the source device to second information descriptive of a second environment of the recipient device, capturing images of a gesture, identifying a command from the gesture, and modifying the first updated media content to generate second updated media content according to the command. Other embodiments are disclosed.02-14-2013
20130044915METHOD AND APPARATUS FOR RECOGNIZING CHARACTERS - A method and an apparatus for recognizing characters using an image are provided. A camera is activated according to a character recognition request and a preview mode is set for displaying an image photographed through the camera in real time. The auto focus of the camera is controlled and an image having a predetermined level of clarity is obtained for character recognition from the images obtained in the preview mode. The image for character recognition is character-recognition-processed so as to extract recognition result data. A final recognition character row is derived that excludes non-character data from the recognition result data. A first word is formed that includes at least one character of the final recognition character row, up to a predetermined maximum number of characters. A dictionary database that stores dictionary information on various languages is searched using the first word, so as to provide the user with the corresponding word.02-21-2013
20130044916METHOD AND APPARATUS OF PUSH & PULL GESTURE RECOGNITION IN 3D SYSTEM - The present invention provides a method and apparatus for PUSH & PULL gesture recognition in a 3D system. The method comprises determining whether the gesture is PUSH or PULL as a function of the distances from the object performing the gesture to the cameras and the characteristics of the moving traces of the object in the image planes of the two cameras.02-21-2013
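The distance-based part of such a classification can be sketched very simply: if the estimated object-to-camera distance shrinks over the gesture, it is a PUSH; if it grows, a PULL. This is a hedged simplification; the patent's trace-characteristic analysis in the two image planes is not reproduced, and the distance values are illustrative.

```python
# Simplified PUSH/PULL classification from a time series of estimated
# object-to-camera distances (illustrative only; the patented method also
# analyzes moving traces in the two cameras' image planes).

def classify_push_pull(distances):
    """PUSH if the object ends closer to the camera than it started,
    PULL if it ends farther away, NONE otherwise."""
    delta = distances[-1] - distances[0]
    if delta < 0:
        return "PUSH"
    if delta > 0:
        return "PULL"
    return "NONE"

gesture = classify_push_pull([80.0, 72.5, 61.0, 50.2])  # object approaching
```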
20130044912USE OF ASSOCIATION OF AN OBJECT DETECTED IN AN IMAGE TO OBTAIN INFORMATION TO DISPLAY TO A USER - Camera(s) capture a scene, including an object that is portable. An image of the scene is processed to segment therefrom a portion corresponding to the object, which is then identified from among a set of predetermined real world objects. An identifier of the object is used, with a set of associations between object identifiers and user identifiers, to obtain a user identifier that identifies a user at least partially from among a set of users. Specifically, the user identifier may identify a group of users that includes the user (“weak identification”) or alternatively the user identifier may identify the user uniquely (“strong identification”) in the set. The user identifier is used either alone or in combination with user input to obtain and store in memory, information to be output to the user. At least a portion of the obtained information is thereafter output, e.g. displayed by projection into the scene.02-21-2013
20130044914METHODS FOR DETECTING AND RECOGNIZING A MOVING OBJECT IN VIDEO AND DEVICES THEREOF - A method, non-transitory computer readable medium, and apparatus that extracts at least one key image from one or more images of an object. Outer boundary markers for an identifier of the object in the at least one key image are detected. An identification sequence from the identifier of the object between the outer boundary markers in the at least one key image is recognized. The recognized identification sequence of the object in the at least one key image is provided.02-21-2013
20130044913Plane Detection and Tracking for Structure from Motion - Plane detection and tracking algorithms are described that may take point trajectories as input and provide as output a set of inter-image homographies. The inter-image homographies may, for example, be used to generate estimates for 3D camera motion, camera intrinsic parameters, and plane normals using a plane-based self-calibration algorithm. A plane detection and tracking algorithm may obtain a set of point trajectories for a set of images (e.g., a video sequence, or a set of still photographs). A 2D plane may be detected from the trajectories, and trajectories that follow the 2D plane through the images may be identified. The identified trajectories may be used to compute a set of inter-image homographies for the images as output.02-21-2013
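The role of an inter-image homography can be illustrated with a small sketch: a point trajectory "follows" a 2D plane if mapping each point through the homography lands near the corresponding point in the next image. The 3x3 matrix, points, and tolerance below are hypothetical; the detection and self-calibration algorithms themselves are not shown.

```python
# Sketch of applying an inter-image homography H (3x3) to a 2-D point in
# homogeneous coordinates, and of testing whether a trajectory follows the
# plane. H, the points, and the tolerance are illustrative assumptions.

def apply_homography(H, pt):
    """Map (x, y) through H and dehomogenize."""
    x, y = pt
    xh = H[0][0] * x + H[0][1] * y + H[0][2]
    yh = H[1][0] * x + H[1][1] * y + H[1][2]
    w  = H[2][0] * x + H[2][1] * y + H[2][2]
    return (xh / w, yh / w)

def follows_plane(H, pt, next_pt, tol=1.0):
    """True if H predicts the next trajectory point within tol pixels."""
    px, py = apply_homography(H, pt)
    return abs(px - next_pt[0]) <= tol and abs(py - next_pt[1]) <= tol

# A pure-translation homography: every plane point shifts by (2, 3).
H = [[1, 0, 2],
     [0, 1, 3],
     [0, 0, 1]]
on_plane = follows_plane(H, (1.0, 1.0), (3.0, 4.0))
```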
20090268945ARCHITECTURE FOR CONTROLLING A COMPUTER USING HAND GESTURES - Architecture for implementing a perceptual user interface. The architecture comprises alternative modalities for controlling computer application programs and manipulating on-screen objects through hand gestures or a combination of hand gestures and verbal commands. The perceptual user interface system includes a tracking component that detects object characteristics of at least one of a plurality of objects within a scene, and tracks the respective object. Detection of object characteristics is based at least in part upon image comparison of a plurality of images relative to a coarse mapping of the images. A seeding component iteratively seeds the tracking component with object hypotheses based upon the presence of the object characteristics and the image comparison. A filtering component selectively removes the tracked object from the object hypotheses and/or at least one object hypothesis from the set of object hypotheses based upon predetermined removal criteria.10-29-2009
20130136299METHOD AND APPARATUS FOR RECOVERING DEPTH INFORMATION OF IMAGE - An image processing apparatus and method may estimate binocular disparity maps of middle views from among a plurality of views through use of images of the plurality of views. The image processing apparatus may detect a moving object from the middle views based on the binocular disparity maps of the frames. Pixels in the middle views may be separated into dynamic pixels and static pixels through detection of the moving object. The image processing apparatus may apply bundle optimization and a local three-dimensional (3D) line model-based temporal optimization to the middle views so as to enhance binocular disparity values of the static pixels and dynamic pixels.05-30-2013
20130188836METHOD AND APPARATUS FOR PROVIDING HAND DETECTION - A method for providing hand detection may include receiving feature transformed image data for a series of image frames, determining asymmetric difference data indicative of differences between feature transformed image data of a plurality of frames of the series of image frames and a reference frame, and determining a target area based on an intersection of the asymmetric difference data. An apparatus and computer program product corresponding to the method are also provided.07-25-2013
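The intersection step can be sketched directly: a pixel belongs to the target area only if it appears changed (relative to the reference frame) in every compared frame, i.e. the logical AND of the per-frame difference masks. The masks below are illustrative, not real feature-transformed data.

```python
# Sketch of the intersection of asymmetric difference data: the target
# area is where all binary difference masks agree (1 = changed relative to
# the reference frame). Mask contents are illustrative.

def intersect_masks(masks):
    """Logical AND of several equally sized binary masks."""
    result = masks[0]
    for mask in masks[1:]:
        result = [[a & b for a, b in zip(r1, r2)]
                  for r1, r2 in zip(result, mask)]
    return result

diff_a = [[1, 1, 0], [0, 1, 0]]
diff_b = [[1, 0, 0], [0, 1, 1]]
target = intersect_masks([diff_a, diff_b])
```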
20080267452APPARATUS AND METHOD OF DETERMINING SIMILAR IMAGE - An apparatus of determining a similar image contains a subject-region-detecting unit that detects a subject region from a received image, a pixel-value-distribution-generating unit that generates pixel value distribution of pixels included in the subject region detected by the subject-region-detecting unit, and a determination unit that determines whether or not an image relative to the subject region is similar to a previously registered subject image based on the pixel value distribution generated by the pixel-value-distribution-generating unit and a registered pixel value distribution of the previously registered subject image.10-30-2008
20130188826IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, AND PROGRAM - Provided is an image processing apparatus including a moving object detection unit configured to detect a moving object which is an image different from a background in a current image, a temporary pause determination unit configured to determine whether the moving object is paused for a predetermined time period or more, a reliability processing unit configured to calculate non-moving object reliability for a pixel of the current image using the current image and a temporarily paused image including a temporarily paused object serving as the moving object which is paused for a predetermined time period or more, the non-moving object reliability representing likelihood of being a non-moving object which is an image different from the background that does not change for a predetermined time period or more, and a non-moving object detection unit configured to detect the non-moving object from the current image based on the non-moving object reliability.07-25-2013
20100104134Interaction Using Touch and Non-Touch Gestures - A computer interface may use touch- and non-touch-based gesture detection systems to detect touch and non-touch gestures on a computing device. The systems may each capture an image, and interpret the image as corresponding to a predetermined gesture. The systems may also generate similarity values to indicate the strength of a match between a captured image and corresponding gesture, and the system may combine gesture identifications from both touch- and non-touch-based gesture identification systems to ultimately determine the gesture. A threshold comparison algorithm may be used to apply different thresholds for different gesture detection systems and gesture types.04-29-2010
20100104136METHOD AND APPARATUS FOR DETECTING THE PLACEMENT OF A GOLF BALL FOR A LAUNCH MONITOR - A novel method and apparatus for detecting the placement of a golf ball for a launch monitor is disclosed. The method comprises capturing an image of a scan zone that is adjacent to the launch monitor and in the field of view of the launch monitor's image sensor, analyzing the scan zone image for the placement of an object, and determining if the object is likely the golf ball. An apparatus is also disclosed that implements the golf ball detection method.04-29-2010
20090154770Moving Amount Calculation System and Obstacle Detection System - An arithmetic device (06-18-2009
20090154768METHOD OF MOTION DETECTION AND AUTONOMOUS MOTION TRACKING USING DYNAMIC SENSITIVITY MASKS IN A PAN-TILT CAMERA - A method of identifying motion within a field of view includes capturing at least two sequential images within the field of view. Each of the images includes a respective array of pixel values. An array of difference values between corresponding ones of the pixel values in the sequential images is calculated. A sensitivity region map corresponding to the field of view is provided. The sensitivity region map includes a plurality of regions having different threshold values. A presence of motion is determined by comparing the difference values to corresponding ones of the threshold values.06-18-2009
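The sensitivity-map step can be sketched as a per-pixel comparison in which each difference value is tested against the threshold of the region it falls in, so different parts of the field of view have different sensitivities. The region map and thresholds below are illustrative, not the patented dynamic masks.

```python
# Sketch of sensitivity-masked differencing: each difference value is
# compared against the threshold of its region in the sensitivity map.
# The map and thresholds are illustrative assumptions.

def masked_motion(diff, region_map, thresholds):
    """Return 1 where a difference value exceeds its region's threshold."""
    return [[1 if d > thresholds[r] else 0
             for d, r in zip(drow, rrow)]
            for drow, rrow in zip(diff, region_map)]

diff       = [[5, 40], [40, 5]]
region_map = [[0, 0], [1, 1]]   # top row: region 0, bottom row: region 1
thresholds = {0: 10, 1: 50}     # region 1 is deliberately less sensitive
motion = masked_motion(diff, region_map, thresholds)
```

Note how the same difference value (40) triggers motion in region 0 but not in region 1.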
20090141937Subject Extracting Method, Subject Tracking Method, Image Synthesizing Method, Computer Program for Extracting Subject, Computer Program for Tracking Subject, Computer Program for Synthesizing Images, Subject Extracting Device, Subject Tracking Device, and Image Synthesizing Device - A binary mask image for extracting a subject is generated by binarizing an image after image processing (the processed image) with a predefined threshold value. Based on an image before image processing (the pre-processing image) and the binary mask image for subject extraction, a subject image in which only a subject included in the pre-processing image is extracted is generated by eliminating a background region from the pre-processing image.06-04-2009
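The two steps in the abstract, binarizing the processed image into a mask and then keeping only the pre-processing pixels under that mask, can be sketched as follows. The threshold and tiny images are illustrative assumptions.

```python
# Sketch of mask-based subject extraction: binarize the processed image,
# then blank out pre-processing pixels outside the mask. Threshold and
# image values are illustrative.

def binarize(image, threshold):
    """Binary mask: 1 where a pixel meets the threshold, else 0."""
    return [[1 if p >= threshold else 0 for p in row] for row in image]

def extract_subject(pre_image, mask, background=0):
    """Keep pre-image pixels where the mask is set; erase the rest."""
    return [[p if m else background for p, m in zip(prow, mrow)]
            for prow, mrow in zip(pre_image, mask)]

processed = [[12, 240], [250, 8]]
pre_image = [[90, 130], [140, 70]]
mask = binarize(processed, 128)
subject = extract_subject(pre_image, mask)
```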
20120183176PERFORMING REVERSE TIME IMAGING OF MULTICOMPONENT ACOUSTIC AND SEISMIC DATA - A technique includes performing reverse time imaging to determine an image in a region of interest. The reverse time imaging includes modeling a pressure wavefield and a gradient wavefield in the region of interest based at least in part on particle motion data and pressure data acquired by sensors in response to energy being produced by at least one source.07-19-2012
20120183178METHOD AND DEVICE FOR RECOGNITION OF INFORMATION APPLIED ON PACKAGES - Embodiments describe a system and method for reading the information on bundled packages wrapped in transparent film. The film can obscure information on the outside of the packages making the automated identification and tracking of the packages difficult. Embodiments described herein provide a system and method for capturing the unique information regardless of the obscuring effects of packaging films. A camera that is insensitive to UV light captures visible light emitted by labels after the labels are irradiated by UV light. The light emission induces greater contrast overcoming any distortion that might have occurred due to the transparent packaging film.07-19-2012
20120183177IMAGE SURVEILLANCE SYSTEM AND METHOD OF DETECTING WHETHER OBJECT IS LEFT BEHIND OR TAKEN AWAY - An image surveillance system and a method of detecting whether an object is left behind or taken away are provided. The image surveillance system includes: a foreground detecting unit which detects a foreground region based on a pixel information difference between a background image and a current input image; a still region detecting unit which detects a candidate still region by clustering foreground pixels of the foreground region, and determines whether the candidate still region is a falsely detected still region or a true still region; and an object detecting unit which determines whether an object is left behind or taken away, based on edge information about the true still region.07-19-2012
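The clustering step can be sketched as grouping foreground pixels into connected components, each of which becomes a candidate still region. The sketch below uses 4-connectivity and an illustrative mask; the true/false still-region test and the edge-based left-behind/taken-away decision are not reproduced.

```python
# Sketch of foreground clustering: group foreground pixels (4-connectivity)
# into connected components; each component is a candidate still region.
# The mask is illustrative; the patented still-region tests are not shown.

def connected_components(mask):
    """Return a list of components, each a set of (row, col) pixels."""
    h, w = len(mask), len(mask[0])
    seen, components = set(), []
    for r in range(h):
        for c in range(w):
            if mask[r][c] and (r, c) not in seen:
                stack, comp = [(r, c)], set()
                seen.add((r, c))
                while stack:
                    y, x = stack.pop()
                    comp.add((y, x))
                    for ny, nx in ((y - 1, x), (y + 1, x),
                                   (y, x - 1), (y, x + 1)):
                        if (0 <= ny < h and 0 <= nx < w and mask[ny][nx]
                                and (ny, nx) not in seen):
                            seen.add((ny, nx))
                            stack.append((ny, nx))
                components.append(comp)
    return components

foreground = [[1, 1, 0],
              [0, 0, 0],
              [0, 0, 1]]
regions = connected_components(foreground)  # two candidate still regions
```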
20120183175METHOD FOR IDENTIFYING A SCENE FROM MULTIPLE WAVELENGTH POLARIZED IMAGES - Techniques for identifying images of a scene including illuminating the scene with a beam of 3 or more wavelengths, polarized according to a determined direction; simultaneously acquiring for each wavelength an image X07-19-2012
20120213405MOVING OBJECT DETECTION APPARATUS - A moving object detection apparatus generates frame difference image data each time frame data is captured, based on the captured frame data and previous frame data, and such frame difference image data is divided into pixel blocks. Subsequently, a discrete cosine transformation (DCT) is performed for each of the pixel blocks and a two-dimensional DCT coefficient is calculated, and such two-dimensional DCT coefficients are accumulated and stored. The value of each element of the two-dimensional DCT coefficient is arranged to form a characteristic vector, and, for each of the pixel blocks at the same position of the frame difference image data, the characteristic vector is generated and then such characteristic vectors are arranged to form a time-series vector. The time-series vector derived from moving-object-capturing pixel blocks is used to calculate a principal component vector and a principal component score.08-23-2012
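The per-block transform can be sketched with a naive orthonormal 2-D DCT-II. The normalization is an assumption (the abstract does not specify one), and the uniform 4x4 difference block is illustrative; for a constant block all energy lands in the DC coefficient.

```python
import math

# Naive orthonormal 2-D DCT-II for a single pixel block (an illustrative
# re-implementation; the patent does not fix a normalization convention).

def dct2(block):
    n = len(block)
    def a(k):
        return math.sqrt(1.0 / n) if k == 0 else math.sqrt(2.0 / n)
    out = [[0.0] * n for _ in range(n)]
    for u in range(n):
        for v in range(n):
            s = 0.0
            for x in range(n):
                for y in range(n):
                    s += (block[x][y]
                          * math.cos(math.pi * (2 * x + 1) * u / (2 * n))
                          * math.cos(math.pi * (2 * y + 1) * v / (2 * n)))
            out[u][v] = a(u) * a(v) * s
    return out

# A 4x4 difference block that is uniformly 3: all energy lands in the DC
# coefficient out[0][0] (= 3 * 4 under this normalization), and every AC
# coefficient is zero.
coeffs = dct2([[3] * 4 for _ in range(4)])
```

Flattening such coefficient arrays per block, frame after frame, yields the time-series vectors the abstract feeds into principal component analysis.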
20130083970IMAGE PROCESSING - Apparatus and method for processing a sequence of images of a scene, the method including: tracking a region of interest in the sequence of images (e.g. using a Self Adaptive Discriminant filter); selecting a particular image in the sequence; selecting a set of images from the sequence, the set having one or more images that precede the particular image in the sequence of images; for each pixel in the region of interest in the particular image, determining a value for a parameter; for each pixel in the region of interest of each image in the set of images, determining a value for the parameters; and comparing a function of the determined values for the region of interest in the particular image to a further function of the determined values for the regions of interest in the images in the set of images.04-04-2013
20130083969COLOR IMAGE PROCESSING METHOD, COLOR IMAGE PROCESSING DEVICE, AND COLOR IMAGE PROCESSING PROGRAM - An object area detection means detects an object area which is an area to be subjected to image processing from an input image. A reflection component reconstruction means calculates color information of the object area and a perfect diffusion component, which is a low-frequency component of the object area, and reconstructs a surface reflection component based on the color information and the low-frequency component. A surface reflection component correction means corrects the reconstructed surface reflection component according to a reference surface reflection component that is the surface reflection component set in advance according to the object area. A reproduced color calculation means calculates a reproduced color that is a color obtained by correcting each pixel included in the input image by using the perfect diffusion component and the corrected surface reflection component and generates an output image based on the reproduced color.04-04-2013
20130083968VEHICLE PERIPHERY MONITORING DEVICE - A vehicle periphery monitoring device includes: a first edge image generation element 04-04-2013
20130083966Match, Expand, and Filter Technique for Multi-View Stereopsis - In accordance with one or more aspects of a match, expand, and filter technique for multi-view stereopsis, features across multiple images of an object are matched to obtain a sparse set of patches for the object. The sparse set of patches is expanded to obtain a dense set of patches for the object, and the dense set of patches is filtered to remove erroneous patches. Optionally, reconstructed patches can be converted into 3D mesh models.04-04-2013
20130083965APPARATUS AND METHOD FOR DETECTING OBJECT IN IMAGE - An apparatus and method detect an object in an original image captured by an image capturing device. The apparatus and method detect a location of the object using a thermal image for the captured image, designate a region of the detected object as an image inpainting region, restore a region corresponding to the region of the detected object using its surrounding information, examine a difference between the restored image and the original image, and separate an object region from the original image, thereby more accurately detecting the object.04-04-2013
20130083963ELECTRONIC CAMERA - An electronic camera includes an imager. The imager repeatedly outputs an image representing a scene captured on an imaging surface. A searcher searches for a specific object image from the image outputted from the imager by executing a plurality of comparing processes respectively corresponding to a plurality of postures possibly taken by the imager in a direction around an axis orthogonal to the imaging surface. An executer executes a processing operation different depending on a search result of the searcher. A recorder repeatedly records the image outputted from the imager in parallel with a process of the imager. A restrictor executes a restricting process of restricting the comparing process executed by the searcher to any one of the plurality of comparing processes, in association with a process of the recorder.04-04-2013
20130083967System and Method for Extracting Features in a Medium from Data Having Spatial Coordinates - Systems and methods are provided for extracting various features from data having spatial coordinates. Based on a few known data points in a point cloud, other data points can be interpolated for a given parameter using probabilistic methods, thereby generating a greater number of data points. Using the greater number of data points, a Boolean function, related in part to the given parameter, can be used to extract more detailed features. Based on the Boolean values, a shape of a body having the characteristic(s) defined by the Boolean function can be constructed in a layered manner. The extraction of the features may be carried out automatically by a computing device.04-04-2013
20130083964METHOD AND SYSTEM FOR THREE DIMENSIONAL MAPPING OF AN ENVIRONMENT - A three-dimensional modeling system includes a multi-axis range sensor configured to capture a first set of three-dimensional data representing characteristics of objects in an environment; a data sensor configured to capture a first set of sensor data representing distances between at least a subset of the objects and the data sensor; a computer-readable memory configured to store each of the first set of three-dimensional data and the first set of sensor data; a mobile base; a processor; and a computer-readable medium containing programming instructions configured to, when executed, instruct the processor to process the first set of three-dimensional data and the first set of sensor data to generate a three-dimensional model of the environment.04-04-2013
20130083962IMAGE PROCESSING APPARATUS - An image processing apparatus includes a definer. The definer defines a target image on a designated image. A first detector detects a degree of overlapping between the target image and a first specific object image appearing on the designated image. A second detector detects a degree of overlapping between the target image and a second specific object image appearing on the designated image. A modifier modifies the target image when the degree of overlapping detected by the first detector falls below a first reference or the degree of overlapping detected by the second detector is equal to or more than a second reference. A restrictor restricts a process of the modifier when the degree of overlapping detected by the first detector is equal to or more than the first reference and the degree of overlapping detected by the second detector falls below the second reference.04-04-2013
20130083961IMAGE INFORMATION PROCESSING APPARATUS AND IMAGE INFORMATION PROCESSING METHOD - According to one embodiment, a viewer image processing module detects facial image data on a viewer from a shot image signal obtained by shooting the viewer, a viewed program image processing module detects facial image data on a performer included in program data the viewer is viewing, and a synchronous control module creates viewer information that correlates facial image data on the performer, facial image data on the viewer, and program information on the program with one another and transmits the viewer information to a viewing data entry module.04-04-2013
20130083960FUNCTION-CENTRIC DATA SYSTEM - Various embodiments of the invention provide a function-centric data system that reduces avionics system weight and power requirements. In some embodiments, the function-centric data system is housed in a vibration-resistant package. A variety of functions typically performed by other avionics systems are incorporated into the system, allowing centralized power and processing management, reducing weight, and improving system reliability. In some embodiments, the function-centric data system is configured to provide high-rate data sampling, allowing ground stations to apply sophisticated failure-prediction algorithms, reducing maintenance costs and mean time between flights. Embodiments include methods of wireless networking with automatic hand-offs and adaptive multi-hop topologies to allow this data to be promptly transferred when the aircraft lands. Embodiments also include methods for data processing to predict imminent failures using Bayesian statistics and catastrophe prediction methods.04-04-2013
20130083959Multi-Modal Sensor Fusion - A method and apparatus for processing images. A sequence of images for a scene is received from an imaging system. An object in the scene is detected using the sequence of images. A viewpoint of the imaging system is registered to a model of the scene using a region in the model of the scene in which an expected behavior of the object is expected to occur.04-04-2013
20100142758Method for Providing Photographed Image-Related Information to User, and Mobile System Therefor - System for providing a mobile user with object-related information about an object visible thereto, the system including a camera directable toward the object, a local interest points and semi global geometry (LIPSGG) extraction processor, and a remote LIPSGG identifier, the camera acquiring an image of at least a portion of the object, the LIPSGG extraction processor being coupled with the camera, the LIPSGG extraction processor extracting an LIPSGG model of the object from the image, the remote LIPSGG identifier being coupled with the LIPSGG extraction processor via a network, the remote LIPSGG identifier receiving the LIPSGG model from the LIPSGG extraction processor via the network, the remote LIPSGG identifier identifying the object according to the LIPSGG model, the remote LIPSGG identifier retrieving the object-related information, the remote LIPSGG identifier providing the object-related information to the mobile user operating the camera.06-10-2010
20100329508Detecting Ground Geographic Features in Images Based on Invariant Components - Systems, devices, features, and methods for detecting geographic features in images, such as, for example, to develop a navigation database are disclosed. For example, a method of detecting a path marking from collected images includes collecting a plurality of images of geographic areas along a path. An image of the plurality of images is selected. Components that represent an object on the path in the selected image are determined. In one embodiment, the determined components are independent or invariant to scale of the object. The determined components are compared to reference components in a data library. If the determined components substantially meet a matching threshold with the reference components, the object in the selected image is identified to be a path marking corresponding to the reference components in the data library.12-30-2010
20130089237SENSORS AND SYSTEMS FOR THE CAPTURE OF SCENES AND EVENTS IN SPACE AND TIME - Various embodiments comprise apparatuses and methods including a light sensor. The light sensor includes a first electrode, a second electrode, a third electrode, and a light-absorbing semiconductor in electrical communication with each of the first electrode, the second electrode, and the third electrode. A light-obscuring material to substantially attenuate an incidence of light onto a portion of the light-absorbing semiconductor is disposed between the second electrode and the third electrode. An electrical bias is to be applied between the second electrode and the first and third electrodes, and a current flowing through the second electrode is related to the light incident on the light sensor. Additional methods and apparatuses are described.04-11-2013
20130089236Iris Recognition Systems - The present invention concerns a method for capturing an image of an iris free of specularities from a spectacle-wearing user for use in an iris recognition identification system, which includes an illumination source and an image capture device. The method comprises illuminating the user's eye from a first illumination position associated with a first optical path, and capturing a first image of the eye; and determining if the first image comprises a specular image in a first region of interest, the specular image being formed by light reflected from the spectacles. If a specular image is present, the method further comprises illuminating the eye from a second illumination position associated with a second optical path different to the first optical path, such that the specular image is shifted to a second region; and capturing a second image of the eye.04-11-2013
20130089234TRAJECTORY INTERPOLATION APPARATUS AND METHOD - A trajectory interpolation apparatus is disclosed. A first storage part stores a first time and first location information of a movable body at the first time. A second storage part stores a second time and second location information of the movable body at the second time. A calculation part calculates a first moving distance from the first time and a second moving distance from the second time, based on a relationship between time and speed stored in the second storage part, for a third time between the first time and the second time. A determination part determines, as the interpolation point, one of the intersection points of a circle whose center is the first location and whose radius is the first moving distance, and another circle whose center is the second location and whose radius is the second moving distance.04-11-2013
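The geometric core of this abstract, intersecting the two circles defined by the stored locations and moving distances, can be sketched with the standard two-circle intersection formula. The locations and radii below are illustrative; the speed integration that produces the radii is not reproduced.

```python
import math

# Sketch of the interpolation-point step: intersect the circle centered at
# the first location (radius = first moving distance) with the circle
# centered at the second location (radius = second moving distance).
# Inputs are illustrative; choosing between the two candidates is not shown.

def circle_intersections(c1, r1, c2, r2):
    """Return the 0, 1, or 2 intersection points of two circles, sorted."""
    (x1, y1), (x2, y2) = c1, c2
    d = math.hypot(x2 - x1, y2 - y1)
    if d == 0 or d > r1 + r2 or d < abs(r1 - r2):
        return []                       # no intersection (or same center)
    a = (r1 * r1 - r2 * r2 + d * d) / (2 * d)
    h = math.sqrt(max(r1 * r1 - a * a, 0.0))
    mx = x1 + a * (x2 - x1) / d         # foot of the chord on the center line
    my = y1 + a * (y2 - y1) / d
    ox = h * (y2 - y1) / d              # offset along the perpendicular
    oy = h * (x2 - x1) / d
    return sorted({(mx + ox, my - oy), (mx - ox, my + oy)})

# First location (0, 0) with moving distance 5; second location (6, 0)
# with moving distance 5: the candidate interpolation points are (3, +/-4).
candidates = circle_intersections((0, 0), 5.0, (6, 0), 5.0)
```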
20130089235MOBILE APPARATUS AND METHOD FOR CONTROLLING THE SAME - A method of controlling a mobile apparatus includes acquiring a first original image and a second original image, extracting a first feature point of the first original image and a second feature point of the second original image, generating a first blurring image and a second blurring image by blurring the first original image and the second original image, respectively, calculating a similarity between at least two images of the first original image, the second original image, the first blurring image, and the second blurring image, determining a change in scale of the second original image based on the calculated similarity, and controlling at least one of an object recognition and a position recognition by matching the second feature point of the second original image to the first feature point of the first original image based on the change in scale.04-11-2013
20130051622Method For Calculating Weight Ratio By Quality Grade In Grain Appearance Quality Grade Discrimination Device - A method is provided for calculating a weight ratio by quality grade using a grain appearance quality grade discrimination device. The method involves the steps of imaging a plurality of grains; discriminating the quality grade of the grains on the basis of data of the imaged grains; tallying, by quality grade, the number of pixels in said data of the imaged grains with regard to the grains whose quality grade has been discriminated; multiplying the number of pixels tallied by quality grade by a weight conversion coefficient per pixel predetermined by quality grade, thereby converting said number of pixels into a weight by quality grade; and calculating the weight ratio by quality grade of the grains on the basis of the weight by quality grade.02-28-2013
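The conversion step in this abstract is plain arithmetic: per-grade pixel counts are multiplied by per-grade weight coefficients and the resulting weights are normalized into a ratio. The grade names, counts, and coefficients below are illustrative assumptions.

```python
# Sketch of the pixels-to-weight-ratio conversion: per-grade pixel counts
# times per-grade weight coefficients, normalized to a ratio. All values
# are illustrative.

def weight_ratio_by_grade(pixel_counts, weight_per_pixel):
    """Return each grade's share of the total converted weight."""
    weights = {g: pixel_counts[g] * weight_per_pixel[g] for g in pixel_counts}
    total = sum(weights.values())
    return {g: w / total for g, w in weights.items()}

pixel_counts     = {"grade_1": 6000, "grade_2": 3000}
weight_per_pixel = {"grade_1": 0.01, "grade_2": 0.02}   # e.g. mg per pixel
ratios = weight_ratio_by_grade(pixel_counts, weight_per_pixel)
```

Here grade 2 grains cover half as many pixels but weigh twice as much per pixel, so the two grades end up with equal weight shares.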
20130051621ADAPTIVE IMAGE ACQUISITION AND PROCESSING WITH IMAGE ANALYSIS FEEDBACK - Described are systems, methods, computer programs, and user interfaces for image location, acquisition, analysis, and data correlation that use human-in-the-loop processing, Human Intelligence Tasks (HIT), and/or automated image processing. Results obtained using image analysis are correlated to non-spatial information useful for commerce and trade. For example, images of regions of interest of the earth are used to count items (e.g., cars in a store parking lot to predict store revenues), detect events (e.g., unloading of a container ship, or evaluating the completion of a construction project), or quantify items (e.g., the water level in a reservoir, the area of a farming plot).02-28-2013
20130051618Method for controlling a light emission of a headlight of a vehicle - A method for controlling a light emission of at least one headlight of a vehicle, which has a traffic sign recognition device. The method includes receiving at least one traffic sign recognition signal from an interface to the traffic sign recognition device. In this instance, the at least one traffic sign recognition signal represents a traffic sign recognized in a course of the road currently being traveled by the vehicle. The method also includes setting a debounce time and/or a debounce stretch for a change in the light emission of the at least one headlight between first and second radiation characteristics as a function of the at least one traffic sign recognition signal. Finally, the method includes delaying the change in the light emission of the at least one headlight by the debounce time set and/or the debounce stretch set, to control light emission of the at least one headlight.02-28-2013
20130051617METHOD FOR SENSING MOTION AND DEVICE FOR IMPLEMENTING THE SAME - A method for sensing a motion of an object is to be implemented by a motion recognition device that includes an image acquiring unit and a processor. In the method, the image acquiring unit is configured to acquire a series of image frames by detecting intensity of light received thereby. The processor is configured to receive at least one of the image frames and to determine whether an object is detected in the at least one of the image frames. When an object is detected, the processor is further configured to receive the image frames from the image acquiring unit, and to determine a motion of the object with respect to a three-dimensional coordinate system according to the image frames thus received.02-28-2013
20130051614SIGN LANGUAGE RECOGNITION SYSTEM AND METHOD - A sign language recognition method includes a depth-sensing camera capturing an image of a gesture of a signer and gathering data about distances between a number of points on the signer and the depth-sensing camera, building a three-dimensional (3D) model of the gesture, comparing the 3D model of the gesture with a number of 3D models of different gestures to determine the representations of the 3D model of the gesture, and displaying or vocalizing the representations of the 3D model of the gesture.02-28-2013
20130051613MODELING OF TEMPORARILY STATIC OBJECTS IN SURVEILLANCE VIDEO DATA - A foreground object blob having a bounding box detected in frame image data is classified by a finite state machine as a background, moving foreground, or temporally static object, namely as the temporally static object when the detected bounding box is distinguished from a background model of a scene image of the video data input and remains static in the scene image for a threshold period. The bounding box is tracked through matching masks in subsequent frame data of the video data input, and the object sub-classified within a visible sub-state, an occluded sub-state, or another sub-state that is not visible and not occluded as a function of a static value ratio. The ratio is a number of pixels determined to be static by tracking in a foreground region of the background model corresponding to the tracked object bounding box over a total number of pixels of the foreground region.02-28-2013
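The static value ratio and the sub-state mapping might be sketched as below; the threshold values and the direction of the mapping (high ratio meaning the static object is still visible) are assumptions for illustration, not taken from the patent:

```python
def static_value_ratio(static_flags):
    """static_flags: per-pixel booleans over the foreground region of the
    tracked bounding box, True where tracking marked the pixel static.
    Returns static pixels / total pixels in the region."""
    return sum(static_flags) / len(static_flags)

def classify_sub_state(ratio, hi=0.8, lo=0.2):
    """Illustrative mapping: mostly-static pixels -> object visible;
    few static pixels -> object largely covered (occluded);
    otherwise the not-visible-and-not-occluded sub-state."""
    if ratio >= hi:
        return "visible"
    if ratio <= lo:
        return "occluded"
    return "other"
```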
20130051612SEGMENTING SPATIOTEMPORAL DATA BASED ON USER GAZE DATA - A segmentation task is specified to a user, and gaze data generated by monitoring eye movements of the user viewing spatiotemporal data as a plurality of frames is received. The gaze data includes fixation locations based on the user's gaze throughout the frames. A first frame and a second frame of the frames are selected based on the fixation locations. Segmentation is performed on the first and second frames to segment first and second objects, respectively, from the first and second frames based on a region of interest associated with the first and second frames, the region of interest corresponding to a location of one of the fixation locations. A determination is made as to whether the first and second objects are relevant to the segmentation task, and if so, association data associating the first object with the second object is generated.02-28-2013
20130051611IMAGE OVERLAYING AND COMPARISON FOR INVENTORY DISPLAY AUDITING - Image overlaying and comparison for inventory display auditing is disclosed herein. An example method to perform inventory display auditing disclosed herein comprises overlaying a reference image over a current image displayed on a camera display, the reference image corresponding to an inventory display to be audited, comparing the reference image and the current image to determine whether the current image and the reference image correspond to a same scene and when the reference image and the current image are determined to correspond to the same scene, indicating a difference region in the current image displayed on the camera display, the difference region being a first region of the current image that differs from a corresponding first region of the reference image.02-28-2013
20100135527Image recognition algorithm, method of identifying a target image using same, and method of selecting data for transmission to a portable electronic device - An image recognition algorithm includes a keypoints-based comparison and a region-based color comparison. A method of identifying a target image using the algorithm includes: receiving an input at a processing device, the input including data related to the target image; performing a retrieving step including retrieving an image from an image database, and, until the image is either accepted or rejected, designating the image as a candidate image; performing an image recognition step including using the processing device to perform an image recognition algorithm on the target and candidate images in order to obtain an image recognition algorithm output; and performing a comparison step including: if the image recognition algorithm output is within a pre-selected range, accepting the candidate image as the target image; and if the image recognition algorithm output is not within the pre-selected range, rejecting the candidate image and repeating the retrieving, image recognition, and comparison steps.06-03-2010
20090324011METHOD OF DETECTING MOVING OBJECT - Proposed is a method of detecting a moving object, including: providing an image-set at least including a first image and a second image correlated in a time series, the first image preceding the second image; defining a detecting region and a detecting direction so as to construct a virtual gate in the first image; estimating the motion vector in a time series; comparing, by the virtual gate, the second image with the first image so as to determine a difference therebetween in terms of an object's position and motion vector; and retrieving the object to be an effective moving object upon determination of the object as lying within the detecting region defined in the virtual gate and moving in a direction substantially the same as the detecting direction. This invention presents a moving object detection method without the need to construct a background model a priori.12-31-2009
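The retrieval criterion above (inside the detecting region and moving substantially along the detecting direction) could be tested as follows; the rectangular gate and the dot-product direction check (a 90-degree tolerance cone) are illustrative assumptions:

```python
def is_effective_moving_object(pos, motion_vec, gate_region, gate_dir):
    """pos: (x, y) object position; motion_vec: (dx, dy) estimated motion;
    gate_region: (x_min, y_min, x_max, y_max) detecting region;
    gate_dir: detecting direction vector.
    True when the object lies inside the gate and its motion has a
    positive component along the detecting direction."""
    x, y = pos
    x_min, y_min, x_max, y_max = gate_region
    inside = x_min <= x <= x_max and y_min <= y <= y_max
    same_dir = motion_vec[0] * gate_dir[0] + motion_vec[1] * gate_dir[1] > 0
    return inside and same_dir
```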
20090316956Image Processing Apparatus - An image processing accuracy estimation unit estimates an image processing accuracy by calculating the size of an object at which the accuracy of measurement of the distance of the object photographed by an on-vehicle camera becomes a permissible value or less. An image post-processing area determination unit determines, in accordance with the estimated image processing accuracy, a partial area inside a detection area of the object as an image post-processing area for which an image post-processing is carried out, and divides the determined image post-processing area into a lattice of cells. An image processing unit processes the image photographed by the on-vehicle camera to detect a candidate for the object and calculates a three-dimensional position of the detected object candidate. An image post-processing unit calculates, for each individual cell inside the determined area, the probability that the detected object is present and determines the presence/absence of the object.12-24-2009
20090116692REALTIME OBJECT TRACKING SYSTEM - A real-time computer vision system tracks one or more objects moving in a scene using a target location technique which does not involve searching. The imaging hardware includes a color camera, frame grabber and processor. The software consists of the low-level image grabbing software and a tracking algorithm. The system tracks objects based on the color, motion and/or shape of the object in the image. A color matching function is used to compute three measures of the target's probable location based on the target color, shape and motion. The method then computes the most probable location of the target using a weighting technique. Once the system is running, a graphical user interface displays the live image from the color camera on the computer screen. The operator can then use the mouse to select a target for tracking. The system will then keep track of the moving target in the scene in real-time.05-07-2009
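The weighting step that fuses the three per-cue measures (color, shape, motion) into a most probable target location can be sketched as a weighted argmax over probability maps; the map layout and weight values are illustrative assumptions:

```python
def most_probable_location(measures, weights):
    """measures: one 2D probability map per cue (color, shape, motion),
    all of equal size; weights: one weight per cue.
    Returns the (row, col) cell maximizing the weighted sum of cues."""
    rows, cols = len(measures[0]), len(measures[0][0])
    best_rc, best = (0, 0), float("-inf")
    for r in range(rows):
        for c in range(cols):
            score = sum(w * m[r][c] for w, m in zip(weights, measures))
            if score > best:
                best, best_rc = score, (r, c)
    return best_rc
```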
20090304234TRACKING POINT DETECTING DEVICE AND METHOD, PROGRAM, AND RECORDING MEDIUM - A tracking point detecting device includes: a frame decimation unit for decimating the frame interval of a moving image configured of multiple frame images continuing temporally; a first detecting unit for detecting, of two consecutive frames of the decimated moving image, a temporally-subsequent frame pixel corresponding to a predetermined pixel of a temporally-previous frame; a forward-direction detecting unit for detecting the pixel corresponding to a predetermined pixel of a temporally-previous frame of the decimated moving image, at each of the decimated frames in the same direction as time; an opposite-direction detecting unit for detecting the pixel corresponding to the detected pixel of a temporally-subsequent frame of the decimated moving image, at each of the decimated frames in the opposite direction of time; and a second detecting unit for detecting a predetermined pixel of each of the decimated frames by employing the pixel positions detected in the forward and opposite directions.12-10-2009
20090304231Method of automatically detecting and tracking successive frames in a region of interest by an electronic imaging device - A method of automatically detecting and tracking successive frames in a region of interest by an electronic imaging device includes: decomposing a frame into intensity, color and direction features according to human perceptions; filtering an input image by a Gaussian pyramid to obtain levels of pyramid representations by down sampling; calculating the features of the pyramid representations; using a linear center-surround operator similar to biological perception to expedite the calculation of a mean value of the peripheral region; using the difference of each feature between a small central region and the peripheral region as a measured value; overlaying the pyramid feature maps to obtain a conspicuity map and unify the conspicuity maps of the three features; obtaining a saliency map of the frames by linear combination; and using the saliency map for a segmentation to mark an interesting region of a frame in the large region of the conspicuity maps.12-10-2009
20090304230Detecting and tracking targets in images based on estimated target geometry - A system for detecting and tracking targets captured in images, such as people and object targets that are captured in video images from a surveillance network. Targets can be detected by an efficient, geometry-driven approach that determines likely target configuration of the foreground imagery based on estimated geometric information of possible targets. The detected targets can be tracked using a centralized tracking system.12-10-2009
20090304229OBJECT TRACKING USING COLOR HISTOGRAM AND OBJECT SIZE - A solution for monitoring an area uses color histograms and size information (e.g., heights and widths) for blob(s) identified in an image of the area and model(s) for existing object track(s) for the area. Correspondence(s) between the blob(s) and the object track(s) are determined using the color histograms and size information. Information on an object track is updated based on the type of correspondence(s). The solution can process merges, splits and occlusions of foreground objects as well as temporal and spatial fragmentations.12-10-2009
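The abstract does not name a particular histogram similarity measure; one common choice for comparing normalized color histograms is the Bhattacharyya coefficient, combined below with a hypothetical width/height tolerance gate to decide blob-to-track correspondence:

```python
import math

def bhattacharyya(h1, h2):
    """Similarity of two normalized histograms; 1.0 means identical."""
    return sum(math.sqrt(a * b) for a, b in zip(h1, h2))

def matches_track(blob_hist, track_hist, blob_size, track_size,
                  hist_thresh=0.9, size_tol=0.25):
    """Illustrative correspondence test: histogram similarity above a
    threshold AND width/height within a relative tolerance of the track
    model. Threshold values are assumptions, not from the patent."""
    (bw, bh), (tw, th) = blob_size, track_size
    size_ok = (abs(bw - tw) / tw <= size_tol and
               abs(bh - th) / th <= size_tol)
    return bhattacharyya(blob_hist, track_hist) >= hist_thresh and size_ok
```

A merge would then show several tracks matching one blob, and a split one track matching several blobs, which is how the correspondence type can drive the track update.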
20090103777Lock and hold structured light illumination - A method, system, and associated program code, for 3-dimensional image acquisition, using structured light illumination, of a surface-of-interest under observation by at least one camera. One aspect includes: illuminating the surface-of-interest, while static/at rest, with structured light to obtain initial depth map data therefor; while projecting a hold pattern comprised of a plurality of snake-stripes at the static surface-of-interest, assigning an identity to and an initial lock position of each of the snake-stripes of the hold pattern; and while projecting the hold pattern, tracking, from frame-to-frame each of the snake-stripes. Another aspect includes: projecting a hold pattern comprised of a plurality of snake-stripes; as the surface-of-interest moves into a region under observation by at least one camera that also comprises the projected hold pattern, assigning an identity to and an initial lock position of each snake-stripe as it sequentially illuminates the surface-of-interest; and while projecting the hold pattern, tracking, from frame-to-frame, each snake-stripe while it passes through the region. Yet another aspect includes: projecting, in sequence at the surface-of-interest positioned within a region under observation by at least one camera, a plurality of snake-stripes of a hold pattern by opening/moving a shutter cover; as each of the snake-stripes sequentially illuminates the surface-of-interest, assigning an identity to and an initial lock position of that snake-stripe; and while projecting the hold pattern, tracking, from frame-to-frame, each of the snake-stripes once it has illuminated the surface-of-interest and entered the region.04-23-2009
20120219189METHOD AND DEVICE FOR DETECTING FATIGUE DRIVING AND THE AUTOMOBILE USING THE SAME - The present application discloses a method and device of detecting fatigue driving, comprising: analyzing an eye image in the driver's eye image area with a rectangular feature template to obtain the upper eyelid line; determining the eye closure state according to the curvature or curvature feature value of the upper eyelid line; and collecting statistics on the eye closure state and thereby determining whether the driver is in a fatigue state. The present application determines whether the eyes are open or closed according to the shape of the upper eyelid, which is more accurate because the upper eyelid line has characteristics of higher relative contrast, anti-interference capacity, and adaptability to changes in facial expression.08-30-2012
20120219188METHOD OF PROVIDING A DESCRIPTOR FOR AT LEAST ONE FEATURE OF AN IMAGE AND METHOD OF MATCHING FEATURES - A method of providing a descriptor for at least one feature of an image comprises the steps of providing an image captured by a capturing device and extracting at least one feature from the image, and assigning a descriptor to the at least one feature, the descriptor depending on at least one parameter which is indicative of an orientation, wherein the at least one parameter is determined from the orientation of the capturing device measured by a tracking system. The invention also relates to a method of matching features of two or more images.08-30-2012
20120219187Data Capture and Identification System and Process - An identification method and process for objects from digitally captured images thereof that uses data characteristics to identify an object from a plurality of objects in a database. The data is broken down into parameters such as a Shape Comparison, Grayscale Comparison, Wavelet Comparison, and Color Cube Comparison with object data in one or more databases to identify the actual object of a digital image.08-30-2012
20120219186Continuous Linear Dynamic Systems - Aspects of the present invention include systems and methods for segmentation and recognition of action primitives. In embodiments, a framework, referred to as the Continuous Linear Dynamic System (CLDS), comprises two sets of Linear Dynamic System (LDS) models, one to model the dynamics of individual primitive actions and the other to model the transitions between actions. In embodiments, the inference process estimates the best decomposition of the whole sequence into continuous alternation between the two sets of models, using an approximate Viterbi algorithm. In this way, both action type and action boundary may be accurately recognized.08-30-2012
20120219185APPARATUS AND METHOD FOR DETERMINING A LOCATION IN A TARGET IMAGE - An apparatus and a computer-implemented method are provided for determining a location in a target image (T) of a site on a surface of a physical object using two or more reference images (I08-30-2012
20120219184MONITORING OF VIDEO IMAGES - A characteristic motion in a video is identified by determining pairs of moving features that have an indicative relationship between the motions of the two moving features in the pair. For example, the motion of a pedestrian is identified by an indicative relationship between the motions of the pedestrian's feet. This indicative relationship may be that one of the feet moves relative to the surroundings while the other remains stationary.08-30-2012
201202191833D Object Detecting Apparatus and 3D Object Detecting Method - A 3D-object detecting apparatus may include a detection-image creating device configured to detect a 3D object on an image-capture surface from an image captured by an image-capture device and to create a detection image in which a silhouette of only the 3D object is left; a density-map creating device configured to determine the 3D object's spatial densities at corresponding coordinate points in a coordinate plane on the basis of the detection image and mask images obtained for the corresponding coordinate points on the basis of virtual cuboids arranged for the corresponding coordinate points and to create a density map having pixels for the corresponding coordinate points such that the pixels have pixel values corresponding to the determined spatial densities; and a 3D-object position detecting device that detects the position of the 3D object as a representative point in a high-density region in the density map.08-30-2012
20120219180Automatic Detection of Vertical Gaze Using an Embedded Imaging Device - A method of detecting and applying a vertical gaze direction of a face within a digital image includes analyzing one or both eyes of a face within an acquired image, including determining a degree of coverage of an eye ball by an eye lid within the digital image. Based on the determined degree of coverage of the eye ball by the eye lid, an approximate direction of vertical eye gaze is determined. A further action is selected based on the determined approximate direction of vertical eye gaze.08-30-2012
20120219179COMPUTER-READABLE STORAGE MEDIUM, IMAGE PROCESSING APPARATUS, IMAGE PROCESSING SYSTEM, AND IMAGE PROCESSING METHOD - A position of a predetermined object or design is sequentially detected from images. Then, an amount of movement of the predetermined object or design is calculated on the basis of: a position, in a first image, of the predetermined object or design detected from the first image; and a position, in a second image, of the predetermined object or design detected from the second image acquired before the first image. Then, when the amount of movement is less than a first threshold, the position, in the first image, of the predetermined object or design detected from the first image is corrected to a position internally dividing, in a predetermined ratio, line segments connecting: the position, in the first image, of the predetermined object or design detected from the first image; to the position, in the second image, of the predetermined object or design detected from the second image.08-30-2012
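The internal-division correction in this entry (and the snap-to-previous variant in the next) amounts to damping small frame-to-frame detection jitter. A sketch, where the ratio and threshold values are illustrative assumptions:

```python
def corrected_position(p_new, p_prev, movement, threshold, ratio=0.5):
    """p_new: position detected in the first (current) image;
    p_prev: position detected in the second (earlier) image;
    movement: calculated amount of movement between them.
    Below the threshold, returns the point internally dividing the
    segment from p_new toward p_prev in the given ratio (ratio=0.5 is
    the midpoint; ratio=1.0 reproduces the snap-to-previous variant).
    At or above the threshold, the detection is kept as-is."""
    if movement >= threshold:
        return p_new
    return tuple(n + ratio * (p - n) for n, p in zip(p_new, p_prev))
```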
20120219178COMPUTER-READABLE STORAGE MEDIUM, IMAGE PROCESSING APPARATUS, IMAGE PROCESSING SYSTEM, AND IMAGE PROCESSING METHOD - A position of a predetermined object or a predetermined design is sequentially detected from images. Then, an amount of movement of the predetermined object or the predetermined design is calculated on the basis of: a position, in a first image, of the predetermined object or the predetermined design detected from the first image; and a position, in a second image, of the predetermined object or the predetermined design detected from the second image acquired before the first image. Then, when the amount of movement is less than a first threshold, the position, in the first image, of the predetermined object or the predetermined design detected from the first image is corrected to the position, in the second image, of the predetermined object or the predetermined design detected from the second image.08-30-2012
20120219177COMPUTER-READABLE STORAGE MEDIUM, IMAGE PROCESSING APPARATUS, IMAGE PROCESSING SYSTEM, AND IMAGE PROCESSING METHOD - First, a series of edge pixels representing a contour of an object or of a design represented in the object are detected from an image acquired from a capturing apparatus. Then, a plurality of straight lines are generated on the basis of the series of detected edge pixels, and vertices of the contour are detected on the basis of the plurality of straight lines. Further, relative positions and orientations of the capturing apparatus and the object relative to each other are calculated on the basis of the detected vertices, and a virtual camera in a virtual space is set on the basis of the positions and the orientations. Then, a virtual space image obtained by capturing the virtual space with the virtual camera is displayed on a display device.08-30-2012
20120219176Method and Apparatus for Pattern Tracking - A method and apparatus for pattern tracking. The method includes the steps of performing a foreground detection process to determine a hand-pill-hand region, performing image segmentation to separate the determined hand portion of the hand-pill-hand region from the pill portion thereof, building three reference models, one for each hand region and one for the pill region, initializing a dynamic model for tracking the hand-pill-hand region, determining N possible next positions for the hand-pill-hand region, for each such determined position, determining various features, building a new model for that region in accordance with the determined position, for each position, comparing the new model and a reference model, determining a position whose new model generates a highest similarity score, determining whether that similarity score is greater than a predetermined threshold, and wherein if it is determined that the similarity score is greater than the predetermined threshold, the object is tracked.08-30-2012
20120219175ASSOCIATING AN OBJECT IN AN IMAGE WITH AN ASSET IN A FINANCIAL APPLICATION - The invention relates to a method for associating an object in an image with an asset of a number of assets in a financial application. The method includes receiving the image of the object comprising global positioning system (GPS) data, where the image is captured using an image-taking device with GPS functionality and processing the image to generate processed GPS data. The method further includes determining, using the processed GPS data, a geographic location of the object in the image, and identifying, using the geographic location, the object by performing a recognition analysis of the image. The method further includes associating, based on the recognition analysis, the object in the image with the asset of the assets of an owner in the financial application, and storing, in the financial application, the image of the object associated with the asset of the assets of the owner.08-30-2012
20120219174EXTRACTING MOTION INFORMATION FROM DIGITAL VIDEO SEQUENCES - A method for analyzing a digital video sequence of a scene to extract background motion information and foreground motion information, comprising: analyzing at least a portion of a plurality of image frames captured at different times to determine corresponding one-dimensional image frame representations; combining the one-dimensional frame representations to form a two-dimensional spatiotemporal representation of the video sequence; using a data processor to identify a set of trajectories in the two-dimensional spatiotemporal representation of the video sequence; analyzing the set of trajectories to identify a set of foreground trajectory segments representing foreground motion information and a set of background trajectory segments representing background motion information; and storing an indication of the foreground motion information or the background motion information or both in a processor-accessible memory.08-30-2012
20110007945FAST ALGORITHM FOR STREAMING WAVEFRONT - The invention is generally directed to the field of image processing, and more particularly to a method and an apparatus for determining a wavefront of an object, in particular a human eye. The invention discloses a method and an apparatus for real-time wavefront sensing of an optical system utilizing two different algorithms for detecting centroids of a centroid image as provided by a Hartmann-Shack wavefront sensor. A first algorithm detects an initial position of all centroids and a second algorithm detects incremental changes of all centroids detected by said first algorithm.01-13-2011
20110007942Real-Time Tracking System - There is provided a real-time tracking system and a method associated therewith for identifying and tracking objects moving in a physical region, typically for producing a physical effect, in real-time, in response to the movement of each object. The system scans a plane, which intersects a physical space, in order to collect reflection-distance data as a function of position along the plane. The reflection-distance data is then processed by a shape-analysis subsystem in order to locate among the reflection-distance data, a plurality of discontinuities, which are in turn associated to one or more detected objects. Each detected object is identified and stored in an identified-object structure. The scanning and processing is repeated for a number of iterations, wherein each detected object is identified with respect to the previously scanned objects, through matching with the identified-object structures, in order to follow the course of each particular object.01-13-2011
20090041299Method and Apparatus for Recognition of an Object by a Machine - Disclosed is a method and apparatus for recognition of an object by a machine including isolating and processing an image to help facilitate recognition of the object by the machine.02-12-2009
20130070968APPARATUS AND METHOD FOR CALCULATING ENERGY CONSUMPTION BASED ON THREE-DIMENSIONAL MOTION TRACKING - An apparatus and method calculate an energy consumption based on 3D motion tracking. The method includes setting at least one specific portion of an analysis target as a reference point, analyzing the reference point before and after the lapse of a predetermined time, and determining an energy consumption of the analysis target on the basis of the analyzed reference point.03-21-2013
20130070967MOTION ANALYSIS THROUGH GEOMETRY CORRECTION AND WARPING - An object in a hot atmosphere with a temperature greater than 400 F in a gas turbine moves in a 3D space. The movement may include a vibrational movement. The movement includes a rotational movement about an axis and a translational movement along the axis. Images of the object are recorded with a camera, which may be a high-speed camera. The object is provided with a pattern that is tracked in images. Warpings of sub-patches in a reference image of the object are determined to form standard format warped areas. The warpings are applied piece-wise to areas in following images to create corrected images. Standard tracking such as SSD tracking is applied to the piece-wise corrected images to determine a movement of the object. The image correction and object tracking are performed by a processor.03-21-2013
20130070965IMAGE PROCESSING METHOD AND APPARATUS - An image processing method and apparatus for obtaining a wide dynamic range image, the method including: obtaining a plurality of low dynamic range images having different exposure levels for a same scene; generating a motion map representing whether motion occurred, depending on brightness ranks of the plurality of low dynamic range images; obtaining weights for the plurality of low dynamic range images; generating a weight map by combining the weights and the motion map; and generating a wide dynamic range image by fusing the plurality of low dynamic range images and the weight map. According to the image processing method and apparatus, it is possible to accurately detect a motion area using a rank map, obtain a wide dynamic range image at a higher operation speed, and reduce the possibility of phenomena such as color warping by directly combining images without using a tone mapping process.03-21-2013
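The fusion step above (weights masked by the motion map, then a per-pixel weighted average of the exposures) can be sketched pixel-wise; the flat single-channel pixel-list layout is an illustrative simplification:

```python
def fuse_exposures(images, weights, motion_map):
    """images: one pixel list per exposure; weights: matching per-pixel
    weight lists; motion_map: matching per-pixel flags, 0 where the
    rank-based motion detection flagged movement (excluding that
    exposure at that pixel), 1 otherwise.
    Returns the fused wide-dynamic-range pixel values."""
    n_pix = len(images[0])
    fused = []
    for p in range(n_pix):
        # Weight map: exposure weights masked by the motion map.
        ws = [w[p] * m[p] for w, m in zip(weights, motion_map)]
        total = sum(ws) or 1.0  # guard against an all-masked pixel
        fused.append(sum(w * img[p] for w, img in zip(ws, images)) / total)
    return fused
```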
20130070962EGOMOTION ESTIMATION SYSTEM AND METHOD - A computer-implemented method for determining an egomotion parameter using an egomotion estimation system is provided. First and second image frames are obtained. A first portion of the first image frame and a second portion of the second image frame are selected to respectively obtain a first sub-image and a second sub-image. A transformation is performed on each of the first sub-image and the second sub-image to respectively obtain a first perspective image and a second perspective image. The second perspective image is iteratively adjusted to obtain multiple adjusted perspective images. Multiple difference values are determined that respectively correspond to the difference between the first perspective image and the adjusted perspective images. A translation vector for an egomotion parameter is determined. The translation vector corresponds to one of the multiple difference values.03-21-2013
20130070966Method and device for checking the visibility of a camera for surroundings of an automobile - A method for checking the visibility of a camera for surroundings of an automobile is proposed which includes a step of receiving a camera image and a step of dividing the camera image into a plurality of partial images. A visibility value is determined based on a number of objects detected in the particular partial image. A visibility probability is subsequently determined for each of the partial images based on the blindness values and the visibility values of the particular partial images.03-21-2013
20130070964INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, AND NON-TRANSITORY COMPUTER READABLE MEDIUM - An information processing apparatus includes an identifying unit, a character recognition unit, an obtaining unit, a correcting unit, and an output unit. The identifying unit identifies a still image included in a moving image. The character recognition unit performs character recognition on the still image identified by the identifying unit. The obtaining unit obtains information about the moving image. The correcting unit corrects, on the basis of the information obtained by the obtaining unit, a character recognition result generated by the character recognition unit. The output unit outputs the character recognition result corrected by the correcting unit in association with the moving image.03-21-2013
20130070963ADAPTIVE FEATURE RECOGNITION TOOL - The present invention provides an adaptive feature recognition tool that can be used to determine the location of and/or count discrete features on an object being manufactured in a relatively quick fashion. The tool can include an elongated rigid member that has a first end with a generally planar surface, the generally planar surface having a plurality of contrast targets thereon. The elongated rigid member can also have a second end for placement at a desired location, for example placement on a plurality of features whose number and/or location(s) on the object are desired. In addition, an exposure device that is operable to expose specific subsets of the plurality of contrast targets to a line-of-sight digital imaging device can be included.03-21-2013
20130070961System and Method for Providing Temporal-Spatial Registration of Images - A video imaging system for use with or in a mobile video capturing system (e.g., an airplane or UAV). A multi-camera rig containing a number of cameras (e.g., 4) receives a series of mini-frames (e.g., from respective field steerable mirrors (FSMs)). The mini-frames received by the cameras are supplied to (1) an image registration system that calibrates the system by registering relationships corresponding to the cameras and/or (2) an image processor that processes the mini-frames in real-time to produce a video signal. The cameras can be infra-red (IR) cameras or other electro-optical cameras. By creating a rigid model of the relationships between the mini-frames of the plural cameras, real-time video stitching can be accelerated by reusing the movement relationship of a first mini-frame of a first camera on corresponding mini-frames of the other cameras in the system.03-21-2013
20130094695METHOD AND APPARATUS FOR AUTO-DETECTING ORIENTATION OF FREE-FORM DOCUMENT USING BARCODE - A method and apparatus for detecting the orientation of a document using barcode decoding. The method includes (1) capturing an image of the document with an imaging arrangement having a solid-state imager; (2) determining a presence of a barcode in the captured image of the document; (3) decoding the barcode; (4) determining an up-direction of the document as a function of an orientation of the barcode in the document; and (5) setting an orientation of the document in the captured image based upon the up-direction of the document. In one implementation, the barcode is configured with orientation data indicating the up-direction of the document.04-18-2013
20090092287Mixed Media Reality Recognition With Image Tracking - An MMR system integrating image tracking and recognition comprises a plurality of mobile devices, a pre-processing server or MMR gateway, and an MMR matching unit, and may include an MMR publisher. The MMR matching unit receives an image query from the pre-processing server or MMR gateway and sends it to one or more of the recognition units to identify a result including a document, the page, and the location on the page. Image tracking information also is provided for determining relative locations of images within a document page. The mobile device includes an image tracker for providing at least a portion of the image tracking information. The present invention also includes methods for image tracking-assisted recognition, recognition of multiple images using a single image query, and improved image tracking using MMR recognition.04-09-2009
20090092286IMAGE GENERATING APPARATUS, IMAGE GENERATING PROGRAM, IMAGE GENERATING PROGRAM RECORDING MEDIUM AND IMAGE GENERATING METHOD - When an obstacle does not exist in a horizontal direction in a direction of a virtual camera, a PC coordinate is set as a point of gaze. When the player character comes close to a high wall while the procedure of S… [abstract truncated]04-09-2009
20090092284Light Modulation Techniques for Imaging Objects in or around a Vehicle - Method and system for obtaining information about an object in a compartment in a vehicle includes directing illumination into the compartment, spatial or temporally modulating the illumination, receiving light reflected from an object in the compartment, and analyzing the reflected light to obtain information about the object. The compartment may be a passenger compartment of an automobile, the trunk of an automobile or the interior of a trailer of a truck. The illumination may be directed from a light source and the reflected light received at a receiver spaced apart from the light source. Analysis of the reflected light may therefore entail applying a triangulation calculation to enable a determination of a distance between the light source and illuminated point on the object. The same method and system can be adapted for monitoring the environment around the vehicle.04-09-2009
20090092283SURVEILLANCE AND MONITORING SYSTEM - A system having one or more devices for detection, surveillance and monitoring. Video images of scenes with persons from the devices may be processed and provided to a biometrics component for standoff biometric acquisition and matching. Various remote and internal databases may be consulted for biometric matching. Matching results may go to the history component and the strategy and association component. The output of the latter component may be subject to behavior inference and analysis. The system may be interconnected with outside entities such as an access control system.04-09-2009
20090092282System and Method for Tracking Objects with a Synthetic Aperture - A computer implemented method tracks 3D positions of an object moving in a scene. A sequence of images is acquired of the scene with a set of cameras such that each time instant a set of images are acquired of the scene, in which each image includes pixels. Each set of images is aggregated into a synthetic aperture image including the pixels, and the pixels in each the set of images are matched corresponding to multiple locations and multiple depths of a target window with an appearance model to determine scores for the multiple locations and multiple depths. A particular location and a particular depth having a maximal score is selected as the 3D position of the moving object.04-09-2009
20130058534Method for Road Sign Recognition - The invention relates to a method and to a device for the recognition of road signs (…) [abstract truncated]03-07-2013
20130058535DETECTION OF OBJECTS IN AN IMAGE USING SELF SIMILARITIES - An image processor (…) [abstract truncated]03-07-2013
20130058530IMAGE PROCESSING APPARATUS AND METHOD - An information processing apparatus comprises a first imaging section configured to image, from different directions, the holding surface of a holding platform on which an object is held; a recognition section configured to read out the characteristics of the object image of an object contained in each first imaged image, based on each of the first imaged images respectively captured by the first imaging section from different directions, and to compare the read characteristics with the pre-stored characteristics of each object, thereby recognizing the object corresponding to the object image in every first imaged image; and a determination section configured to determine the recognition result of the object held on the holding platform based on the recognition result of the object image in every first imaged image.03-07-2013
20130058532Tracking An Object With Multiple Asynchronous Cameras - The path and/or position of an object is tracked using two or more cameras which run asynchronously, so there is no need to provide a common timing signal to each camera. Captured images are analyzed to detect a position of the object in the image. Equations of motion for the object are then solved based on the detected positions and a transformation which relates the detected positions to a desired coordinate system in which the path is to be described. The position of an object can also be determined from a position which meets a distance metric relative to lines of position from three or more images. The images can be enhanced to depict the path and/or position of the object as a graphical element. Further, statistics such as maximum object speed and distance traveled can be obtained. Applications include tracking the position of a game object at a sports event.03-07-2013
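Solving equations of motion from timestamped detections pooled across asynchronous cameras can be sketched as a least-squares fit. This toy version assumes a 1-D constant-velocity model (the patent covers full 3-D trajectories and richer motion models); only a common time base is needed, not synchronized shutters.

```python
def fit_constant_velocity(observations):
    # observations: (time, position) pairs pooled from all cameras.
    # Closed-form least-squares fit of x(t) = x0 + v*t.
    n = len(observations)
    st = sum(t for t, _ in observations)
    sx = sum(x for _, x in observations)
    stt = sum(t * t for t, _ in observations)
    stx = sum(t * x for t, x in observations)
    v = (n * stx - st * sx) / (n * stt - st * st)  # least-squares slope
    x0 = (sx - v * st) / n                         # least-squares intercept
    return x0, v

def position_at(t, x0, v):
    # Evaluate the fitted motion model at an arbitrary time.
    return x0 + v * t
```

Because the fit uses each detection's own timestamp, irregular sampling from unsynchronized cameras is handled the same way as regular sampling.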
20130058529VISUAL INPUT OF VEHICLE OPERATOR - The present invention relates to a method for determining a vehicle operator's visual input of an object in the operator's surroundings, which method comprises receiving an object position signal indicative of the position of at least one object, receiving an operator motion input signal indicative of operator physiological data comprising information relating to body motion of the operator, estimating an operator eye-gaze direction, and determining a visual input quality value representative of the level of visual input of the at least one object received by the operator, based on the object position signal and the estimated operator eye-gaze direction.03-07-2013
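The final quality step can be sketched by scoring how well the estimated eye-gaze direction aligns with the direction from the eye to the object. The cosine score and the 2-D geometry are illustrative assumptions, not the patent's formulation.

```python
import math

def visual_input_quality(eye_pos, gaze_dir, obj_pos):
    # Direction from the operator's eye to the object.
    dx, dy = obj_pos[0] - eye_pos[0], obj_pos[1] - eye_pos[1]
    d_norm = math.hypot(dx, dy)
    g_norm = math.hypot(*gaze_dir)
    # Cosine of the angle between gaze direction and object direction.
    cos_angle = (dx * gaze_dir[0] + dy * gaze_dir[1]) / (d_norm * g_norm)
    # 1.0 = looking straight at the object, 0.0 = object outside the forward view.
    return max(0.0, cos_angle)
```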
20130058533IMAGE RECONSTRUCTION BY POSITION AND MOTION TRACKING - A system, method, and apparatus provide the ability to reconstruct an image from an object. A hand-held image acquisition device is configured to acquire local image information from a physical object. A tracking system obtains displacement information for the hand-held acquisition device while the device is acquiring the local image information. An image reconstruction system computes the inverse of the displacement information and combines the inverse with the local image information to transform the local image information into a reconstructed local image information. A display device displays the reconstructed local image information.03-07-2013
20130058531Electronic Toll Management and Vehicle Identification - Identifying a vehicle in a toll system includes accessing image data for a first vehicle and obtaining license plate data from the accessed image data for the first vehicle. A set of records is accessed. The license plate data for the first vehicle is compared with the license plate data for vehicles in the set of records. Based on the comparison of the license plate data, a set of vehicles is identified from the vehicles having records in the set of records. Second vehicle identifier data is accessed for the first vehicle and for a vehicle in the set of vehicles. Using a processing device, the second vehicle identifier data for the first vehicle is compared with the second vehicle identifier data for the vehicle in the set of vehicles. The vehicle in the set of vehicles is identified as the first vehicle based on results of the comparison.03-07-2013
20130058525OBJECT TRACKING DEVICE - In an object tracking device, a search region setting unit sets the search region of an object in a frame image at a present point in time, based on an object region in a frame image at a previous point in time, zoom center coordinates in the frame image at the previous point in time, and a ratio between the zoom scaling factor of the frame image at the previous point in time and the zoom scaling factor of the frame image at the present point in time. A normalizing unit normalizes the image of a search region of the object included in the frame image at the present point in time to a fixed size. A matching unit searches the normalized image of the search region for an object region similar to a template image.03-07-2013
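The search-region update under a zoom can be sketched with the standard similarity about the zoom centre: a point p maps to c + s·(p − c), where s is the ratio of zoom scaling factors. The optional margin factor is an assumption added for illustration.

```python
def search_region(prev_box, zoom_center, zoom_ratio, margin=1.0):
    # prev_box: (cx, cy, w, h) of the object region in the previous frame.
    cx, cy, w, h = prev_box
    zx, zy = zoom_center
    s = zoom_ratio                  # scale(present frame) / scale(previous frame)
    new_cx = zx + s * (cx - zx)     # a zoom is a similarity about the zoom centre
    new_cy = zy + s * (cy - zy)
    # The box size scales with the zoom; margin optionally enlarges the search area.
    return (new_cx, new_cy, w * s * margin, h * s * margin)
```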
20130058527SENSOR DATA PROCESSING - A method and apparatus for processing sensor data comprising measuring a value of a first parameter of a scene using a first sensor (e.g. a camera) to produce a first image of the scene, measuring a value of a second parameter of the scene using a second sensor (e.g. a laser scanner) to produce a second image, identifying a first point of the first image that corresponds to a class of features of the scene, identifying a second point of the second image that corresponds to the class of features, projecting the second point onto the first image, determining a similarity value between the first point and the projection of the second point on to the first image, and comparing the determined similarity value to a predetermined threshold value. The method or apparatus may be used on an autonomous vehicle.03-07-2013
20130058526DEVICE FOR AUTOMATED DETECTION OF FEATURE FOR CALIBRATION AND METHOD THEREOF - A method for automated detection of feature for calibration is provided, which includes capturing images of a polyhedral structure including a plurality of rectangular planes and triangular planes in different directions through a plurality of cameras, and generating a plurality of image files, each of the rectangular planes having calibration objects formed thereon to be used as input values of a calibration engine, and each of the triangular planes having a marker formed thereon to grasp absolute and relative relationships between the rectangular planes; searching for the calibration objects in the image files; searching for the same plane in which the calibration objects are formed using the calibration objects; and indexing the respective calibration objects formed on the same plane.03-07-2013
20130058528METHOD AND SYSTEM FOR DETECTING VEHICLE POSITION BY EMPLOYING POLARIZATION IMAGE - Disclosed are a method and a system for detecting a vehicle position by employing a polarization image. The method comprises a step of capturing a polarization image by using a polarization camera; a step of acquiring two road shoulders in the polarization image based on a difference between a road surface and each of the two road shoulders in the polarization image, and determining a part between the two road shoulders as the road surface; a step of detecting at least one vehicle bottom from the road surface based on a significant pixel value difference between each wheel and the road surface in the polarization image; and a step of generating a vehicle position from the vehicle bottom based on a pixel value difference between a vehicle outline corresponding to the vehicle bottom and background in the polarization image.03-07-2013
20130058524IMAGE PROCESSING SYSTEM PROVIDING SELECTIVE ARRANGEMENT AND CONFIGURATION FOR AN IMAGE ANALYSIS SEQUENCE - A computer-implemented method of processing a selected image using multiple processing operations is provided. An image analysis sequence having multiple processing steps is constructed. The image analysis sequence is constructed in response to receipt of multiple processing operation selections. Individual processing steps in the image analysis sequence are associated with a processing operation that is indicated in a corresponding processing operation selection. The processing steps are arranged in response to receipt of arrangement information that relates to a selective arrangement of the processing steps. At least one of the processing steps in the image analysis sequence is configured such that the processing operation associated with the processing step processes a specified input image to generate an output image when the processing step is performed. A display signal is generated for display of the output image at a display device.03-07-2013
20130058523UNSUPERVISED PARAMETER SETTINGS FOR OBJECT TRACKING ALGORITHMS - A method for automatically optimizing a parameter set for a tracking algorithm comprising receiving a series of image frames and processing the image frames using a tracking algorithm with an initialized parameter set. An updated parameter set is then created according to the processed image frames utilizing estimated tracking analytics. The parameters are validated using a performance metric that may be manually or automatically performed using a GUI. The image frames are collected from a video camera with a fixed set-up at a fixed location. The image frames may include a training traffic video or a training video for tracking humans.03-07-2013
20110013806Methods of object search and recognition - Embodiments of the invention disclose techniques for processing of machine-readable forms of unfixed or flexible format. An auxiliary brief description may be optionally specified to determine the spatial orientation of the image. A method of searching for elements of a document comprises the following main operations in addition to the operations of preliminary image processing: selecting the variety of structural description from several available variants, determining the orientation of the image, selecting the text objects where the text must be recognized and determining the minimal required volume of recognition, recognizing the text objects, and searching for elements of the form. Searching for elements of the form comprises the following actions: selecting a searched element in the structural description, obtaining the search-constraint algorithm from the structural description, searching for the element, and testing the obtained variants.01-20-2011
20110013805IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, AND INTERFACE APPARATUS - In order to detect a specific detection object from an input image, a color serving as a reference is calculated in a reference image region. The difference for each color component between each pixel in the detection window and the reference color is calculated. Whether or not the detection object is included in the detection window is discriminated by a feature vector indicating how the difference is distributed in the detection window.01-20-2011
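The reference-colour step in the preceding abstract can be sketched simply: average the colour over the reference region, then record per-component differences inside the detection window. Using the per-channel mean as the reference statistic is an assumption for illustration.

```python
def reference_color(region):
    # region: list of (r, g, b) pixels; reference colour = per-channel mean.
    n = len(region)
    return tuple(sum(p[c] for p in region) / n for c in range(3))

def difference_features(window, ref):
    # Per-pixel, per-channel difference to the reference colour; the distribution
    # of these differences over the window forms the feature vector.
    return [tuple(p[c] - ref[c] for c in range(3)) for p in window]
```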
20110013804Method for Normalizing Displaceable Features of Objects in Images - A method normalizes a feature of an object in an image. The feature of the object is extracted from a 2D or 3D image. The feature is displaceable within a displacement zone in the object and has a location within the displacement zone. An associated description of the feature is determined. Then, the feature is displaced to a best location in the displacement zone to produce a normalized feature.01-20-2011
20120189161VISUAL ATTENTION APPARATUS AND CONTROL METHOD BASED ON MIND AWARENESS AND DISPLAY APPARATUS USING THE VISUAL ATTENTION APPARATUS - Disclosed are a visual attention apparatus based on mind awareness and an image output apparatus using the same. Exemplary embodiments of the present invention can reduce data throughput by performing object segmentation and context analysis according to downsampling and the colors and approximate shapes of input images, so as to detect attention regions using extrinsic visual attention and intrinsic visual attention. In addition, the exemplary embodiments of the present invention can detect attention regions having different viewpoints for each user by detecting the attention regions due to the extrinsic and intrinsic visual attention and processing and displaying the attention regions as various regions of interest, thereby increasing image immersion and the utility of contents.07-26-2012
20090110237METHOD FOR POSITIONING A NON-STRUCTURAL OBJECT IN A SERIES OF CONTINUING IMAGES - A method for positioning a non-structural object in a series of continuing images is disclosed, which comprises the steps of: establishing a pattern representing a target object while analyzing the pattern to obtain positions relative to a representative feature of the pattern; picking up a series of continuing images including the target object, and using the brightness variations detected at the boundary defining the representative feature in the series of continuing images to calculate a predictive candidate position of the representative feature in an image picked up next to the series of continuing images; calculating the differences between the boundaries defining the representative feature at the predictive candidate position in the series of continuing images, and calculating the similarities between the pattern and those boundaries; and using the differences and the similarities to calculate the position of the representative feature in the image picked up next to the series of continuing images.04-30-2009
20090268944LINE OF SIGHT DETECTING DEVICE AND METHOD - A line of sight detecting method includes estimating a face direction of an object person based on a shot face image of the object person, detecting a part of an eye outline in the face image of the object person, detecting a pupil in the face image of the object person, and estimating the direction of a line of sight of the object person based on the correlation of the pupil position in the eye outline and the face direction with respect to the direction of the line of sight, and the pupil position and the face direction of the object person.10-29-2009
20090268941VIDEO MONITOR FOR SHOPPING CART CHECKOUT - A system ensures payment for the purchase of merchandise carried through a checkout aisle on the lower tray of a shopping cart. For that purpose, the system includes a controller with an embedded program for identifying a virtual structure substantially equivalent to the physical structure of the tray. Further, the system includes a sensor that determines when a cart is positioned at the checkout aisle. The system also includes a camera for creating an image of the physical structure of the tray and transmitting the image to the controller. The controller includes a means for activating the embedded program to compare the image with the virtual structure. As a result of the comparison, the controller determines whether merchandise is on the physical structure of the tray. During the comparison, the controller removes the virtual structure from the image.10-29-2009
20120224746CLASSIFIER ANOMALIES FOR OBSERVED BEHAVIORS IN A VIDEO SURVEILLANCE SYSTEM - Techniques are disclosed for a video surveillance system to learn to recognize complex behaviors by analyzing pixel data using alternating layers of clustering and sequencing. A combination of a self organizing map (SOM) and an adaptive resonance theory (ART) network may be used to identify a variety of different anomalous inputs at each cluster layer. As progressively higher layers of the cortex model component represent progressively higher levels of abstraction, anomalies occurring in the higher levels of the cortex model represent observations of behavioral anomalies corresponding to progressively complex patterns of behavior.09-06-2012
20120224747In-Vehicle Apparatus for Recognizing Running Environment of Vehicle - An in-vehicle running-environment recognition apparatus including an input unit for inputting an image signal from in-vehicle imaging devices for photographing the external environment of a vehicle, an image processing unit for detecting a first image area by processing the image signal, the first image area having a factor which prevents recognition of the external environment, an image determination unit for determining a second image area, in which environment recognition processing is performed, based on at least any one of the size of the first image area, its position, and the set-up positions of the in-vehicle imaging devices having the first image area, the first image area being detected by the image processing unit, and an environment recognition unit for recognizing the external environment of the vehicle based on the second image area.09-06-2012
20120224745EVALUATION OF GRAPHICAL OUTPUT OF GRAPHICAL SOFTWARE APPLICATIONS EXECUTING IN A COMPUTING ENVIRONMENT - Graphic objects generated by a software application executing in a computing environment are evaluated. The computing environment includes a graphical user interface for managing I/O functions, a data storage device for storing computer usable program code and data, and a data processing engine in communication with the graphical user interface and the data storage device. The data processing engine receives and processes origin data from the data storage device to produce projected values for data points in the graphic image intended to be displayed. The data processing engine also creates and processes a snapshot of the displayed graphic object to produce actual values of data points in the displayed graphic object, compares the projected values to the actual values, and outputs an indication of the degree of similarity between the intended graphic object and the displayed graphic object.09-06-2012
20130064427METHODS AND SYSTEMS FOR OBJECT TRACKING - Methods and systems for object tracking are disclosed in which the bandwidth of a “slow” tracking system (e.g., an optical tracking system) is augmented with sensor data generated by a “fast” tracking system (e.g., an inertial tracking system). The tracking data generated by the respective systems can be used to estimate and/or predict a position, velocity, and orientation of a tracked object that can be updated at the sample rate of the “fast” tracking system. The methods and systems disclosed herein generally involve an estimation algorithm that operates on raw sensor data (e.g., two-dimensional pixel coordinates in a captured image) as opposed to first processing and/or calculating object position and orientation using a triangulation or “back projection” algorithm.03-14-2013
20130064420AUTOMATED SYSTEM AND METHOD FOR OPTICAL CLOUD SHADOW DETECTION OVER WATER - System and method for detecting cloud shadows over water from ocean color imagery received from remote sensors.03-14-2013
20130064421RESOLVING HOMOGRAPHY DECOMPOSITION AMBIGUITY BASED ON VIEWING ANGLE RANGE - The homography between captured images of a planar object is determined and decomposed into at least one possible solution, and typically at least two ambiguous solutions. The removal of the ambiguity between the two solutions, or validation of a single solution, is performed using a viewing angle range. The viewing angle range may be used by comparing the viewing angle range to the orientation of each solution as derived from the rotation matrix resulting from the homography decomposition. Any solution with an orientation outside the viewing angle range may be eliminated as a solution.03-14-2013
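The disambiguation step in the preceding abstract can be sketched as follows: each decomposition solution carries a plane normal, and solutions whose implied viewing angle falls outside the permitted range are discarded. The 45° limit, the normal representation, and the dictionary structure are illustrative assumptions.

```python
import math

def viewing_angle_deg(normal):
    # Angle between the decomposed plane normal and the camera optical axis (0, 0, 1).
    nx, ny, nz = normal
    norm = math.sqrt(nx * nx + ny * ny + nz * nz)
    return math.degrees(math.acos(abs(nz) / norm))

def filter_solutions(solutions, max_angle_deg=45.0):
    # Keep only decomposition solutions whose orientation is consistent with
    # the expected viewing angle range; ideally exactly one survives.
    return [s for s in solutions
            if viewing_angle_deg(s["normal"]) <= max_angle_deg]
```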
20130064424IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, AND RECORDING MEDIUM - An image corresponding to a pattern having a first size is detected from a first detection region in an acquired, first image, and an image corresponding to a pattern having a second size is detected from a second detection region different from the first detection region in the first image.03-14-2013
20130064429IMAGE PROCESSING DEVICE, OBJECT SELECTION METHOD AND PROGRAM - There is provided an image processing device including: a data storage unit that stores object identification data for identifying an object operable by a user and feature data indicating a feature of appearance of each object; an environment map storage unit that stores an environment map representing a position of one or more objects existing in a real space and generated based on an input image obtained by imaging the real space using an imaging device and the feature data stored in the data storage unit; and a selecting unit that selects at least one object recognized as being operable based on the object identification data, out of the objects included in the environment map stored in the environment map storage unit, as a candidate object being a possible operation target by a user.03-14-2013
20130064430IMAGE PROCESSING DEVICE, IMAGE PROCESSING METHOD, AND IMAGE PROCESSING PROGRAM - An image processing device comprises: image display means which displays at least one of transformation target images containing an object of interest; first reference point receiving/determining means which receives information on first reference point candidates according to a user operation, receives according to a user operation a determination signal targeted at the first reference point candidates displayed on the transformation target image based on the information on the first reference point candidates, and determines first reference points based on the information on the first reference point candidates targeted by the received determination signal; second reference point receiving/determining means which determines second reference points by receiving information on the second reference points; and geometric transformation means which outputs a transformed image by conducting geometric transformation to the transformation target image based on the first reference points determined by the first reference point receiving/determining means and the second reference points determined by the second reference point receiving/determining means.03-14-2013
20130064428STRUCTURE DETECTION APPARATUS AND METHOD, AND COMPUTER-READABLE MEDIUM STORING PROGRAM THEREOF - A plurality of candidate points are extracted from image data. The plurality of candidate points are normalized, and a set of representative points composing a form model that is most similar to a set form is selected from the plurality of candidate points. Further, the candidate points and the form model are compared with each other, and correction is performed by adding a region forming the structure, by deleting a region, or the like. Accordingly, the structure is detected in the image data.03-14-2013
20130064422METHOD FOR DETECTING DENSITY OF AREA IN IMAGE - Light is allowed to be incident from above wells provided on a microplate M and the light transmitted to the lower surface is received to obtain an original image of the wells (Step S…) [abstract truncated]03-14-2013
20130064423FEATURE EXTRACTION AND PROCESSING FROM SIGNALS OF SENSOR ARRAYS - Feature extraction includes extracting features from signals of a plurality of sensors of a sensor array, including, for each sensor, obtaining a signal of the sensor corresponding to responses of the sensor during one or more exposures to samples, computing a baseline function from the signal, and computing the features based on the baseline function and values corresponding to responses of the sensor during each exposure. Feature vectors are formed from the features of the sensors. The features in each feature vector correspond to the same exposure. At least one of computing the baseline function by interpolating baseline values corresponding to responses of the sensor prior to each exposure, and forming the feature vectors by combining features of at least one sensor with features of at least one redundant sensor of the sensor array in the feature vectors is performed.03-14-2013
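The interpolated-baseline variant described in the preceding abstract can be sketched with linear interpolation between baseline samples taken before the exposures. The peak-minus-baseline feature is an assumed simple choice of feature.

```python
def linear_baseline(pre_samples, t):
    # pre_samples: two (time, value) baseline points recorded before exposures;
    # linearly interpolate the baseline at time t.
    (t0, v0), (t1, v1) = pre_samples
    return v0 + (v1 - v0) * (t - t0) / (t1 - t0)

def exposure_feature(response, pre_samples):
    # Feature = peak sensor response during the exposure minus the
    # interpolated baseline at the peak time.
    t_peak, v_peak = max(response, key=lambda p: p[1])
    return v_peak - linear_baseline(pre_samples, t_peak)
```

Subtracting an interpolated (rather than constant) baseline compensates for sensor drift between exposures, which is the motivation for computing the baseline function at all.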
20130064426EFFICIENT SYSTEM AND METHOD FOR BODY PART DETECTION AND TRACKING - A method is provided for detecting a body part in a video stream from a mobile device. A video stream of a human subject is received from a camera connected to the mobile device. The video stream has frames. A first frame of the video stream is identified for processing. This first frame is then partitioned into observation windows, each observation window having pixels. In each observation window, non-skin-toned pixels are eliminated; and the remaining pixels are compared to determine a degree of entropy of the pixels in the observation window. In any observation window having a degree of entropy above a predetermined threshold, a bounded area is made around the region of high entropy pixels. The consistency of the entropy is analyzed in the bounded area. If the bounded area has inconsistently high entropy, a body part is determined to be detected at that bounded area.03-14-2013
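The entropy test over an observation window in the preceding abstract can be sketched with the Shannon entropy of the remaining (skin-toned) pixel values. The bit-entropy measure and the threshold value are assumptions for illustration.

```python
import math
from collections import Counter

def window_entropy(pixels):
    # Shannon entropy (in bits) of the pixel-value distribution in one window.
    n = len(pixels)
    counts = Counter(pixels)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def high_entropy_windows(windows, threshold=1.0):
    # Indices of observation windows whose entropy exceeds the detection threshold;
    # these are the candidate bounded areas for body-part detection.
    return [i for i, w in enumerate(windows) if window_entropy(w) > threshold]
```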
20130163819SYSTEM AND METHOD FOR IDENTIFYING IMAGE LOCATIONS SHOWING THE SAME PERSON IN DIFFERENT IMAGES - The same person is automatically recognized in different images from his or her clothing. Color pixel values of a first and second image are captured and areas are selected for a determination whether they show the same person. First histograms of pixels in the areas are computed, representing sums of contributions from pixels with color values in histogram bins. Each histogram bin corresponds to a combination of a range of color values and a range of heights in the areas. The ranges of color values are normalized relative to a distribution of color pixel values in the areas. Furthermore, second histograms of pixels in the areas are computed, the second histograms representing sums of contributions from pixels with color values in further histogram bins. The further histogram bins are at least partly unnormalized. First and second histogram intersection scores of the first and second histograms are computed. A combined detection score is computed from the first and second histogram intersection scores.06-27-2013
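The histogram comparison step can be illustrated with the standard histogram-intersection measure. `histogram_intersection`, `combined_score`, and the weight `w` are illustrative assumptions; the patent does not specify its exact scoring formula:

```python
def histogram_intersection(h1, h2):
    """Intersection score of two histograms: sum of bin-wise minima.
    For normalized histograms the score lies in [0, 1]."""
    return sum(min(a, b) for a, b in zip(h1, h2))

def combined_score(first_score, second_score, w=0.5):
    """Combine the scores from the normalized and the (partly
    unnormalized) histograms; the weight w is illustrative."""
    return w * first_score + (1.0 - w) * second_score

# Each flattened bin stands for a (color-range, height-range) pair.
h_a = [0.5, 0.3, 0.2]
h_b = [0.4, 0.4, 0.2]
score = histogram_intersection(h_a, h_b)
```

Identical histograms score 1.0 and disjoint ones 0.0, so the intersection behaves as a similarity measure for the clothing-color comparison.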
20130163818METHOD FOR THE AUTHENTICATION AND/OR IDENTIFICATION OF A SECURITY ITEM - A method for authenticating and/or identifying a security article that includes a transparent or translucent substrate and, on a side of a first face of the substrate, a first image. The method includes superimposing at least partially the first image of the article with a second image. The second image may be produced by an electronic imager. The second image may be situated on the side of a second face of the substrate that is opposite to the first face. The method permits observation of an authentication and/or identification information item of the security article during a change of the angle of observation of the first and second superimposed images.06-27-2013
20130163817METHOD AND AN APPARATUS FOR GENERATING IMAGE CONTENT - A method and a system for generating image content. The method and system allow segments of a panoramic scene to be generated with reduced distortion. The method and system reduce the amount of distortion by mapping pixel data onto a pseudo camera focal plane which is provided substantially perpendicularly to the focal location of the camera that captured the image. A camera arrangement can implement the method and system.06-27-2013
20130163813IMAGE PROCESSING APPARATUS AND IMAGE PROCESSING METHOD - An image processing apparatus for compositing a plurality of images that are shot with different exposures, comprises an object detection unit configured to detect object regions from the images; a main object determination unit configured to determine a main object region from among the object regions; a distance calculation unit configured to calculate object distance information regarding distances to the main object region for the object regions; and a compositing unit configured to composite the object regions of the plurality of images using a compositing method based on the object distance information, so as to generate a high dynamic range image.06-27-2013
20130163812INFORMATION PROCESSOR, INFORMATION PROCESSING METHOD, AND RECORDING MEDIUM - An information processor includes an image capturing part configured to obtain a displayed screen image; a storage part configured to store the screen image each time the screen image is obtained; an image comparison part configured to generate one or more difference pixels by comparing a screen image stored last and the obtained screen image; a difference region determination part configured to determine the smallest rectangular region including the difference pixels as a difference region based on a predetermined rectangle formed of a predetermined number of pixels, the screen image being divided using the predetermined rectangle as a unit; a compressed difference image generation part configured to generate a compressed difference image by compressing a difference image using the predetermined rectangle as a unit, the difference region being cut out from the screen image into the difference image; and an image transmission part configured to transmit the compressed difference image.06-27-2013
20130064425IMAGE RECOGNIZING APPARATUS, IMAGE RECOGNIZING METHOD, AND PROGRAM - An image recognizing apparatus is equipped with: a detecting unit configured to detect, from an input image, a candidate area for a target of recognition, based on a likelihood of a partial area in the input image; an extracting unit configured to extract, from a plurality of candidate areas detected by the detecting unit, a set of the candidate areas which are in an overlapping relation; a classifying unit configured to classify an overlapping state of the set of the candidate areas; and a discriminating unit configured to discriminate whether or not the respective candidate areas are the target of recognition, based on the overlapping state of the set of the candidate areas and the respective likelihoods of the candidate areas.03-14-2013
20120195460CONTEXT AWARE AUGMENTATION INTERACTIONS - A mobile platform renders different augmented reality objects based on the spatial relationship, such as the proximity and/or relative positions between real-world objects. The mobile platform detects and tracks a first object and a second object in one or more captured images. The mobile platform determines the spatial relationship of the objects, e.g., the proximity or distance between objects and/or the relative positions between objects. The proximity may be based on whether the objects appear in the same image or the distance between the objects. Based on the spatial relationship of the objects, the augmentation object to be rendered is determined, e.g., by searching a database. The selected augmentation object is rendered and displayed.08-02-2012
20090245570METHOD AND SYSTEM FOR OBJECT DETECTION IN IMAGES UTILIZING ADAPTIVE SCANNING - An object detection method and system for detecting an object in an image utilizing an adaptive image scanning strategy is disclosed herein. An initial rough shift can be determined based on the size of a scanning window and the image can be scanned continuously for several detections of similar sizes using the rough shift. The scanning window can be classified with respect to a cascade of homogenous classification functions covering one or more features of the object. The size and scanning direction of the scanning window can be adaptively changed depending on the probability of the object occurrence in accordance with scan acceleration. The object can be detected by an object detector and can be localized with higher precision and accuracy.10-01-2009
20090238410FACE RECOGNITION WITH COMBINED PCA-BASED DATASETS - A face recognition method for working with two or more collections of facial images is provided. A representation framework is determined for a first collection of facial images including at least principal component analysis (PCA) features. A representation of said first collection is stored using the representation framework. A modified representation framework is determined based on statistical properties of original facial image samples of a second collection of facial images and the stored representation of the first collection. The first and second collections are combined without using original facial image samples. A representation of the combined image collection (super-collection) is stored using the modified representation framework. A representation of a current facial image, determined in terms of the modified representation framework, is compared with one or more representations of facial images of the combined collection. Based on the comparing, it is determined which, if any, of the facial images within the combined collection matches the current facial image.09-24-2009
20090238409Method for testing a motion vector - A method for testing a motion vector is described, which includes: provision of at least one item of motion information assigned to the image sequence; storing a first image section of the first image in a first buffer memory and storing a second image section of the second image in a second buffer memory, whereby a position of the first image section in the first image and a position of the second image section in the second image have a reciprocal offset, which is dependent on the at least one item of motion information; determining a first image block in the first image section and a second image block in the second image section using the motion vector; and comparing the contents of the first and of the second image block.09-24-2009
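The final block-comparison step is commonly implemented as a sum of absolute differences (SAD); the abstract does not name its comparison metric, so the sketch below is one plausible choice (the function name is hypothetical):

```python
def block_sad(block_a, block_b):
    """Compare the contents of two image blocks by summing the absolute
    differences of corresponding pixels. A low SAD suggests the motion
    vector linking the two blocks is valid; a high SAD suggests it is not."""
    return sum(abs(a - b)
               for row_a, row_b in zip(block_a, block_b)
               for a, b in zip(row_a, row_b))
```

A validity decision would then compare the SAD against a threshold: identical blocks give 0, and the score grows with every pixel-level mismatch.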
20090238408IMAGE-SIGNAL PROCESSOR, IMAGE-SIGNAL PROCESSING METHOD, AND PROGRAM - An image-signal processing apparatus configured to track an object moving in an image includes a setting unit configured to set an eliminating area in an image constituting a moving image; a motion-vector detecting unit configured to detect an object in the image constituting a moving image and detect a motion vector corresponding to the object using an area excluding the eliminating area in the image; and an estimating unit configured to estimate a position to which the object moves on the basis of the detected motion vector.09-24-2009
20090238407Object detecting apparatus and method for detecting an object - An apparatus for detecting an object includes: a candidate point detection unit detecting a candidate point between the ground and an object from an image; a tracking unit calculating positions of the candidate point at a first time and a second time; a difference calculation unit calculating a difference between an estimated position at the second time and the candidate point position at the second time; and a state determination unit determining a new state of the candidate point at the second time based on the difference, and changing a search threshold value or a state.09-24-2009
20090238406Dynamic state estimation - According to an implementation, a set of particles is provided for use in estimating a location of a state of a dynamic system. A local-mode seeking mechanism is applied to move one or more particles in the set of particles, and the number of particles in the set of particles is modified. The location of the state of the dynamic system is estimated using particles in the set of particles. Another implementation provides dynamic state estimation using a particle filter for which the particle locations are modified using a local-mode seeking algorithm based on a mean-shift analysis and for which the number of particles is adjusted using a Kullback-Leibler-distance sampling process. The mean-shift analysis may reduce degeneracy in the particles, and the sampling process may reduce the computational complexity of the particle filter. The implementation may be useful with non-linear and non-Gaussian systems.09-24-2009
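The Kullback-Leibler-distance sampling process mentioned here is usually realized with Fox's KLD-sampling bound on the particle count; the sketch below assumes that formulation, and the defaults for `epsilon` and `z` are illustrative, not taken from this abstract:

```python
import math

def kld_sample_size(k, epsilon=0.05, z=1.645):
    """Number of particles needed so that, with confidence given by the
    upper normal quantile z, the KL distance between the sampled and the
    true distributions stays below epsilon, where k is the number of
    non-empty histogram bins the particles currently occupy."""
    if k < 2:
        return 1
    a = 2.0 / (9.0 * (k - 1))
    return math.ceil((k - 1) / (2.0 * epsilon)
                     * (1.0 - a + math.sqrt(a) * z) ** 3)
```

The bound grows with `k`, so a posterior spread over many bins keeps many particles, while a tightly concentrated posterior lets the filter shrink its particle set and reduce computation, matching the complexity-reduction claim above.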
20090238405METHOD AND SYSTEM FOR ENABLING A USER TO PLAY A LARGE SCREEN GAME BY MEANS OF A MOBILE DEVICE - The present invention relates to a system and method for determining and tracking one or more objects, or one or more image sections, within each image of a video stream to be displayed on a user's mobile device, comprising: (a) one or more video streams to be run on a streaming server; (b) an image capture software component for capturing images of said one or more video streams, according to a first group of one or more sets of rules; (c) a receiver for receiving one or more commands generated by a user and transferring said commands to an extra-layer software component; (d) an extra-layer software component for: (d.1.) determining one or more objects or image sections within the captured images; (d.2.) tracking said objects or image sections within said captured images; and (d.3.) processing said captured images, to generate corresponding images to be displayed on a mobile device screen, according to a second group of one or more sets of rules and according to the user's commands received by means of said receiver; (e) a compression software component for compressing the images, processed by means of said extra-layer software component, according to a third group of one or more sets of rules; (f) a data software component for providing groups of one or more sets of rules to said image capture software component, said extra-layer software component and said compression software component; and (g) a transmitter for transmitting the compressed images to a mobile device. 
The system and method further comprise a relayout software component for: (a) determining one or more objects or image sections within each image of the one or more video streams; (b) tracking said objects or image sections within said each image of said one or more video streams; and (c) processing said each image, to generate corresponding images to be displayed on a mobile device screen, according to a first group of one or more sets of rules and according to the user's commands received by means of the receiver.09-24-2009
20090238404METHODS FOR USING DEFORMABLE MODELS FOR TRACKING STRUCTURES IN VOLUMETRIC DATA - A computerized method for tracking of a 3D structure in a 3D image including a plurality of sequential image frames, one of which is a current image frame, includes representing the 3D structure being tracked with a parametric model with parameters for local shape deformations. A predicted state vector is created for the parametric model using a kinematic model. The parametric model is deformed using the predicted state vector, and a plurality of actual points for the 3D structure is determined using a current frame of the 3D image, and displacement values and a measurement vectors are determined using differences between the plurality of actual points and the plurality of predicted points. The displacement values and the measurement vectors are filtered to generate an updated state vector and an updated covariance matrix, and an updated parametric model is generated for the current image frame using the updated state vector.09-24-2009
20130163810INFORMATION INQUIRY SYSTEM AND METHOD FOR LOCATING POSITIONS - An information inquiry system includes an information acquisition unit to acquire information of a bus route. An image capture unit captures an image of an object on the bus route. An information processing unit compares the object in the image with the information of the bus route to locate the object and access information of the located object. A storage unit stores the information of the object. An output unit displays the information of the bus route as a map and highlights the located object in the map. An information inquiry method is also provided.06-27-2013
20090034795METHOD FOR GEOLOCALIZATION OF ONE OR MORE TARGETS - The subject of the invention is a method for geolocalization of one or more stationary targets from an aircraft by means of a passive optronic sensor. The sensor acquires at least one image I02-05-2009
20090097705OBTAINING INFORMATION BY TRACKING A USER - A device may obtain tracking information of a face or a head of a user, determine a position and orientation of the user, and determine a direction of focus of the user based on the tracking information, the position, and the orientation. In addition, the device may retrieve information associated with a location at which the user focused.04-16-2009
20090232357DETECTING BEHAVIORAL DEVIATIONS BY MEASURING EYE MOVEMENTS - According to one embodiment of the present invention, a computer implemented method, apparatus, and computer usable program product is provided for detecting behavioral deviations in members of a cohort group. A member of a cohort group is identified. Each member of the cohort group shares a common characteristic. Ocular metadata associated with the member of the cohort group is generated in real-time. The ocular metadata describes movements of an eye of the member of the cohort group. The ocular metadata is analyzed to identify patterns of ocular movements. In response to the patterns of ocular movements indicating behavioral deviations in the member of the cohort group, the member of the cohort group is identified as a person of interest. A person of interest may be subjected to an increased level of monitoring and/or other security measures.09-17-2009
20090232356Tracking System and Method for Tracking Objects - Disclosed are a tracking system and a method for locating a plurality of objects. The tracking system includes an identification module, a receiver, a processing module, and a transmitter. The identification module is configured to obtain unit identification information associated with one or more traceable units. The receiver is configured to receive information of a spatial location and unit identification information of the one or more traceable units. The processing module is electronically coupled to the identification module and the receiver and is configured to identify the one or more traceable units based on the obtained unit identification information and the received unit identification information. The processing module is further configured to determine locations of the one or more traceable units based on the information of the spatial location of the one or more identified traceable units. The transmitter is electronically coupled to the processing module.09-17-2009
20090232353METHOD AND SYSTEM FOR MARKERLESS MOTION CAPTURE USING MULTIPLE CAMERAS - A completely automated end-to-end method and system for markerless motion capture performs segmentation of articulating objects in Laplacian Eigenspace and is applicable to handling poses of some complexity. 3D voxel representations of acquired images are mapped to a higher-dimensional space (09-17-2009
20090046893SYSTEM AND METHOD FOR TRACKING AND ASSESSING MOVEMENT SKILLS IN MULTIDIMENSIONAL SPACE - Accurate simulation of sport to quantify and train performance constructs by employing sensing electronics for determining, in essentially real time, the player's three dimensional positional changes in three or more degrees of freedom (three dimensions); and computer controlled sport specific cuing that evokes or prompts sport specific responses from the player that are measured to provide meaningful indicia of performance. The sport specific cuing is characterized as a virtual opponent that is responsive to, and interactive with, the player in real time. The virtual opponent continually delivers and/or responds to stimuli to create realistic movement challenges for the player.02-19-2009
20090232355REGISTRATION OF 3D POINT CLOUD DATA USING EIGENANALYSIS09-17-2009
20090010493Motion-Validating Remote Monitoring System - A method of autonomously monitoring a remote site, including the steps of locating a primary detector at a site to be monitored; creating one or more geospatial maps of the site using an overhead image of the site; calibrating the primary detector to the geospatial map using a detector-specific model; detecting an object in motion at the site; tracking the moving object on the geospatial map; and alerting a user to the presence of motion at the site. In addition, thermal image data from infrared cameras, rather than optical/visual image data, is used to create detector-specific models and geospatial maps in substantially the same way that optical cameras and optical image data would be used.01-08-2009
20090010492IMAGE RECOGNITION DEVICE, FOCUS ADJUSTMENT DEVICE, IMAGING APPARATUS, IMAGE RECOGNITION METHOD AND FOCUS ADJUSTMENT METHOD - An image recognition device includes a detection unit which is configured to detect a first difference between partial information of at least a part of the first image information and the reference information and to detect a second difference between partial information of at least a part of the second image information and the reference information. A recognition unit is configured to recognize a first area corresponding to the reference image in the first image information. A calculation unit is configured to calculate a determination value based on a reference area in the second image information corresponding to the first area by weighting the second difference. The recognition unit is configured to recognize a second area corresponding to the reference image in the second image information based on at least one of the second difference and the determination value.01-08-2009
20090010491METHOD AND APPARATUS FOR PROVIDING PICTURE FILE - A method and an apparatus for providing a picture file are provided. The picture file providing apparatus includes a controller which searches for one or more picture files based on a location of a subject, and a screen display unit which forms a display screen to display the one or more picture files that were found, in order to provide a user with the direction information included in each picture file. Each picture file includes picture data, information on a location in which the picture data was created, and information on a direction of a captured image of a subject included in the picture data.01-08-2009
20090010490SYSTEM AND PROCESS FOR DETECTING, TRACKING AND COUNTING HUMAN OBJECTS OF INTEREST - A method of identifying, tracking, and counting human objects of interest based upon at least one pair of stereo image frames taken by at least one image capturing device, comprising the steps of: obtaining said stereo image frames and converting each said stereo image frame to a rectified image frame using calibration data obtained for said at least one image capturing device; generating a disparity map based upon a pair of said rectified image frames; generating a depth map based upon said disparity map and said calibration data; identifying the presence or absence of said objects of interest from said depth map and comparing each of said objects of interest to existing tracks comprising previously identified objects of interest; for each said presence of an object of interest, adding said object of interest to one of said existing tracks if said object of interest matches said one existing track, or creating a new track comprising said object of interest if said object of interest does not match any of said existing tracks; updating each said existing track; and maintaining a count of said objects of interest in a given time period based upon said existing tracks created or modified during said given time period.01-08-2009
20120114180Identification Of Objects In A 3D Video Using Non/Over Reflective Clothing - A computing system generates a depth map from at least one image, detects objects in the depth map, and identifies anomalies in the objects from the depth map. Another computing system identifies at least one anomaly in an object in a depth map, and uses the anomaly to identify future occurrences of the object. A system includes a three dimensional (3D) imaging system to generate a depth map from at least one image, an object detector to detect objects within the depth map, and an anomaly detector to detect anomalies in the detected objects, wherein the anomalies are logical gaps and/or logical protrusions in the depth map.05-10-2012
20120114179FACE DETECTION DEVICE, IMAGING APPARATUS AND FACE DETECTION METHOD - A face detection device for detecting the face of a person in an input image may include the following elements: a face detection circuit including a hardware circuit configured to detect a face in an input image; a signal processing circuit configured to perform signal processing based on an input image signal in accordance with a rewritable program including a face detection program for detecting a face in an input image; and a controller configured to allow the face detection circuit and the signal processing circuit to perform face detection on an image of a frame or on respective images of adjacent frames among consecutive frames, and to control face detection by the signal processing circuit on the basis of a face detection result obtained by the face detection circuit.05-10-2012
20120114178VISION SYSTEM AND METHOD OF ANALYZING AN IMAGE - A vision system comprises a camera that captures an image and a processor coupled to process the received image to determine at least one feature descriptor for the image. The processor includes an interface to access annotated map data that includes geo-referenced feature descriptors. The processor is configured to perform a matching procedure between the at least one feature descriptor determined for the at least one image and the retrieved geo-referenced feature descriptors.05-10-2012
20120114177IMAGE PROCESSING SYSTEM, IMAGE CAPTURE APPARATUS, IMAGE PROCESSING APPARATUS, CONTROL METHOD THEREFOR, AND PROGRAM - There is provided an image processing system in which an image capture apparatus and an image processing apparatus are connected to each other via a network. When a likelihood indicating the probability that a detection target object detected from a captured image is a predetermined type of object does not meet a designated criterion, the image capture apparatus generates tentative object information for the detection target object, and transmits it to the image processing apparatus. The image processing apparatus detects, from detection targets designated by the tentative object information, a detection target as the predetermined type of object.05-10-2012
20120114176IMAGE PROCESSING APPARATUS AND IMAGE PROCESSING METHOD - An image processing apparatus includes an object detection unit configured to detect an object from an image, a tracking unit configured to track the detected object, a trajectory management unit configured to manage a trajectory of the object being tracked, and a specific object detection unit configured to detect a specific object from the image. In a case where the specific object detection unit detects the object being tracked by the tracking unit to be the specific object, the trajectory management unit manages the trajectory of the object being tracked at a time point before the time point at which the object being tracked is detected to be the specific object as the trajectory of the specific object.05-10-2012
20120114175OBJECT POSE RECOGNITION APPARATUS AND OBJECT POSE RECOGNITION METHOD USING THE SAME - An object pose recognition apparatus and method. The object pose recognition method includes acquiring first image data of an object to be recognized and 3-dimensional (3D) point cloud data of the first image data, and storing the first image data and the 3D point cloud data in a database, receiving input image data of the object photographed by a camera, extracting feature points from the stored first image data and the input image data, matching the stored 3D point cloud data and the input image data based on the extracted feature points and calculating a pose of the photographed object, and shifting the 3D point cloud data based on the calculated pose of the object, restoring second image data based on the shifted 3D point cloud data, and re-calculating the pose of the object using the restored second image data and the input image data.05-10-2012
20120114174Voxel map generator and method thereof - A volume cell (VOXEL) map generation apparatus includes an inertia measurement unit to calculate inertia information by calculating inertia of a volume cell (VOXEL) map generator, a Time of Flight (TOF) camera to capture an image of an object, thereby generating a depth image of the object and a black-and-white image of the object, an estimation unit to calculate position and posture information of the VOXEL map generator by performing an Iterative Closest Point (ICP) algorithm on the basis of the depth image of the object, and to recursively estimate a position and posture of the VOXEL map generator on the basis of VOXEL map generator inertia information calculated by the inertia measurement unit and VOXEL map generator position and posture information calculated by the ICP algorithm, and a grid map construction unit to configure a grid map based on the recursively estimated VOXEL map generator position and posture.05-10-2012
20120114173IMAGE PROCESSING DEVICE, OBJECT TRACKING DEVICE, AND IMAGE PROCESSING METHOD - An edge extracting unit of a contour image generator generates an edge image of an input image using an edge extraction filter, etc. A foreground processing unit extracts the foreground from the input image using a background image and expands the foreground to generate an expanded foreground image. The foreground processing unit further generates a foreground boundary image constructed of the boundary of the expanded foreground region. A mask unit masks the edge image using the expanded foreground image to eliminate edges in the background. A synthesis unit synthesizes the masked edge image and the foreground boundary image to generate a contour image.05-10-2012
20120114172TECHNIQUES FOR FACE DETECTION AND TRACKING - Techniques are disclosed that involve face detection. For instance, face detection tasks may be decomposed into sets of one or more sub-tasks. In turn the sub-tasks of the sets may be allocated across multiple image frames. This allocation may be based on a resource budget. In addition, face tracking tasks may be performed.05-10-2012
20120114171EDGE DIVERSITY OBJECT DETECTION - Methods for detecting objects in an image. The method includes a) receiving magnitude and orientation values for each pixel in an image and b) assigning each pixel to one of a predetermined number of orientation bins based on the orientation value of each pixel. The method also includes c) determining, for a first pixel, a maximum of all the pixel magnitude values for each orientation bin in a predetermined region surrounding the first pixel. The method also includes d) summing the maximum pixel magnitude values for each of the orientation bins in the predetermined region surrounding the first pixel, e) assigning the sum to the first pixel and f) repeating steps c), d) and e) for all the pixels in the image.05-10-2012
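Steps a) through f) above can be sketched directly. `edge_diversity_map` is a hypothetical name, and `n_bins` and `radius` stand in for the predetermined number of orientation bins and the predetermined region around each pixel:

```python
def edge_diversity_map(magnitudes, orientations, n_bins=4, radius=1):
    """Steps a)-f): per pixel, take the maximum magnitude in each
    orientation bin over a (2*radius+1)^2 neighbourhood, then sum the
    per-bin maxima and assign that sum to the pixel."""
    h, w = len(magnitudes), len(magnitudes[0])
    # b) assign each pixel to an orientation bin (orientations in degrees,
    # folded into [0, 180) as is usual for undirected edges)
    bins = [[int(orientations[y][x] / (180.0 / n_bins)) % n_bins
             for x in range(w)] for y in range(h)]
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            best = [0.0] * n_bins            # c) max magnitude per bin
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w:
                        b = bins[ny][nx]
                        best[b] = max(best[b], magnitudes[ny][nx])
            out[y][x] = sum(best)            # d)-e) sum maxima, assign
    return out                               # f) repeated for all pixels
```

A pixel surrounded by strong edges in many different orientations (high "edge diversity") receives a large sum, while a region with edges in only one orientation contributes a single bin's maximum.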
20080317286SECURITY DEVICE AND SYSTEM - A security device and system is disclosed. This security device is particularly useful in a security system where there are many security cameras to be monitored. This device automatically highlights to a user a camera feed in which an incident is occurring. This assists a user in identifying incidents and to make an appropriate decision regarding whether or not to intervene. This highlighting is performed by a trigger signal generated in accordance with a comparison between a sequence of representations of sensory data and other corresponding sequences of representations of sensory data.12-25-2008
20100086177IMAGE PROCESSING APPARATUS AND METHOD - An image processing apparatus which is capable of suppressing an increase in the circuit size of buffers between data-processing circuits, thereby enabling an associated component thereof to be implemented by hardware. A position control unit sequentially shifts a position of a sub window image by a predetermined skip amount in a predetermined scanning direction, for scanning, and further repeats the scanning for skipped sub window images, after shifting a start position of the scanning, to thereby determine positions of all sub window images, each as an area from which a face image is to be detected.04-08-2010
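The scan-then-reshift strategy above can be sketched as a position generator. This is a minimal sketch assuming a square window and a uniform skip amount; the function name is hypothetical:

```python
def scan_positions(image_w, image_h, win, skip):
    """Yield top-left corners of sub window images: scan with a coarse
    step `skip`, then repeat the scan from shifted start positions until
    every offset has been covered, as the position control unit does."""
    for start_y in range(skip):
        for start_x in range(skip):
            for y in range(start_y, image_h - win + 1, skip):
                for x in range(start_x, image_w - win + 1, skip):
                    yield (x, y)
```

Each coarse pass touches only every `skip`-th position, so a small buffer suffices per pass, yet the union of the shifted passes still visits every valid window position exactly once.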
20130163811LAPTOP DETECTION - Provided herein are devices, systems, and methods for the detection of objects (e.g., laptop computers, electronics, explosives, etc.) within luggage. In particular, methods are provided for the detection of laptop computers within luggage (e.g., luggage containing other metallic objects and/or electronic devices).06-27-2013
20080310678Pedestrian Detecting Apparatus - A first pedestrian judging unit judges, on the basis of the size and motion state of a target three-dimensional object, whether the object is a pedestrian. A second pedestrian judging unit judges, on the basis of shape data on the object, whether the object is a pedestrian. A pedestrian judging unit finally determines that the object is a pedestrian when both the first and second pedestrian judging units judge the object as a pedestrian, when the second pedestrian judging unit judges the object as a pedestrian, when the first pedestrian judging unit judges the object as a pedestrian and a result of this judgment is held for a preset period, or when the first pedestrian judging unit judges the object as a pedestrian in a current judgment operation and the second pedestrian judging unit judged the object as a pedestrian in the previous judging operation.12-18-2008
20120237085METHOD FOR DETERMINING THE POSE OF A CAMERA AND FOR RECOGNIZING AN OBJECT OF A REAL ENVIRONMENT - A method for determining the pose of a camera (09-20-2012
20120237084SYSTEM AND METHOD FOR IDENTIFYING THE EXISTENCE AND POSITION OF TEXT IN VISUAL MEDIA CONTENT AND FOR DETERMINING A SUBJECT'S INTERACTIONS WITH THE TEXT - A reading meter system and method is provided for identifying the existence and position of text in visual media content (e.g., a document to be displayed (or being displayed) on a computer monitor or other display device) and determining if a subject has interacted with the text and/or the level of the subject's interaction with the text (e.g., whether the subject looked at the text, whether the subject read the text, whether the subject comprehended the text, whether the subject perceived and made sense of the text, and/or other levels of the subject's interaction with the text). The determination may, for example, be based on data generated from an eye tracking device. The reading meter system may be used alone and/or in connection with an emotional response tool (e.g., a software-based tool for determining the subject's emotional response to the text and/or other elements of the visual media content on which the text appears). If used together, the reading meter system and emotional response tool advantageously may both receive, and perform processing on, eye data generated from a common eye tracking device.09-20-2012
20130163814IMAGE SENSING APPARATUS, INFORMATION PROCESSING APPARATUS, CONTROL METHOD, AND STORAGE MEDIUM - Face recognition data to be used in recognizing a person corresponding to a face image is managed by associating the feature amount of the face image, a first person's name, and a second person's name different from the first person's name with each other for each registered person. A person corresponding to a face image included in a captured image is identified using the feature amount managed in the face recognition data, and the second person's name for the identified person is stored in a storage in association with the captured image. When the image stored in the storage is read out and displayed on a display device, the first person's name which corresponds to the second person's name associated with the readout image is displayed on the display device together with the readout image.06-27-2013
20090220122TRACKING SYSTEM FOR ORTHOGNATHIC SURGERY - Systems and methods are provided for measuring relative movement between two portions of the facial skeleton. A target (09-03-2009
20110026770Person Following Using Histograms of Oriented Gradients - A method for using a remote vehicle having a stereo vision camera to detect, track, and follow a person, the method comprising: detecting a person using a video stream from the stereo vision camera and histogram of oriented gradient descriptors; estimating a distance from the remote vehicle to the person using depth data from the stereo vision camera; tracking a path of the person and estimating a heading of the person; and navigating the remote vehicle to an appropriate location relative to the person.02-03-2011
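The histogram-of-oriented-gradients descriptor named in entry 20110026770 can be illustrated with a minimal, pure-Python cell computation; the cell size, bin count, and synthetic patch below are my own simplifications, and production systems use optimized library implementations:

```python
import math

def hog_cell(patch, bins=9):
    """Histogram of oriented gradients for one cell of a grayscale
    patch (list of rows), with unsigned orientations binned over
    0-180 degrees and votes weighted by gradient magnitude."""
    h, w = len(patch), len(patch[0])
    hist = [0.0] * bins
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = patch[y][x + 1] - patch[y][x - 1]  # central differences
            gy = patch[y + 1][x] - patch[y - 1][x]
            mag = math.hypot(gx, gy)
            ang = math.degrees(math.atan2(gy, gx)) % 180.0
            hist[int(ang // (180.0 / bins)) % bins] += mag
    return hist

# a vertical step edge puts all gradient energy in the 0-degree bin
patch = [[0, 0, 10, 10]] * 4
h = hog_cell(patch)
```

A person detector of this family concatenates many such cell histograms over a detection window and feeds the vector to a classifier; the block normalization and classifier stages are omitted here.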
20120269390IMAGE PROCESSING APPARATUS, CONTROL METHOD THEREOF, AND STORAGE MEDIUM - An image processing apparatus comprising a storage unit configured to store image data; a readout unit configured to read out the image data stored in the storage unit; a detection unit configured to detect a target object from the image data read out by the readout unit; a conversion unit configured to convert a resolution of the image data read out by the readout unit; and a write unit configured to write the image data having the resolution converted by the conversion unit in the storage unit, wherein the readout unit outputs the readout image data in parallel to the detection unit and the conversion unit.10-25-2012
20120269382Object Recognition Device and Object Recognition Method - An object recognition device includes: an image-capturing unit mounted to a mobile body; an image generation unit that converts images captured by the image-capturing unit at different time points to corresponding synthesized images as seen vertically downwards from above; a detection unit that compares together a plurality of the synthesized images and detects corresponding regions; and a recognition unit that recognizes an object present upon the road surface from a difference between the corresponding regions.10-25-2012
20090087025Shadow and highlight detection system and method of the same in surveillance camera and recording medium thereof - A method and system for detecting a shadow region and a highlight region from a foreground region in a surveillance system, and a recording medium thereof, are provided. The system includes an image capturing unit to capture a new image, a background model unit to receive the new image and update a stored background model with the new image, a difference image obtaining unit to compare the new image with the background model and to obtain a difference image between the new image and the background model, a penumbra region extraction unit to extract a partial shadow region or a partial highlight region by measuring a sharpness of an edge of the difference image and expanding a background region, and an umbra region extraction unit to extract a complete shadow region or a complete highlight region based on the result of the extraction by the penumbra region extraction unit.04-02-2009
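Entry 20090087025 distinguishes shadow and highlight regions from true foreground. The patent's own method measures edge sharpness in the difference image; the sketch below instead uses a simpler, commonly paired heuristic — a luminance-ratio band test — and the band limits are illustrative assumptions, not values from the patent:

```python
def classify_pixel(bg, cur, shadow_band=(0.4, 0.9), highlight_band=(1.1, 1.6)):
    """Label a pixel by the ratio of current to background luminance.
    A cast shadow dims the background by a roughly constant factor;
    a highlight brightens it. Band limits here are illustrative."""
    if bg == 0:
        return "foreground"
    r = cur / bg
    if shadow_band[0] <= r <= shadow_band[1]:
        return "shadow"
    if highlight_band[0] <= r <= highlight_band[1]:
        return "highlight"
    if abs(r - 1.0) < 0.1:
        return "background"
    return "foreground"
```

Pixels labeled shadow or highlight can then be removed from the foreground mask so that only the true moving object remains.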
20120002843DROWSINESS ASSESSMENT DEVICE AND PROGRAM - Local maxima and local minima are derived from eyelid-openness time series data in a segment in which a continuous closed-eye period of extracted blinks is a specific time duration (for example, 1 second) or longer. When plural local minima are present in such a segment, blinks are extracted as the openness passes down and back through each value of a variable closed-eye threshold that is slid in set steps from the derived local maximum towards the local minimum, and an inter-blink interval is derived. A blink burst is determined to have occurred when the derived inter-blink interval is 1 second or less (and, for example, greater than 0.2 seconds), thereby detecting the blink burst. Blink bursts can thus be detected with good precision, and the state of drowsiness can be assessed with good precision.01-05-2012
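The core of the blink-burst test in entry 20120002843 — extract blink onsets from an openness series, then flag onsets closer together than a gap threshold — can be sketched as below. This simplifies the patent's variable sliding threshold to a single fixed closed-eye threshold, and the sampling period and threshold values are illustrative:

```python
def blink_bursts(openness, dt=0.1, closed_thr=0.2, burst_gap=1.0):
    """Detect a blink burst in an eyelid-openness series sampled every
    dt seconds: a blink onset is a transition from open to below
    closed_thr, and two onsets within burst_gap seconds form a burst."""
    onsets = []
    prev_open = True
    for i, v in enumerate(openness):
        closed = v < closed_thr
        if closed and prev_open:
            onsets.append(i * dt)  # record the blink onset time
        prev_open = not closed
    return any(b - a <= burst_gap for a, b in zip(onsets, onsets[1:]))
```

Two blinks 0.3 seconds apart trigger a burst; an isolated blink does not.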
20120300983SYSTEMS AND METHODS FOR MULTI-PASS ADAPTIVE PEOPLE COUNTING UTILIZING TRAJECTORIES - People are counted in a segment of video with a video processing system that is configured with a first set of parameters, producing a first output. Based on this first output, a second set of parameters is chosen, and people are counted in the segment again using the second set of parameters, producing a second output. People are also counted with the video played forward and with the video played backward, and the results of these two counts are reconciled to produce a more accurate people count.11-29-2012
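Entry 20120300983 does not specify how the forward and backward counts are reconciled; the rule below (keep agreeing values, average disagreements) is purely my assumption, shown only to make the two-pass idea concrete:

```python
def reconcile_counts(forward, backward):
    """Reconcile per-segment people counts from a forward pass and a
    backward pass over the same video. When the passes agree, keep the
    value; when they disagree, take the rounded mean as a compromise.
    This rule is a hypothetical stand-in for the patent's method."""
    return [f if f == b else round((f + b) / 2)
            for f, b in zip(forward, backward)]
```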
20120213406SUBJECT DESIGNATING DEVICE AND SUBJECT TRACKING APPARATUS - A subject designating device includes: a representative value calculation unit that calculates a representative value for each image of a brightness image and chrominance images based upon pixel values indicated at pixels present within a first subject area; a second image generation unit that creates a differential image by subtracting the representative value from pixel values indicated at pixels present within a second subject area; a binarizing unit that binarizes the differential image; a synthesizing unit that creates a synthetic image by combining binary images in correspondence to the brightness image and the chrominance images; a mask extraction unit that extracts a mask constituted with a white pixel cluster from the synthetic image; an evaluation value calculation unit that calculates an evaluation value indicating a likelihood of the mask representing the subject; and a subject designating unit that designates the subject in the target image based upon the evaluation value.08-23-2012
20100183196DYNAMIC TRACKING OF SOFT TISSUE TARGETS WITH ULTRASOUND IMAGES, WITHOUT USING FIDUCIAL MARKERS - An apparatus and method of dynamically tracking a soft tissue target with ultrasound images, without the use of fiducial markers. In one embodiment, the apparatus includes an ultrasound imager to generate a reference ultrasound and a first ultrasound image having a soft tissue target, and a processing device coupled to the ultrasound imager to receive the reference ultrasound image and the first ultrasound image, to register the first ultrasound image with the reference ultrasound image, and to determine a displacement of the soft tissue target based on registration of the first ultrasound image with the reference ultrasound image.07-22-2010
20090185715SYSTEM AND METHOD FOR DEFORMABLE OBJECT RECOGNITION - The present invention provides a system and method for detecting deformable objects in images even in the presence of partial occlusion, clutter and nonlinear illumination changes. A holistic approach for deformable object detection is disclosed that combines the advantages of a match metric that is based on the normalized gradient direction of the model points, the decomposition of the model into parts and a search method that takes all search results for all parts at the same time into account. Despite the fact that the model is decomposed into sub-parts, the relevant size of the model that is used for the search at the highest pyramid level is not reduced. Hence, the present invention does not suffer the speed limitations of a reduced number of pyramid levels that prior art methods have.07-23-2009
20110280442OBJECT MONITORING SYSTEM AND METHOD - An object monitoring system and method identify a foreground object from a current frame of a video stream of a monitored area. The object monitoring system determines whether an object has entered or exited the monitored area according to the foreground object, and generates a security alarm. The object monitoring system searches N pieces of reference images just before an image is captured at the time of a generation of the security alarm, and detects information related to the object from the N pieces of reference images. By comparing the related information with vector descriptions of human body models stored in a feature database, the holder or remover of the object can be recognized.11-17-2011
20110280447METHODS AND SYSTEMS FOR CONTENT PROCESSING - Cell phones and other portable devices are equipped with a variety of technologies by which existing functionality can be improved, and new functionality can be provided. Some relate to visual search capabilities, and determining appropriate actions responsive to different image inputs. Others relate to processing of image data. Still others concern metadata generation, processing, and representation. Yet others relate to coping with fixed focus limitations of cell phone cameras, e.g., in reading digital watermark data. Still others concern user interface improvements. A great number of other features and arrangements are also detailed.11-17-2011
20110280446Method and Apparatus for Selective Disqualification of Digital Images - An unsatisfactory scene is disqualified as an image acquisition control for a camera. An image is acquired. One or more eye regions are determined. The eye regions are analyzed to determine whether they are blinking, and if so, then the scene is disqualified as a candidate for a processed, permanent image while the eye is completing the blinking.11-17-2011
20110280444CAMERA AND CORRESPONDING METHOD FOR SELECTING AN OBJECT TO BE RECORDED - A camera is described having an image capturing device, an evaluation and control unit and a storage unit, the evaluation and control unit analyzes an image sequence having at least two successively captured images recorded by the image capturing device to segment and stabilize at least one object to be recorded during the image recording. The evaluation and control unit ascertains a deliberate panning movement of the camera and compares it with ascertained movements of objects represented in the captured images, the evaluation and control unit determining at least one object as an object to be recorded, the ascertained movement of which is most consistent with the camera's ascertained panning movement, and the evaluation and control unit storing an image section of the image captured by the image capturing device in the storage unit which represents the at least one object to be recorded. Also described is a corresponding method.11-17-2011
20110280443IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, AND COMPUTER-READABLE RECORDING MEDIUM - An image processing apparatus includes an identification criterion creating unit that creates an identification criterion so as to enable identification of specific regions in a target image to be processed that is selected in chronological order from among images constituting a set of time-series images; includes a feature data calculating unit that calculates the feature data of each segmented region in the target image to be processed; and includes a specific region identifying unit that, based on the feature data of each segmented region, identifies the specific regions in the target image to be processed by using the identification criterion. Moreover, the identification criterion creating unit creates the identification criterion based on the pieces of feature data of the specific regions identified in the images that have been already processed.11-17-2011
20110280441PROJECTOR AND PROJECTION CONTROL METHOD - A method controls a projection of a projector. The method predetermines hand gestures, and assigns an operation function of an input device to each of the predetermined hand gestures. When an electronic file is projected onto a screen, the projector receives an image of a speaker captured by an image-capturing device connected to the projector. The projector identifies whether a hand gesture of the speaker matches one of the predetermined hand gestures. If the hand gesture matches one of the hand gestures, the projector may execute a corresponding assigned operation function.11-17-2011
20110280440Method and Apparatus Pertaining to Rendering an Image to Convey Levels of Confidence with Respect to Materials Identification - A control circuit accesses image information regarding an image of a target. This information comprises, at least in part, information regarding material content of the target. The control circuit also accesses confidence information regarding at least one degree of confidence as pertains to the target's material content. The control circuit uses this confidence information to facilitate rendering the image such that the rendered image integrally conveys information both about materials included in the target and a relative degree of confidence that the materials are correctly identified.11-17-2011
20110280439TECHNIQUES FOR PERSON DETECTION - Techniques are disclosed that involve the detection of persons. For instance, embodiments may receive, from an image sensor, one or more images (e.g., thermal images, infrared images, visible light images, three dimensional images, etc.) of a detection space. Based at least on the one or more images, embodiments may detect the presence of person(s) in the detection space. Also, embodiments may determine one or more characteristics of such detected person(s). Exemplary characteristics include (but are not limited to) membership in one or more demographic categories and/or activities of such persons. Further, based at least on such person detection and characteristic determination, embodiments may control delivery of content to an output device.11-17-2011
20110280438IMAGE PROCESSING METHOD, INTEGRATED CIRCUIT FOR IMAGE PROCESSING AND IMAGE PROCESSING SYSTEM - An image processing method includes: identifying at least one moving object of a current image according to the current image and at least one image different from the current image; and utilizing a processing circuit to generate an adjusted current image by performing a first image adjustment operation upon the at least one moving object of the current image and performing a second image adjustment operation upon a surrounding region of the at least one moving object of the current image, where the first image adjustment operation is different from the second image adjustment operation.11-17-2011
20100322476VISION BASED REAL TIME TRAFFIC MONITORING - A system and method for detecting and tracking one or more vehicles using a system for obtaining two-dimensional visual data depicting traffic flow on a road is disclosed. In one exemplary embodiment, the system and method identifies groups of features for determining traffic data. The features are classified as stable features or unstable features based on whether each feature is on the frontal face of a vehicle close to the road plane. In another exemplary embodiment, the system and method identifies vehicle base fronts as a basis for determining traffic data. In yet another exemplary embodiment, the system and method includes an automatic calibration procedure based on identifying two vanishing points.12-23-2010
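A vanishing point, as used in the calibration step of entry 20100322476, is the image point where projections of parallel world lines (for example, lane edges) intersect. A minimal sketch of that intersection, under the assumption that each line is given by two image points:

```python
def vanishing_point(l1, l2):
    """Intersect two image lines, each given as ((x0, y0), (x1, y1)),
    to estimate the vanishing point of the parallel world lines they
    depict. Returns None for (near-)parallel image lines."""
    (x0, y0), (x1, y1) = l1
    (x2, y2), (x3, y3) = l2
    d = (x1 - x0) * (y3 - y2) - (y1 - y0) * (x3 - x2)
    if abs(d) < 1e-9:
        return None  # no finite intersection
    t = ((x2 - x0) * (y3 - y2) - (y2 - y0) * (x3 - x2)) / d
    return (x0 + t * (x1 - x0), y0 + t * (y1 - y0))
```

Real calibration procedures fit the lines robustly from many tracked features; the two-point form here is only the geometric core.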
20120288140METHOD AND SYSTEM FOR SELECTING A VIDEO ANALYSIS METHOD BASED ON AVAILABLE VIDEO REPRESENTATION FEATURES - A method is performed for selecting a video analysis method based on available video representation features. The method includes: determining a plurality of available video representation features for a first video output from a first video source and for a second video output from a second video source; and analyzing the plurality of video representation features as compared to at least one threshold to select one of a plurality of video analysis methods to track an object between the first and the second videos.11-15-2012
20120288139SMART BACKLIGHTS TO MINIMIZE DISPLAY POWER CONSUMPTION BASED ON DESKTOP CONFIGURATIONS AND USER EYE GAZE - Methods and devices to conserve power on a mobile device determine an active region on a display and dim a portion of the display backlight corresponding to the non-active regions. The method includes detecting an active region and a non-active region on a display. The detection may be based on a user interaction with the display or processing an image of the user to determine where on the display the user is looking. The method may control a brightness of a backlight of the display depending on the active and non-active region.11-15-2012
20120288143MOTION TRACKING SYSTEM FOR REAL TIME ADAPTIVE IMAGING AND SPECTROSCOPY - This invention relates to a system that adaptively compensates for subject motion in real-time in an imaging system. An object orientation marker (11-15-2012
20120288151ENVIRONMENT RECOGNITION DEVICE AND ENVIRONMENT RECOGNITION METHOD - There are provided an environment recognition device and an environment recognition method. The environment recognition device provisionally determines a specific object corresponding to a target portion from a luminance of the target portion, groups adjacent target portions provisionally determined to correspond to a same specific object as a target object, derives a representative distance that is a representative value of the relative distance of the target portions in the target object, and further groups, as the target object, target portions that correspond to the same specific object in luminance, when a difference in horizontal distance and a difference in height of the target portions from the target object fall within a first predetermined range and a difference between the relative distance and the representative distance of the target portions falls within a second predetermined range.11-15-2012
20120288153APPARATUS FOR DETECTING OBJECT FROM IMAGE AND METHOD THEREFOR - An image processing apparatus stores a background model in which a feature amount is associated with time information for each state at each position of an image to be a background, extracts a feature amount for each position of an input video image, compares the feature amount in the input video image with that of each state in the background model, to determine the state similar to the input video image, and updates the time information of the state similar to the input video image, determines a foreground area in the input video image based on the time information of the state similar to the input video image, detects a predetermined subject from the foreground area, and updates the time information of the state in the background model.11-15-2012
20090147992THREE-LEVEL SCHEME FOR EFFICIENT BALL TRACKING - A three-level ball detection and tracking method is disclosed. The ball detection and tracking method employs three levels to generate multiple ball candidates rather than a single one. The ball detection and tracking method constructs multiple trajectories using candidate linking, then uses optimization criteria to determine the best ball trajectory.06-11-2009
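Entry 20090147992 says the tracker links ball candidates into multiple trajectories and then applies optimization criteria to choose the best one; the criteria themselves are not given in the abstract. The sketch below uses total frame-to-frame acceleration as a stand-in smoothness criterion, which is my assumption, not the patent's:

```python
def best_trajectory(trajectories):
    """Among candidate ball trajectories (lists of (x, y) points per
    frame), pick the one with the smallest total squared second
    difference, i.e. the smoothest path. The criterion is a
    hypothetical stand-in for the patent's optimization."""
    def roughness(tr):
        total = 0.0
        for p0, p1, p2 in zip(tr, tr[1:], tr[2:]):
            ax = p2[0] - 2 * p1[0] + p0[0]  # discrete acceleration in x
            ay = p2[1] - 2 * p1[1] + p0[1]  # discrete acceleration in y
            total += ax * ax + ay * ay
        return total
    return min(trajectories, key=roughness)

straight = [(0, 0), (1, 0), (2, 0), (3, 0)]
jittery = [(0, 0), (1, 2), (2, 0), (3, 2)]
best = best_trajectory([jittery, straight])
```

Generating multiple candidates per frame and deferring the choice to the trajectory level, as the abstract describes, makes the tracker robust to single-frame false detections.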
20120189165METHOD OF PROCESSING BODY INSPECTION IMAGE AND BODY INSPECTION APPARATUS - A method of processing a body inspection image and a body inspection apparatus are disclosed. In one embodiment, the method may comprise recognizing a target region by means of pattern recognition, and performing privacy protection processing on the recognized target region. The target region may comprise a head and/or crotch part. According to the present disclosure, it is possible to achieve a compromise between privacy protection and body inspection.07-26-2012
20120189164RULE-BASED COMBINATION OF A HIERARCHY OF CLASSIFIERS FOR OCCLUSION DETECTION - A person detection system includes a face detector configured to detect a face in an input video sequence, the face detector outputting a face keyframe to be stored if a face is detected; and a person detector configured to detect a person in the input video sequence if the face detector fails to detect a face, the person detector outputting a person keyframe to be stored, if a person is detected in the input video sequence.07-26-2012
20120189160LINE-OF-SIGHT DETECTION APPARATUS AND METHOD THEREOF - A line-of-sight detection apparatus includes a detection unit configured to detect a face from image data, a first extraction unit configured to extract a feature amount corresponding to a direction of the face from the image data, a calculation unit configured to calculate a line-of-sight reliability of each of a right eye and a left eye based on the face, a selection unit configured to select an eye according to the line-of-sight reliability, a second extraction unit configured to extract a feature amount of an eye region of the selected eye from the image data, and an estimation unit configured to estimate a line of sight of the face based on the feature amount corresponding to the face direction and the feature amount of the eye region.07-26-2012
20110123067Method And System for Tracking a Target - A method and system for tracking one or more targets is described. The method includes the step of selecting a first template having a first image of a target and cyclically repeated steps of accumulating new images of the target, producing updated templates containing the new images, and tracking the target using the updated templates. Embodiments of the method use techniques directed to detection and mitigation of target occlusion events.05-26-2011
20100266161METHOD AND APPARATUS FOR PRODUCING LANE INFORMATION - A method of producing lane information for use in a map database is disclosed. In at least one embodiment, the method includes acquiring one or more source images of a road surface and associated position and orientation data, the road having a direction and lane markings parallel to the direction of the road; acquiring road information representative of the direction of said road; transforming the one or more source images to obtain a transformed image in dependence of the road information, wherein each column of pixels of the transformed image corresponds to a surface parallel to the direction of said road; applying a filter with asymmetrical mask on the transformed image to obtain a filtered image; and producing lane information from the filtered image in dependence of the position and orientation data associated with the one or more source images.10-21-2010
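The asymmetric filter in entry 20100266161 runs across the transformed image, where each pixel column is parallel to the road, so lane markings appear as near-vertical bright stripes. A 1-D sketch of such a mask applied to one row (the mask values are illustrative, not from the patent):

```python
def filter_row(row, mask=(-1, -1, 2)):
    """Apply an asymmetric 1-D mask across one row of a top-down road
    image. A bright lane marking next to dark asphalt yields a strong
    one-sided response where the mask's positive tap meets the marking."""
    k = len(mask)
    out = []
    for i in range(len(row) - k + 1):
        out.append(sum(m * row[i + j] for j, m in enumerate(mask)))
    return out
```

On a row with a dark-to-bright-to-dark profile, the response peaks at the rising edge and goes strongly negative at the falling edge, so marking edges can be localized by thresholding.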
20090087029 4D GIS based virtual reality for moving target prediction - The 4D-GIS system deploys a GIS-based algorithm to determine the location of a moving target by registering the terrain image obtained from a Moving Target Indication (MTI) sensor or small Unmanned Aerial Vehicle (UAV) camera with the digital map from GIS. For motion prediction the target state is estimated using an Extended Kalman Filter (EKF). To enhance the prediction of the moving target's trajectory, a fuzzy logic reasoning algorithm estimates the destination of the moving target by synthesizing data from GIS, target statistics, tactics and other information derived from past experience, such as the likely moving direction of targets in correlation with the nature of the terrain and the surmised mission.04-02-2009
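The EKF mentioned in the entry above reduces, for a linear constant-velocity motion model, to the ordinary Kalman filter; one 1-D predict/update cycle can be sketched as follows (the state layout, noise values, and 1-D reduction are illustrative simplifications, not the patent's full formulation):

```python
def kf_predict_update(state, P, z, dt=1.0, q=0.01, r=1.0):
    """One predict/update cycle of a 1-D constant-velocity Kalman
    filter. state = (position, velocity); P = 2x2 covariance as nested
    lists; z = position measurement; q, r = process/measurement noise."""
    x, v = state
    # predict: constant-velocity motion model
    x, v = x + v * dt, v
    P = [[P[0][0] + dt * (P[1][0] + P[0][1]) + dt * dt * P[1][1] + q,
          P[0][1] + dt * P[1][1]],
         [P[1][0] + dt * P[1][1], P[1][1] + q]]
    # update: blend the position measurement z via the Kalman gain
    s = P[0][0] + r
    kx, kv = P[0][0] / s, P[1][0] / s
    y = z - x  # innovation
    x, v = x + kx * y, v + kv * y
    P = [[(1 - kx) * P[0][0], (1 - kx) * P[0][1]],
         [P[1][0] - kv * P[0][0], P[1][1] - kv * P[0][1]]]
    return (x, v), P

# measurement agrees with the prediction, so the estimate stays (1.0, 1.0)
state, P = kf_predict_update((0.0, 1.0), [[1.0, 0.0], [0.0, 1.0]], z=1.0)
```

The "extended" variant linearizes nonlinear motion or measurement models around the current estimate at each step, but the predict/update structure is the same.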
20100014708TARGET RANGE-FINDING METHOD AND DEVICE - The present invention provides a target range-finding method and device. The device includes a marking portion on the target, which has a set area or size and is defined by first and second measurement edges. An image acquisition device includes a lens and operating screen. The operating screen displays the target image captured by the image acquisition device. A measuring mark selection unit selects the position of the first and second measurement edges of the target image from the operating screen of the image acquisition device. A processing unit calculates the range of the target. The target range-finding device offers better range-finding accuracy, ease of operation and higher efficiency as well as improved applicability.01-21-2010
20100124358METHOD FOR TRACKING MOVING OBJECT - A method for tracking a moving object is provided. The method detects the moving object in a plurality of continuous images so as to obtain space information of the moving object in each of the images. In addition, appearance features of the moving object in each of the images are captured to build an appearance model. Finally, the space information and the appearance model are combined to track a moving path of the moving object in the images. Accordingly, the present invention is able to keep tracking the moving object even if the moving object leaves the monitoring frame and returns again, so as to assist the supervisor in detecting abnormal behavior and responding accordingly.05-20-2010
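An appearance model of the kind entry 20100124358 combines with spatial information is often a normalized color or intensity histogram compared by histogram intersection; the bin count and intensity quantization below are illustrative choices, not the patent's:

```python
def hist(pixels, bins=4):
    """Quantized intensity histogram (values 0-255), normalized so the
    bins sum to 1; serves as a simple appearance model of a patch."""
    h = [0] * bins
    for p in pixels:
        h[min(p * bins // 256, bins - 1)] += 1
    n = float(len(pixels))
    return [c / n for c in h]

def appearance_match(model, candidate):
    """Histogram intersection in [0, 1]; 1 means identical appearance.
    A high score lets a tracker re-acquire an object that left the
    monitored frame and returned."""
    return sum(min(a, b) for a, b in zip(model, candidate))

model = hist([10, 10, 200, 200])
```

Because the appearance model persists while the object is out of frame, matching it against new detections is what allows tracking to resume after re-entry, as the abstract claims.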
20100128930DETECTION OF ABANDONED AND VANISHED OBJECTS - Disclosed herein are a method and system for classifying a detected region of change of a video frame as one of an abandoned object event and an object removal event, wherein a plurality of boundary blocks define a boundary of said region of change. For each one of a set of said boundary blocks (05-27-2010
20110142285SYSTEM AND METHOD FOR TRANSITIONING FROM A MISSILE WARNING SYSTEM TO A FINE TRACKING SYSTEM IN A DIRECTIONAL INFRARED COUNTERMEASURES SYSTEM - A method for transitioning a target from a missile warning system to a fine tracking system in a directional countermeasures system includes capturing at least one image within a field of view of the missile warning system. The method further includes identifying a threat from the captured image or images and identifying features surrounding the threat. These features are registered with the threat, and an image within a field of view of the fine tracking system is captured. The registered features are used to identify a location of the threat within this captured image.06-16-2011
20110129119MULTI-OBJECT TRACKING WITH A KNOWLEDGE-BASED, AUTONOMOUS ADAPTATION OF THE TRACKING MODELING LEVEL - The invention proposes a method for object and object configuration tracking based on sensory input data, the method comprising the steps of:06-02-2011
20110280445METHOD AND SYSTEM FOR ANALYZING AN IMAGE GENERATED BY AT LEAST ONE CAMERA - A method for analyzing an image of a real object, particularly a printed media object, generated by at least one camera comprises the following steps: generating at least a first image by the camera capturing at least one real object, defining a first search domain comprising multiple data sets of the real object, each of the data sets being indicative of a respective portion of the real object, and analyzing at least one characteristic property of the first image of the camera with respect to the first search domain, in order to determine whether the at least one characteristic property corresponds to information of at least a particular one of the data sets of the first search domain. If it is determined that the at least one characteristic property corresponds to information of at least a particular one of the data sets, a second search domain comprising only the particular one of the data sets is defined and the second search domain is used for analyzing the first image and/or at least a second image generated by the camera.11-17-2011
20090003653Trajectory processing apparatus and method - A trajectory processing apparatus comprises a trajectory database configured to store a position coordinate of a movable body detected from a camera image in association with data that specifies the camera image from which the movable body is detected, and a camera image database configured to store the camera image. A control section fetches the position coordinate of the movable body and the specifying data for the camera image from which the movable body is detected from the trajectory database. Further, the position coordinate of the movable body fetched from the trajectory database is displayed in a display section as a trajectory of the movable body. Furthermore, the control section acquires from the camera image database the camera image specified by the specifying data fetched from the trajectory database. Moreover, this camera image is displayed in the display section.01-01-2009
20090208053AUTOMATIC IDENTIFICATION AND REMOVAL OF OBJECTS IN AN IMAGE, SUCH AS WIRES IN A FRAME OF VIDEO - A wire tracking system is described that provides a method and system for automatically locating wires in a digital image and tracking the located wires through a sequence of digital images. The wire tracking system is particularly good at removing wires from complex shots where background replacement is difficult. The wire tracking system performs complex signal processing to automatically remove the wire from the original image while preserving grain and background detail. Thus, the wire tracking system provides a reliable method of automatically identifying wires and replacing the wires with a reconstructed background image, and frees artists to make other enhancements to the scene.08-20-2009
20120288154Road-Shoulder Detecting Device and Vehicle Using Road-Shoulder Detecting Device - Disclosed is a road-shoulder detecting device including a distance-information calculating portion for detecting the presence of a physical object and calculating the distance from the subject vehicle to the object from input three-dimensional image information relating to the environment around the vehicle, a vehicular road surface detecting portion for detecting, from a distance image, the road surface on which the subject vehicle travels, a height difference calculating portion for measuring the height difference between the detected vehicular road surface and an off-road region, and a road shoulder decision portion for deciding, from the height difference, whether a road shoulder forms the boundary between the surface and the region in a case where the off-road region is lower than the vehicular road surface.11-15-2012
20120288152OBJECT RECOGNITION APPARATUS, CONTROL METHOD FOR OBJECT RECOGNITION APPARATUS AND STORAGE MEDIUM - An object recognition apparatus comprises: an extraction unit configured to extract a partial region from an image and extract a feature amount; a recognition unit configured to recognize whether the partial region is a target object based on the feature amount and one of a first recognition model including a feature amount of a positive example indicating the target object and a negative example indicating a background and a second recognition model including that of the positive example; an updating unit configured to update the first recognition model by adding the feature amount; and an output unit configured to output an object region recognized as being the target object, wherein the recognition unit performs recognition based on the first recognition model if the object region was output for a previous image, and based on the second recognition model if not.11-15-2012
20120288145ENVIRONMENT RECOGNITION DEVICE AND ENVIRONMENT RECOGNITION METHOD - There are provided an environment recognition device and an environment recognition method. The environment recognition device obtains a luminance of a target portion in a detection area; obtains a height of the target portion; derives a white balance correction value, assuming that white balancing is performed on the obtained luminance; derives a corrected luminance by subtracting from the obtained luminance the white balance correction value and a color correction value based upon a color correction intensity indicating a degree of influence of environment light; and provisionally determines a specific object corresponding to the target portion from the corrected luminance of the target portion based on an association of a luminance range and the specific object retained in a data retaining unit.11-15-2012
20120288150ENVIRONMENT RECOGNITION DEVICE AND ENVIRONMENT RECOGNITION METHOD - There are provided an environment recognition device and an environment recognition method. The environment recognition device obtains luminances of a target portion in a detection area of a luminance image, assigns a color identifier to the target portion according to the luminances of the target portion, based on association between a color identifier and a luminance range retained in a data retaining unit, and groups target portions assigned one of one or more color identifiers associated with a same specific object, and of which position differences in the width direction and in the height direction are within a predetermined range, based on association between the color identifier and the luminance range retained in the data retaining unit.11-15-2012
20120288144IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, AND MOTION DETECTION SYSTEM - According to one embodiment, an image processing apparatus includes an integrator and a motion determination unit. The motion determination unit determines movement of an object. The integrator integrates information on a first frame in a unit domain in the image of each frame, and integrates information on a second frame while inverting a sign of a signal level in the integration of the first frame. The motion determination unit makes the motion determination in the unit domain according to the integration result of the integrator.11-15-2012
20120288142OBJECT TRACKING - In general, the subject matter described in this specification can be embodied in methods, systems, and program products. A computing system accesses an indication of a first template that includes a region of a first image. The region of the first image includes a graphical representation of a face. The computing system receives a second image. The computing system identifies indications of multiple candidate templates. Each respective candidate template from the multiple candidate templates includes a respective candidate region of the second image. The computing system compares at least the first template to each of the multiple candidate templates, to identify a matching template from among the multiple candidate templates that includes a candidate region that matches the region of the first image that includes the graphical representation of the face.11-15-2012
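The candidate-template comparison this abstract describes can be sketched as an exhaustive sliding-window search scored with normalized cross-correlation. The NCC score and the brute-force search are illustrative assumptions; the patent's actual matching criterion is not given in the abstract.

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation between two equally sized grayscale patches."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

def best_matching_template(first_template, second_image, size):
    """Slide a window of the given (height, width) over second_image and return
    the top-left corner and score of the candidate region that best matches
    first_template."""
    h, w = size
    best_score, best_pos = -2.0, None
    H, W = second_image.shape
    for y in range(H - h + 1):
        for x in range(W - w + 1):
            score = ncc(first_template, second_image[y:y + h, x:x + w])
            if score > best_score:
                best_score, best_pos = score, (y, x)
    return best_pos, best_score
```

In practice the candidate set would be restricted to regions near the previous face location rather than the whole frame.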
20120288141Device, Method and Program for Processing Image - Disclosed herein is a device for processing a moving image, the device including: a selection unit which selects an image group composed of a plurality of still images including a target image from the moving image, according to specified information for specifying the target image among the plurality of still images included in the moving image; an acquisition unit which performs an acquisition process of acquiring the plurality of still images included in the image group from the moving image; and a synthesis unit which performs a synthesis process of synthesizing the plurality of acquired still images and generating a high-resolution image of the target image having a pixel density higher than that of the target image, wherein the selection unit has a function for performing selection by a first mode for selecting the target image and a still image which is located behind the target image in time-series order.11-15-2012
20090041297Human detection and tracking for security applications - A computer-based system for performing scene content analysis for human detection and tracking may include a video input to receive a video signal; a content analysis module, coupled to the video input, to receive the video signal from the video input, and analyze scene content from the video signal and determine an event from one or more objects visible in the video signal; a data storage module to store the video signal, data related to the event, or data related to configuration and operation of the system; and a user interface module, coupled to the content analysis module, to allow a user to configure the content analysis module to provide an alert for the event, wherein, upon recognition of the event, the content analysis module produces the alert.02-12-2009
20110299728AUTOMATIC DEPTH CAMERA AIMING - Automatic depth camera aiming is provided by a method which includes receiving from the depth camera one or more observed depth images of a scene. The method further includes, if a point of interest of a target is found within the scene, determining if the point of interest is within a far range relative to the depth camera. The method further includes, if the point of interest of the target is within the far range, operating the depth camera with a far logic, or if the point of interest of the target is not within the far range, operating the depth camera with a near logic.12-08-2011
20120189162MOBILE UNIT POSITION DETECTING APPARATUS AND MOBILE UNIT POSITION DETECTING METHOD - The mobile unit position detecting apparatus generates target data by extracting a target from an image shot by the image capturing device, extracts target setting data that best matches the target data, is prerecorded in a recording unit and is shot for each target, obtains a target ID corresponding to the extracted target setting data from the recording unit, detects position data associated with the obtained target ID, tracks the target in the image shot by the image capturing device, and calculates an aspect ratio of the target being tracked in the image. If the aspect ratio is equal to or lower than a threshold value, the mobile unit position detecting apparatus outputs the detected position data.07-26-2012
20130022242IDENTIFYING ANOMALOUS OBJECT TYPES DURING CLASSIFICATION - Techniques are disclosed for identifying anomaly object types during classification of foreground objects extracted from image data. A self-organizing map and adaptive resonance theory (SOM-ART) network is used to discover object type clusters and classify objects depicted in the image data based on pixel-level micro-features that are extracted from the image data. Importantly, the discovery of the object type clusters is unsupervised, i.e., performed independent of any training data that defines particular objects, allowing a behavior-recognition system to forgo a training phase and for object classification to proceed without being constrained by specific object definitions. The SOM-ART network is adaptive and able to learn while discovering the object type clusters and classifying objects and identifying anomaly object types.01-24-2013
20130022241ENHANCING GMAPD LADAR IMAGES USING 3-D WALLIS STATISTICAL DIFFERENCING - A method for processing XYZ point cloud of a scene acquired by a GmAPD LADAR includes: performing on a computing device a three-dimensional statistical differencing on the XYZ point cloud obtained from the GmAPD LADAR to produce a SD point cloud; and displaying an image of the SD point cloud.01-24-2013
20130022239IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, AND STORAGE MEDIUM - An image processing apparatus is capable of appropriately extracting a frame of an output target from a moving image. The image processing apparatus includes an analysis unit configured to analyze a plurality of analysis regions in each of a plurality of frames included in the moving image, an extraction unit configured to extract the frame of the output target from among the plurality of frames by comparing analysis results of the plurality of analysis regions in each of the plurality of frames analyzed by the analysis unit, for each set of analysis regions corresponding to each other between the plurality of frames, and an output unit configured to output the frame of the output target extracted by the extraction unit.01-24-2013
20130022240Remote Automated Planning and Tracking of Recorded Data - The invention is a system for the remote automated planning and tracking of recorded data. The inventive system preferably is a software product that is used in conjunction with a foreign object tracking system and a visual inspection tracking system to provide an automated eddy current and visual inspection planning and tracking approach. The system provides a link between, for example, visual inspection of nuclear power plant steam generator secondary sides with eddy current inspection testing of the steam generator tubes from the primary side. This allows for possible loose part indications from the eddy current testing to be available to visual inspectors through foreign object tracking system for subsequent visual inspection and possible retrieval.01-24-2013
20120099765METHOD AND SYSTEM OF VIDEO OBJECT TRACKING - Methods and systems are provided to determine a target tracking box that surrounds a moving target. The pixels that define an image within the target tracking box can be classified as background pixels, foreground pixels, and changing pixels, which may include pixels of an articulation, such as a portion of the target that moves relative to the target tracking box. Identification of background image pixels improves the signal-to-noise ratio of the image, which is defined as the ratio of the number of pixels belonging to the foreground to the number of changing pixels, and which is used to track the moving target. Accordingly, tracking of small and multiple moving targets becomes possible because of the increased signal-to-noise ratio.04-26-2012
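The pixel classification and signal-to-noise ratio defined in this abstract can be sketched as a per-pixel comparison against a background model. The two difference thresholds are illustrative assumptions, not values from the patent.

```python
import numpy as np

def classify_and_snr(box_pixels, background_model, fg_thresh=30, chg_thresh=10):
    """Classify pixels inside a tracking box against a background model and
    return the foreground/changing pixel-count ratio that the abstract calls
    the signal-to-noise ratio. Thresholds are hypothetical."""
    diff = np.abs(box_pixels.astype(int) - background_model.astype(int))
    foreground = diff >= fg_thresh              # strong change: target pixels
    changing = (diff >= chg_thresh) & ~foreground  # weak change: articulations
    n_changing = changing.sum()
    snr = foreground.sum() / n_changing if n_changing else float("inf")
    return foreground, changing, snr
```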
20120099764CALCULATING TIME TO GO AND SIZE OF AN OBJECT BASED ON SCALE CORRELATION BETWEEN IMAGES FROM AN ELECTRO OPTICAL SENSOR - A method and a system for calculating a time-to-go value between a vehicle and an intruding object. A first image of the intruding object at a first point of time is retrieved. A second image of the intruding object at a second point of time is retrieved. The first image and the second image are filtered so that they become independent of absolute signal energy and so that edges become enhanced. An X fractional pixel position and a Y fractional pixel position are set to zero; the X fractional pixel position denotes a horizontal displacement at sub-pixel level and the Y fractional pixel position denotes a vertical displacement at sub-pixel level. A scale factor is selected. The second image is scaled with the scale factor and resampled to the X fractional pixel position and the Y fractional pixel position, which results in a resampled scaled image. Correlation values are calculated between the first image and the resampled scaled image for different horizontal and vertical displacements at pixel level of the resampled scaled image. A maximum correlation value at sub-pixel level is found based on the correlation values, and the X fractional pixel position and the Y fractional pixel position are updated. j is set to j+1, and the scaling of the second image, the calculation of correlation values, the finding of the maximum correlation value, and the incrementing of j are repeated a predetermined number of times. i is then set to i+1, and the selection of the scale factor together with the foregoing steps is likewise repeated a predetermined number of times. A largest maximum correlation value is found among the maximum correlation values, together with the scale factor associated with that largest maximum correlation value. The time to go is calculated based on that scale factor.04-26-2012
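The scale-search loop and the final time-to-go computation can be sketched as follows. The nearest-neighbour resampling stands in for the patent's sub-pixel resampling, and the looming relation TTG ≈ dt / (s − 1) is a standard assumption; the abstract does not give the patent's exact formula.

```python
import numpy as np

def best_scale(first, second, scales):
    """Pick the candidate scale factor whose rescaled second image correlates
    best with the first image (nearest-neighbour resampling; a stand-in for
    the patent's sub-pixel resampling and displacement search)."""
    h, w = first.shape
    f = first - first.mean()
    best_s, best_c = None, -np.inf
    for s in scales:
        ys = np.clip((np.arange(h) / s).round().astype(int), 0, h - 1)
        xs = np.clip((np.arange(w) / s).round().astype(int), 0, w - 1)
        r = second[np.ix_(ys, xs)]
        r = r - r.mean()
        c = (f * r).sum() / (np.linalg.norm(f) * np.linalg.norm(r) + 1e-12)
        if c > best_c:
            best_s, best_c = s, c
    return best_s

def time_to_go(scale_factor, dt):
    """Time-to-go from the best-correlating scale factor between two images
    taken dt seconds apart, using the looming relation TTG = dt / (s - 1)."""
    if scale_factor <= 1.0:
        raise ValueError("scale factor must exceed 1 for an approaching object")
    return dt / (scale_factor - 1.0)
```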
20120099763IMAGE RECOGNITION APPARATUS - An image recognition part of an image recognition apparatus recognizes an object based on a target area in an outside-vehicle image obtained by a camera installed in a vehicle. A position identifying part identifies an optical axis position of the camera relative to the vehicle based on the outside-vehicle image, and an area changing part changes a position of the target area in the outside-vehicle image according to the optical axis position of the camera. Therefore, it is possible to recognize an object properly based on the target area in the outside-vehicle image even though the optical axis position of the camera is displaced.04-26-2012
20120099762IMAGE PROCESSING APPARATUS AND IMAGE PROCESSING METHOD - In a case where detecting a face contained in an image, the face is detected in all directions of the image by combining the rotation of a detector in the face detecting direction, and the rotation of the image itself. If the angle made by the image direction and the detecting direction of the detector is an angle at which image deterioration readily occurs, the detection range of the detector is made narrower than that for an angle at which image deterioration hardly occurs.04-26-2012
20110299733SYSTEM AND METHOD FOR PROCESSING RADAR IMAGERY - The present invention relates to a system and method for processing imagery, such as may be derived from a coherent imaging system, e.g. a synthetic aperture radar (SAR). The system processes sequences of SAR images of a region taken in at least two different passes and generates Coherent Change Detection (CCD) base images from corresponding images of each pass. A reference image is formed from one or more of the CCD base images, and an incoherent change detection image is formed by comparison between a given CCD base image and the reference image. The technique is able to detect targets from tracks left in soft ground, or from shadow areas caused by vehicles, and so does not rely on a reflection directly from the target itself. The technique may be implemented on data recorded in real time, or may be done in post-processing on a suitable computer system.12-08-2011
20110299735METHOD OF USING STRUCTURAL MODELS FOR OPTICAL RECOGNITION - A method and system for recognizing all varieties of objects in an image by using structure models are disclosed. Structural elements are sought when comparing a structural model with an image but only within a framework of one or more generated hypotheses. The method for identifying objects includes preliminarily creating a structural model of objects by specifying a plurality of basic geometric structural elements corresponding to one or more portions of the object, recording a spatial characteristic of each identified basic geometric structural element, and recording a relational characteristic for each specified basic geometric structural element. Objects in the image are isolated and a list of hypotheses for each object is provided. Hypotheses are tested by determining if the corresponding group of basic geometric structural elements corresponds to another supposed object described in a classifier. Results of testing of hypotheses may be saved and the results may be used to identify objects.12-08-2011
20110299734METHOD AND SYSTEM FOR DETECTING TARGET OBJECTS - With a method and a system for detecting target objects, which are detected by a sensor device, for example, by radar, laser or passive reception of electromagnetic waves, through an imaging electro-optical sensor with subsequent digital image evaluation, it is proposed for a rapid allocation of the image sensor with changeable direction that takes into account the different importance of the individual target objects to predefine in an assessment device different assessment criteria for a target parameter of the respective target objects and to derive therefrom a prioritization value for each individual target. Based on the prioritization values a ranking is compiled of the target objects for detection by the image sensor, and the target objects are successively detected by the image sensor in the order given by the ranking and evaluated, in particular classified, in an image evaluation device.12-08-2011
20110299732SYSTEM OF DRONES PROVIDED WITH RECOGNITION BEACONS - The present invention relates to a system of drones provided with recognition beacons.12-08-2011
20110299731INFORMATION PROCESSING DEVICE AND METHOD, AND PROGRAM - An information processing device includes a first calculation unit which calculates a score of each sample image, including a positive image in which an object as an identification object is present and a negative image in which the object as the identification object is not present, for each weak identifier of an identifier including a plurality of weak identifiers; a second calculation unit which calculates the number of scores when the negative image is processed which are less than a minimum score among scores when the positive image is processed; and a realignment unit which realigns the weak identifiers in order from the weak identifier for which the number calculated by the second calculation unit is a maximum.12-08-2011
20110299730VEHICLE LOCALIZATION IN OPEN-PIT MINING USING GPS AND MONOCULAR CAMERA - Described herein is a method and system for vehicle localization in an open pit mining environment having intermittent or incomplete GPS coverage. The system comprises GPS receivers associated with the vehicles and providing GPS measurements when available, as well as one or more cameras.12-08-2011
20110299727Specific Absorption Rate Measurement and Energy-Delivery Device Characterization Using Thermal Phantom and Image Analysis - A system for use in characterizing an energy applicator includes a test fixture assembly. The test fixture assembly includes an interior area defined therein. The system also includes a thermally-sensitive medium disposed in the interior area of the test fixture assembly. The thermally-sensitive medium includes a cut-out portion defining a void in the thermally-sensitive medium. The cut-out portion is configured to receive at least a portion of the energy applicator therein.12-08-2011
20120106791IMAGE PROCESSING APPARATUS AND METHOD THEREOF - An image processing apparatus such as a surveillance apparatus and method thereof are provided. The image processing apparatus includes: an object detecting unit which detects a plurality of moving objects from at least one of two or more images obtained by photographing a surveillance area from two or more view points, respectively; a depth determination unit which determines depths of the moving objects based on the two or more images, wherein the depth determination unit determines the moving objects as different objects if the moving objects have different depths.05-03-2012
20110286633System And Method For Detecting, Tracking And Counting Human Objects of Interest - A method of identifying, tracking, and counting human objects of interest based upon at least one pair of stereo image frames taken by at least one image capturing device, comprising the steps of: obtaining said stereo image frames and converting each said stereo image frame to a rectified image frame using calibration data obtained for said at least one image capturing device; generating a disparity map based upon a pair of said rectified image frames; generating a depth map based upon said disparity map and said calibration data; identifying the presence or absence of said objects of interest from said depth map and comparing each of said objects of interest to existing tracks comprising previously identified objects of interest; for each said presence of an object of interest, adding said object of interest to one of said existing tracks if said object of interest matches said one existing track, or creating a new track comprising said object of interest if said object of interest does not match any of said existing tracks; updating each said existing track; and maintaining a count of said objects of interest in a given time period based upon said existing tracks created or modified during said given time period.11-24-2011
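The disparity-map-to-depth-map step this abstract describes follows the standard pinhole stereo relation depth = f · B / d (focal length times baseline over disparity); the function below is a minimal sketch of that conversion, not the patent's implementation.

```python
import numpy as np

def depth_from_disparity(disparity_px, focal_length_px, baseline_m):
    """Convert a disparity map (pixels) into a depth map (metres) using the
    pinhole stereo relation depth = f * B / d. Zero-disparity pixels, which
    correspond to points at infinity, are mapped to inf."""
    d = np.asarray(disparity_px, dtype=float)
    return np.where(d > 0, focal_length_px * baseline_m / np.maximum(d, 1e-12), np.inf)
```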
20110286632ASSEMBLY COMPRISING A RADAR AND AN IMAGING ELEMENT - An assembly comprising a radar and a camera for both deriving data relating to a golf ball and a golf club at launch, radar data relating to the ball and club being illustrated in an image provided by the camera. The data illustrated may be trajectories of the ball/club/club head, directions and/or angles, such as an angle of a face of the golf club striking the ball, the lie angle of the club head or the like. An assembly of this type may also be used for defining an angle or direction in the image and rotating e.g. an image of the golfer to have the determined direction or angle coincide with a predetermined angle/direction in order to be able to compare different images.11-24-2011
20110286630Visualization of Medical Image Data With Localized Enhancement - Systems and methods for visualization of medical image data with localized enhancement. In one implementation, image data of a structure of interest is resampled within a predetermined plane to generate at least one background image of the structure of interest. In addition, at least one local image is reconstructed to visually enhance at least one local region of interest associated with the structure of interest. The local image and the background image are then combined to generate a composite image.11-24-2011
20110286629Method for reconstruction of a two-dimensional sectional image corresponding to a sectional plane through a recorded object and x-ray device - A method for reconstruction of a two-dimensional sectional image corresponding to a sectional plane through a recorded object from two-dimensional projection images recorded along a recording trajectory at different projection angles with an X-ray device is proposed. The sectional plane having at least two intersection points with the imaging trajectory is selected. After selection of the sectional plane, an intermediate function on the sectional plane is determined by backprojection of the projection images processed with a differentiation filter. The object densities forming the sectional image are determined from the intermediate function by a two-dimensional iterative deconvolution method.11-24-2011
20110286628SYSTEMS AND METHODS FOR OBJECT RECOGNITION USING A LARGE DATABASE - A method of organizing a set of recognition models of known objects stored in a database of an object recognition system includes determining a classification model for each known object and grouping the classification models into multiple classification model groups. Each classification model group identifies a portion of the database that contains the recognition models of the known objects having classification models that are members of the classification model group. The method also includes computing a representative classification model for each classification model group. Each representative classification model is derived from the classification models that are members of the classification model group. When a target object is to be recognized, the representative classification models are compared to a classification model of the target object to enable selection of a subset of the recognition models of the known objects for comparison to a recognition model of the target object.11-24-2011
20110286627METHOD AND APPARATUS FOR TRACKING AND RECOGNITION WITH ROTATION INVARIANT FEATURE DESCRIPTORS - Various methods for tracking and recognition with rotation invariant feature descriptors are provided. One example method includes generating an image pyramid of an image frame, detecting a plurality of interest points within the image pyramid, and extracting feature descriptors for each respective interest point. According to some example embodiments, the feature descriptors are rotation invariant. Further, the example method may also include tracking movement by matching the feature descriptors to feature descriptors of a previous frame and performing recognition of an object within the image frame based on the feature descriptors. Related example methods and example apparatuses are also provided.11-24-2011
20110286631REAL TIME TRACKING/DETECTION OF MULTIPLE TARGETS - A mobile platform detects and tracks at least one target in real-time, by tracking at least one target, and creating an occlusion mask indicating an area in a current image to detect a new target. The mobile platform searches the area of the current image indicated by the occlusion mask to detect the new target. The use of a mask to instruct the detection system where to look for new targets increases the speed of the detection task. Additionally, to achieve real-time operation, the detection and tracking is performed in the limited time budget of the (inter) frame duration. Tracking targets is given higher priority than detecting new targets. After tracking is completed, detection is performed in the remaining time budget for the frame duration. Detection for one frame, thus, may be performed over multiple frames.11-24-2011
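The occlusion mask this abstract describes, which restricts new-target detection to areas not covered by already-tracked targets, can be sketched as a boolean image built from the tracked bounding boxes. The (x, y, w, h) box layout is an assumption for illustration.

```python
import numpy as np

def occlusion_mask(image_shape, tracked_boxes):
    """Build a boolean mask that is True where the detector may look for NEW
    targets, i.e. everywhere except the boxes of currently tracked targets.
    Boxes are hypothetical (x, y, w, h) tuples in pixel coordinates."""
    mask = np.ones(image_shape, dtype=bool)
    for x, y, w, h in tracked_boxes:
        mask[y:y + h, x:x + w] = False  # tracked region: skip during detection
    return mask
```

Restricting the search this way is what lets detection fit into the time budget left over after tracking each frame.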
20090202108ASSAYING AND IMAGING SYSTEM IDENTIFYING TRAITS OF BIOLOGICAL SPECIMENS - A method or system is provided for assaying specimens. In connection with such system or method, plural multi-pixel target images of a field of view are obtained at different corresponding points in time over a given sample period. A background image is obtained using a plural set of the plural target images. For a range of points in time, the background image is removed from the target images to produce corresponding background-removed target images. Analysis is performed using at least a portion of the corresponding background-removed target images to identify visible features of the specimens. A holding structure is provided to hold a set of discrete specimen containers. A positioning mechanism is provided to position a plural subset of the containers to place the moving specimens within the plural subset of the containers within a field of view of the camera.08-13-2009
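The background estimation and removal this abstract describes can be sketched with a per-pixel median over the target images, a common estimator when specimens move against a static background; the abstract does not name the estimator the patent actually uses.

```python
import numpy as np

def remove_background(target_images):
    """Estimate a background image as the per-pixel median of the target
    images and subtract it from each, yielding background-removed images in
    which only the moving specimens remain."""
    stack = np.stack([np.asarray(im, dtype=float) for im in target_images])
    background = np.median(stack, axis=0)
    return stack - background, background
```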
20110293141DETECTION OF VEHICLES IN AN IMAGE - The invention concerns a traffic surveillance system that is used to detect and track vehicles in video taken of a road from a low mounted camera. The inventors have discovered that even in heavily occluded scenes, due to traffic density or the angle of low mounted cameras capturing the images, at least one horizontal edge of the windshield is least likely to be occluded for each individual vehicle in the image. Thus, it is an advantage of the invention that the direct detection of a windshield on its own can be used to detect a vehicle in a single image. Multiple models are projected.12-01-2011
20120014561IMAGE TAKING APPARATUS AND IMAGE TAKING METHOD - An image taking apparatus according to an aspect of the invention comprises: an image pickup device which picks up an object image and outputs the picked-up image data; a face detection device which detects human faces in the image data; a face-distance calculating device which calculates the distance between the faces among a plurality of faces detected by the face detection device; and a controlling device which controls the image pickup device to start shooting, after a shooting instruction is issued, in the case where the distance between the faces calculated by the face-distance calculating device is not greater than a first predetermined threshold value. The image taking apparatus thus allows shooting the moment the distance between the faces becomes no greater than the predetermined threshold value.01-19-2012
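The face-distance trigger this abstract describes can be sketched as a check on the closest pair of detected face centers. The pixel-distance criterion and the pairwise minimum are illustrative assumptions standing in for the patent's "first predetermined threshold" test.

```python
import math

def should_start_shooting(face_centers, threshold_px):
    """Return True when the closest pair of detected face centers (as (x, y)
    pixel tuples, a hypothetical representation) is no farther apart than
    threshold_px, mirroring the abstract's shooting-start condition."""
    if len(face_centers) < 2:
        return False  # need at least two faces to measure a distance
    closest = min(
        math.dist(a, b)
        for i, a in enumerate(face_centers)
        for b in face_centers[i + 1:]
    )
    return closest <= threshold_px
```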
20120014559Method and System for Semantics Driven Image Registration - A method and system for automatic semantics driven registration of medical images is disclosed. Anatomic landmarks and organs are detected in a first image and a second image. Pathologies are also detected in the first image and the second image. Semantic information is automatically extracted from text-based documents associated with the first and second images, and the second image is registered to the first image based on the detected anatomic landmarks, organs, and pathologies, and the extracted semantic information.01-19-2012
20120014558POSITION-DEPENDENT GAMING, 3-D CONTROLLER, AND HANDHELD AS A REMOTE - Methods and systems for using a position of a mobile device with an integrated display as an input to a video game or other presentation are presented. Embodiments include rendering an avatar on a mobile device such that it appears to overlay a competing user in the real world. Using the mobile device's position, view direction, and the other user's mobile device position, an avatar (or vehicle, etc.) is depicted at an apparently inertially stabilized location of the other user's mobile device or body. Some embodiments may estimate the other user's head and body positions and angles and reflect them in the avatar's gestures.01-19-2012
20110293145DRIVING SUPPORT DEVICE, DRIVING SUPPORT METHOD, AND PROGRAM - Provided are a driving support device, a driving support method, and a program with which the driver can more intuitively and accurately determine the distance to another vehicle at the side rear.12-01-2011
20110293144Method and System for Rendering an Entertainment Animation - Systems and methods for rendering an entertainment animation. The system can comprise a user input unit for receiving a non-binary user input signal; an auxiliary signal source for generating an auxiliary signal; a classification unit for classifying the non-binary user input signal with reference to the auxiliary signal; and a rendering unit for rendering the entertainment animation based on classification results from the classification unit.12-01-2011
20110293143FUNCTIONAL IMAGING - A method includes generating a kinetic parameter value for a VOI in a functional image of a subject based on motion corrected projection data using an iterative algorithm, including determining a motion correction for projection data corresponding to the VOI based on the VOI, motion correcting the projection data corresponding to the VOI to generate the motion corrected projection data, and estimating the at least one kinetic parameter value based on the motion corrected projection data or image data generated with the motion corrected projection data. In another embodiment, a method includes registering functional image data indicative of tracer uptake in a scanned patient with image data from a different imaging modality, identifying a VOI in the image based on the registered images, generating at least one kinetic parameter for the VOI, and generating a feature vector including the at least one generated kinetic parameter and at least one biomarker.12-01-2011
20110293142METHOD FOR RECOGNIZING OBJECTS IN A SET OF IMAGES RECORDED BY ONE OR MORE CAMERAS - Method for improving the visibility of objects and recognizing objects in a set of images recorded by one or more cameras, the images of said set being made from mutually different geometric positions, the method comprising the steps of recording a set or subset of images by means of one camera which is moved rather freely and which makes said images during its movement, thus providing an array of subsequent images; estimating the camera movement between subsequent image recordings, also called ego-motion hereinafter, based on features of those recorded images; registering the camera images using a synthetic aperture method; and recognizing said objects.12-01-2011
20110293140Dataset Creation For Tracking Targets With Dynamically Changing Portions - A mobile platform visually detects and/or tracks a target that includes a dynamically changing portion, or otherwise undesirable portion, using a feature dataset for the target that excludes the undesirable portion. The feature dataset is created by providing an image of the target and identifying the undesirable portion of the target. The identification of the undesirable portion may be automatic or by user selection. An image mask is generated for the undesirable portion. The image mask is used to exclude the undesirable portion in the creation of the feature dataset for the target. For example, the image mask may be overlaid on the image and features are extracted only from unmasked areas of the image of the target. Alternatively, features may be extracted from all areas of the image and the image mask used to remove features extracted from the undesirable portion.12-01-2011
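The masking idea in the entry above, extracting features only from unmasked areas of the target image, can be sketched in a few lines. The function name and the toy "brightness" detector below are illustrative stand-ins, not taken from the patent; a real implementation would plug in a corner or keypoint detector.

```python
def build_feature_dataset(image, mask, detect):
    """Collect features only from unmasked pixels.

    image: 2D list of intensities; mask: 2D list where 1 marks the
    dynamically changing (excluded) portion; detect: predicate that
    decides whether a pixel is a feature (hypothetical stand-in for a
    real corner/keypoint detector).
    """
    features = []
    for y, row in enumerate(image):
        for x, value in enumerate(row):
            if mask[y][x] == 0 and detect(image, x, y):
                features.append((x, y))
    return features

# Toy example: "features" are pixels brighter than 200.
image = [[0, 250, 0], [250, 0, 250], [0, 0, 250]]
mask = [[0, 0, 0], [0, 0, 1], [0, 0, 0]]  # exclude the centre-right pixel
bright = lambda img, x, y: img[y][x] > 200
print(build_feature_dataset(image, mask, bright))  # [(1, 0), (0, 1), (2, 2)]
```

The masked pixel at (2, 1) is bright but excluded, which is exactly the point: features from the undesirable portion never enter the dataset.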
20110293139METHOD OF AUTOMATICALLY TRACKING AND PHOTOGRAPHING CELESTIAL OBJECTS AND PHOTOGRAPHIC APPARATUS EMPLOYING THIS METHOD - A method of automatically tracking and photographing a celestial object, includes inputting latitude information, photographing azimuth angle information and photographing elevation angle information of a photographic apparatus; inputting star map data of a certain range including data on a location of a celestial object from the latitude information, the photographing azimuth angle information and the photographing elevation angle information; calculating a deviation amount between a location of the celestial object that is imaged in a preliminary image obtained by the photographic apparatus and the location of the celestial object which is defined in the input star map data; correcting at least one of the photographing azimuth angle information and the photographing elevation angle information using the deviation amount; and performing a celestial-object auto-tracking photographing operation based on the corrected at least one of the photographing azimuth angle information and the photographing elevation angle information.12-01-2011
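The correction step in the entry above amounts to converting a pixel deviation (observed vs. star-map location) into an angular offset and applying it to the pointing angles. A minimal sketch, assuming a linear plate scale `deg_per_px` and a sign convention chosen purely for illustration:

```python
def correct_pointing(azimuth, elevation, observed_px, predicted_px, deg_per_px):
    """Correct azimuth/elevation by the deviation between where the
    celestial object appears in the preliminary image and where the
    star map predicts it. deg_per_px (the plate scale) and the axis
    sign convention are assumptions for this sketch."""
    dx = predicted_px[0] - observed_px[0]
    dy = predicted_px[1] - observed_px[1]
    return (azimuth + dx * deg_per_px, elevation + dy * deg_per_px)

# Star observed 10 px left and 10 px below its predicted map location:
print(correct_pointing(180.0, 45.0, (100, 100), (110, 90), 0.01))
```

The corrected angles then drive the auto-tracking photographing operation; a real system would also account for lens distortion and field rotation.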
20110293137ANALYSIS OF THREE-DIMENSIONAL SCENES - A method for processing data includes receiving a depth map of a scene containing a humanoid form. The depth map is processed so as to identify three-dimensional (3D) connected components in the scene, each connected component including a set of pixels that are mutually adjacent and have mutually adjacent depth values. Separate first and second connected components are identified as both belonging to the humanoid form, and a representation of the humanoid form is generated that includes both the first and second connected components.12-01-2011
20110293136System and Method for Adapting Generic Classifiers for Object Detection in Particular Scenes Using Incremental Training - A generic classifier is adapted to detect an object in a particular scene, wherein the particular scene was unknown when the classifier was trained with generic training data. A camera acquires a video of frames of the particular scene. A model of the particular scene is constructed using the frames in the video. The classifier is applied to the model to select negative examples; new negative examples are added to the training data while an existing set of negative examples is removed from the training data based on an uncertainty measure. Selected positive examples are also added to the training data, and the classifier is retrained until a desired accuracy level is reached to obtain a scene-specific classifier.12-01-2011
20110085698Measuring Turbulence and Winds Aloft using Solar and Lunar Observable Features - Presented is a system and method for detecting turbulence in the atmosphere comprising an image capturing device for capturing a plurality of images of a visual feature of a celestial object such as the sun, combined with a lens having a focal length adapted to focus an image onto the image capturing device such that the combination of the lens and the image capturing device is adapted to resolve a distortion caused by a turbule of turbulent air, and an image processor adapted to compare said plurality of images of said visual feature to detect the transit of a turbule of turbulent air between said image capturing device and said celestial object, and compute a measurement of the angular velocity of the turbule. A second plurality of images is used to triangulate the distance to the turbule and the velocity of the turbule.04-14-2011
20090028384Three-dimensional road map estimation from video sequences by tracking pedestrians - Estimation of a 3D layout of roads and paths traveled by pedestrians is achieved by observing the pedestrians and estimating road parameters from the pedestrians' sizes and positions in a sequence of video frames. The system includes a foreground object detection unit to analyze video frames of a 3D scene and detect objects and object positions in video frames, an object scale prediction unit to estimate 3D transformation parameters for the objects and to predict heights of the objects based at least in part on the parameters, and a road map detection unit to estimate road boundaries of the 3D scene using the object positions to generate the road map.01-29-2009
20110200225ADVANCED BACKGROUND ESTIMATION TECHNIQUE AND CIRCUIT FOR A HYPER-SPECTRAL TARGET DETECTION METHOD - A system, circuit and methods for target detection from hyper-spectral image data are disclosed. Filter coefficients are determined using a modified constrained energy minimization (CEM) method. The modified CEM method can operate on a circuit operable to perform constrained linear programming optimization. A filter comprising the filter coefficients is applied to a plurality of pixels of the hyper-spectral image data to form CEM values for the pixels, and one or more target pixels are identified from the CEM values. The process may be repeated to enhance target recognition by using filter coefficients determined by excluding the identified target pixels from the hyper-spectral image data.08-18-2011
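The entry above builds on the standard constrained energy minimization (CEM) filter, whose closed form is w = R⁻¹d / (dᵀR⁻¹d), with R the sample correlation matrix of the pixels and d the target signature. The patent's *modified* CEM is solved by constrained linear programming and is not reproduced here; the sketch below shows only the textbook closed form for 2-band data:

```python
def cem_filter(pixels, target):
    """Classic CEM filter for 2-band spectra: w = R^-1 d / (d^T R^-1 d).

    `pixels` is a list of 2-band spectra; `target` is the desired
    signature d. By construction the filter responds with exactly 1
    to the target signature while minimizing average output energy."""
    n = len(pixels)
    # Sample correlation matrix R (2x2) and its explicit inverse.
    r11 = sum(p[0] * p[0] for p in pixels) / n
    r12 = sum(p[0] * p[1] for p in pixels) / n
    r22 = sum(p[1] * p[1] for p in pixels) / n
    det = r11 * r22 - r12 * r12
    inv = [[r22 / det, -r12 / det], [-r12 / det, r11 / det]]
    rd = [inv[0][0] * target[0] + inv[0][1] * target[1],
          inv[1][0] * target[0] + inv[1][1] * target[1]]   # R^-1 d
    scale = target[0] * rd[0] + target[1] * rd[1]          # d^T R^-1 d
    return [rd[0] / scale, rd[1] / scale]

w = cem_filter([[1, 0], [0, 1], [1, 1]], [1.0, 0.5])
# w . d == 1 by construction; pixels scoring near 1 are target candidates.
```

Applying w to every pixel yields the CEM values from which target pixels are picked; the patent then iterates, excluding identified target pixels from R and re-solving.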
20080232643Bitmap tracker for visual tracking under very general conditions - System and method for visually tracking a target object silhouette in a plurality of video frames under very general conditions. The tracker does not make any assumption about the object or the scene. The tracker works by approximating, in each frame, a PDF (probability distribution function) of the target's bitmap and then estimating the maximum a posteriori bitmap. The PDF is marginalized over all possible motions per pixel, thus avoiding the stage in which optical flow is determined. This is an advantage over other general-context trackers that do not use the motion cue at all or rely on the error-prone calculation of optical flow. Using a Gibbs distribution with a first order neighborhood system yields a bitmap PDF whose maximization may be transformed into that of a quadratic pseudo-Boolean function, the maximum of which is approximated via a reduction to a maximum-flow problem.09-25-2008
20090196461IMAGE CAPTURE DEVICE AND PROGRAM STORAGE MEDIUM - An image capture device includes a capture unit configured to capture an image of an object, an object detection unit configured to detect the object in the image captured by the capture unit, an angle detection unit configured to detect an angle of the object detected by the object detection unit, and a control unit configured to perform a predetermined control operation for the image capture device based on the angle of the object detected by the angle detection unit.08-06-2009
20090196460EYE TRACKING SYSTEM AND METHOD - An eye tracking system and method is provided giving persons with severe disabilities the ability to access a computer through eye movement. A system comprising a head tracking system, an eye tracking system, a display device, and a processor which calculates the gaze point of the user is provided. The eye tracking method comprises determining the location and orientation of the head, determining the location and orientation of the eye, calculating the location of the center of rotation of the eye, and calculating the gaze point of the eye. A method for inputting to an electronic device a character selected by a user through alternate means is provided, the method comprising placing a cursor near the character to be selected by said user, shifting the characters on a set of keys which are closest to the cursor, tracking the movement of the character to be selected with the cursor, and identifying the character to be selected by comparing the direction of movement of the cursor with the direction of movement of the characters of the set of keys which are closest to the cursor.08-06-2009
20090196459Image manipulation and processing techniques for remote inspection device - A remote inspection apparatus has an imager disposed in an imager head and capturing image data. An active display unit receives the image data in digital form and graphically renders the image data on an active display. Movement tracking sensors track movement of the imager head and/or image display unit. In some aspects, a computer processor located in the active display unit employs information from movement tracking sensors tracking movement of the imager head to generate and display a marker indicating a position of the imager head. In additional aspects, the computer processor employs information from movement tracking sensors tracking movement of the active display unit to control movement of the imager head. In other aspects, the computer processor employs information from movement tracking sensors tracking movement of the active display unit to modify the image data rendered on the active display.08-06-2009
20100266162Methods, Systems, And Computer Program Products For Protecting Information On A User Interface Based On A Viewability Of The Information - Methods, systems, and computer program products for protecting information on a user interface based on a viewability of the information are disclosed. According to one method, a viewing position of a person other than a user with respect to information on a user interface is identified. An information viewability threshold is determined based on the information on the user interface. Further, an action associated with the user interface is performed based on the identified viewing position and the determined information viewability threshold.10-21-2010
20110007940AUTOMATED TARGET DETECTION AND RECOGNITION SYSTEM AND METHOD - Methods and apparatus are provided for recognizing particular objects of interest in a captured image. One or more salient features that are correlative to an object of interest are detected within a captured image. The captured image is segmented into one or more regions of interest that include a detected salient feature. A covariance appearance model is generated for each of the one or more regions of interest, and first and second comparisons are conducted. The first comparisons comprise comparing each of the generated covariance appearance models to a plurality of stored covariance appearance models, and the second comparisons comprise comparing each of the generated covariance appearance models to each of the other generated covariance appearance models. Based on the first and second comparisons, a determination is made as to whether each of the one or more detected salient features is a particular object of interest.01-13-2011
20080240496APPROACH FOR RESOLVING OCCLUSIONS, SPLITS AND MERGES IN VIDEO IMAGES - Aspects of the present invention provide a solution for resolving an occlusion in a video image. Specifically, an embodiment of the present invention provides an environment in which portions of a video image in which occlusions have occurred may be determined and analyzed to determine the type of occlusion. Furthermore, regions of the video image may be analyzed to determine which object in the occlusion the region belongs to. The determinations and analysis may use such factors as pre-determined attributes of an object, such as color or texture of the object and/or a temporal association of the object, among others.10-02-2008
20080310677OBJECT DETECTION SYSTEM AND METHOD INCORPORATING BACKGROUND CLUTTER REMOVAL - A method and system for optically detecting an object within a field of view where detection is difficult because of background clutter within the field of view that obscures the object. A camera is panned with movement of the object to motion stabilize the object against the background clutter while taking a plurality of image frames of the object. A frame-by-frame analysis is performed to determine variances in the intensity of each pixel, over time, from the collected frames. From this analysis a variance image is constructed that includes an intensity variance value for each pixel. Pixels representing background clutter will typically vary considerably in intensity from frame to frame, while pixels making up the object will vary little or not at all. A binary threshold test is then applied to each variance value and the results are used to construct a final image. The final image may be a black and white image that clearly shows the object as a silhouette.12-18-2008
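The variance-image idea in the entry above is compact enough to sketch directly: with the object motion-stabilised, its pixels barely change across frames while background clutter streams past, so low per-pixel variance marks the object. A minimal pure-Python sketch (frames as 2D lists; the threshold value is an assumption):

```python
def variance_image(frames):
    """Per-pixel intensity variance across motion-stabilised frames."""
    n = len(frames)
    h, w = len(frames[0]), len(frames[0][0])
    out = []
    for y in range(h):
        row = []
        for x in range(w):
            vals = [f[y][x] for f in frames]
            mean = sum(vals) / n
            row.append(sum((v - mean) ** 2 for v in vals) / n)
        out.append(row)
    return out

def silhouette(frames, threshold):
    """Binary final image: 1 where variance is low (stable object),
    0 where the background clutter varies from frame to frame."""
    return [[1 if v <= threshold else 0 for v in row]
            for row in variance_image(frames)]

frames = [[[100, 10]], [[100, 200]], [[100, 90]]]  # 1x2 image, 3 frames
print(silhouette(frames, 25.0))  # [[1, 0]]
```

The left pixel never changes (variance 0) and survives the binary threshold test; the right pixel's large variance marks it as clutter.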
20080310676Method and System for Optoelectronic Detection and Location of Objects - Disclosed are methods and systems for optoelectronic detection and location of moving objects. The disclosed methods and systems capture one-dimensional images of a field of view through which objects may be moving, make measurements in those images, select from among those measurements those that are likely to correspond to objects in the field of view, make decisions responsive to various characteristics of the objects, and produce signals that indicate those decisions. The disclosed methods and systems provide excellent object discrimination, electronic setting of a reference point, no latency, high repeatability, and other advantages that will be apparent to one of ordinary skill in the art.12-18-2008
20100061592SYSTEM AND METHOD FOR ANALYZING THE MOVEMENT AND STRUCTURE OF AN OBJECT - A system and method for analyzing the movement and structure of an object.03-11-2010
20090296984System and Method for Three-Dimensional Object Reconstruction from Two-Dimensional Images - A system and method for three-dimensional (3D) acquisition and modeling of a scene using two-dimensional (2D) images are provided. The system and method provides for acquiring first and second images of a scene, applying a smoothing function to the first image to make feature points of objects, e.g., corners and edges of the objects, in the scene more visible, applying at least two feature detection functions to the first image to detect feature points of objects in the first image, combining outputs of the at least two feature detection functions to select object feature points to be tracked, applying a smoothing function to the second image, applying a tracking function on the second image to track the selected object feature points, and reconstructing a three-dimensional model of the scene from an output of the tracking function.12-03-2009
20100034422OBJECT TRACKING USING LINEAR FEATURES - A method of tracking objects within an environment comprises acquiring sensor data related to the environment, identifying linear features within the sensor data, and determining a set of tracked linear features using the linear features identified within the sensor data and a previous set of tracked linear features, the set of tracked linear features being used to track objects within the environment.02-11-2010
20100027841METHOD AND SYSTEM FOR DETECTING A SIGNAL STRUCTURE FROM A MOVING VIDEO PLATFORM - The present invention aims at providing a method for detecting a signal structure from a moving vehicle. The method for detecting signal structure includes capturing an image from a camera mounted on the moving vehicle. The method further includes restricting a search space by predefining candidate regions in the image, extracting a set of features of the image within each candidate region and detecting the signal structure accordingly.02-04-2010
20090296985Efficient Multi-Hypothesis Multi-Human 3D Tracking in Crowded Scenes - System and methods are disclosed to perform multi-human 3D tracking with a plurality of cameras. At each view, a module receives each camera output and provides 2D human detection candidates. A plurality of 2D tracking modules are connected to the CNNs, each 2D tracking module managing 2D tracking independently. A 3D tracking module is connected to the 2D tracking modules to receive promising 2D tracking hypotheses. The 3D tracking module selects trajectories from the 2D tracking modules to generate 3D tracking hypotheses.12-03-2009
20100104135MARKER GENERATING AND MARKER DETECTING SYSTEM, METHOD AND PROGRAM - A marker generating system is characterized by having a special feature extracting element that extracts a portion including a distinctive pattern, as a special feature, from a video image not including a marker; a unique special feature selecting element that, based on the extracted special feature, selects an image feature that does not appear in the video image as a unique special feature; and a marker generating element that generates a marker based on the unique special feature.04-29-2010
20090279738Apparatus for image recognition - An image recognition apparatus includes an image recognition unit, an evaluation value calculation unit, and a motion extraction unit. The image recognition unit uses motion vectors that are generated by the evaluation value calculation unit and the motion extraction unit in the course of coding image data into MPEG format or decoding the MPEG-coded data, as well as two-dimensional DCT coefficients and encoding information such as picture types and block types, to generate evaluation values that represent features of the image. The apparatus further includes an update unit for recognizing the object in the image based on determination rules for a unit of macro block. The apparatus can thus accurately detect the motion of the object based on the evaluation values derived from DCT coefficients even when generation of the motion vectors is difficult.11-12-2009
20100027843SURFACE UI FOR GESTURE-BASED INTERACTION - Disclosed is a unique system and method that facilitates gesture-based interaction with a user interface. The system involves an object sensing configured to include a sensing plane vertically or horizontally located between at least two imaging components on one side and a user on the other. The imaging components can acquire input images taken of a view of and through the sensing plane. The images can include objects which are on the sensing plane and/or in the background scene as well as the user as he interacts with the sensing plane. By processing the input images, one output image can be returned which shows the user objects that are in contact with the plane. Thus, objects located at a particular depth can be readily determined. Any other objects located beyond can be “removed” and not seen in the output image.02-04-2010
20100027842OBJECT DETECTION METHOD AND APPARATUS THEREOF - An object detection method and an apparatus thereof are provided. In the object detection method, a plurality of images in an image sequence is sequentially received. When a current image is received, a latest background image is established by referring to the current image and the M images previous to the current image, so as to update one of N background images, wherein M and N are positive integers. Next, color models of the current image and the background images are analyzed to determine whether a pixel in the current image belongs to a foreground object. Accordingly, the accuracy in object detection is increased by instantly updating the background images.02-04-2010
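The entry above maintains a pool of N background images, each built from the current image and its M predecessors, and flags a pixel as foreground when it disagrees with the stored backgrounds. A minimal sketch of that bookkeeping (the mean-of-frames background and the absolute-difference test are simplifying assumptions; the patent compares color models):

```python
from collections import deque

def update_backgrounds(backgrounds, recent_frames, n_models):
    """Build the latest background as the per-pixel mean of the current
    frame and its M predecessors, then rotate it into a pool of at most
    n_models background images."""
    m = len(recent_frames)
    latest = [[sum(f[y][x] for f in recent_frames) / m
               for x in range(len(recent_frames[0][0]))]
              for y in range(len(recent_frames[0]))]
    backgrounds.append(latest)
    while len(backgrounds) > n_models:
        backgrounds.popleft()  # drop the stalest background
    return backgrounds

def is_foreground(pixel, x, y, backgrounds, tol):
    """Foreground if the pixel disagrees with every stored background."""
    return all(abs(pixel - bg[y][x]) > tol for bg in backgrounds)

bgs = update_backgrounds(deque(), [[[10]], [[12]], [[11]]], n_models=2)
print(is_foreground(50, 0, 0, bgs, tol=5))  # True
```

Because the pool is refreshed on every frame, a parked object eventually melts into the background, which is the "instantly updating" accuracy gain the abstract claims.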
20120189163APPARATUS AND METHOD FOR RECOGNIZING HAND ROTATION - An apparatus and a method are provided that can intuitively and easily recognize hand rotation. The apparatus for recognizing a hand rotation includes a camera for photographing a plurality of hand image data, a detector for extracting circles through fingers of the hand image data and a controller for recognizing hand rotation through changes in positions and sizes of the circles extracted from each of the plurality of hand image data.07-26-2012
20100054533Digital Image Processing Using Face Detection Information - A method of processing a digital image using face detection within the image achieves one or more desired image processing parameters. A group of pixels is identified that correspond to an image of a face within the digital image. Default values are determined of one or more parameters of at least some portion of the digital image. Values are adjusted of the one or more parameters within the digitally-detected image based upon an analysis of the digital image including the image of the face and the default values.03-04-2010
20110262009METHOD AND APPARATUS FOR IDENTIFYING OBSTACLE IN IMAGE - A method for identifying obstacles in images is disclosed. In the method, images of a current frame and of the N frames nearest to the current frame are obtained; the obtained images are divided in the same way, so that the image of each frame yields a plurality of divided block regions; a motion-obstacle confidence is calculated for each block region across the current frame and the N nearest frames; whether each block region in the image of the current frame is an obstacle is decided successively according to that confidence; and the obstacles in the images are determined from the decided block regions.10-27-2011
20110262008Method for Determining Position Data of a Target Object in a Reference System - A method for determining the position data of a target object in a reference system from an observation position at a distance. A three-dimensional reference model of the surroundings of the target object is provided, the reference model including known geographical location data. An image of the target object and its surroundings, resulting from the observation position for an observer, is matched with the reference model. The position data of the sighted target object in the reference model is determined as relative position data with respect to known location data of the reference model.10-27-2011
20110262004Learning Device and Learning Method for Article Transport Facility - A learning control device performs a positioning process, a first image capturing process, and a first deviation amount calculating process in which a reference position deviation amount in the horizontal direction between the imaging reference position and a detection mark is derived based on image information captured in the first image capturing process to derive a position adjustment amount from the derived reference position deviation amount, and the learning control device further includes a positioning correcting process in which the position adjustment device is operated to adjust a position of the second learn assist member based on the derived movement adjustment amount when the reference position deviation amount derived in the first deviation amount calculating process falls outside a set tolerance range. A second image capturing process, and a second deviation amount calculating process may be further provided.10-27-2011
20090169053COLLABORATIVE TRACKING - Disclosed is a system.07-02-2009
20110262002HAND-LOCATION POST-PROCESS REFINEMENT IN A TRACKING SYSTEM - A tracking system having a depth camera tracks a user's body in a physical space and derives a model of the body, including an initial estimate of a hand position. Temporal smoothing is performed when the initial estimate moves by less than a threshold level from frame to frame, while little or no smoothing is performed when the movement is more than the threshold. The smoothed estimate is used to define a local volume for searching for a hand extremity to define a new hand position. Another process generates stabilized upper body points that can be used as reliable reference positions, such as by detecting and accounting for occlusions. The upper body points and a prior estimated hand position are used to define an arm vector. A search is made along the vector to detect a hand extremity to define a new hand position.10-27-2011
20090154769Moving robot and moving object detecting method and medium thereof - A moving robot, and a moving object detecting method and medium thereof, are disclosed. The moving object detecting method includes transforming an omni-directional image captured by the moving robot into a panoramic image, comparing the panoramic image with a previous panoramic image and estimating a movement region of the moving object based on the comparison, and recognizing that a movement of the moving object exists in the estimated movement region when the area of the estimated movement region exceeds a reference area.06-18-2009
20100177931VIRTUAL OBJECT ADJUSTMENT VIA PHYSICAL OBJECT DETECTION - Various embodiments related to the location and adjustment of a virtual object on a display in response to a detected physical object are disclosed. One disclosed embodiment provides a computing device comprising a multi-touch display, a processor and memory comprising instructions executable by the processor to display on the display a virtual object, to detect a change in relative location between the virtual object and a physical object that constrains a viewable area of the display, and to adjust a location of the virtual object on the display in response to detecting the change in relative location between the virtual object and the physical object.07-15-2010
20130022234OBJECT TRACKING - Methods, devices, and systems for object tracking are described herein. One or more method embodiments include receiving an initial set of track points associated with a trajectory of an object, compressing the initial set of track points into a plurality of track segments, each track segment having a start track point and an end track point, and storing the plurality of track segments to represent the trajectory of the object.01-24-2013
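The compression step in the entry above replaces runs of track points with segments carrying only a start and an end point. One simple rule, shown below, merges consecutive points as long as they stay collinear; real trajectory compressors typically use a tolerance-based simplifier (e.g. Douglas-Peucker), so this exact rule is an illustrative assumption:

```python
def compress_track(points, tol=1e-9):
    """Compress a trajectory into (start, end) track segments,
    merging consecutive points that remain collinear."""
    if len(points) < 2:
        return [(points[0], points[0])] if points else []
    segments = []
    start, prev = points[0], points[1]
    for p in points[2:]:
        # Cross product of (prev - start) and (p - start): 0 => collinear.
        cross = ((prev[0] - start[0]) * (p[1] - start[1])
                 - (prev[1] - start[1]) * (p[0] - start[0]))
        if abs(cross) > tol:          # direction changed: close the segment
            segments.append((start, prev))
            start = prev
        prev = p
    segments.append((start, prev))
    return segments

track = [(0, 0), (1, 0), (2, 0), (2, 1), (2, 2)]
print(compress_track(track))  # [((0, 0), (2, 0)), ((2, 0), (2, 2))]
```

Five track points collapse to two segments, yet the stored segments still reproduce the L-shaped trajectory exactly.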
20090147991METHOD, SYSTEM, AND COMPUTER PROGRAM FOR DETECTING AND CHARACTERIZING MOTION - A method for motion detection/characterization is provided including the steps of (a) capturing a series of time lapsed images of the target, wherein the target moves between at least two of such images; (b) generating a motion distribution in relation to the target across the series of images; and (c) identifying motion of the target based on analysis of the motion distribution. In a further aspect of motion detection/characterization in accordance with the invention, motion is detected/characterized based on calculation of a color distribution for a series of images. A system and computer program for presenting an augmented environment based on the motion detection/characterization is also provided. An interface means based on the motion detection/characterization is also provided.06-11-2009
20090147995INFORMATION PROCESSING APPARATUS AND INFORMATION PROCESSING METHOD, AND COMPUTER PROGRAM - An information processing apparatus includes information input units which input observation information in a real space; an event detection unit which generates event information, including estimated position and identification information on users existing in the real space, through analysis of the input information; and an information integration processing unit which sets hypothesis probability distribution data regarding user position and user identification information and generates analysis information including the user position information through hypothesis updating and sorting based on the event information. The event detection unit detects a face area from an image frame input from an image information input unit, extracts face attribute information from the face area, and calculates and outputs a face attribute score corresponding to the extracted face attribute information to the information integration processing unit, and the information integration processing unit applies the face attribute score to calculate target face attribute expectation values.06-11-2009
20090097706SYSTEMS AND METHODS FOR DETERMINING IF OBJECTS ARE IN A QUEUE - Systems and methods that determine a position value of a first object and a position value of a second object, and compare the position value of the first object with the position value of the second object to determine if the second object is in a queue with the first object are provided.04-16-2009
20130022243METHODS AND APPARATUSES FOR FACE DETECTION - Methods and apparatuses are provided for face detection. A method may include selecting a face detection parameter subset from a plurality of face detection parameter subsets. Each face detection parameter subset may include a subset of face posture models from a set of face posture models and a subset of image patch scales from a set of image patch scales. The method may further include using the selected face detection parameter subset for performing face detection in an image. Corresponding apparatuses are also provided.01-24-2013
20090041302Object type determination apparatus, vehicle, object type determination method, and program for determining object type - An object type determination apparatus, an object type determination method, a vehicle, and a program for determining an object type, capable of accurately determining the type of the object by appropriately determining periodicity in movement of the object from images, are provided. The object type determination apparatus includes an object area extracting means.02-12-2009
20090190797RECOGNIZING IMAGE ENVIRONMENT FROM IMAGE AND POSITION - A method of recognizing the environment of an image from an image and position information associated with the image includes acquiring the image and its associated position information; using the position information to acquire an aerial image correlated to the position information; identifying the environment of the image from the acquired aerial image; and storing the environment of the image in association with the image for subsequent use.07-30-2009
20110262001VIEWPOINT DETECTOR BASED ON SKIN COLOR AREA AND FACE AREA - In a particular illustrative embodiment, a method of determining a viewpoint of a person based on skin color area and face area is disclosed. The method includes receiving image data corresponding to an image captured by a camera, the image including at least one object to be displayed at a device coupled to the camera. The method further includes determining a viewpoint of the person relative to a display of the device coupled to the camera. The viewpoint of the person may be determined by determining a face area of the person based on a determined skin color area of the person and tracking a face location of the person based on the face area. One or more objects displayed at the display may be moved in response to the determined viewpoint of the person.10-27-2011
20110081044Systems And Methods For Removing A Background Of An Image - An image such as a depth image of a scene may be received, observed, or captured by a device. A grid of voxels may then be generated based on the depth image such that the depth image may be downsampled. A background included in the grid of voxels may then be discarded to isolate one or more voxels associated with a foreground object such as a human target and the isolated voxels associated with the foreground object may be processed.04-07-2011
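The downsample-then-discard-background idea can be sketched as block-averaging a depth image into a coarse grid and keeping only cells that differ from a known background depth. The block size and depth tolerance here are illustrative assumptions, not the patent's parameters.

```python
import numpy as np

def downsample_depth(depth, k):
    """Downsample a depth image into a coarse grid by averaging k-by-k
    blocks -- one way to build the 'grid of voxels' the abstract describes."""
    h, w = depth.shape
    d = depth[:h - h % k, :w - w % k]                      # crop to a multiple of k
    return d.reshape(h // k, k, w // k, k).mean(axis=(1, 3))

def foreground_mask(grid, background_depth, tol=50.0):
    """Discard cells at (roughly) the known background depth, keeping
    cells that belong to a nearer foreground object."""
    return np.abs(grid - background_depth) > tol

depth = np.full((8, 8), 4000.0)      # flat background at 4000 mm
depth[2:6, 2:6] = 1500.0             # a nearer 'human target'
grid = downsample_depth(depth, 2)    # 4x4 grid of block means
print(foreground_mask(grid, 4000.0).astype(int))
```

The surviving cells would then be handed to later processing (e.g., skeletal fitting) in place of the full-resolution depth image.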
20130022237METHOD FOR STAND OFF INSPECTION OF TARGET IN MONITORED SPACE - This invention addresses remote inspection of target in monitored space. A three dimensional (3D) microwave image of the space is obtained using at least two emitters. The data undergoes coherent processing to obtain maximum intensity of the objects in the area. This image is combined with a 3D video image obtained using two or more video cameras synchronized with the microwave emitters. The images are converted into digital format and transferred into one coordinate system. The distance l is determined between the microwave and the video image. If l01-24-2013
20100119110IMAGE DISPLAY DEVICE, COMPUTER READABLE STORAGE MEDIUM STORING IMAGE PROCESSING PROGRAM, AND IMAGE PROCESSING METHOD - An image processing apparatus includes an area dividing unit that divides an image obtained by capturing inside of a body lumen into one or more areas by using a value of a specific wavelength component that is specified in accordance with a degree of absorption or scattering in vivo from a plurality of wavelength components included in the image or wavelength components obtained by conversion of the plurality of wavelength components; and a target-of-interest site specifying unit that specifies a target-of-interest site in the area by using a discriminant criterion in accordance with an area obtained by the division.05-13-2010
20130022232CUSTOMIZED AUDIO CONTENT RELATING TO AN OBJECT OF INTEREST - A device/system and method for creating customized audio segments related to an object of interest are disclosed. The device and/or system can create an additional level of interaction with the object of interest by creating customized audio segments based on the identity of the object of interest and/or the user's interaction with the object of interest. Thus, the mobile device can create an interactive environment for a user interacting with an otherwise inanimate object.01-24-2013
20080267451System and Method for Tracking Moving Objects - A method for tracking an object that is embedded within images of a scene, including: in a sensor unit that includes a movable sensor, generating, storing and transmitting over a communication link a succession of images of a scene. In a remote control unit, receiving the succession of images, receiving a user command for selecting an object of interest in a given image of the received succession of images, determining object data associated with the object, and transmitting the object data through the link to the sensor unit. In the sensor unit, identifying the given image of the stored succession of images and the object of interest using the object data, and tracking the object in a later image of the stored succession of images. In the case that the object cannot be located in the latest image of the stored succession of images, using information from images in which the object was located to predict an estimated real-time location of the object and generating a direction command to the movable sensor for generating a real-time image of the scene and locking on the object.10-30-2008
20120140982IMAGE SEARCH APPARATUS AND IMAGE SEARCH METHOD - According to one embodiment, an image search apparatus includes an image input module to which an image is input, an event detection module which detects events from the image input via the image input module and determines levels depending on types of the detected events, an event controlling module which retains the events detected by the event detection module for each of the levels, and an output module which outputs the events retained by the event controlling module for each of the levels.06-07-2012
20100119113METHOD AND APPARATUS FOR DETECTING OBJECTS - A method for detecting an object on an image representable by picture elements includes: “determining first and second adaptive thresholds for picture elements of the image, depending on an average intensity in a region around the respective picture element”, “determining partial objects of picture elements of a first type that are obtained based on a comparison with the first adaptive threshold”, “determining picture elements of a second type that are obtained based on a comparison with the second adaptive threshold” and “combining a first and a second one of the partial objects to an extended partial object by picture elements of the second type, when a minimum distance exists between the first and the second of the partial objects, wherein the object to be detected can be described by a sum of the partial objects of picture elements of the first type and/or the obtained extended partial objects”.05-13-2010
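The two-threshold scheme in this abstract (thresholds relative to the average intensity around each pixel) can be sketched with an integral-image local mean. The window radius and the two offsets `delta1`/`delta2` are hypothetical values, not the patent's.

```python
import numpy as np

def local_mean(img, r):
    """Mean intensity in a (2r+1)^2 window around each pixel, computed
    with an integral image over an edge-padded copy."""
    p = np.pad(img.astype(float), r, mode='edge')
    ii = p.cumsum(0).cumsum(1)
    ii = np.pad(ii, ((1, 0), (1, 0)))                  # zero row/col for clean window sums
    k = 2 * r + 1
    s = ii[k:, k:] - ii[:-k, k:] - ii[k:, :-k] + ii[:-k, :-k]
    return s / (k * k)

def adaptive_masks(img, r=2, delta1=30, delta2=10):
    """Two per-pixel thresholds relative to the local average intensity:
    a strict one (pixels of the 'first type') and a looser one ('second
    type') that can bridge nearby partial objects."""
    m = local_mean(img, r)
    return img > m + delta1, img > m + delta2

img = np.zeros((9, 9))
img[4, 4] = 255          # a bright seed pixel
img[4, 6] = 25           # a faint pixel nearby
strict, loose = adaptive_masks(img)
print(int(strict.sum()), int(loose.sum()))   # the loose mask also picks up the faint pixel
```

Combining a strict partial object with second-type pixels within a minimum distance, as the abstract describes, would then merge the two detections into one extended partial object.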
200802674493-D MODELING - A system comprising an imaging device adapted to capture images of a target object at multiple angles. The system also comprises storage coupled to the imaging device and adapted to store a generic model of the target object. The system further comprises processing logic coupled to the imaging device and adapted to perform an iterative process by which the generic model is modified in accordance with the target object. During each iteration of the iterative process, the processing logic obtains structural and textural information associated with at least one of the captured images and modifies the generic model with the structural and textural information. The processing logic displays the generic model.10-30-2008
20080267450Position Tracking Device, Position Tracking Method, Position Tracking Program and Mixed Reality Providing System - The present invention has a simpler structure than before and is designed to precisely detect the position of a real environment's target object on a screen. The present invention generates a special marker image MKZ including a plurality of areas whose brightness levels gradually change in X and Y directions, displays the special marker image MKZ on the screen of a liquid crystal display 10-30-2008
20080240504Integrating Object Detectors - An N-object detector comprises an N-object decision structure incorporating decision sub-structures of N object detectors. Some decision sub-structures have multiple different versions composed of the same classifiers with the classifiers rearranged. Said multiple versions associated with an object detector are arranged in the N-object decision structure so that the order in which the classifiers are evaluated is dependent upon the results of the evaluation of a classifier of another object detector. Each version of the same decision sub-structure produces the same logical behaviour as the other versions. Such an N-object decision structure is generated by generating multiple candidate N-object decision structures and analysing the expected computational cost of these candidates to select one of them.10-02-2008
20100278386VIDEOTRACKING - A method for tracking an object in a sequence of video frames includes the following steps: creating a model with characteristic features for the object to be tracked; and performing a template matching algorithm in individual frames on the basis of the created model for determining a position of the object in the respective frame. An apparatus arrangement for performing the method includes at least one video camera (11-04-2010
20100119109MULTI-CORE MULTI-THREAD BASED KANADE-LUCAS-TOMASI FEATURE TRACKING METHOD AND APPARATUS - A multi-core multi-thread based Kanade-Lucas-Tomasi (KLT) feature tracking method includes subdividing an input image into regions and allocating a core to each region; extracting KLT features for each region in parallel and in real time; and tracking the extracted features in the input image. Said extracting the features is carried out based on single-region/multi-thread/single-core architecture, while said tracking the features is carried out based on multi-feature/multi-thread/single-core architecture.05-13-2010
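The region-subdivision layout described above can be sketched with a thread pool standing in for per-core allocation. The gradient-energy score below is a deliberately cheap stand-in for KLT's "good features to track" criterion, not the real measure.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def feature_score(patch):
    """Cheap gradient-energy score -- an illustrative substitute for the
    KLT cornerness measure."""
    gy, gx = np.gradient(patch.astype(float))
    return gx * gx + gy * gy

def strongest_feature(args):
    """Best-scoring pixel in one region, in whole-image coordinates."""
    patch, (y0, x0) = args
    s = feature_score(patch)
    y, x = np.unravel_index(np.argmax(s), s.shape)
    return (y0 + int(y), x0 + int(x))

def extract_parallel(img, splits=2):
    """Subdivide the image into splits x splits regions and score each in
    its own worker thread, mirroring the region-per-core subdivision."""
    h, w = img.shape
    jobs = [(img[i*h//splits:(i+1)*h//splits, j*w//splits:(j+1)*w//splits],
             (i*h//splits, j*w//splits))
            for i in range(splits) for j in range(splits)]
    with ThreadPoolExecutor(max_workers=splits * splits) as ex:
        return list(ex.map(strongest_feature, jobs))

img = np.zeros((20, 20))
img[5, 5] = img[5, 15] = img[15, 5] = img[15, 15] = 1.0   # one dot per quadrant
features = extract_parallel(img)
print(features)   # one feature found near each dot
```

Tracking each extracted feature across frames (the multi-feature/multi-thread stage) would then dispatch one worker per feature rather than per region.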
20090232358Method and apparatus for processing an image - There is provided an efficient, fast image processing apparatus with low error probability for rapidly scrutinizing a digitized video image frame and processing said image frame to detect and characterize features of interest while ignoring other features of said image frame. There is further provided an efficient, fast image processing method with low error probability for rapidly scrutinizing a digitized video image frame and processing said image frame to detect and characterize features of interest while ignoring other features of said image frame. In a first embodiment of the invention an image processing apparatus comprises an imaging device coupled to a digital electronic image processor. Video data from the imaging device is linked to a location data source. Objects of interest in a scene are identified by comparing computed Maximally Stable Extremal Regions (MSERs) of captured images with MSERs of images of objects contained in an object template database.09-17-2009
20090316951MOBILE IMAGING DEVICE AS NAVIGATOR - Embodiments of the invention are directed to obtaining information based on directional orientation of a mobile imaging device, such as a camera phone. Visual information is gathered by the camera and used to determine a directional orientation of the camera, to search for content based on the direction, to manipulate 3D virtual images of a surrounding area, and to otherwise use the directional information. Direction and motion can be determined by analyzing a sequence of images. Distance from a current location, inputted search parameters, and other criteria can be used to expand or filter content that is tagged with such criteria. Search results with distance indicators can be overlaid on a map or a camera feed. Various content can be displayed for a current direction, or desired content, such as a business location, can be displayed only when the camera is oriented toward the desired content.12-24-2009
20100086176Learning Apparatus and Method, Recognition Apparatus and Method, Program, and Recording Medium - A learning apparatus includes an image generator, a feature point extractor, a feature value calculator, and a classifier generator. The image generator generates, from an input image, images having differing scale coefficients. The feature point extractor extracts feature points from each image generated by the image generator. The feature value calculator calculates feature values for the feature points by filtering the feature points using a predetermined filter. The classifier generator generates one or more classifiers for detecting a predetermined target object from an image by means of statistical learning using the feature values.04-08-2010
20100128928IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, AND PROGRAM - There is provided an image processing apparatus including a dynamic body detecting unit for detecting a dynamic body contained in a moving image, a dynamic body region setting unit for, during a predetermined time from a time point the dynamic body is detected by the dynamic body detecting unit, setting a region containing the dynamic body at the detection time point as a dynamic body region, and a fluctuation removal processing unit for performing a fluctuation removal process on a region other than the dynamic body region set by the dynamic body region setting unit.05-27-2010
20090310820IMPROVEMENTS RELATING TO TARGET TRACKING - A method and system are disclosed for tracking a target imaged in video footage. The target may, for example, be a person moving through a crowd. The method comprises the steps of: identifying a target in a first frame; generating a population of sub-templates by sampling from a template area defined around the target position; and searching for instances of the sub-templates in a second frame so as to locate the target in the second frame. Sub-templates whose instances are not consistent with the new target position are removed from the population and replaced by newly sampled sub-templates. The method can then be repeated so as to find the target in further frames. It can be implemented in a system comprising video imaging means, such as a CCTV camera, and processing means operable to carry out the method.12-17-2009
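The population-of-sub-templates idea can be sketched as follows: sample small patches from the template area, find each in the next frame by sum-of-squared-differences search, and combine the implied displacements. The SSD search and the median vote are illustrative choices, and the consistency-pruning/resampling step of the abstract is omitted here.

```python
import numpy as np

def ssd_best(frame, tmpl):
    """Top-left location minimising sum-of-squared-differences for one
    sub-template (exhaustive search; fine for small frames)."""
    H, W = frame.shape
    h, w = tmpl.shape
    best, arg = np.inf, (0, 0)
    for y in range(H - h + 1):
        for x in range(W - w + 1):
            d = frame[y:y+h, x:x+w] - tmpl
            s = float((d * d).sum())
            if s < best:
                best, arg = s, (y, x)
    return arg

def track(frame1, frame2, target, size=8, sub=3, n=5, rng=None):
    """Sample a population of sub-templates from the template area around
    `target` in frame1, find each in frame2, and take the median of the
    implied displacements as the new target position."""
    if rng is None:
        rng = np.random.default_rng(0)
    ty, tx = target
    moves = []
    for _ in range(n):
        oy = int(rng.integers(0, size - sub))
        ox = int(rng.integers(0, size - sub))
        tmpl = frame1[ty+oy:ty+oy+sub, tx+ox:tx+ox+sub]
        fy, fx = ssd_best(frame2, tmpl)
        moves.append((fy - oy - ty, fx - ox - tx))
    dy = int(np.median([m[0] for m in moves]))
    dx = int(np.median([m[1] for m in moves]))
    return ty + dy, tx + dx

rng = np.random.default_rng(1)
frame1 = rng.random((20, 20))
frame2 = np.roll(np.roll(frame1, 2, axis=0), 3, axis=1)   # target moved by (+2, +3)
print(track(frame1, frame2, (4, 4)))                      # (6, 7)
```

Because each sub-template votes independently, a few occluded or mismatched patches are outvoted by the median, which is the robustness the population-based approach is after.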
20090310821DETECTION OF AN OBJECT IN AN IMAGE - The invention provides a method, system, and program product for detecting an object in a digital image. In one embodiment, the invention includes: deriving an initial object indication mask based on pixel-wise differences between a first digital image and a second digital image, at least one of which includes the object; performing an edge finding operation on both the first and second digital images, wherein the edge finding operation includes marking added edges; generating a plurality of straight linear runs of pixels across an image containing the object, wherein each of the plurality of straight linear runs starts and ends on an added edge and is contained within the initial object indication mask; and forming a final object indication mask by retaining only pixels that are part of at least one of the plurality of straight linear runs.12-17-2009
20110135153IMAGE PROCESSING DEVICE, IMAGE PROCESSING METHOD AND PROGRAM - An image processing device includes a facial region extraction unit extracting a facial region, an identification information acquisition unit acquiring identification information for identifying a face in the facial region, and first and second integrated processing units performing integrated processing. The first and second integrated processing units determine a threshold value on the basis of a relationship between an estimated area and a position of the face being tracked, calculate a similarity between a face being tracked and a face pictured in an image to be stored in a predetermined storage period, and determine if the face being tracked and the stored face image are the face of the same person.06-09-2011
20110135150METHOD AND APPARATUS FOR TRACKING OBJECTS ACROSS IMAGES - A method and apparatus for tracking objects across images. The method includes retrieving object location in a current frame, determining the appearance and motion signatures of the object in the current frame, predicting the new location of the object based on object dynamics, searching for a location with similar appearance and motion signatures in a next frame, and utilizing the location with similar appearance and motion signatures to determine the final location of the object in the next frame.06-09-2011
20100119112GRAPHICAL REPRESENTATIONS FOR AGGREGATED PATHS - Techniques for displaying path-related information. Techniques are provided for generating and displaying graphical representations for a path. For example, radial histograms, radial vector plots, and other graphical representations may be rendered for multiple paths aggregated together.05-13-2010
20100080418PORTABLE SUSPICIOUS INDIVIDUAL DETECTION APPARATUS, SUSPICIOUS INDIVIDUAL DETECTION METHOD, AND COMPUTER-READABLE MEDIUM - Cameras provided to glasses successively take subject images around a wearer of the glasses. The subject images are searched to detect human face regions, and if human face regions are detected, feature quantities of each face are calculated to detect the face direction and the eye direction, and an eye-gaze direction is detected based on them. Whether or not each person with the detected human face region is looking at the cameras is determined from the eye-gaze direction, and if there is a human face looking at the cameras for a given period of time or more, a person with the human face is determined as being a suspicious individual, and a warning message indicating the detection of the suspicious individual is output to the wearer. Furthermore, the detection information and images can be provided to a device in a remote location.04-01-2010
20120033856SYSTEM AND METHOD FOR ENABLING MEANINGFUL INTERACTION WITH VIDEO BASED CHARACTERS AND OBJECTS - The present disclosure provides a system and method for enabling meaningful body-to-body interaction with virtual video-based characters or objects in an interactive imaging environment including: capturing a corpus of video-based interaction data, processing the captured video using a segmentation process that corresponds to the capture setup in order to generate binary video data, labeling the corpus by assigning a description to clips of silhouette video, processing the labeled corpus of silhouette motion data to extract horizontal and vertical projection histograms for each frame of silhouette data, and estimating the motion state automatically from each frame of segmentation data using the processed model. Virtual characters or objects are represented using video captured from video-based motion, thereby creating the illusion of real characters or objects in an interactive imaging experience.02-09-2012
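The horizontal and vertical projection histograms mentioned above are simply per-row and per-column counts of foreground pixels in each frame of binary silhouette data. A minimal sketch (the silhouette shape is invented for illustration):

```python
import numpy as np

def projection_histograms(silhouette):
    """Horizontal and vertical projection histograms of one frame of
    binary silhouette data: foreground-pixel counts per row and column."""
    s = silhouette.astype(bool)
    return s.sum(axis=1), s.sum(axis=0)   # per-row, per-column counts

# A crude standing figure: a vertical trunk with an 'arms' row.
sil = np.zeros((6, 5), dtype=int)
sil[:, 2] = 1          # trunk in column 2
sil[2, 1:4] = 1        # arms across row 2
rows, cols = projection_histograms(sil)
print(rows.tolist(), cols.tolist())
```

Feeding these low-dimensional profiles to a motion-state estimator, rather than the raw silhouette, is what makes per-frame state estimation cheap.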
20120033855PREDICTIVE FLIGHT PATH AND NON-DESTRUCTIVE MARKING SYSTEM AND METHOD - Systems and methods for acquiring and targeting an object placed in motion, tracking the object's movement, and while tracking, measuring the object's characteristics and marking the object with an external indicator until the object comes to rest is provided. The systems and methods include an acquisition and tracking system, a data capture system, and a marking control system. Through the components of the system, an object moving through two or three dimensional space can be externally marked to assist with improving the performance of striking the object.02-09-2012
20120033852SYSTEM AND METHOD TO FIND THE PRECISE LOCATION OF OBJECTS OF INTEREST IN DIGITAL IMAGES - The present invention is a method and system to precisely locate objects of interest in any given image scene space, which finds the presence of objects based upon pattern matching of geometric relationships to a master, known set. The method and system prepare images for feature and attribute detection, identify the presence of potential objects of interest, and then narrow down the objects based upon how well they match a pre-designated master template. Matching takes place by finding each object, plotting its area, and juxtaposing a sweet-spot overlap of its area on master objects, which in turn forms a glyph shape. The glyph shape is recorded, along with all other glyphs formed in an image's scene space, and then mapped to form sets using a classifier and finally a pattern matching algorithm. The resulting object-of-interest matches are then refined to plot the contour boundaries of each object's grouped elements (an arrangement of contiguous pixels of the given object, called a Co-Glyph) and finally snapped to the object's actual component dimensions, e.g., the x, y of a character or an individual living cell.02-09-2012
20090208057VIRTUAL CONTROLLER FOR VISUAL DISPLAYS - Virtual controllers for visual displays are described. In one implementation, a camera captures an image of hands against a background. The image is segmented into hand areas and background areas. Various hand and finger gestures isolate parts of the background into independent areas, which are then assigned control parameters for manipulating the visual display. Multiple control parameters can be associated with attributes of multiple independent areas formed by two hands, for advanced control including simultaneous functions of clicking, selecting, executing, horizontal movement, vertical movement, scrolling, dragging, rotational movement, zooming, maximizing, minimizing, executing file functions, and executing menu choices.08-20-2009
20090208058IMAGING SYSTEM FOR VEHICLE - An imaging system for a vehicle includes an imaging device having a field of view exteriorly and forward of the vehicle in its direction of travel, and an image processor operable to process the captured images in accordance with an algorithm. The algorithm comprises a sign recognition routine and a character recognition routine. The image processor processes the image data captured by the imaging device to detect signs in the field of view of the imaging device and applies the sign recognition routine to determine a sign type of the detected sign. The image processor is operable to apply the character recognition routine to the image data to determine information on the detected sign. The image processor applies the character recognition routine to the captured images in response to an output of the sign recognition routine being indicative of the detected sign being a sign type of interest.08-20-2009
20090208054MEASURING A COHORT'S VELOCITY, ACCELERATION AND DIRECTION USING DIGITAL VIDEO - A computer implemented method, apparatus, and computer program product for identifying positional data for an object moving in an area of interest. Positional data for each camera in a set of cameras associated with the object is retrieved. The positional data identifies a location of each camera in the set of cameras within the area of interest. The object is within an image capture range of each camera in the set of cameras. Metadata describing video data captured by the set of cameras is analyzed using triangulation analytics and the positional data for the set of cameras to identify a location of the object. The metadata is generated in real time as the video data is captured by the set of cameras. The positional data for the object is identified based on locations of the object over a given time interval. The positional data describes motion of the object.08-20-2009
20090087024CONTEXT PROCESSOR FOR VIDEO ANALYSIS SYSTEM - Embodiments of the present invention provide a method and a system for mapping a scene depicted in an acquired stream of video frames that may be used by a machine-learning behavior-recognition system. A background image of the scene is segmented into a plurality of regions representing various objects of the background image. Statistically similar regions may be merged and associated. The regions are analyzed to determine their z-depth order in relation to a video capturing device providing the stream of video frames and to other regions, using occlusions between the regions and data about foreground objects in the scene. An annotated map describing the identified regions and their properties is created and updated.04-02-2009
20110200229Object Detecting with 1D Range Sensors - Moving objects are classified based on maximum margin classification and discriminative probabilistic sequential modeling of range data acquired with a set of one or more 1D laser line scanners. The range data in the form of 2D images is pre-processed and then classified. The classifier is composed of appearance classifiers, sequence classifiers with different inference techniques, and state machine enforcement of a structure of the objects.08-18-2011
20100086174METHOD OF AND APPARATUS FOR PRODUCING ROAD INFORMATION - An embodiment of the present invention discloses a method of producing road information for use in a map database including: acquiring a source image from an image sequence obtained by means of a terrestrial based camera mounted on a moving vehicle; determining a road color sample from pixels associated with a predefined area in the source image representative of the road surface in front of or behind the moving vehicle; generating a road surface image from the source image in dependence of the road color sample; and, producing road information in dependence of the road surface image and position and orientation data associated with the source image.04-08-2010
20120294479IMAGE IDENTIFICATION APPARATUS AND METHOD - According to one embodiment, an image identification apparatus comprises an image pickup unit, an illumination unit, an illumination control unit and an identification unit. The image pickup unit is configured to pick up an image of an identified object. The illumination unit is configured to irradiate light towards the image pickup area of the image pickup unit. The illumination control unit is configured to change the irradiation condition of the illumination unit in accordance with the image pickup timing of the image pickup unit. The identification unit is configured to identify the identified object according to the image picked up by the image pickup unit.11-22-2012
20100080415OBJECT-TRACKING SYSTEMS AND METHODS - A system and method for tracking, identifying, and labeling objects or features of interest is provided. In some embodiments, tracking is accomplished using unique signature of the feature of interest and image stabilization techniques. According to some aspects a frame of reference using predetermined markers is defined and updated based on a change in location of the markers and/or specific signature information. Individual objects or features within the frame may also be tracked and identified. Objects may be tracked by comparing two still images, determining a change in position of an object between the still images, calculating a movement vector of the object, and using the movement vector to update the location of an image device.04-01-2010
20090087030Digital Image Processing Using Face Detection Information - A method of processing a digital image using face detection within the image achieves one or more desired image processing parameters. A group of pixels is identified that correspond to an image of a face within the digital image. Default values are determined of one or more parameters of at least some portion of the digital image. Values are adjusted of the one or more parameters within the digitally-detected image based upon an analysis of the digital image including the image of the face and the default values.04-02-2009
20090087027ESTIMATOR IDENTIFIER COMPONENT FOR BEHAVIORAL RECOGNITION SYSTEM - An estimator/identifier component for a computer vision engine of a machine-learning based behavior-recognition system is disclosed. The estimator/identifier component may be configured to classify an object as being one of two or more classification types, e.g., as a vehicle or a person. Once classified, the estimator/identifier may evaluate the object to determine a set of kinematic data, static data, and a current pose of the object. The output of the estimator/identifier component may include the classifications assigned to a tracked object, as well as the derived information and object attributes.04-02-2009
20090087026METHOD AND SYSTEM OF MATERIAL IDENTIFICATION USING BINOCULAR STEROSCOPIC AND MULTI-ENERGY TRANSMISSION IMAGES - The present invention provides a method and system of material identification using binocular stereoscopic and multi-energy transmission images. With the method, any obstacle that dominates the ray absorption can be peeled off from the objects that overlap in the direction of a ray beam. The object that is unobvious due to a relatively small amount of ray absorption will thus stand out, and the material property of the object, such as organic, mixture, metal and the like, can be identified. This method lays a foundation for automatic identification of harmful objects, such as explosives, drugs, etc., concealed in a freight container.04-02-2009
20100080417Object-Tracking Systems and Methods - A system and method for tracking, identifying, and labeling objects or features of interest is provided. In some embodiments, tracking is accomplished using unique signature of the feature of interest and image stabilization techniques. According to some aspects a frame of reference using predetermined markers is defined and updated based on a change in location of the markers and/or specific signature information. Individual objects or features within the frame may also be tracked and identified. Objects may be tracked by comparing two still images, determining a change in position of an object between the still images, calculating a movement vector of the object, and using the movement vector to update the location of an image device.04-01-2010
20090296989Method for Automatic Detection and Tracking of Multiple Objects - A method for automatically detecting and tracking objects in a scene. The method acquires video frames from a video camera; extracts discriminative features from the video frames; detects changes in the extracted features using background subtraction to produce a change map; uses the change map and a hypothesis to estimate an approximate number of people, along with uncertainty, at user-specified locations; and, using the estimate, tracks people and updates the hypotheses to refine the estimate of people count and location.12-03-2009
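The background-subtraction change map at the heart of this kind of method can be sketched with simple frame differencing against a slowly adapting background model. The threshold and learning rate are illustrative values, not the patent's.

```python
import numpy as np

def change_map(frame, background, tau=25):
    """Binary change map: pixels whose absolute difference from the
    background model exceeds tau."""
    return np.abs(frame.astype(int) - background.astype(int)) > tau

def update_background(background, frame, alpha=0.05):
    """Slowly adapt the background with a running average so gradual
    lighting changes are absorbed while moving people still trigger
    the change map."""
    return (1 - alpha) * background + alpha * frame

bg = np.full((5, 5), 100.0)
frame = bg.copy()
frame[1:3, 1:3] = 200.0          # a person-sized blob appears
cm = change_map(frame, bg)
print(int(cm.sum()))             # 4 changed pixels
bg = update_background(bg, frame)
```

The change map would then feed the hypothesis stage, where blob sizes at candidate locations are converted into people-count estimates with uncertainty.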
20120269396Image Capture and Identification System and Process - A digital image of an object is captured and the object is recognized from a plurality of objects in a database. An information address corresponding to the object is then used to access information and initiate communication pertinent to the object.10-25-2012
20120269391ENVIRONMENT RECOGNITION DEVICE AND ENVIRONMENT RECOGNITION METHOD - An environment recognition device obtains a luminance of a target portion existing in a detection area, obtains a height of the target portion, and provisionally determines a specific object corresponding to the target portion or determines a specific object corresponding to grouped target objects, according to the luminance and the height of the target portion based on the association (specific object table) of a range of luminance and a range of height from a road surface with the specific object which is retained in a data retaining unit.10-25-2012
20120269393ARTICULATION REGION DISPLAY APPARATUS, ARTICULATION REGION DETECTING APPARATUS, ARTICULATION REGION BELONGINGNESS CALCULATING APPARATUS, ARTICULATED OBJECT REGION BELONGINGNESS CALCULATING APPARATUS, AND ARTICULATION REGION DISPLAY METHOD - An articulation region display apparatus includes: an articulatedness calculating unit calculating an articulatedness, based on a temporal change in a point-to-point distance and a temporal change in a geodetic distance between given trajectories; an articulation detecting unit detecting, as an articulation region, a region corresponding to a first trajectory based on the articulatedness between the trajectories, the first trajectory being in a state where the regions corresponding to the first trajectory and a second trajectory are present on the same rigid body, the regions corresponding to the first trajectory and third trajectory are present on the same rigid body, and the region corresponding to the second trajectory is connected with the region corresponding to the third trajectory via the same joint; and a display control unit transforming the articulation region into a form visually recognizable by a user, and outputting the transformed articulation region.10-25-2012
20120269394SYSTEMS AND METHODS FOR GENERATING ENHANCED SCREENSHOTS - Systems and methods for generating and providing enhanced screenshots may include executing instructions stored in memory to evaluate at least a portion of a viewing frustum generated by the instructions to determine one or more objects included therein, obtain metadata associated with the one or more objects, and generate at least one enhanced screenshot indicative of the at least a portion of the viewing frustum by associating the metadata of each of the one or more objects with a location of each of the one or more objects within the at least one enhanced screenshot to create hotspots indicative of each of the one or more objects, such that selection of at least one hotspot by a computing system causes at least a portion of the metadata associated with the at least one hotspot to be displayed on a display device of a computing system.10-25-2012
20120269392IMAGE PROCESSING APPARATUS AND IMAGE PROCESSING METHOD - A plurality of images obtained by capturing a recognition target object from different viewpoint positions is acquired, and a portion set on the recognition target object in the image is received as a set portion for each of the images. A plurality of feature points is set in each of the images so as to set a larger number of feature points at the set portion than at an unset portion other than the set portion. The recognition target object is learned using image feature amounts at the feature points.10-25-2012
20120269395Automated Service Measurement, Monitoring and Management - In a method and system of service management, a radiative sensor is positioned to observe an area of interest. At least one frame of data of the area of interest is electronically acquired from the radiative sensor. The acquired frame of data is electronically processed to determine the presence or absence of at least one object in the area of interest. Based on the presence or absence of the object in the area of interest, (1) an alert is electronically caused to be generated in response to also electronically detecting another object in another area of interest, and/or (2) a timer is electronically caused to initiate or terminate counting a period of time.10-25-2012
20120269387SYSTEMS AND METHODS FOR DETECTING THE MOVEMENT OF AN OBJECT - Systems and methods are provided for detecting a movement of an object marked with a marker. The system includes a sensor configured to capture a first image of the marker and to capture a second image of the marker after the first image, each of the first and second images having pixels each having a visual intensity. A controller is configured to compare the first image and the second image by comparing the visual intensity of each of the pixels of the first image and the second image, determine an area of overlap between the first image and the second image based on the comparison, calculate a change in position of the marker in the second image relative to the marker in the first image based on the area of overlap, and detect the movement of the object based on the change in position of the marker.10-25-2012
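The overlap-based displacement calculation this abstract describes can be sketched as follows. This is an illustrative one-dimensional simplification, not the patented implementation: the patent operates on two-dimensional marker images, and the function name, signature, and `max_shift` search bound are all assumptions.

```python
def estimate_shift(img1, img2, max_shift=3):
    """Estimate the shift between two intensity rows by scoring the
    pixel-wise agreement of every candidate overlap, mirroring the
    abstract's comparison of visual intensities over an area of overlap.
    (1-D sketch; all names and the search bound are illustrative.)"""
    best_shift, best_score = 0, float("-inf")
    for s in range(-max_shift, max_shift + 1):
        # Pixels of img1 and img2 that overlap under candidate shift s
        pairs = [(img1[i], img2[i + s])
                 for i in range(len(img1))
                 if 0 <= i + s < len(img2)]
        if not pairs:
            continue
        # Higher score means better agreement in the overlapping region
        score = -sum(abs(a - b) for a, b in pairs) / len(pairs)
        if score > best_score:
            best_shift, best_score = s, score
    return best_shift
```

The change in marker position then follows directly from the winning shift, from which the object's movement is inferred.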
20120269383RELIABILITY IN DETECTING RAIL CROSSING EVENTS - A method, data processing system, apparatus, and computer program product for monitoring objects. A plurality of images of an area is received. An object in the area is identified from the plurality of images. A plurality of points in a region within the area is identified from a first image in the plurality of images. The plurality of points has a fixed relationship with each other and the region. The object in the area is monitored to determine whether the object has entered the region. A determination that the object has not entered the region is made in response to identifying an absence of a number of the plurality of points in a second image in the plurality of images.10-25-2012
20100266158SYSTEM AND METHOD FOR OPTICALLY TRACKING A MOBILE DEVICE - A system and method for optically tracking a mobile device uses a first displacement value along a first direction and a second displacement value along a second direction, which are produced using frames of image data of a navigation surface, to compute first and second tracking values that indicate the current position of the mobile device. The first tracking value is computed using the second displacement value and the sine of a tracking angle value, while the second tracking value is computed using the second displacement value and the cosine of the tracking angle value. The tracking angle value is an angle value derived using at least one previous second displacement value.10-21-2010
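The tracking update in this abstract is stated explicitly: the first tracking value is advanced by the second displacement times the sine of the tracking angle, and the second tracking value by the second displacement times the cosine. A minimal sketch (function and variable names are illustrative, not from the patent):

```python
import math

def track_position(x1, x2, d2, theta):
    """Advance the two tracking values of a mobile device from the
    second-direction displacement d2 and the tracking angle theta
    (radians), per the abstract: sin(theta) for the first tracking
    value, cos(theta) for the second."""
    x1 += d2 * math.sin(theta)
    x2 += d2 * math.cos(theta)
    return x1, x2
```

In use, `theta` would be derived from at least one previous second-direction displacement, as the abstract states, and the update applied once per frame of image data.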
20090141940Integrated Systems and Methods For Video-Based Object Modeling, Recognition, and Tracking - The present disclosure relates to systems and methods for modeling, recognizing, and tracking object images in video files. In one embodiment, a video file, which includes a plurality of frames, is received. An image of an object is extracted from a particular frame in the video file, and a subsequent image is also extracted from a subsequent frame. A similarity value is then calculated between the extracted images from the particular frame and subsequent frame. If the calculated similarity value exceeds a predetermined similarity threshold, the extracted object images are assigned to an object group. The object group is used to generate an object model associated with images in the group, wherein the model is comprised of image features extracted from optimal object images in the object group. Optimal images from the group are also used for comparison to other object models for purposes of identifying images.06-04-2009
20090141936Object-Tracking Computer Program Product, Object-Tracking Device, and Camera - A computer performs following steps according to a program for tracking an object. Template matching of each frame of an input image to a plurality of template images is performed, a template image having a highest similarity with an image within a predetermined region of the input image is selected as a selected template among the plurality of template images and the predetermined region of the input image is extracted as a matched region. With reference to an image within the matched region thus extracted, motion of an object is tracked between the images of the plurality of frames. It is determined whether or not a result of template matching satisfies an update condition for updating the plurality of template images. In a case that the update condition is determined to be satisfied, at least one of the plurality of template images is updated.06-04-2009
20100098292Image Detecting Method and System Thereof - An image detecting method and a system thereof are provided. The image detecting method includes the following steps. An original image is captured. A moving-object image of the original image is created. An edge-straight-line image of the original image is created, wherein the edge-straight-line image comprises a plurality of edge-straight-lines. Whether the original image has a mechanical moving-object image is detected according to the length, the parallelism and the gap of the part of the edge-straight-lines corresponding to the moving-object image.04-22-2010
20100098293Structure and Motion with Stereo Using Lines - A system and method are disclosed for estimating camera motion and structure reconstruction of a scene using lines. The system includes a line detection module, a line correspondence module, a temporal line tracking module and structure and motion module. The line detection module is configured to detect lines in visual input data comprising a plurality of image frames. The line correspondence module is configured to find line correspondence between detected lines in the visual input data. The temporal line tracking module is configured to track the detected lines temporally across the plurality of the image frames. The structure and motion module is configured to estimate the camera motion using the detected lines in the visual input data and to reconstruct three-dimensional lines from the estimated camera motion.04-22-2010
20100098295CLEAR PATH DETECTION THROUGH ROAD MODELING - A method for detecting a clear path of travel for a vehicle including fusion of clear path detection by image analysis and road geometry data describing road geometry includes monitoring an image from a camera device on the vehicle, analyzing the image through clear path detection analysis to determine a clear path of travel within the image, monitoring the road geometry data, analyzing the road geometry data to determine an impact of the data to the clear path, modifying the clear path based upon the analysis of the road geometry data, and utilizing the clear path in navigation of the vehicle.04-22-2010
20080285799APPARATUS AND METHOD FOR DETECTING OBSTACLE THROUGH STEREOVISION - According to an apparatus and method for detecting an obstacle through stereovision, an image capturing module comprises a plurality of cameras and is used for capturing a plurality of images; an image processing module edge-detects the images to generate a plurality of edge objects and object information corresponding to each edge object; an object detection module matches a focus and a horizontal spacing interval of the cameras according to the object information to generate a relative object distance corresponding to each edge object; a group module compares the relative object distance with a threshold distance, groups the edge objects with a relative object distance smaller than the threshold distance as an obstacle, and obtains a relative obstacle distance corresponding to the obstacle.11-20-2008
20120106797IDENTIFICATION OF OBJECTS IN A VIDEO - Techniques related to identifying objects in a video are generally described. One example method for identifying a moving object in a video may include generating a background frame and a foreground frame based on the video, comparing the foreground and the background frames at each corresponding location, acquiring an object area based on the comparison, determining if the object area contains a moving object based on the size and shape of the object area, identifying the moving object against templates of target objects, and updating the background frame according to the comparison.05-03-2012
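The per-location foreground/background comparison described in this abstract is classic background subtraction; it can be sketched as below. The threshold value is an assumption, and real systems would additionally filter the resulting area by size and shape before template matching, as the abstract notes.

```python
def moving_object_area(background, foreground, threshold=25):
    """Compare a foreground frame against a background frame at each
    corresponding location, returning the set of pixel coordinates whose
    intensity difference exceeds a threshold -- the 'object area'
    acquisition step.  (Illustrative sketch; threshold is assumed.)"""
    return {(r, c)
            for r, row in enumerate(foreground)
            for c, f in enumerate(row)
            if abs(f - background[r][c]) > threshold}
```

The returned coordinate set is the candidate object area; its size and bounding shape would then gate whether it is passed on to template-based identification.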
20120106795SYSTEM AND METHOD FOR OPTIMIZING CAMERA SETTINGS - There is provided a recognition system. The recognition system is coupled to an image capturing device, and determines a first matching percentage by comparing a first live image with a first reference image, determines a second matching percentage by comparing a second live image with the first reference image, compares the first matching percentage with the second matching percentage to determine a direction of adjustment of a setting of the image capturing device, and generates a feedback signal to adjust the setting based on the direction of adjustment. The first live image and second live image are captured by the image capturing device.05-03-2012
20120106794METHOD AND APPARATUS FOR TRAJECTORY ESTIMATION, AND METHOD FOR SEGMENTATION - A trajectory estimation apparatus includes: an image acceptance unit which accepts images that are temporally sequential and included in the video; a hierarchical subregion generating unit which generates subregions at hierarchical levels by performing hierarchical segmentation on each of the images accepted by the image acceptance unit such that, among subregions belonging to hierarchical levels different from each other, a spatially larger subregion includes spatially smaller subregions; and a representative trajectory estimation unit which estimates, as a representative trajectory, a trajectory, in the video, of a subregion included in a certain image, by searching for a subregion that is most similar to the subregion included in the certain image, across hierarchical levels in an image different from the certain image.05-03-2012
20120106790Face or Other Object Detection Including Template Matching - A template matching module is configured to program a processor to apply multiple differently-tuned object detection classifier sets in parallel to a digital image to determine one or more of an object type, configuration, orientation, pose or illumination condition, and to dynamically switch between object detection templates to match a determined object type, configuration, orientation, pose, blur, exposure and/or directional illumination condition.05-03-2012
20120106789IMAGE PROCESSING APPARATUS AND METHOD AND PROGRAM - An image processing apparatus includes an image input configured to receive image data, a target extraction device configured to extract an object from the image data as a target object based on recognizing a first movement by the object, and a gesture recognition device configured to issue a command based on recognizing a second movement by the target object.05-03-2012
20120106788Image Measuring Device, Image Measuring Method, And Computer Program - Provided are an image measuring device, an image measuring method, and a computer program, capable of performing accurate calibration and accurately measuring a desired physical quantity even for an object to be measured whose shape makes selection and tracking of target points difficult, or an object to be measured that moves as time elapses. Frame images are played back frame by frame, and selection of a plurality of frame images is accepted from the frame images played back frame by frame. A synthesized image in which the selected and accepted frame images are superimposed is generated. The generated synthesized image is displayed, and a predetermined physical quantity is measured on the displayed synthesized image.05-03-2012
20120106787APPARATUS AND METHODS FOR ANALYSING GOODS PACKAGES - An apparatus for constructing a data model of a goods package from a series of images, one of the series of images comprising an image of the goods package, comprises a processor and a memory for storing one or more routines. When the one or more routines are executed under control of the processor the apparatus extracts element data from goods package elements in the series of images and constructs the data model by associating element data from a number of visible sides of the goods package with the goods package. The apparatus may also analyse a candidate character string read in an OCR process from one of the series of images of the goods package. The apparatus may also analyse a barcode read from an image of a goods package.05-03-2012
20120106786OBJECT DETECTING DEVICE - An object detecting device includes a camera ECU that detects an object from image data of a predetermined area captured by a monocular camera, a fusion processing portion that calculates the pre-correction horizontal width of the detected object, a numerical value calculating portion that estimates the length in the image depth direction of the calculated pre-correction horizontal width, and a collision determining portion that corrects the pre-correction horizontal width calculated by the fusion processing portion, based on the estimated length in the image depth direction.05-03-2012
20120106785METHODS AND SYSTEMS FOR PRE-PROCESSING TWO-DIMENSIONAL IMAGE FILES TO BE CONVERTED TO THREE-DIMENSIONAL IMAGE FILES - Disclosed herein are methods and systems of efficiently, effectively, and accurately preparing images for a 2D to 3D conversion process by pre-treating occlusions and transparencies in original 2D images. A single 2D image, or a sequence of images, is ingested, segmented into discrete elements, and the discrete elements are individually reconstructed. The reconstructed elements are then re-composited and ingested into a 2D to 3D conversion process.05-03-2012
20120106781SIGNATURE BASED DRIVE-THROUGH ORDER TRACKING SYSTEM AND METHOD - A system and method for providing signature-based drive-through order tracking. An image with respect to a vehicle at a POS unit can be captured at an order point and a delivery point (e.g., a payment point and a pick-up point) utilizing an image capturing unit by detecting the presence of the vehicle at each point utilizing a vehicle presence sensor. The captured image can be processed in order to extract a small region of interest and can be reduced to a unique signature. The extracted signature of the vehicle at the order point can be stored into a database together with the corresponding order and the vehicle image. The signature extracted at the delivery point can be matched with the signature stored in the database. If a match is found, the order associated with the vehicle together with the images captured at the delivery point and the order point can be displayed in a user interface at the delivery point to ensure that the right order is delivered to a customer.05-03-2012
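The signature-matching step of this drive-through tracking abstract can be sketched as a nearest-neighbor lookup. The patent does not specify the signature representation; here signatures are modeled as small feature vectors, and the record layout, function names, and `tolerance` threshold are all assumptions.

```python
def match_order(delivery_signature, order_db, tolerance=0.1):
    """Find the stored order whose vehicle signature (extracted at the
    order point) is closest to the signature extracted at the delivery
    point; return its order if within tolerance, else None.
    (Illustrative sketch; signature format and tolerance are assumed.)"""
    def distance(a, b):
        # Euclidean distance between two signature vectors
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

    best = min(order_db,
               key=lambda rec: distance(rec["signature"], delivery_signature))
    if distance(best["signature"], delivery_signature) <= tolerance:
        return best["order"]
    return None  # no match: flag for manual verification
```

On a match, the system would display the stored order together with the order-point and delivery-point images, as the abstract describes, so staff can confirm the right order reaches the right vehicle.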
20090169052Object Detector - An object position area (07-02-2009
20090262982Determining a Location of a Member - A volume of a patient can be mapped with a system operable to identify a plurality of locations and save a plurality of locations of a mapping instrument. The mapping instrument can include one or more electrodes that can sense a voltage that can be correlated to a three dimensional location of the electrode at the time of the sensing or measurement. Therefore, a map of a volume can be determined based upon the sensing of the plurality of points without the use of other imaging devices. An implantable medical device can then be navigated relative to the mapping data.10-22-2009
20090262980Method and Apparatus for Determining Tracking a Virtual Point Defined Relative to a Tracked Member - A volume of a patient can be mapped with a system operable to identify a plurality of locations and save a plurality of locations of a mapping instrument. The mapping instrument can include one or more electrodes that can sense a voltage that can be correlated to a three dimensional location of the electrode at the time of the sensing or measurement. Therefore, a map of a volume can be determined based upon the sensing of the plurality of points without the use of other imaging devices. An implantable medical device can then be navigated relative to the mapping data.10-22-2009
20090262979Determining a Material Flow Characteristic in a Structure - A volume of a patient can be mapped with a system operable to identify a plurality of locations and save a plurality of locations of a mapping instrument. The mapping instrument can include one or more electrodes that can sense a voltage that can be correlated to a three dimensional location of the electrode at the time of the sensing or measurement. Therefore, a map of a volume can be determined based upon the sensing of the plurality of points without the use of other imaging devices. An implantable medical device can then be navigated relative to the mapping data.10-22-2009
20110200226CUSTOMER BEHAVIOR COLLECTION METHOD AND CUSTOMER BEHAVIOR COLLECTION APPARATUS - According to one embodiment, a computer selects trajectory data on a person positioned in an image monitoring area from trajectory data on relevant persons. The computer selects selling space image data obtained when the person corresponding to the trajectory data is positioned in the image monitoring area. The computer analyzes the selling space image data to extract a person image. The computer checks the person image extracted from the selling space image data against image data on each customer to search for customer image data obtained by taking an image of the person in the person image. The computer stores, upon detecting the customer image data obtained by taking an image of the person in the person image, identification information on transaction data stored in association with the customer image data, in association with identification information on the trajectory data.08-18-2011
20090262977VISUAL TRACKING SYSTEM AND METHOD THEREOF - The present invention provides a visual tracking system and its method comprising: a sensor unit, for capturing monitored scenes continuously; an image processor unit, for detecting when a target enters into a monitored scene, and extracting its characteristics to establish at least one model, and calculating the matching scores of the models; a hybrid tracking algorithm unit, for combining the matching scores to produce optimal matching results; a visual probability data association filter, for receiving the optimal matching results to eliminate the interference and output a tracking signal; an active moving platform, for driving the platform according to the tracking signal to situate the target at the center of the image. Therefore, the visual tracking system of the present invention can help a security camera system to record the target in details and maximize the visual information of the intruding target.10-22-2009
20090262978Automatic Detection Of Fires On Earth's Surface And Of Atmospheric Phenomena Such As Clouds, Veils, Fog Or The Like, Using A Satellite System - A method for automatically detecting fires on Earth's surface using a satellite system is provided. The method includes acquiring multi-spectral images of the Earth at different times, using a multi-spectral satellite sensor, each multi-spectral image being a collection of single-spectral images each associated with a respective wavelength (λ), and each single-spectral image being made up of pixels each indicative of a spectral radiance (R10-22-2009
20100124356DETECTING OBJECTS CROSSING A VIRTUAL BOUNDARY LINE - An approach that detects objects crossing a virtual boundary line is provided. Specifically, an object detection tool provides this capability. The object detection tool comprises a boundary component configured to define a virtual boundary line in a video region of interest, and establish a set of ground patch regions surrounding the virtual boundary line. The object detection tool further comprises an extraction component configured to extract a set of attributes from each of the set of ground patch regions, and update a ground patch history model with the set of attributes from each of the set of ground patch regions. An analysis component is configured to analyze the ground patch history model to detect whether an object captured in at least one of the set of ground patch regions is crossing the virtual boundary line in the video region of interest.05-20-2010
20100124357SYSTEM AND METHOD FOR MODEL BASED PEOPLE COUNTING - An approach that allows for model based people counting is provided. In one embodiment, there is a generating tool configured to generate a set of person-shape models based on results of a cumulative training process; a detecting tool configured to detect persons in a camera field-of-view by using the set of person-shape models, and a counting tool configured to track detected persons upon crossing by the detected persons of a previously established virtual boundary.05-20-2010
20100124360METHOD AND APPARATUS FOR RECORDING EVENTS IN VIRTUAL WORLDS - A method and an apparatus for recording an event in a virtual world. The method includes acquiring camera view regions of avatars joining the event; identifying one or more key avatars and/or key objects based on information about the targets in the camera view regions of the avatars; setting one or more recorders for the identified one or more key avatars and/or key objects for recording the event such that the one or more key avatars and/or key objects are located in the camera view regions of the one or more recorders. The apparatus includes devices configured to perform the steps of the method.05-20-2010
20100124359METHOD AND SYSTEM FOR AUTOMATIC DETECTION OF A CLASS OF OBJECTS - An apparatus and method for providing automatic threat detection using passive millimeter wave detection and image processing analysis.05-20-2010
20090141935MOTION COMPENSATED CT RECONSTRUCTION OF HIGH CONTRAST OBJECTS - Cardiac CT imaging using gated reconstruction is currently limited in its temporal and spatial resolution. According to an exemplary embodiment of the present invention, an examination apparatus is provided in which an identification of a high contrast object is performed. This high contrast object is then followed through the phases, resulting in a motion vector field of the high contrast object, on the basis of which a motion compensated reconstruction is then performed.06-04-2009
20110170747Interactivity Via Mobile Image Recognition - Systems and methods of interacting with a virtual space, in which a mobile device is used to electronically capture image data of a real-world object, the image data is used to identify information related to the real-world object, and the information is used to interact with software to control at least one of: (a) an aspect of an electronic game; and (b) a second device local to the mobile device. Contemplated systems and methods can be used for gaming, in which the image data can be used to identify a name of the real-world object, to classify the real-world object, to identify the real-world object as a player in the game, to identify the real-world object as a goal object or as having some other value in the game, or to use the image data to identify the real-world object as a goal object in the game.07-14-2011
20110170740Automatic image capture - A method of automatically capturing images with precision uses an intelligent mobile device having a camera loaded with an appropriate image capture application. When a user initializes the application, the camera starts taking images of the object. Each image is qualified to determine whether it is in focus and entirely within the field of view of the camera. Two or more qualified images are captured and stored for subsequent processing. The qualified images are aligned with each other by an appropriate perspective transformation so they each fill a common frame. Averaging of the aligned images reduces noise and a sharpening filter enhances edges, which produces a sharper image. The processed image is then converted into a two-level, black and white image which may be presented to the user for approval prior to submission via wireless or WiFi to a remote location.07-14-2011
20110170745Body Gesture Control System for Operating Electrical and Electronic Devices - A body gesture control system for operating electrical and electronic devices includes an image sensor device and an image processor device to process body gesture images captured by the image sensor device for recognizing the body gesture. The image processor device includes an image calculation unit and a gesture change detection unit electrically connected therewith. The image calculation unit is used to calculate gesture regions of the captured body gesture images and the gesture change detection unit is operated to detect changes of the captured body gesture images and to thereby determine a body gesture recognition signal.07-14-2011
20110170742IMAGE PROCESSING DEVICE, OBJECT SELECTION METHOD AND PROGRAM - There is provided an image processing device including: a data storage unit that stores object identification data for identifying an object operable by a user and feature data indicating a feature of appearance of each object; an environment map storage unit that stores an environment map representing a position of one or more objects existing in a real space and generated based on an input image obtained by imaging the real space using an imaging device and the feature data stored in the data storage unit; and a selecting unit that selects at least one object recognized as being operable based on the object identification data, out of the objects included in the environment map stored in the environment map storage unit, as a candidate object being a possible operation target by a user.07-14-2011
20110170741IMAGE PROCESSING DEVICE AND STORAGE MEDIUM STORING IMAGE PROCESSING PROGRAM - There is provided an image processing device that includes a processor configured to execute instructions that cause the processor to provide functional units including: a setting unit that sets a plurality of extraction target ranges in a motion image configured of a plurality of frame images that are chronologically in succession with one another, each extraction target range being configured of a group of frame images that are selected from among the plurality of frame images constituting the motion image and that are chronologically in succession with one another, and the plurality of extraction target ranges being set such that there is no common frame image shared among the extraction target ranges; a selecting unit that selects a representative frame image from among the group of frame images in an extraction target range, the representative frame image being such a frame image whose difference from another representative frame image is the largest among differences of the frame images belonging to the extraction target range from the another representative frame image, the another representative frame image being selected from one of the extraction target ranges that is positioned chronologically adjacent to the extraction target range from which the representative frame image is selected; and a layout image generating unit that generates a layout image in which the selected representative frame images are laid out in such a pattern that indicates a chronological relationship among the representative frame images.07-14-2011
20090285449SYSTEM FOR OPTICAL RECOGNITION OF THE POSITION AND MOVEMENT OF AN OBJECT ON A POSITIONING DEVICE - The optical recognition system determines the position and/or movement of an object (11-19-2009
20090290756METHODS AND APPARATUS FOR DETECTING A COMPOSITION OF AN AUDIENCE OF AN INFORMATION PRESENTING DEVICE - Methods and apparatus for detecting a composition of an audience of an information presenting device are disclosed. A disclosed example method includes: capturing at least one image of the audience; determining a number of people within the at least one image; prompting the audience to identify its members if a change in the number of people is detected based on the number of people determined to be within the at least one image; and if a number of members identified by the audience is different from the determined number of people after a predetermined number of prompts of the audience, adjusting a value to avoid excessive prompting of the audience.11-26-2009
20110170744VIDEO-BASED VEHICLE DETECTION AND TRACKING USING SPATIO-TEMPORAL MAPS - Systems and methods for detecting and tracking objects, such as motor vehicles, within video data. The systems and method analyze video data, for example, to count objects, determine object speeds, and track the path of objects without relying on the detection and identification of background data within the captured video data. The detection system uses one or more scan lines to generate a spatio-temporal map. A spatio-temporal map is a time progression of a slice of video data representing a history of pixel data corresponding to a scan line. The detection system detects objects in the video data based on intersections of lines within the spatio-temporal map. Once the detection system has detected an object, the detection system may record the detection for counting purposes, display an indication of the object in association with the video data, determine the speed of the object, etc.07-14-2011
20110170743METHOD FOR DETECTING OBJECT MOVEMENT AND DETECTION SYSTEM - This invention relates to a method for detecting object movement by dynamically updating a reference image data. By dynamically updating the reference image data, the impact of the ambient light change can be reduced and the detection error of object movement caused by using fixed reference image data under varying ambient light can also be avoided. The present invention further provides a detection system.07-14-2011
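The dynamic reference-image update described in this abstract is commonly realized as an exponential running average, which absorbs slow ambient-light changes into the reference while abrupt object motion still registers as a difference. A minimal sketch under that assumption (the abstract only states that the reference is updated dynamically; the blending weight `alpha` is assumed, not taken from the patent):

```python
def refresh_reference(reference, frame, alpha=0.05):
    """Blend the current frame into the reference image with an
    exponential running average: each reference pixel moves a fraction
    alpha toward the corresponding pixel of the new frame.
    (Illustrative sketch; alpha and function name are assumptions.)"""
    return [[(1.0 - alpha) * r + alpha * f
             for r, f in zip(ref_row, frm_row)]
            for ref_row, frm_row in zip(reference, frame)]
```

Movement detection would then difference each incoming frame against this slowly adapting reference rather than against a fixed one, reducing false detections under varying ambient light.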
20090274339Behavior recognition system - A system for recognizing various human and creature motion gaits and behaviors is presented. These behaviors are defined as combinations of “gestures” identified on various parts of a body in motion. For example, the leg gestures generated when a person runs are different than when a person walks. The system described here can identify such differences and categorize these behaviors. Gestures, as previously defined, are motions generated by humans, animals, or machines. Multiple gestures on a body (or bodies) are recognized simultaneously and used in determining behaviors. If multiple bodies are tracked by the system, then overall formations and behaviors (such as military goals) can be determined. 11-05-2009
20100128929IMAGE PROCESSING APPARATUS AND METHOD FOR TRACKING A LOCATION OF A TARGET SUBJECT - A digital image processing apparatus has a tracking function for tracking a location variation of a set tracking area on a plurality of frame images. The digital image processing apparatus includes a similarity calculation unit that calculates a similarity by varying a location of a template on one frame image. The similarity calculation unit calculates a second direction similarity by fixing a first direction location of the template in a first direction on the one frame image and by varying a second direction location of the template in a second direction which is perpendicular to the first direction, and then calculates a first direction similarity by fixing the second direction location of the template at a location where the second direction similarity is the highest and by varying the first direction location of the template in the first direction on the one frame image.05-27-2010
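The two-pass similarity search this abstract describes, fixing the template's first-direction location while sweeping the second direction, then fixing the second direction at its best location and sweeping the first, can be sketched as two 1-D maximizations. `score(x, y)` is an assumed callback returning template similarity at a location; the patent does not expose such an API.

```python
def best_match(score, width, height):
    """Two-pass template localization: first vary only the
    second-direction (y) location with the first-direction (x) location
    fixed, then vary x with y fixed at its best value.  The two 1-D
    sweeps stand in for a full 2-D search.  (Illustrative sketch.)"""
    x0 = 0  # assumed initial first-direction fix
    best_y = max(range(height), key=lambda y: score(x0, y))
    best_x = max(range(width), key=lambda x: score(x, best_y))
    return best_x, best_y
```

This evaluates `width + height` locations instead of `width * height`, which is the practical appeal of the decomposed search for frame-by-frame tracking.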
20090279737PROCESSING METHOD FOR CODED APERTURE SENSOR - A method of processing for a coded aperture imaging apparatus which is useful for target identification and tracking. The method uses a statistical scene model and, preferably using several frames of data, determines a likelihood of the position and/or velocity of one or more targets assumed to be in the scene. The method preferably applies a recursive Bayesian filter or Bayesian batch filter to determine a probability distribution of likely state parameters. The method acts upon the acquired data directly without requiring any processing to form an image.11-12-2009
20090279736MAGNETIC RESONANCE EYE TRACKING SYSTEMS AND METHODS - Embodiments of magnetic resonance eye tracking systems and methods are disclosed. One embodiment, among others, comprises a method that receives magnetic resonance based data and determines direction of a subject's gaze based on the data.11-12-2009
20090285450IMAGE-BASED SYSTEM AND METHODS FOR VEHICLE GUIDANCE AND NAVIGATION - A method of estimating position and orientation of a vehicle using image data is provided. The method includes capturing an image of a region external to the vehicle using a camera mounted to the vehicle, and identifying in the image a set of feature points of the region. The method further includes subsequently capturing another image of the region from a different orientation of the camera, and identifying in the image the same set of feature points. A pose estimation of the vehicle is generated based upon the identified set of feature points corresponding to the region. Each of the steps is repeated with respect to a different region at least once so as to generate at least one succeeding pose estimation of the vehicle. The pose estimations are then propagated over a time interval by chaining the pose estimation and each succeeding pose estimation one with another according to the sequence in which each was generated.11-19-2009
20110200227ANALYSIS OF DATA FROM MULTIPLE TIME-POINTS - Described herein is a technology for facilitating analysis of data across multiple time-points. In one implementation, first and second images acquired at respective first and second different time-points are received. In addition, first and second findings associated with the first and second images respectively are also received. The first and second findings are associated with at least one region of interest. A correspondence between the first and second findings may be automatically determined by aligning the first and second findings. A longitudinal analysis result may then be generated by correlating the first and second findings.08-18-2011
20090103779MULTI-SENSORIAL HYPOTHESIS BASED OBJECT DETECTOR AND OBJECT PURSUER - The invention relates to a method for multi-sensorial object detection, wherein sensor information is evaluated jointly from several different sensor signal flows having different sensor signal properties. For said evaluation, the at least two sensor signal flows are not adapted to each other and/or projected onto each other, but object hypotheses are generated in each of the at least two sensor signal flows and characteristics for at least one classifier are generated based on said object hypotheses. Said object hypotheses are subsequently evaluated by means of a classifier and are associated with one or more categories. At least two categories are identified and the object is associated with one of the two categories.04-23-2009
20090103776Method of Non-Uniformity Compensation (NUC) of an Imager - The present invention provides for simple and streamlined boresight correlation of FLIR-to-missile video. Boresight correlation is performed with un-NUCed missile video, which allows boresight correlation and NUC to be performed simultaneously thereby reducing the time required to acquire a target and fire the missile. The current approach uses the motion of the missile seeker for NUCing to produce spatial gradient filtering in the missile image by differencing images as the seeker moves. This compensates DC non-uniformities in the image. A FLIR image is processed with a matching displace and subtract spatial filter constructed based on the tracked scene motion. The FLIR image is resampled to match the missile image resolution, and the two images are preprocessed and correlated using conventional methods. Improved NUC is provided by cross-referencing multiple measurements of each area of the scene as viewed by different pixels in the imager. This approach is based on the simple yet novel premise that every pixel in the array that looks at the same thing should see the same thing. As a result, the NUC terms adapt to non-uniformities in the imager and not the scene.04-23-2009
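The differencing idea in the entry above can be sketched in one dimension: a fixed per-pixel offset (the DC non-uniformity) appears identically in consecutive frames, so a same-pixel frame difference cancels it, leaving only a spatial gradient of the moving scene. The scene values, offsets, and two-pixel shift below are invented for illustration:

```python
def temporal_difference(curr, prev):
    """Per-pixel difference of consecutive frames: a constant per-pixel
    DC offset is present in both frames and cancels out."""
    return [c - p for c, p in zip(curr, prev)]

# Hypothetical 1-D scene that shifts right by 2 pixels between frames;
# each detector pixel adds its own fixed offset (the non-uniformity).
scene = [10, 20, 30, 40, 50, 60]
offsets = [5, -3, 0, 7, 2, -1]
prev_frame = [s + o for s, o in zip(scene, offsets)]
shifted = [0, 0] + scene[:4]
curr_frame = [s + o for s, o in zip(shifted, offsets)]
diff = temporal_difference(curr_frame, prev_frame)
# Away from the border, diff holds only the scene gradient (-20 per pixel);
# the per-pixel offsets have cancelled.
```

A matching displace-and-subtract filter applied to the clean FLIR image would then produce the same gradient signature, allowing the two to be correlated.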
20100128927IMAGE PROCESSING APPARATUS AND IMAGE PROCESSING METHOD - By a method such as foreground extraction or facial extraction, the area of a target object is detected from an input image, and a feature amount such as the center of gravity, size, and inclination is acquired. Using the value of a temporarily-set internal parameter, edge image generation, particle generation, and transition are carried out, and a contour is estimated by obtaining the probability density distribution by observing the likelihood. By comparing a feature amount obtained from the estimated contour with a feature amount of the area of the target object, the temporary setting is reset when the degree of matching between the two is smaller than a reference value, the temporarily-set value being determined to be inappropriate. When the degree of matching is larger than the reference value, the value of the parameter is determined to be the final value.05-27-2010
20120294476Salient Object Detection by Composition - A computing device configured to determine, for each of a plurality of locations in an image, a saliency measure based at least on a cost of composing parts of the image in the location from parts of the image outside of the location is described herein. The computing device is further configured to select one or more of the locations as representing salient objects of the image based at least on the saliency measures.11-22-2012
20120294486DETECTING STEREOSCOPIC IMAGES - To detect the presence of the left and right constituent images of a stereoscopic image packed within an image frame or within a sequence of image frames, images are unpacked according to each one of said known formats; a candidate measure is formed according to each unpacking and the candidate measures are compared to identify the presence of left and right images packed according to an identified format. The candidate measure may be a low pass filtered measure of the difference between the left and right images and may be a high pass filtered measure of the activity in either the left or the right image.11-22-2012
20080212831REMOTE CONTROL OF AN IMAGE CAPTURING UNIT IN A PORTABLE ELECTRONIC DEVICE - A method and computer program product are described herein for remotely controlling a first image capturing unit in a portable electronic device, as well as such a portable electronic device. The portable electronic device may include a first and a second image capturing unit. The device detects and tracks an object via the second capturing unit and detects changes in an area of the object. These changes are then used for controlling the first image capturing unit remotely. When the control involves the capturing of images, improved image quality can be obtained. The time it takes to capture an image is also reduced.09-04-2008
20110200230METHOD AND DEVICE FOR ANALYZING SURROUNDING OBJECTS AND/OR SURROUNDING SCENES, SUCH AS FOR OBJECT AND SCENE CLASS SEGMENTING - The invention relates to a method and an object detection device for analysing objects in the environment and/or scenes in the environment. The object detection device includes a data processing and/or evaluation device. In the data processing and/or evaluation device, image data (x08-18-2011
20080212830Efficient Calculation of Ensquared Energy in an Imaging System - Systems and methods are provided for determining an ensquared energy associated with an imaging system. In one embodiment of the invention, a focal plane array captures an image of a target comprising a plurality of point sources, each point source being associated with a pixel within the focal plane array. An image analysis component estimates an ensquared energy value for the imaging system from respective intensity values of the associated pixels and known relative positions of the plurality of point sources.09-04-2008
20080212833ENHANCEMENT OF AIMPOINT IN SIMULATED TRAINING SYSTEMS - Embodiments of the invention, therefore, provide improved systems and methods for tracking targets in a simulation environment. Merely by way of example, an exemplary embodiment provides a reflected laser target tracking system that tracks a target with a video camera and associated computational logic. In certain embodiments, a closed loop algorithm may be used to predict future positions of targets based on formulas derived from prior tracking points. Hence, the target's next position may be predicted. In some cases, targets may be filtered and/or sorted based on predicted positions. In certain embodiments, equations (including without limitation, first order equations and second order equations) may be derived from one or more video frames. Such equations may also be applied to one or more successive frames of video received and/or produced by the system. In certain embodiments, these formulas also may be used to compute predicted positions for targets; this prediction may, in some cases, compensate for inherent delays in the processing pipeline.09-04-2008
20080212834User interface using camera and method thereof - A user interface using a camera and a method thereof, wherein two or more images that were shot in time sequence are preprocessed to form N×M matrices, and then each element of the matrices is compared. The comparison is thus made (N+1)(M+1) times to select the result of the highest similarity and produce a motion vector. The interface and method help to produce more accurate motion vectors and to obviate the inaccuracy introduced by low-pass filtering.09-04-2008
20080212832DISCRIMINATOR GENERATING APPARATUS AND OBJECT DETECTION APPARATUS - A discriminator generating apparatus includes a learning unit (09-04-2008
20080212835Object Tracking by 3-Dimensional Modeling - Disclosed is a method for tracking 3-dimensional objects, or some of these objects' features, using range imaging for depth-mapping merely a few points on the surface area of each object, mapping them onto a geometrical 3-dimensional model, finding the object's pose, and deducing the spatial positions of the object's features, including those not captured by the range imaging.09-04-2008
20110206239INPUT APPARATUS, REMOTE CONTROLLER AND OPERATING DEVICE FOR VEHICLE - An input apparatus for a vehicle includes: an operation element operable by an occupant of the vehicle; a biological information acquisition element acquiring biological information of the occupant; an unawakened state detection element detecting an unawakened state of the occupant based on the biological information, wherein the unawakened state is defined by a predetermined state different from an awakened state; and an operation disabling element disabling an operation input from the operation element when the unawakened state detection element detects the unawakened state.08-25-2011
20110206238PHARMACEUTICAL RECOGNITION AND IDENTIFICATION SYSTEM AND METHOD OF USE - An electronic pharmaceutical recognition and identification system is provided along with a method of use. In certain example embodiments a user can take a digital picture of a pharmaceutical with a portable appliance comprising a telephone, then text that picture to a predetermined telephone number, wait a short period of time for a pharmaceutical identification server system to electronically recognize and identify the pharmaceutical in question, and then automatically receive a text message back from the server system that includes various predetermined information regarding the pharmaceutical in question, such as its name, pictures of it, warnings, whether or not a prescription is required, as well as usage and interaction information. Fixed appliances are also provided that can passively interface with a pharmaceutical dispensing system to ensure that the prescribed pharmaceutical is being dispensed.08-25-2011
20110206237RECOGNITION APPARATUS AND METHOD THEREOF, AND COMPUTER PROGRAM - A recognition apparatus for recognizing a position and an orientation of a target object, inputs a captured image of the target object captured by an image capturing apparatus; detects a plurality of feature portions from the captured image, and to extract a plurality of feature amounts indicating image characteristics in each of the plurality of feature portions; inputs property information indicating respective physical properties in the plurality of feature portions on the target object; inputs illumination information indicating an illumination condition at the time of capturing the captured image; determines respective degrees of importance of the plurality of extracted feature amounts based on the respective physical properties indicated by the property information and the illumination condition indicated by the illumination information; and recognizes the position and the orientation of the target object based on the plurality of feature amounts and the respective degrees of importance thereof.08-25-2011
20100278383SYSTEM AND METHOD FOR RECOGNITION OF A THREE-DIMENSIONAL TARGET - A system for recognition of a target three-dimensional object is disclosed. The system may include a photon-counting detector and a three-dimensional integral imaging system. The three-dimensional integral imaging system may be positioned between the photon-counting detector and the target three-dimensional object.11-04-2010
20100278388SYSTEM AND METHOD FOR GENERATING A DYNAMIC BACKGROUND - A system and methodology that counts the number of moving objects, including pedestrians, within predetermined areas. According to certain embodiments, a system comprises an image sensing device and a data processing device. The image sensing device is situated at a predetermined area. The image sensing device retrieves a series of images of the moving objects within the predetermined area. The data processing device is coupled to the image sensing device. The data processing device processes the retrieved images to generate a dynamic background of the predetermined area and determine a flow of the moving objects thereon.11-04-2010
20100278384Human body pose estimation - Techniques for human body pose estimation are disclosed herein. Depth map images from a depth camera may be processed to calculate a probability that each pixel of the depth map is associated with one or more segments or body parts of a body. Body parts may then be constructed of the pixels and processed to define joints or nodes of those body parts. The nodes or joints may be provided to a system which may construct a model of the body from the various nodes or joints.11-04-2010
20120294490Secondary Market And Vending System For Devices - A recycling kiosk for recycling and financial remuneration for submission of a mobile telephone is disclosed herein. The recycling kiosk includes an inspection area with at least one camera and a plurality of electrical connectors in order to perform a visual analysis and an electrical analysis of the mobile telephone for determination of a value of the mobile telephone. The recycling kiosk also includes a processor, a display and a user interface.11-22-2012
20120294491System and Method for Automatic Registration Between an Image and a Subject - A patient defines a patient space in which an instrument can be tracked and navigated. An image space is defined by image data that can be registered to the patient space. A tracking device can be connected to a member in a known manner that includes imageable portions that generate image points in the image data. The tracking device can be tracked to register patient space to image space.11-22-2012
20120294489METHOD FOR AUTOMATICALLY FOLLOWING HAND MOVEMENTS IN AN IMAGE SEQUENCE - A method for following hand movements in an image flow, includes receiving an image flow in real time, locating in each image in the received image flow a hand contour delimiting an image zone of the hand, extracting the postural characteristics from the image zone of the hand located in each image, and determining the hand movements in the image flow from the postural characteristics extracted from each image. The extraction of the postural characteristics of the hand in each image includes locating in the image zone of the hand the center of the palm of the hand by searching for a pixel of the image zone of the hand the furthest from the hand contour.11-22-2012
20120294488Image Capture and Identification System and Process - A digital image of the object is captured and the object is recognized from plurality of objects in a database. An information address corresponding to the object is then used to access information and initiate communication pertinent to the object.11-22-2012
20120294483Image Analysis for Disposal of Explosive Ordinance and Safety Inspections - Hazardous objects in the field of explosives ordnance disposal or safety controls are identified using a sensor and image data generating arrangement and a comparison unit. The sensor and image data generating arrangement examines the object and produces an image thereof, which is compared by the comparison unit to known stored reference images. These reference images are digital images of reference objects. In this manner safety controls and explosives ordnance disposals can be organized safely and efficiently.11-22-2012
20120294485ENVIRONMENT RECOGNITION DEVICE AND ENVIRONMENT RECOGNITION METHOD - There are provided an environment recognition device and an environment recognition method. The device obtains position information of a target portion in a detection area, including a relative distance from a subject vehicle; groups continuous target portions into a target object of which position differences in a width direction vertical to an advancing direction of the vehicle and in a depth direction parallel to the advancing direction fall within a first distance; determines that the target object is a candidate of a wall, when the target portions forming the target object forms a tilt surface tilting at a predetermined angle or more with respect to a plane vertical to the advancing direction; and determines that the continuous wall candidates of which position differences in the width and depth directions among the wall candidates fall within a second predetermined distance longer than the first predetermined distance are a wall.11-22-2012
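The first grouping step in the entry above (merging target portions whose width-direction and depth-direction position differences both fall within a first distance) can be sketched with a small union-find; the coordinates and the distance value are illustrative, not taken from the patent:

```python
def group_portions(portions, first_distance=1.0):
    """Group target portions, given as (width, depth) positions relative
    to the subject vehicle, when their differences in both directions
    fall within first_distance. Uses a small union-find."""
    parent = list(range(len(portions)))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path halving
            i = parent[i]
        return i

    for i, (xi, zi) in enumerate(portions):
        for j, (xj, zj) in enumerate(portions[:i]):
            if abs(xi - xj) <= first_distance and abs(zi - zj) <= first_distance:
                parent[find(i)] = find(j)

    groups = {}
    for i, p in enumerate(portions):
        groups.setdefault(find(i), []).append(p)
    return list(groups.values())

# Two well-separated clusters of portions become two target objects.
objects = group_portions([(0.0, 0.0), (0.5, 0.5), (5.0, 5.0), (5.4, 5.2)])
```

The wall-candidate merging described afterwards would repeat the same grouping over the candidates with the second, longer distance.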
20120294484ENVIRONMENT RECOGNITION DEVICE AND ENVIRONMENT RECOGNITION METHOD - There are provided an environment recognition device and an environment recognition method. The environment recognition device retains beforehand shape information that is information on a shape of a specific object; obtains a luminance of each of the target portions formed by dividing a detection area, and extracts a target portion including an edge; obtains a relative distance of the target portion including an edge; and determines a specific object indicated by the shape information by performing a Hough transform on the target portion having the edge based on the shape information according to the relative distance.11-22-2012
20120294481ENVIRONMENT RECOGNITION DEVICE AND ENVIRONMENT RECOGNITION METHOD - There are provided an environment recognition device and an environment recognition method. The environment recognition device obtains a luminance of each of a plurality of blocks formed by dividing a detection area; derives an edge direction based on a direction in which an edge of the luminance of each block extends; associates the blocks with each other based on the edge direction so as to generate an edge trajectory; groups a region enclosed by the plurality of edge trajectories as a target object; and determines the target object as a specific object.11-22-2012
20120294478SYSTEMS AND METHODS FOR IDENTIFYING GAZE TRACKING SCENE REFERENCE LOCATIONS - A system is provided for identifying reference locations within the environment of a device wearer. The system includes a scene camera mounted on eyewear or headwear coupled to a processing unit. The system may recognize objects with known geometries that occur naturally within the wearer's environment or objects that have been intentionally placed at known locations within the wearer's environment. One or more light sources may be mounted on the headwear that illuminate reflective surfaces at selected times and wavelengths to help identify scene reference locations and glints projected from known locations onto the surface of the eye. The processing unit may control light sources to adjust illumination levels in order to help identify reference locations within the environment and corresponding glints on the surface of the eye. Objects may be identified substantially continuously within video images from scene cameras to provide a continuous data stream of reference locations.11-22-2012
20110268319DETECTING AND TRACKING OBJECTS IN DIGITAL IMAGES - There is provided an improved solution for detecting and tracking objects in digital images. The solution comprises selecting a neighborhood for each pixel under observation, the neighborhood being of known size and form, and reading pixel values of the neighborhood. Further the solution comprises selecting at least one set of coefficients for weighting each neighborhood such that each pixel value of each neighborhood is weighted with at least one coefficient; searching for an existence of at least one object feature at each pixel under observation on the basis of a combination of weighted pixel values at each neighborhood; and verifying the existence of the object in the digital image on the basis of the searches of existence of at least one object feature at a predetermined number of pixels.11-03-2011
20110268321PERSON-JUDGING DEVICE, METHOD, AND PROGRAM - A person-judging device comprises: an obstruction storage which stores information indicating an area of an obstruction which is extracted from an image based on a video signal from an external camera, the obstruction being extracted from the image; head portion range calculation means which, when a portion of an object which is extracted from the image is hidden by the obstruction, assumes that a potential range of grounding points where the object touches a reference face on the image is the area of the obstruction which is stored in the obstruction storage, and which, based on the assumed range and the correlation between the height of a person and the size and position of the head portion that are previously provided, calculates the potential range of the head portion on the image by assuming that a portion farthest from the grounding points of the object is the head portion of the person; and head portion detection means that judges whether an area including a shape corresponding to the head portion exists in the calculated range of the head portion.11-03-2011
20110268316MULTIPLE CENTROID CONDENSATION OF PROBABILITY DISTRIBUTION CLOUDS - Systems and methods are disclosed for identifying objects captured by a depth camera by condensing classified image data into centroids of probability that captured objects are correctly identified entities. Output exemplars are processed to detect spatially localized clusters of non-zero probability pixels. For each cluster, a centroid is generated, generally resulting in multiple centroids for each differentiated object. Each centroid may be assigned a confidence value, indicating the likelihood that it corresponds to a true object, based on the size and shape of the cluster, as well as the probabilities of its constituent pixels.11-03-2011
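The condensation step in the entry above can be sketched as follows: threshold the per-pixel probability map, flood-fill each spatially connected cluster, and emit one probability-weighted centroid per cluster. Using the summed probability mass as the confidence value is a simplifying assumption; the patent also weighs the cluster's size and shape:

```python
from collections import deque

def condense_centroids(prob, threshold=0.1):
    """Condense a per-pixel probability map into cluster centroids.
    Each connected cluster of above-threshold pixels yields one
    probability-weighted centroid (row, col, confidence)."""
    rows, cols = len(prob), len(prob[0])
    seen = [[False] * cols for _ in range(rows)]
    centroids = []
    for r in range(rows):
        for c in range(cols):
            if prob[r][c] <= threshold or seen[r][c]:
                continue
            # Flood-fill one 4-connected cluster of non-zero probability.
            queue, cluster = deque([(r, c)]), []
            seen[r][c] = True
            while queue:
                y, x = queue.popleft()
                cluster.append((y, x, prob[y][x]))
                for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ny, nx = y + dy, x + dx
                    if (0 <= ny < rows and 0 <= nx < cols
                            and not seen[ny][nx] and prob[ny][nx] > threshold):
                        seen[ny][nx] = True
                        queue.append((ny, nx))
            mass = sum(p for _, _, p in cluster)
            cy = sum(y * p for y, _, p in cluster) / mass
            cx = sum(x * p for _, x, p in cluster) / mass
            centroids.append((cy, cx, mass))
    return centroids

# Two separated probability clouds condense into two centroids.
prob = [
    [0.9, 0.8, 0.0, 0.0],
    [0.0, 0.0, 0.0, 0.0],
    [0.0, 0.0, 0.5, 0.5],
    [0.0, 0.0, 0.5, 0.5],
]
centroids = condense_centroids(prob)
```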
20100220892DRIVER IMAGING APPARATUS AND DRIVER IMAGING METHOD - An imaging mechanism captures an image of a face of a driver of a vehicle. A first image processor performs image processing on a wide portion of the face of the driver in a first image using a first image captured by the imaging mechanism. A second image processor performs image processing on a part of the face of the driver in a second image captured by the imaging mechanism at a higher exposure than the exposure of the first image, using the second image.09-02-2010
20090003652REAL-TIME FACE TRACKING WITH REFERENCE IMAGES - A method of tracking a face in a reference image stream using a digital image acquisition device includes acquiring a full resolution main image and an image stream of relatively low resolution reference images each including one or more face regions. One or more face regions are identified within two or more of the reference images. A relative movement is determined between the two or more reference images. A size and location are determined of the one or more face regions within each of the two or more reference images. Concentrated face detection is applied to at least a portion of the full resolution main image in a predicted location for candidate face regions having a predicted size as a function of the determined relative movement and the size and location of the one or more face regions within the reference images, to provide a set of candidate face regions for the main image.01-01-2009
20090003651OBJECT SEGMENTATION RECOGNITION - A system for segmenting radiographic images of a cargo container can include an object segmentation recognition module adapted to perform a series of functions. The functions can include receiving a plurality of radiographic images of a cargo container, each image generated using a different energy level and segmenting each of the radiographic images using one or more segmentation modules to generate segmentation data representing one or more image segments. The functions can also include identifying image layers within the radiographic images using a plurality of layer analysis modules by providing the plurality of radiographic images and the segmentation data as input to the layer analysis modules, and determining adjusted atomic number values for an atomic number image based on the image layers. The functions can include adjusting the atomic number image based on the adjusted atomic number values for the regions of interest to generate an adjusted atomic number image and identifying regions of interest within the adjusted atomic number image based on an image characteristic. The functions can also include providing coordinates of each region of interest and the adjusted atomic number image as output.01-01-2009
20100135529Systems and methods for tracking images - Image tracking as described herein can include: segmenting a first image into regions; determining an overlap of intensity distributions in the regions of the first image, and segmenting a second image into regions such that an overlap of intensity distributions in the regions of the second image is substantially similar to the overlap of intensity distributions in the regions of the first image. In certain embodiments, images can depict a heart at different points in time and the tracked regions can be the left ventricle cavity and the myocardium. In such embodiments, segmenting the second image can include generating first and second curves that track the endocardium and epicardium boundaries, and the curves can be generated by minimizing functions containing a coefficient based on the determined overlap of intensity distributions in the regions of the first image.06-03-2010
20080240505Feature information collecting apparatuses, methods, and programs - Apparatuses, methods, and programs acquire vehicle position information that represents a current position of a vehicle, acquire image information of a vicinity of the vehicle, and carry out image recognition processing of a target feature that is included in the image information to determine a position of the target feature. The apparatuses, methods, and programs store recognition position information that is based on the acquired vehicle position information and that represents the determined recognition position of the target feature. The apparatuses, methods, and programs determine an estimated position of the target feature based on a set of a plurality of stored recognition position information for the target feature, the plurality of stored recognition position information for the target feature being stored due to the target feature being subject to image recognition processing a plurality of times.10-02-2008
20080240498RUNWAY SEGMENTATION USING VERTICES DETECTION - Methods and apparatus are provided for locating a runway by detecting an object (or blob) within data representing a region of interest provided by a vision sensor. The vertices of the object are determined by finding points on the contour of the object nearest to the four corners of the region of interest. The runway can then be indicated to the pilot of the aircraft by extending lines between the vertices to identify its location.10-02-2008
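Finding the blob's vertices as the contour points nearest the four corners of the region of interest, as the entry above describes, can be sketched as:

```python
def find_vertices(contour, width, height):
    """Pick four blob vertices: for each corner of the region of
    interest, take the contour point nearest to that corner."""
    corners = [(0, 0), (width - 1, 0), (0, height - 1), (width - 1, height - 1)]

    def nearest(corner):
        cx, cy = corner
        # Squared Euclidean distance is enough for ranking.
        return min(contour, key=lambda p: (p[0] - cx) ** 2 + (p[1] - cy) ** 2)

    return [nearest(c) for c in corners]

# Contour points (x, y) of a quadrilateral blob in a 10x8 region of interest.
contour = [(2, 2), (4, 2), (7, 2), (7, 6), (2, 6), (2, 4)]
vertices = find_vertices(contour, 10, 8)
```

Extending lines between the four returned vertices then outlines the runway for display.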
20090034791Image processing for person and object Re-identification - A device and method for processing an image to create appearance and shape labeled images of a person or object captured within the image. The appearance and shape labeled images are unique properties of the person or object and can be used to re-identify the person or object in subsequent images. The appearance labeled image is an aggregate of pre-stored appearance labels that are assigned to image segments of the image based on calculated appearance attributes of each image segment. The shape labeled image is an aggregate of pre-stored shape labels that are assigned to image segments of the image based on calculated shape attributes of each image segment. An identifying descriptor of the person or object can be computed based on both the appearance labeled image and the shape labeled image. The descriptor can be compared with other descriptors of later captured images to re-identify a person or object.02-05-2009
20090034790Method for customs inspection of baggage and cargo - A method and system of inspecting baggage to be transported from a location of origin to a destination are provided, including generating scan data representative of a piece of baggage while the baggage is at the location of origin, and storing the scan data in a database. Rendered views representative of the content of the baggage are provided, based on the scan data retrieved from the database over a network. The rendered views are presented at a destination different from the origin.02-05-2009
20100128926ITERATIVE MOTION SEGMENTATION - An image processing device which simultaneously secures and extracts a background image, at least two object images, a shape of each object image and motion of each object image, from among plural images, the image processing device including an image input unit (05-27-2010
20120294477Searching for Images by Video - Techniques are described for a user submitting a video clip as a query. A process retrieves images and information associated with the images in response to the query. The process decomposes the video clip into a sequence of frames to extract the features in a frame and to quantize the extracted features into descriptive words. The process further tracks the extracted features as points in the frame, matching a first set of points to a corresponding second set of points in consecutive frames to construct a sequence of points. Then the process identifies the points that satisfy the criteria of being stable and being centrally located in the frame, so as to represent the video clip as a bag of descriptive words for searching for images and information related to the video clip.11-22-2012
20120294482ENVIRONMENT RECOGNITION DEVICE AND ENVIRONMENT RECOGNITION METHOD - There are provided an environment recognition device and an environment recognition method. The environment recognition device includes: a position information obtaining unit that obtains position information of a target portion in a detection area, the position information including a relative distance to a subject vehicle; a grouping unit that groups the target portions as a target object based on the position information; a luminance obtaining unit that obtains a luminance of an image of the target object; a luminance distribution generating unit that generates a histogram of the luminance of the image of the target object; and a floating substance determining unit that determines whether or not the target object is a floating substance based on a statistical analysis on the histogram.11-22-2012
20090169054METHOD OF ADJUSTING SELECTED WINDOW SIZE OF IMAGE OBJECT - A method of adjusting the selected window size of an image object is applicable for tracking a target object in a video. The video includes a plurality of frames, and the target object has a display range changing with the playback of each frame. According to a variation trend of the display range of the target object, whether the number of variations corresponding to the variation trend reaches a threshold value is recorded, and then the selected window size is reset, such that the target object is enclosed by a selected window having a size closer to that of the target object.07-02-2009
20080273753System for Detecting Image Abnormalities - An image capture system for capturing images of an object, the image capture system comprising a moving platform such as an airplane, one or more image capture devices mounted to the moving platform, and a detection computer. The image capture device has a sensor for capturing an image. The detection computer executes an abnormality detection algorithm for detecting an abnormality in an image immediately after the image is captured and then automatically and immediately causing a re-shoot of the image. Alternatively, the detection computer sends a signal to the flight management software executed on a computer system to automatically schedule a re-shoot of the image. When the moving platform is an airplane, the detection computer schedules a re-shoot of the image such that the image is retaken before landing the airplane.11-06-2008
20080273756POINTING DEVICE AND MOTION VALUE CALCULATING METHOD THEREOF - A pointing device is provided. A sensor generates a motion detection signal by sensing motion. A calculator receives the motion detection signal, calculates a motion value based on the motion detection signal, calculates a conversion motion value based on an angle of the motion value, and outputs the conversion motion value. An interface outputs the conversion motion value inputted from the calculator. By limiting a motion angle, the pointing device can provide a positioning operation suitable for a motion intended by a user. The user can optionally use a motion control method in all directions as needed.11-06-2008
20080273755CAMERA-BASED USER INPUT FOR COMPACT DEVICES - A camera is used to detect a position and/or orientation of an object such as a user's finger as an approach for providing user input, for example to scroll through data, control a cursor position, and provide input to control a video game based on a position of a user's finger. Input may be provided to a handheld device, including, for example, cell phones, video game systems, portable music (MP3) players, portable video players, personal digital assistants (PDAs), audio/video equipment remote controls, and consumer digital cameras, or other types of devices.11-06-2008
20080273750Apparatus and Method For Automatically Detecting Objects - A device automatically detects boundary lines on the road from an image captured by a camera mounted on the vehicle. The device includes a controller that performs image processing on the image to compute the velocity information for each pixel in the image, and, on the basis of the computed velocity information for each pixel in the image, extracts the pixels that contain velocity information, detects the oblique lines formed by the extracted pixels, and detects the boundary lines on the road on the basis of the detected oblique lines.11-06-2008
20080273754APPARATUS AND METHOD FOR DEFINING AN AREA OF INTEREST FOR IMAGE SENSING - A method for defining an area of interest or a trip line using a camera by tracking the movement of a person within a field of view of the camera. The area of interest is defined by a path or boundary indicated by the person's movement. Alternatively, a trip line comprising a path between a starting point and a stopping point may be defined by tracking the movement of the person within the camera's field of view. An occupancy sensor may be structured to sense the movement of an occupant within an area, and to adjust the lighting in the area accordingly if the occupant enters the area of interest or crosses the trip line. The occupancy sensor includes an image sensor coupled to a processor, an input facility such as a pushbutton to receive input, and an output facility such as an electronic beeper to provide feedback to the person defining the area of interest or the trip line.11-06-2008
20080273752SYSTEM AND METHOD FOR VEHICLE DETECTION AND TRACKING - A method for vehicle detection and tracking includes acquiring video data including a plurality of frames, comparing a first frame of the acquired video data against a set of one or more vehicle detectors to form vehicle hypotheses, pruning and verifying the vehicle hypotheses using a set of coarse-to-fine constraints to detect a vehicle, and tracking the detected vehicle within one or more subsequent frames of the acquired video data by fusing shape template matching with one or more vehicle detectors.11-06-2008
20080273751Detection and Tracking of Moving Objects from a Moving Platform in Presence of Strong Parallax - Among other things, methods, systems and computer program products are described for detecting and tracking a moving object in a scene. One or more residual pixels are identified from video data. At least two geometric constraints are applied to the identified one or more residual pixels. A disparity of the one or more residual pixels to the applied at least two geometric constraints is calculated. Based on the calculated disparity, the one or more residual pixels are classified as belonging to parallax or independent motion, and the parallax classified residual pixels are filtered. Further, a moving object is tracked in the video data. Tracking the object includes representing the calculated disparity in probabilistic likelihood models. Tracking the object also includes accumulating the probabilistic likelihood models within a number of frames during the parallax filtering. Further, tracking the object includes, based on the accumulated probabilistic likelihood models, extracting an optimal path of the moving object.11-06-2008
20080279421OBJECT DETECTION USING COOPERATIVE SENSORS AND VIDEO TRIANGULATION - Methods and apparatus are provided for detecting and tracking a target. Images are captured from a field of view by at least two cameras mounted on one or more platforms. These images are analyzed to identify landmarks within the images which can be used to track the target's position from frame to frame. The images are fused (merged) with information about the target or platform position from at least one sensor to detect and track the target. The target's position with respect to the position of the platform is displayed, or the position of the platform relative to the target is displayed.11-13-2008
20080279420VIDEO AND AUDIO MONITORING FOR SYNDROMIC SURVEILLANCE FOR INFECTIOUS DISEASES - We present, in exemplary embodiments of the present invention, novel systems and methods for syndromic surveillance that can automatically monitor symptoms that may be associated with the early presentation of a syndrome (e.g., fever, coughing, sneezing, runny nose, sniffling, rashes). Although not so limited, the novel surveillance systems described herein can be placed in common areas occupied by a crowd of people, in accordance with local and national laws applicable to such surveillance. Common areas may include public areas (e.g., an airport, train station, sports arena) and private areas (e.g., a doctor's waiting room). The monitored symptoms may be transmitted to a responder (e.g., a person, an information system) outside of the surveillance system, such that the responder can take appropriate action to identify, treat, and quarantine potentially infected individuals, as necessary.11-13-2008
20080285798Obstacle detection apparatus and a method therefor - An apparatus of detecting an object on a road surface includes a stereo set of video cameras mounted on a vehicle to produce right and left images, a storage to store the right and left images, a parameter computation unit to compute a parameter representing road planarity constraint based on the images of the storage, a corresponding point computation unit to compute correspondence between a first point on one of the right and left images and a second point on the other, which corresponds to the first point, based on the parameter, an image transformation unit to produce a transformed image from the one image using the correspondence, and a detector to detect an object having a dimension larger than a given value in a vertical direction with respect to the road surface, using the correspondence and the transformed image.11-20-2008
20080285801Visual Tracking Eye Glasses In Visual Head And Eye Tracking Systems - The invention relates to the application area of camera-based head and eye tracking systems. The performance of such systems typically suffers when eye glasses are worn, as the frames of the glasses interfere with the tracking of the facial features utilized by the system. This invention describes how the appearance of the glasses can be utilized by such a tracking system, not only eliminating the interference of the glasses with the tracking but also aiding the tracking of the facial features. The invention utilizes a shape model of the glasses which can be tracked by a specialized tracker to derive 3D pose information.11-20-2008
20080285800INFORMATION PROCESSING APPARATUS AND METHOD, AND PROGRAM - An information processing apparatus includes an obtaining unit configured to obtain feature quantities of an image; and a detector configured to detect a gazing point at which a user gazes within the image, wherein, among the feature quantities obtained by the obtaining unit, the feature quantities extracted from a predetermined range of the image containing the gazing point detected by the detector are stored.11-20-2008
20080285797METHOD AND SYSTEM FOR BACKGROUND ESTIMATION IN LOCALIZATION AND TRACKING OF OBJECTS IN A SMART VIDEO CAMERA - Aspects of a method and system for change detection in localization and tracking of objects in a smart video camera are provided. A programmable surveillance video camera comprises processors for detecting objects in a video signal based on an object mask. The processors may generate a textual representation of the video signal by utilizing a description language to indicate characteristics of the detected objects, such as shape, texture, color, and/or motion, for example. The object mask may be based on a detection field value generated for each pixel in the video signal by comparing a first observation field and a second observation field associated with each of the pixels. The first observation field may be based on a difference between an input video signal value and an estimated background value while the second observation field may be based on a temporal difference between first observation fields.11-20-2008
20080304707Information Processing Apparatus, Information Processing Method, and Computer Program - An information processing apparatus that executes processing for creating an environmental map includes a camera that photographs an image, a self-position detecting unit that detects a position and a posture of the camera on the basis of the image, an image-recognition processing unit that detects an object from the image, a data constructing unit that is inputted with information concerning the position and the posture of the camera and information concerning the object and executes processing for creating or updating the environmental map, and a dictionary-data storing unit having stored therein dictionary data in which object information is registered. The image-recognition processing unit executes processing for detecting an object from the image acquired by the camera with reference to the dictionary data. The data constructing unit applies the three-dimensional shape data registered in the dictionary data to the environmental map and executes object arrangement on the environmental map.12-11-2008
20080304705SYSTEM AND METHOD FOR SIDE VISION DETECTION OF OBSTACLES FOR VEHICLES - This invention provides a system and method for object detection and collision avoidance for objects and vehicles located behind the cab or front section of an elongated, and possibly tandem, vehicle. Through the use of narrow-baseline stereo vision that can be vertically oriented relative to the ground/road surface, the system and method can employ relatively inexpensive cameras, in a stereo relationship, on a low-profile mounting, to perform reliable detection with good range discrimination. The field of detection is sufficiently behind and to the side of the rear area to assure an adequate safety zone in most instances. Moreover, this system and method allow all equipment to be maintained on the cab of a tandem vehicle, rather than the interchangeable, and more damage-prone, cargo section and/or trailer. One or more cameras can be mounted on, or within, the mirror on each side, on aerodynamic fairings, or at other exposed locations of the vehicle. Image signals received from each camera can be conditioned before they are matched and compared for disparities viewed above the ground surface, according to predetermined disparity criteria.12-11-2008
20080304706INFORMATION PROCESSING APPARATUS AND INFORMATION PROCESSING METHOD - There is provided an information processing apparatus, comprising: an obtaining unit which obtains video data captured by an image capturing apparatus disposed in a monitored space, location information regarding a location of a moving object in the monitored space, and existence information regarding a capturing period of the moving object in the video data; and a display processing unit which processes a display of a trajectory of the moving object in the monitored space based on the location information, the display processing unit processing a display of the trajectory so that the portion of the trajectory that corresponds to the capturing period is distinguishable from the other portions of the trajectory, based on the existence information.12-11-2008
20080240500IMAGE PROCESSING METHODS - A method of image processing, the method comprising receiving an image frame including a plurality of pixels, each of the plurality of pixels including an image information, conducting a first extraction based on the image information to identify foreground pixels related to a foreground object in the image frame and background pixels related to a background of the image frame, scanning the image frame in regions, identifying whether each of the regions includes a sufficient number of foreground pixels, identifying whether each of regions including a sufficient number of foreground pixels includes a foreground object, clustering regions including a foreground object into at least one group, each of the at least one group corresponding to a different foreground object in the image frame, and conducting a second extraction for each of at least one group to identify whether a foreground pixel in the each of the at least one group is to be converted to a background pixel.10-02-2008
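The "scanning the image frame in regions" step described above amounts to tiling a binary foreground mask and keeping the tiles whose share of foreground pixels passes a threshold. A minimal sketch under assumed parameters (8x8 blocks and a 25% threshold are both illustrative, not from the patent):

```python
import numpy as np

def regions_with_foreground(mask, block=8, min_ratio=0.25):
    """Scan a binary foreground mask in block x block tiles and return
    the top-left corners of tiles holding a sufficient share of
    foreground pixels."""
    h, w = mask.shape
    hits = []
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            if mask[y:y + block, x:x + block].mean() >= min_ratio:
                hits.append((y, x))
    return hits
```

The surviving tiles would then be clustered into groups, one per foreground object, before the second extraction pass.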
20110007939Image-based tracking - A method of image-tracking by using an image capturing device. The method comprises: performing an image-capture of a scene by using an image capturing device; and tracking movement of the image capturing device by analyzing a set of images by using an image processing algorithm.01-13-2011
20110007938Thermal and short wavelength infrared identification systems - A method and apparatus for preventing fratricide including an emitter that emits a signaling code at a wavelength, the signaling code representing a coded message; a receiver that captures an image of a field of view including the emitter and generates image information corresponding to the captured image; a translation system that: receives the image information, and decodes the coded message from the image information; and a output device that outputs the decoded message.01-13-2011
20080212836Visual Tracking Using Depth Data - Real-time visual tracking using depth sensing camera technology results in illumination-invariant tracking performance. Depth sensing (time-of-flight) cameras provide real-time depth and color images of the same scene. Depth windows regulate the tracked area by controlling shutter speed. A potential field is derived from the depth image data to provide edge information of the tracked target. A mathematically representable contour can model the tracked target. Based on the depth data, determining a best fit between the contour and the edge of the tracked target provides position information for tracking. Applications using depth sensor based visual tracking include head tracking, hand tracking, body-pose estimation, robotic command determination, and other human-computer interaction systems.09-04-2008
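The "potential field derived from the depth image data to provide edge information" can be approximated, at its simplest, by thresholding the depth gradient magnitude; a contour model would then be fit against this edge map. A sketch (the threshold value and function name are assumptions, not from the patent):

```python
import numpy as np

def depth_edges(depth, thresh=0.4):
    """Crude edge map from a depth image: mark pixels where the depth
    gradient magnitude exceeds thresh. These edges stand in for the
    'edge information of the tracked target' the contour is fit to."""
    gy, gx = np.gradient(depth.astype(float))
    return np.hypot(gx, gy) > thresh
```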
20080205702BACKGROUND IMAGE GENERATION APPARATUS - The information of a detection area is obtained by using radar and sent to a mobile body detection unit, which detects the position of a mobile body existing within the detection area. A nonexistence zone identification unit identifies a zone excluding a predetermined range surrounding the mobile body, the information of the detection area at that time is obtained, and a background image generation unit accurately generates a zone which does not include a mobile body. The information of the detection area is then obtained by a camera, and a difference process unit detects the difference between the generated background image and the aforementioned information, thereby detecting an accurate position of the mobile body.08-28-2008
20130216094SYSTEMS, METHODS AND COMPUTER PROGRAM PRODUCTS FOR IDENTIFYING OBJECTS IN VIDEO DATA - Image based operating systems and methods are provided that identify objects in video data and then take appropriate action in a wide variety of environments. In some embodiments, the image based operating systems and methods allow a user to activate other devices and systems by making a gesture.08-22-2013
20130216092Image Capture - An apparatus including a processor configured to move automatically a sub-set of pixels defining a target captured image within a larger set of available pixels in a direction of an edge of the target captured image when a defined area of interest within the target captured image approaches the edge of the target captured image and configured to provide a pre-emptive user output when the sub-set of pixels approaches an edge of the set of available pixels.08-22-2013
20080199043Image Enhancement in Sports Recordings - A video signal representing rapid ball movement is produced from a series of source images. An initial image position for the moving ball is identified by, for each image, producing a difference image between sequential images. In the difference image, image elements representing a contents alteration below a threshold are allocated a first value, and those representing a contents alteration above or equal to the threshold are allocated a second value. A set of candidates is then identified, where each candidate is represented by a group of neighboring image elements that all contain the second value. The group must fulfill a ball size criterion. A ball selection algorithm selects an initial image position from the set of ball candidates. The ball is tracked, and a composite image sequence is generated wherein a synthetic trace representing the path of the moving ball is shown as successively added image data.08-21-2008
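The difference-image step described above (a first value where the content alteration is below the threshold, a second value where it is above or equal) reduces to an absolute frame difference plus a threshold. A minimal sketch (dtype handling and names are illustrative):

```python
import numpy as np

def difference_mask(prev, curr, thresh):
    """Binary difference image between sequential frames: 0 where the
    content alteration is below thresh, 1 where it is above or equal."""
    diff = np.abs(curr.astype(int) - prev.astype(int))
    return (diff >= thresh).astype(np.uint8)
```

Groups of neighboring 1-pixels in this mask would then be screened against the ball-size criterion to form the candidate set.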
20080205700Apparatus and Method For Assisted Target Designation - A method for assisting a user to designate a target as viewed on a video image displayed on a video display by use of a user operated pointing device. The method includes the steps of evaluating, prior to target designation, one or more tracking functions indicative of the result that would be generated by designating a target at a current pointing direction of the pointing device, and providing to the user, prior to target designation, an indication indicative of the result.08-28-2008
20110268318PHOTO DETECTING APPARATUS AND SYSTEM HAVING THE SAME - A photo detecting apparatus may include a signal processing unit, a control register unit, and a register data changing unit. The signal processing unit is configured to process electric signals converted from incident light to generate image data. The control register unit supplies a set value to the signal processing unit, the set value controlling operation of the signal processing unit. The control register unit stores a first set value supplied through a first bus, the first set value corresponding to an initial set value based on a decoded external control signal. In addition, the register data changing unit supplies a second set value to the control register unit through a second bus, separate from the first bus, when the first set value is to be changed.11-03-2011
20100135530METHODS AND SYSTEMS FOR CREATING A HIERARCHICAL APPEARANCE MODEL - A method for creating an appearance model of an object includes receiving an image of the object and creating a hierarchical appearance model of the object from the image of the object. The hierarchical appearance model has a plurality of layers, each layer including one or more nodes. Nodes in each layer contain information of the object with a corresponding level of detail. Nodes in different layers of the hierarchical appearance model correspond to different levels of detail.06-03-2010
20100135531Position Alignment Method, Position Alignment Device, and Program - A position alignment method, a position alignment device, and a program in which processing load can be reduced are proposed. A group of some points in a first set of points extracted from an object appearing in one image and a group of some points in a second set of points extracted from an object appearing in another image are used as a reference, and the second set of points is aligned with respect to the first set of points. Thereafter, all the points in the first set of points and all the points in the aligned second set of points are used as a reference, and the second set of points is aligned with respect to the first set of points.06-03-2010
20100135528ANALYZING REPETITIVE SEQUENTIAL EVENTS - Techniques for analyzing one or more sequential events performed by a human actor to evaluate efficiency of the human actor are provided. The techniques include identifying one or more segments in a video sequence as one or more components of one or more sequential events performed by a human actor, integrating the one or more components into one or more sequential events by incorporating a spatiotemporal model and one or more event detectors, and analyzing the one or more sequential events to analyze behavior of the human actor.06-03-2010
20080240503Image Processing Apparatus And Image Pickup Apparatus Mounting The Same, And Image Processing Method - A coding unit codes a moving image. An object detector detects an object from within a picture contained in the moving image, and generates, for each picture, object detection information containing at least the number of objects detected within an identical picture. When a codestream is generated from coded data generated by the coding unit, a stream generator describes the object detection information in a prescribed region of the codestream.10-02-2008
20110206240DETECTING CONCEALED THREATS - Potential threat items may be concealed inside objects, such as portable electronic devices, that are subject to imaging for example, at a security checkpoint. Data from an imaged object can be compared to pre-determined object data to determine a class for the imaged object. Further, an object can be identified inside a container (e.g., a laptop inside luggage). One-dimensional Eigen projections can be used to partition the imaged object into partitions, and feature vectors from the partitions and the object image data can be used to generate layout feature vectors. One or more layout feature vectors can be compared to training data for threat versus non-threat-containing items from the imaged object's class to determine if the imaged object contains a potential threat item.08-25-2011
20090123030Method For The Autostereoscopic Presentation Of Image Information With Adaptation To Suit Changes In The Head Position Of The Observer - For continuous tracking without noticeable skips during physical changes in head position, the intensities of all subpixels of the matrix screen are reduced in order to form intensity focuses for subpixel groups behind barrier elements, which comprise a number n of subpixels, including a subpixel reserve, in the image lines. In the case of parallel alterations, these intensity focuses are then displaced by a constant absolute value continuously through directly adjacent subpixels and also through subpixel group boundaries with different stereo image views. Distance changes involve the intensity focuses being increasingly widened or compressed relative to the screen edges. The intensities of the individual subpixels can be altered by means of simple multiplication by standardized constant or variable intensity factors which can be ascertained as a function of motion.05-14-2009
20090304233RECOGNITION APPARATUS AND RECOGNITION METHOD - A barcode recognition apparatus includes an image interface, an image analysis unit, an image conversion unit, and a bar recognition unit. The image interface acquires an image including a barcode captured by a camera. The image analysis unit analyzes a characteristic of an input image acquired from the camera, and decides an image conversion method for the conversion from the input image into an image for recognition processing on the basis of the analysis result. The image conversion unit converts the input image into an image for recognition processing by the image conversion method decided by the image analysis unit. The bar recognition unit performs barcode recognition processing for the image for recognition processing obtained by the image conversion unit.12-10-2009
20090304232VISUAL AXIS DIRECTION DETECTION DEVICE AND VISUAL LINE DIRECTION DETECTION METHOD - Provided is a visual axis direction detection device capable of obtaining a highly accurate visual axis direction detection result without performing a particular calibration for each of examinees. The device (12-10-2009
20090087023Method and System for Detecting and Tracking Objects in Images - The invention describes a method and system for detecting and tracking an object in a sequence of images. For each image, the invention determines an object descriptor from a tracking region in the current image, in which the tracking region corresponds to the location of the object in the previous image. A regression function is applied to the descriptor to determine a motion of the object from the previous image to the current image, in which the motion has a matrix Lie group structure. The location of the tracking region is updated using the motion of the object.04-02-2009
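Because the motion returned by the regression function "has a matrix Lie group structure" (for example, 2D affine transforms), updating the tracking region is just composition within that group. A sketch using 3x3 homogeneous matrices (this particular parameterization is an assumption, not from the patent):

```python
import numpy as np

def update_region_pose(region_pose, estimated_motion):
    """Compose the current tracking-region pose with the motion returned
    by the regressor. Both are 3x3 homogeneous (affine) matrices, so the
    group operation is matrix multiplication."""
    return region_pose @ estimated_motion
```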
20090080697Imaging position analyzing method - The imaging position of each of the frames in image data of a plurality of frames captured while a vehicle is traveling is accurately determined.03-26-2009
20090080700PROJECTILE TRACKING SYSTEM - A system and method for determining the track of a projectile use a thermal signature of the projectile. Sequential infrared image frames are acquired from a sensor at a given position. A set of frames containing spots with characteristics consistent with a projectile in flight are identified. A possible projectile track solution for said spots is identified. A thermal signature value for each pixel of each spot of the possible solution is determined. The determined thermal signature is then compared to an actual thermal signature for a substantially similar projectile track to ascertain whether the determined thermal signature substantially matches the actual thermal signature, which indicates that the possible projectile track solution is the correct solution.03-26-2009
20090129631Method of Tracking the Position of the Head in Real Time in a Video Image Stream - The invention relates to a method of tracking the position of the bust of a user on the basis of a video image stream, said bust comprising the user's torso and head, the method comprising the determination of the position of the torso on a first image, in which method a virtual reference frame is associated with the torso on said first image, and in which method, for a second image, a new position of the virtual reference frame is determined on said second image, and a relative position of the head with respect to said new position of the virtual reference frame is measured by comparison with the position of the virtual reference frame on said first image, so as to determine independently the movements of the head and the torso.05-21-2009
20120140988OBSTACLE DETECTION DEVICE AND METHOD AND OBSTACLE DETECTION SYSTEM - An obstacle region candidate point relating unit assumes that a pixel in an image corresponds to a point on a road surface, and associates pixels between images at two times on the basis of the amount of movement of a vehicle in question, a road plane, and a flow of the image estimated. When a pixel corresponds to a shadow of the vehicle in question or the moving object therearound appearing on the road surface, the ratio of intensities of the pixel values of the spectral images between two images should be approximately the same as the ratio of the spectral characteristics of the sunshine in the sun and the shade. Therefore, when the ratio of intensities is approximately the same as the ratio of the spectral characteristics, the obstacle determining unit does not determine that the pixel in question is a point corresponding to the obstacle. Only when the ratio of intensities is not approximately the same as the ratio of the spectral characteristics, the obstacle determining unit determines that the pixel in question is a point corresponding to the obstacle.06-07-2012
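The shadow test in this abstract compares an observed intensity ratio between the two spectral images to the known sun/shade spectral ratio; pixels whose ratios roughly agree are dismissed as shadow rather than flagged as obstacle. A minimal sketch of that comparison (the tolerance value is an illustrative assumption):

```python
def looks_like_shadow(ratio_observed, ratio_sunshade, tol=0.1):
    """True when the observed intensity ratio between the two spectral
    images is approximately the sun/shade spectral ratio, i.e. the pixel
    is plausibly a shadow on the road surface, not an obstacle."""
    return abs(ratio_observed - ratio_sunshade) <= tol * ratio_sunshade
```

Only pixels failing this test would be passed on to the obstacle determining unit as candidate obstacle points.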
20120140983METHOD FOR DETECTION OF SPECIMEN REGION, APPARATUS FOR DETECTION OF SPECIMEN REGION, AND PROGRAM FOR DETECTION OF SPECIMEN REGION - A method for detecting the specimen region includes the first step for the first region detecting unit to detect the first region which is a region with contrast in the first image of an object for observation which is photographed under illumination with visible light, the second step for the second region detecting unit to detect the second region which is a region with contrast in the second image of the object for observation which is photographed under illumination with ultraviolet light, and the third step for the specimen region defining unit to define, based on the first and second regions mentioned above, the specimen region where there exists the specimen in the object for observation.06-07-2012
20130216099IMAGING SYSTEM AND IMAGING METHOD - An imaging system comprises a whole image read out unit for reading out a whole image in a first resolution from an imaging device, a partial image region selecting unit for selecting a region of a partial image in a part of the whole image which is read out, a partial image read out unit for reading out the partial image in the selected region in a second resolution from the imaging device, a characteristic region setting unit for setting a characteristic region, in which a characteristic object exists, within the partial image, a characteristic region image read out unit for reading out an image of the characteristic region, which is set, in a third resolution from the imaging device, and a resolution setting unit for setting such that the first resolution08-22-2013
20090080698Image display apparatus and computer program product - A comprehensive degree of relevance of other moving-picture contents with respect to a moving-picture content to be processed is calculated by using any one of, or all of, content information, frame information, and image characteristics. A virtual space is then displayed in which a visualized content corresponding to a moving-picture content to be displayed, selected based on the degree of relevance, is located at a position away from the layout position of the visualized content corresponding to the moving-picture content to be processed, according to the degree of relevance.03-26-2009
20090097711Detecting apparatus of human component and method thereof - Disclosed are an apparatus and a method of detecting a human component from an input image. The apparatus includes a training database (DB) to store positive and negative samples of a human component, an image processor to calculate a difference image for the input image, a sub-window processor to extract a feature population from a difference image that is calculated by the image processor for the positive and negative samples of a predetermined human component stored in the training DB, and a human classifier to detect a human component corresponding to a human component model using the human component model that is learned from the feature population.04-16-2009
20090161912 METHOD FOR OBJECT DETECTION - In one aspect, the present invention is directed to a method for object detection, the method comprising the steps of: dividing a digital image into a plurality of sub-windows of substantially the same dimensions; processing the image of each of the sub-windows by a cascade of homogeneous classifiers (each of the homogeneous classifiers produces a CRV, which is a value relative to the likelihood of a sub-window comprising an image of the object of interest, and each successive classifier has increasing accuracy in identifying features associated with the object of interest); and upon all of the classifiers of the cascade classifying a sub-window as comprising an image of the object of interest, applying a post-classifier to the cascade CRVs to evaluate the likelihood that the sub-window comprises an image of the object of interest, wherein the post-classifier differs from the homogeneous classifiers.06-25-2009
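The cascade-plus-post-classifier idea above can be roughly sketched as below. The stage functions, the rejection threshold, and the mean-CRV post-classifier are illustrative assumptions, not the patent's actual classifiers.

```python
def run_cascade(sub_window, classifiers, reject_threshold=0.5):
    """Run a sub-window through a cascade of classifiers.

    Each classifier maps the sub-window to a CRV in [0, 1] (a value related to
    the likelihood that the window contains the object). The window is rejected
    as soon as any stage's CRV falls below the threshold; if it passes every
    stage, the collected CRVs are returned for post-classification.
    """
    crvs = []
    for clf in classifiers:
        crv = clf(sub_window)
        crvs.append(crv)
        if crv < reject_threshold:
            return None  # rejected early by this stage
    return crvs

def post_classify(crvs, accept_threshold=0.7):
    """Toy post-classifier: accept when the mean CRV is high enough."""
    return sum(crvs) / len(crvs) >= accept_threshold

# Hypothetical stages: each scores the mean intensity of the window.
stages = [lambda w: min(1.0, sum(w) / len(w) / 255.0) for _ in range(3)]
bright = [200, 220, 210]
crvs = run_cascade(bright, stages)
print(crvs is not None and post_classify(crvs))
```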
20090129629Method And Apparatus For Adaptive Object Detection - Disclosed is a method and apparatus for adaptive object detection, which may be applied in detecting an object having an ellipse feature. The method comprises: performing object shape detection based on the foreground extracted from the object; determining whether the object is occluded according to the detected feature statistic information of the object; if the object is not occluded, determining whether to switch from object shape detection to ellipse detection; if the object is occluded or a switch to ellipse detection is necessary, performing ellipse detection on the foreground; when the foreground is detected to have ellipse features, continuing to track the object; and when the current detection is ellipse detection, determining whether the ellipse detection can switch back to object shape detection.05-21-2009
20090136090House Displacement Judging Method, House Displacement Judging Device - To attain a house change judging method and device which can judge a change with high precision and are capable of fully automating the judgment, the present invention provides a house change judging method for judging a change of a house (05-28-2009
20120070033METHODS FOR OBJECT-BASED IDENTIFICATION, SORTING AND RANKING OF TARGET DETECTIONS AND APPARATUSES THEREOF - A method, non-transitory computer readable medium, and apparatus that provide object-based identification, sorting and ranking of target detections include determining a target detection score for each pixel in each of one or more images for each of one or more targets. A region around one or more of the pixels whose determined detection scores are higher than the determined detection scores for the remaining pixels in each of the one or more images is identified. An object-based score for each of the identified regions in each of the one or more images is determined. The one or more identified regions are provided with the determined object-based score for each region.03-22-2012
20090185716DUST DETECTION SYSTEM AND DIGITAL CAMERA - A dust detection system comprising a receiver, a dust extraction block, a memory, and an image correction block is provided. The receiver receives an image signal. The dust extraction block generates a dust image signal on the basis of the image signal. The memory stores an intrinsic-flaw image signal corresponding to an intrinsic-flaw image including sub-images of dust that the dust extraction block extracts during initialization. The image correction block generates a corrected dust-image signal on the basis of the intrinsic-flaw image signal and a normal dust-image signal. The normal dust-image signal corresponds to a normal dust image including sub-images of dust that the dust extraction block extracts after initialization. The corrected dust image is the normal dust image from which the sub-images of dust in the intrinsic-flaw image have been deleted.07-23-2009
20090185717OBJECT DETECTION SYSTEM WITH IMPROVED OBJECT DETECTION ACCURACY - In a system for detecting a target object, a similarity determining unit sets a block in a picked-up image and compares the part of the picked-up image contained in the block with pattern image data while changing the location of the block in the picked-up image, to determine the similarity of each part of the picked-up image contained in a corresponding one of the differently located blocks with respect to the pattern image data. A specifying unit extracts, from all of the differently located blocks, those blocks for which the determined similarity of the contained part of the picked-up image is equal to or greater than a predetermined threshold similarity. The specifying unit specifies, in the picked-up image, a target area based on a frequency distribution of the extracted blocks therein.07-23-2009
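A one-dimensional toy version of the block-matching step above might look like the following. The similarity measure (inverse mean absolute difference) and the threshold are assumptions; a real system would slide a 2-D block and then build a frequency distribution over the hits.

```python
def match_blocks(image, pattern, threshold=0.9):
    """Slide a block over a 1-D "image" and score its similarity to a pattern.

    Similarity here is a simple normalized inverse of the mean absolute
    difference; the patent leaves the similarity measure open. Returns the
    offsets whose similarity is at least the threshold.
    """
    n, m = len(image), len(pattern)
    hits = []
    for off in range(n - m + 1):
        block = image[off:off + m]
        mad = sum(abs(a - b) for a, b in zip(block, pattern)) / m
        similarity = 1.0 / (1.0 + mad)
        if similarity >= threshold:
            hits.append(off)
    return hits

# The pattern [5, 5, 5] appears exactly at offset 2.
print(match_blocks([0, 0, 5, 5, 5, 0], [5, 5, 5]))
```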
20080317284Face tracking device - A face tracking device for tracking the orientation of a person's face using a cylindrical head model, the face tracking device comprising: an imaging means for continuously shooting the person's face and obtaining first image data based on a shot of the person's face; an extraction means for extracting second image data from the first image data, the second image data corresponding to a facial area of the person's face; a determination means for determining whether the second image data is usable as an initial value required for the cylindrical head model; and a face orientation detection means for detecting the orientation of the person's face using the cylindrical head model and the initial value determined to be usable by the determination means.12-25-2008
20080317283SIGNAL PROCESSING METHOD AND DEVICE FOR MULTI APERTURE SUN SENSOR - The disclosure relates to a signal processing method for a multi aperture sun sensor comprising the following steps: reading the information of sunspots in a row from a centroid coordinate memory, judging whether sunspots are absent in that row, identifying the row and column indices of the sunspots in a complete row, selecting the corresponding calibration parameters based on the row and column indices, calculating with the attitude calculation module the attitude corresponding to the identified sunspots, averaging the accumulated attitudes of all sunspots, and outputting the final attitude. A signal processing device for a multi aperture sun sensor is also presented; it comprises a sunspot absence judgment and identification module and an attitude calculation module. The disclosure implements the integration of sun sensors without an additional image processor or attitude processor, reduces field programmable gate array resources, and improves the reliability of sun sensors.12-25-2008
20080317287Image processing apparatus for reducing effects of fog on images obtained by vehicle-mounted camera and driver support apparatus which utilizes resultant processed images - Kalman filter processing is applied to each of successive images of a scene obscured by fog, captured by an onboard camera of a vehicle. The measurement matrix for the Kalman filter is established based on currently estimated characteristics of the fog, and the intrinsic luminance values of the scene portrayed by the current image constitute the state vector for the Kalman filter. Adaptive filtering for removing the effects of fog from the images is thereby achieved, with the filtering being optimized in accordance with the degree of image deterioration caused by the fog.12-25-2008
20080317282Vehicle-Use Image Processing System, Vehicle-Use Image Processing Method, Vehicle-Use Image Processing Program, Vehicle, and Method of Formulating Vehicle-Use Image Processing System - A system or the like capable of detecting lane marks more accurately by preventing false lane marks from being erroneously detected as true lane marks. A vehicle-use image processing system (12-25-2008
20080317285IMAGING DEVICE, IMAGING METHOD AND COMPUTER PROGRAM - With a digital still camera, a user freely selects, on a touchpanel displaying a through image, a subject having a detected smiling face. The digital still camera displays the smiling face that is the smiling-face detection target and any non-target detected faces on the through image in distinctly different manners, to discriminate the detection target from the non-target detected faces. For example, when persons at an event such as a party are photographed at a relatively large viewing angle, an auto photographing operation may be performed in response to smiling-face detections on condition that at least two members of the party are smiling.12-25-2008
20110222727Object Localization Using Tracked Object Trajectories - A method of processing a video sequence is provided that includes tracking a first object and a second object for a specified number of frames, determining similarity between a trajectory of the first object and a trajectory of the second object over the specified number of frames, and merging the first object and the second object into a single object when the trajectory of the first object and the trajectory of the second object are sufficiently similar, whereby an accurate location and size for the single object is obtained.09-15-2011
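The trajectory-similarity merge just described can be sketched as below. The mean point-wise Euclidean distance and the distance threshold are illustrative choices, not the patent's similarity criterion.

```python
def trajectory_distance(traj_a, traj_b):
    """Mean Euclidean distance between two equal-length (x, y) trajectories."""
    assert len(traj_a) == len(traj_b)
    total = 0.0
    for (xa, ya), (xb, yb) in zip(traj_a, traj_b):
        total += ((xa - xb) ** 2 + (ya - yb) ** 2) ** 0.5
    return total / len(traj_a)

def merge_if_similar(traj_a, traj_b, max_distance=5.0):
    """Merge two tracked objects into one when their trajectories stay close.

    The merged object is placed at the midpoint of the two tracks' last
    positions; returns None when the tracks are not similar enough.
    """
    if trajectory_distance(traj_a, traj_b) > max_distance:
        return None
    (xa, ya), (xb, yb) = traj_a[-1], traj_b[-1]
    return ((xa + xb) / 2.0, (ya + yb) / 2.0)

# Two tracks moving in parallel, one pixel apart, merge into a single object.
a = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0)]
b = [(0.0, 1.0), (1.0, 1.0), (2.0, 1.0)]
print(merge_if_similar(a, b))
```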
20090080701Method for object tracking - The present invention relates to a method for the recognition and tracking of a moving object, in particular a pedestrian, from a motor vehicle on which a camera device is arranged. An image of the environment including picture elements is taken in the range of view of the camera device (03-26-2009
200900806993D Beverage Container Localizer - Objects placed on a flat surface are identified and localized by using a single view image. The single view image in the perspective projection is transformed to a normalized image in a pseudo plan view to enhance detection of the bottom or top shapes of the objects. One or more geometric features are detected from the normalized image by processing the normalized image. The detected geometric features are analyzed to determine the identity and the location of the objects on the flat surface.03-26-2009
20130216098MAP GENERATION APPARATUS, MAP GENERATION METHOD, MOVING METHOD FOR MOVING BODY, AND ROBOT APPARATUS - Map construction is performed under a crowded environment where there are a lot of people. The apparatus includes a successive image acquisition unit that obtains images taken while a robot is moving, a local feature quantity extraction unit that extracts a local feature quantity at each feature point from the images, a feature quantity matching unit that performs matching in the input images among the quantities extracted by the extraction unit, an invariant feature quantity calculation unit that calculates an average of the quantities matched by the matching unit among a predetermined number of images as an invariant feature quantity, a distance information acquisition unit that calculates distance information corresponding to each invariant feature quantity based on the position of the robot at the times when the images are obtained, and a map generation unit that generates a local metrical map as a hybrid map.08-22-2013
20130216096INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, PROGRAM, AND INFORMATION PROCESSING SYSTEM - There is provided an image management apparatus, in which an image conversion section is configured to generate a setup image, in a setup image format, within a first image pyramid structure for display, the setup image being converted from an object image, in an object image format, included within a second image pyramid structure in response to a request for the setup image.08-22-2013
20110228984SYSTEMS, METHODS AND ARTICLES FOR VIDEO ANALYSIS - A video analysis system including a video output device monitoring an area for activity, a video analyzer processing output of the video output device and identifying an event in near-real-time, and a persistent database archiving the event for an operational lifetime of the video analysis system and accessible in near-real-time.09-22-2011
20110228979Moving-object detection apparatus, moving-object detection method and moving-object detection program - Disclosed herein is a moving-object detection apparatus having a plurality of moving-object detection processing devices configured to detect a moving object on the basis of a motion vector computed by making use of a present image and a past image, wherein the moving-object detection processing devices are set to operate differently from each other in at least one of the resolution of the present and past images, the time distance between the present and past images, and the search area of the motion vector, in order to detect the moving object.09-22-2011
20090097707Method of controlling digital image processing apparatus for face detection, and digital image processing apparatus employing the method - Provided is a method of controlling a digital image processing apparatus for detecting a face from continuously input images, the method comprising operations (a) to (c). In (a), if a face is detected, image information of a body area is stored. In (b), if the face is not detected, a body having the image information stored in (a) is detected. In (c), if a current body is detected after a previous body was detected in (b), an image characteristic of the previously detected body is compared to an image characteristic of the currently detected body, and a movement state of the face is determined according to the comparison result.04-16-2009
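Operations (a) to (c) above can be sketched as a small state machine. The state layout, the box format, and the "moved" test here are illustrative assumptions, not the patent's actual comparison of image characteristics.

```python
def track_step(state, face_box, body_box):
    """One step of the face-then-body fallback described above.

    When the face is detected, remember the body area; when the face is lost,
    fall back to comparing the stored body against the currently detected body
    to decide whether the subject moved. Boxes are (x, y, w, h) tuples.
    """
    if face_box is not None:
        state["body"] = body_box          # (a) face found: store body area
        return state, "face"
    if state.get("body") is None or body_box is None:
        return state, "lost"
    prev, curr = state["body"], body_box  # (b)+(c): compare previous and current body
    moved = any(abs(p - c) > 10 for p, c in zip(prev, curr))
    return state, "moved" if moved else "stationary"
```

A caller would feed this one detection result per frame and act on the returned label.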
20120288148IMAGE RECOGNITION APPARATUS, METHOD OF CONTROLLING IMAGE RECOGNITION APPARATUS, AND STORAGE MEDIUM - An image recognition apparatus comprising: an obtaining unit configured to obtain one or more images; a detection unit configured to detect a target object image from each of one or more images; a cutting unit configured to cut out one or more local regions from the target object image; a feature amount calculation unit configured to calculate a feature amount from each of one or more local regions to recognize the target object; a similarity calculation unit configured to calculate, for each of one or more local regions, a similarity between the feature amounts; and a registration unit configured to, if there is a pair of feature amounts whose similarity is not less than a threshold, register, for each of one or more regions, one of the feature amounts as dictionary data for the target object.11-15-2012
20120288149ENVIRONMENT RECOGNITION DEVICE AND ENVIRONMENT RECOGNITION METHOD - There are provided an environment recognition device and an environment recognition method. An environment recognition device 11-15-2012
20110228985APPROACHING OBJECT DETECTION SYSTEM - An approaching object detection system in which an approaching object can be accurately detected while reducing the calculation processing load. A first moving region detection unit (09-22-2011
20110142281CONVERTING AIRCRAFT ENHANCED VISION SYSTEM VIDEO TO SIMULATED REAL TIME VIDEO - A method for overcoming image latency issues of a synthetic vision system includes generating (06-16-2011
20110142283APPARATUS AND METHOD FOR MOVING OBJECT DETECTION - An apparatus and method for moving object detection compute a corresponding frame difference for every two successive image frames of a moving object, and segment the current image frame of the two successive image frames into a plurality of homogeneous regions. At least one candidate region is further detected from the plurality of homogeneous regions. The system gradually merges the computed frame differences via a morphing-based technology and intersects them with the at least one candidate region, thereby obtaining the location and a complete outline of the moving object.06-16-2011
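The frame-difference and candidate-region steps can be roughly sketched as follows (the morphing-based merging itself is omitted). The binary threshold and the bounding-box stand-in for a homogeneous candidate region are assumptions.

```python
def frame_difference(prev, curr, threshold=20):
    """Binary frame difference: 1 where a pixel changed by more than threshold."""
    return [[1 if abs(c - p) > threshold else 0 for p, c in zip(pr, cr)]
            for pr, cr in zip(prev, curr)]

def intersect_with_region(diff, region):
    """Keep only changed pixels that fall inside a candidate region.

    region is an (r0, r1, c0, c1) inclusive bounding box, a stand-in for the
    homogeneous candidate regions the abstract describes.
    """
    r0, r1, c0, c1 = region
    return [[diff[r][c] if r0 <= r <= r1 and c0 <= c <= c1 else 0
             for c in range(len(diff[0]))] for r in range(len(diff))]

prev = [[0, 0, 0], [0, 0, 0], [0, 0, 0]]
curr = [[0, 0, 100], [0, 100, 0], [0, 0, 0]]
diff = frame_difference(prev, curr)
# Keep only the change inside the candidate region covering rows 1-2.
print(intersect_with_region(diff, (1, 2, 0, 2)))
```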
20110142282VISUAL OBJECT TRACKING WITH SCALE AND ORIENTATION ADAPTATION - A method of tracking an object that appears in a plurality of image frames is provided. The method includes (a) dividing an identified object of one of the plurality of image frames into a plurality of object segments and (b) tracking a location of each of the plurality of object segments in the image frame. The method also includes (c) estimating at least one of scale and orientation of the object using the location of each of the plurality of object segments and (d) obtaining position of the object using the estimated scale and orientation.06-16-2011
20090232354ADVERTISEMENT INSERTION SYSTEMS AND METHODS FOR DIGITAL CAMERAS BASED ON OBJECT RECOGNITION - Digital cameras include an image capture system, an object recognition system and an advertisement insertion system. The image capture system captures a visible image as a digital image. The object recognition system recognizes visible objects in the digital image. The advertisement insertion system inserts an advertising-related image into the digital image in response to a visible object in the digital image that was recognized. The user of the digital camera may be compensated for exposure to the advertising-related image.09-17-2009
20090097704ON-CHIP CAMERA SYSTEM FOR MULTIPLE OBJECT TRACKING AND IDENTIFICATION - Apparatus and methods provide multiple object identification and tracking using an object recognition system, such as a camera system. One method of tracking multiple objects includes constructing a first set of objects in real time as a camera scans an image of a first frame row by row. A second set of objects is constructed concurrently in real time as the camera scans an image of a second frame row by row. The first and second sets of objects are stored separately in memory and the sets of objects are compared. Based on the comparison between the first frame (previous frame) and the second frame (current frame), a unique ID is assigned to an object in the second frame (current frame).04-16-2009
20090097710METHODS AND SYSTEM FOR COMMUNICATION AND DISPLAYING POINTS-OF-INTEREST - A method for displaying point-of-interest coordinate locations in perspective images and for coordinate-based information transfer between perspective images on different platforms includes providing a shared reference image of a region overlapping the field of view of the perspective view. The perspective view is then correlated with the shared reference image so as to generate a mapping between the two views. This mapping is then used to derive a location of a given coordinate from the shared reference image within the perspective view and the location is indicated in the context of the perspective view on a display.04-16-2009
20090097708Image-Processing System and Image-Processing Method - A vehicle-periphery-image-providing system may include image-capturing units, a viewpoint-change unit, an image-composition unit, an object-detection unit, a line-width-setting unit, and a line-selection unit. The image-capturing units, such as cameras, capture images outside the vehicle periphery and generate image-data items. The viewpoint-change unit generates a bird's-eye-view image for each image-data item based on the image-data item so that end portions of the real spaces corresponding to two adjacent bird's-eye-view images overlap each other. The image-composition unit generates a bird's-eye-view-composite image by combining the bird's-eye-view images according to a predetermined layout. The object-detection unit detects an object existing in the real space corresponding to a portion where the bird's-eye-view images of the bird's-eye-composite image are joined to each other. The line-width-setting unit sets the width of the line image corresponding to the joining portion. The line-selection unit adds a line image having the set width to an overlap portion of one of the bird's-eye-view images.04-16-2009
20110228978FOREGROUND OBJECT DETECTION SYSTEM AND METHOD - A foreground object detection system and method establishes a background model by reading N frames of a video stream generated by a camera. The detection system further reads each frame of the video stream and detects the pixel value difference and the brightness value difference for each pair of corresponding pixels of two consecutive frames. By comparing the pixel value difference with a pixel threshold and the brightness value difference with a brightness threshold, the detection system may determine whether each pixel is a foreground or background pixel.09-22-2011
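The two-threshold pixel test might be sketched as below. The abstract does not specify how the two comparisons are combined, so the AND rule and the threshold values here are assumptions.

```python
def classify_pixel(prev_pixel, curr_pixel, prev_brightness, curr_brightness,
                   pixel_threshold=25, brightness_threshold=15):
    """Label a pixel as foreground when both the pixel-value difference and the
    brightness difference between consecutive frames exceed their thresholds.

    The AND combination of the two tests is an illustrative assumption.
    """
    pixel_diff = abs(curr_pixel - prev_pixel)
    brightness_diff = abs(curr_brightness - prev_brightness)
    if pixel_diff > pixel_threshold and brightness_diff > brightness_threshold:
        return "foreground"
    return "background"

print(classify_pixel(10, 80, 10, 60))  # both differences large
print(classify_pixel(10, 20, 10, 40))  # pixel difference too small
```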
20130121527SYSTEMS AND METHODS FOR ANALYSIS OF VIDEO CONTENT, EVENT NOTIFICATION, AND VIDEO CONTENT PROVISION - A method for remote event notification over a data network is disclosed. The method includes receiving video data from any source, analyzing the video data with reference to a profile to select a segment of interest associated with an event of significance, encoding the segment of interest, and sending to a user a representation of the segment of interest for display at a user display device. A further method for sharing video data based on content according to a user-defined profile over a data network is disclosed. The method includes receiving the video data, analyzing the video data for relevant content according to the profile, consulting a profile to determine a treatment of the relevant content, and sending data representative of the relevant content according to the treatment.05-16-2013
20090129628METHOD FOR DETERMINING THE POSITION OF AN OBJECT FROM A DIGITAL IMAGE - A method for determining the position of an object point in a scene from a digital image thereof acquired through an optical system is presented. The image comprises a set of image points corresponding to object points, and the positions of the object points are determined by means of predetermined vectors associated with the image points. Each predetermined vector represents the inverted direction of the light ray in the object space that will produce that image point through the optical system, including all distortion effects of the optical system.05-21-2009
20090208056REAL-TIME FACE TRACKING IN A DIGITAL IMAGE ACQUISITION DEVICE - An image processing apparatus for tracking faces in an image stream iteratively receives a new acquired image from the image stream, the image potentially including one or more face regions. The acquired image is sub-sampled (08-20-2009
20090208052INTERACTIVE DEVICE AND METHOD FOR TRANSMITTING COMMANDS FROM A USER - According to the present invention, there is provided an interactive device comprising a display, a camera, and an image analysing means, the interactive device comprising means to acquire an image with the camera, the analysing means detecting at least a human face in the acquired image and displaying on the display at least a pattern where the human face was detected, wherein the interactive device further comprises means to determine a halo region extending at least around the pattern, means to add into the halo region at least one interactive zone related to a command, means to detect movement in the interactive zone, and means to execute the command by said device.08-20-2009
20090245577Tracking Processing Apparatus, Tracking Processing Method, and Computer Program - A tracking processing apparatus includes: first state-variable-sample-candidate generating means for generating state variable sample candidates at first present time; plural detecting means each for performing detection concerning a predetermined detection target related to a tracking target; sub-information generating means for generating sub-state variable probability distribution information at present time; second state-variable-sample-candidate generating means for generating state variable sample candidates at second present time; a state-variable-sample acquiring means for selecting state variable samples out of the state variable sample candidates at the first present time and the state variable sample candidates at the second present time at random according to a predetermined selection ratio set in advance; and estimation-result generating means for generating main state variable probability distribution information at the present time as an estimation result.10-01-2009
20090245574OPTICAL POINTING DEVICE AND METHOD OF DETECTING CLICK EVENT IN OPTICAL POINTING DEVICE - Provided is a method of detecting a click event by sensing a motion of a finger corresponding to a click on a sensing area of an optical pointing device, the method including: obtaining an image of the finger from the sensing area; sensing a change in the image of the finger; analyzing a horizontal movement of the finger based on the change in the image of the finger; and generating a click signal when the horizontal movement of the finger is within a predetermined range.10-01-2009
20090220125IMAGE RECONSTRUCTION BY POSITION AND MOTION TRACKING - A system, method, and apparatus provide the ability to reconstruct an image from an object. A hand-held image acquisition device is configured to acquire local image information from a physical object. A tracking system obtains displacement information for the hand-held acquisition device while the device is acquiring the local image information. An image reconstruction system computes the inverse of the displacement information and combines the inverse with the local image information to transform the local image information into a reconstructed local image information. A display device displays the reconstructed local image information.09-03-2009
20090220123APPARATUS AND METHOD FOR COUNTING NUMBER OF OBJECTS - An image processing apparatus includes a first detecting unit configured to detect an object based on an upper body of a person and a second detecting unit configured to detect an object based on a face of a person. The image processing apparatus determines a level of congestion of objects contained in an input image, selects the first detecting unit when the level of congestion is low, and selects the second detecting unit when the level of congestion is high. The image processing apparatus counts the number of objects detected by the selected first or second detecting unit from the image. Thus, the image processing apparatus can detect an object and count the number of objects with high precision even when the level of congestion is high and the objects tend to overlap one another.09-03-2009
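The congestion-based selection between the two detectors reduces to a simple dispatch. The congestion score, its threshold, and the detector interfaces here are hypothetical.

```python
def count_objects(image, congestion, upper_body_detector, face_detector,
                  congestion_threshold=0.5):
    """Select a detector by congestion level, then count its detections.

    At low congestion the upper-body detector is used; at high congestion,
    where bodies tend to overlap, the face detector is used instead. The
    detectors are assumed to return a list of detection boxes.
    """
    if congestion < congestion_threshold:
        detector = upper_body_detector
    else:
        detector = face_detector
    return len(detector(image))

# Hypothetical detectors returning fixed detection lists for illustration.
body = lambda img: [(0, 0), (5, 5)]
face = lambda img: [(0, 0), (5, 5), (9, 9)]
print(count_objects(None, 0.2, body, face))  # low congestion: body detector
print(count_objects(None, 0.9, body, face))  # high congestion: face detector
```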
20090252374IMAGE SIGNAL PROCESSING APPARATUS, IMAGE SIGNAL PROCESSING METHOD, AND PROGRAM - An image signal processing apparatus includes a detecting unit configured to detect a motion vector of a tracking point provided in an object in a moving image, a computing unit configured to compute a reliability parameter representing the reliability of the detected motion vector, a determining unit configured to determine whether the detected motion vector is adopted by comparing the computed reliability parameter with a boundary, an accumulating unit configured to accumulate the reliability parameter, and a changing unit configured to change the boundary on the basis of the accumulated reliability parameters.10-08-2009
20110228982INFORMATION PROCESSING DEVICE, INFORMATION PROCESSING METHOD, AND PROGRAM - An information processing device includes a learning image input unit configured to input a learning image, in which a tracked object is captured on different shooting conditions, together with the shooting conditions, a feature response calculation unit configured to calculate a response of one or more integrated features, with respect to the learning image while changing a parameter in accordance with the shooting conditions, a feature learning unit configured to recognize spatial distribution of the one or more integrated features in the learning image based on a calculation result of the response and evaluate a relationship between the shooting conditions and the parameter and a spatial relationship among the integrated features so as to learn a feature of the tracked object, and a feature storage unit configured to store a learning result of the feature.09-22-2011
20110228981METHOD AND SYSTEM FOR PROCESSING IMAGE DATA - A method for processing image data representing a segmentation mask, comprises generating two-dimensional shape representations of a three-dimensional object on the basis of a plurality of parameter sets; and matching motion blocks of the segmentation mask with the two-dimensional shape representations to obtain a best fit parameter set. Thereby, for example, a distance between the three-dimensional object and a camera position may be determined.09-22-2011
20110228980CONTROL APPARATUS AND VEHICLE SURROUNDING MONITORING APPARATUS - A control apparatus that improves the usability of a vehicle surrounding monitoring apparatus without confusing the monitoring party while monitoring the surroundings of a vehicle. A detection area setting section (09-22-2011
20110228975METHODS AND APPARATUS FOR ESTIMATING POINT-OF-GAZE IN THREE DIMENSIONS - Methods for determining a point-of-gaze (POG) of a user in three dimensions are disclosed. In particular embodiments, the methods involve: presenting a three-dimensional scene to both eyes of the user; capturing image data including both eyes of the user; estimating first and second line-of-sight (LOS) vectors in a three-dimensional coordinate system for the user's first and second eyes based on the image data; and determining the POG in the three-dimensional coordinate system using the first and second LOS vectors.09-22-2011
20090245580MODIFYING PARAMETERS OF AN OBJECT DETECTOR BASED ON DETECTION INFORMATION - Embodiments of an object detection unit configured to modify parameters for one or more object detectors based on detection information are provided.10-01-2009
20090245573OBJECT MATCHING FOR TRACKING, INDEXING, AND SEARCH - A camera system comprises an image capturing device, object detection module, object tracking module, and match classifier. The object detection module receives image data and detects objects appearing in one or more of the images. The object tracking module temporally associates instances of a first object detected in a first group of the images. The first object has a first signature representing features of the first object. The match classifier matches object instances by analyzing data derived from the first signature of the first object and a second signature of a second object detected in a second image. The second signature represents features of the second object derived from the second image. The match classifier determines whether the second signature matches the first signature. A training process automatically configures the match classifier using a set of possible object features.10-01-2009
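The matching step in this entry can be illustrated with a much simpler stand-in: treating each object signature as a feature vector and declaring a match when the vectors are sufficiently close. The Euclidean metric, the threshold, and the example vectors below are illustrative assumptions; the patent itself uses a trained match classifier rather than a fixed distance rule.

```python
# Simplified stand-in for the match classifier: object "signatures" are
# feature vectors, and two instances match when the distance between
# their signatures falls below a threshold (hypothetical rule; the
# patent trains a classifier on a set of possible object features).

def signature_distance(sig_a, sig_b):
    """Euclidean distance between two equal-length feature vectors."""
    return sum((a - b) ** 2 for a, b in zip(sig_a, sig_b)) ** 0.5

def signatures_match(sig_a, sig_b, threshold=1.0):
    """Declare a match when the signatures are sufficiently close."""
    return signature_distance(sig_a, sig_b) <= threshold

first_object = [0.9, 0.2, 0.4]     # signature from the first image group
second_object = [0.8, 0.25, 0.45]  # signature from a later image
print(signatures_match(first_object, second_object))  # → True
```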
20090245578METHOD OF DETECTING PREDETERMINED OBJECT FROM IMAGE AND APPARATUS THEREFOR - In an object detecting method, an imaging condition of an image pickup unit is determined, a detecting method is selected based on the determined imaging condition, and at least one predetermined object is detected from an image picked up through the image pickup unit according to the selected detecting method.10-01-2009
20090245576METHOD, APPARATUS, AND PROGRAM STORAGE MEDIUM FOR DETECTING OBJECT - The invention relates to an object detecting method for detecting a specific kind of object such as a human head and a human face from an image expressed by two-dimensionally arrayed pixels, the object detecting method including an image group producing step of producing an image group including an original image of the object detecting target and at least one thinned-out image by thinning out pixels constituting the original image at a predetermined rate or by thinning out the pixels at the predetermined rate in a stepwise manner; and a stepwise detection step of detecting the specific kind of object from the original image by sequentially repeating plural extraction processes from an extraction process of applying a filter acting on a relatively small region to a relatively small image toward an extraction process of applying a filter acting on a relatively wide region to a relatively large image.10-01-2009
20090245575METHOD, APPARATUS, AND PROGRAM STORAGE MEDIUM FOR DETECTING OBJECT - In an object detecting method according to an aspect of the invention, a specific kind of object such as a human head can be detected with high accuracy even if the detecting target object appears in various shapes. The object detecting method includes a primary evaluated value computing step of applying plural filters to an image of an object detecting target to compute plural feature quantities and of obtaining a primary evaluated value corresponding to each feature quantity; a secondary evaluated value computing step of obtaining a secondary evaluated value by integrating the plural primary evaluated values obtained in the primary evaluated value computing step; and a region extracting step of comparing the secondary evaluated value obtained in the secondary evaluated value computing step and a threshold to extract a region where an existing probability of the specific kind of object is higher than the threshold.10-01-2009
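The two-stage evaluation described in this entry can be sketched minimally: several filters each yield a feature quantity and a primary evaluated value, the primary values are integrated into a secondary value, and a region is kept when the secondary value clears a threshold. The two filters, the summation as the integration rule, and the threshold below are illustrative placeholders, not the patent's actual filters.

```python
# Minimal sketch of the primary/secondary evaluated-value scheme.
# Each "filter" maps a pixel region to a feature quantity; in this toy
# version the primary evaluated value is simply the raw filter response.

def mean_filter(region):   # feature: average pixel value
    return sum(region) / len(region)

def range_filter(region):  # feature: spread of pixel values
    return max(region) - min(region)

def secondary_evaluated_value(region, filters):
    # Integrate the primary evaluated values (here: a plain sum).
    return sum(f(region) for f in filters)

region = [90, 120, 110, 100]
score = secondary_evaluated_value(region, [mean_filter, range_filter])
print(score >= 100)  # region extracted when the score clears the threshold
```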
20100034423SYSTEM AND METHOD FOR DETECTING AND TRACKING AN OBJECT OF INTEREST IN SPATIO-TEMPORAL SPACE - The present invention provides a system and method for detecting and tracking a moving object. First, robust change detection is applied to find initial candidate regions in consecutive frames. These initial detections in consecutive frames are stacked to produce space-time bands, which are extracted by a Hough transform and an entropy-minimization-based band detection algorithm.02-11-2010
20120300984RECORDING THE LOCATION OF A POINT OF INTEREST ON AN OBJECT - A method of recording the location of a point of interest on an object, the method comprising capturing a digital image of an object having a point of interest, accessing a three-dimensional virtual model of the object, aligning the image with the model, calculating the location of the point of interest with respect to the model, and recording the calculated point of interest location. Also, a system for performing the method.11-29-2012
20090257622METHOD FOR REMOTE SPECTRAL ANALYSIS OF GAS PLUMES - A method for reducing the effects of background radiation introduced into gaseous plume spectral data obtained by an aerial imaging sensor, includes capturing spectral data of a gaseous plume with its obscured background along a first line of observation and capturing a second image of the previously obscured background along a different line of observation. The parallax shift of the plume enables the visual access needed to capture the radiometric data emanating exclusively from the background. The images are then corresponded on a pixel-by-pixel basis to produce a mapping. An image-processing algorithm is applied to the mapped images to reduce the effects of background radiation and derive information about the content of the plume.10-15-2009
20090257621Method and System for Dynamic Feature Detection - Disclosed are methods and systems for dynamic feature detection of physical features of objects in the field of view of a sensor. Dynamic feature detection substantially reduces the effects of accidental alignment of physical features with the pixel grid of a digital image by using the relative motion of objects or material in and/or through the field of view to capture and process a plurality of images that correspond to a plurality of alignments. Estimates of the position, weight, and other attributes of a feature are based on an analysis of the appearance of the feature as it moves in the field of view and appears at a plurality of pixel grid alignments. The resulting reliability and accuracy is superior to prior art static feature detection systems and methods.10-15-2009
20100150400INFORMATION PROCESSOR, INFORMATION PROCESSING METHOD, AND COMPUTER READABLE MEDIUM - A first movement control section sequentially moves a first image to multiple first positions. A first comparison section compares the moved first image with a second image. A target first position selection section selects a target first position based on the result of said comparison. After the target first position is selected, the second movement control section sequentially moves the first image to multiple second positions located in the periphery of the target first position. The second comparison section compares the moved first image with the second image. A target second position selection section selects a target second position based on the result of said comparison. A second position alignment execution section performs geometric transformation based on the difference between the position of the first image and the target second position and aligns the positions of the first and second images.06-17-2010
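The coarse-to-fine alignment in this entry can be sketched in one dimension: a template (the "first image", here a short 1-D signal) is moved over coarse candidate positions, the best position is selected by comparison, and the search is repeated over fine positions in the periphery of that winner. The sum-of-absolute-differences comparison, the step size, and the signals below are illustrative assumptions; the patent additionally applies a geometric transformation, which this sketch omits.

```python
# Toy two-stage (coarse then fine) alignment by exhaustive search.

def sad(template, signal, offset):
    """Sum of absolute differences at a given offset (lower is better)."""
    return sum(abs(t - signal[offset + i]) for i, t in enumerate(template))

def align(template, signal, coarse_step=4):
    limit = len(signal) - len(template)
    # First pass: evaluate coarse candidate positions.
    coarse = min(range(0, limit + 1, coarse_step),
                 key=lambda p: sad(template, signal, p))
    # Second pass: fine positions in the periphery of the coarse winner.
    fine = range(max(0, coarse - coarse_step + 1),
                 min(limit, coarse + coarse_step - 1) + 1)
    return min(fine, key=lambda p: sad(template, signal, p))

signal = [0, 0, 1, 5, 9, 5, 1, 0, 0, 0, 0, 0]
template = [5, 9, 5]
print(align(template, signal))  # → 3 (exact match at offset 3)
```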
20120195469FORMATION OF A TIME-VARYING SIGNAL REPRESENTATIVE OF AT LEAST VARIATIONS IN A VALUE BASED ON PIXEL VALUES - A method of forming a time-varying signal representative of at least variations in a value based on pixel values from a sequence of images, the signal corresponding in length to the sequence of images, includes obtaining the sequence of images. A plurality of groups (08-02-2012
20120195470HIGH CONTRAST RETROREFLECTIVE SHEETING AND LICENSE PLATES - The present disclosure relates to the formation of high contrast, wavelength independent retroreflective sheeting made by including a light scattering material on at least a portion of the retroreflective sheeting. The light scattering material reduces the brightness of the retroreflective sheeting without substantially changing the appearance of the retroreflective sheeting when viewed under scattered light.08-02-2012
20120195468Object Information Derived from Object Images - Search terms are derived automatically from images captured by a camera equipped cell phone, PDA, or other image capturing device, submitted to a search engine to obtain information of interest, and at least a portion of the resulting information is transmitted back locally to, or nearby, the device that captured the image.08-02-2012
20120195461CORRELATING AREAS ON THE PHYSICAL OBJECT TO AREAS ON THE PHONE SCREEN - A mobile platform renders an augmented reality graphic to indicate selectable regions of interest on a captured image or scene. The region of interest is an area that is defined on the image of a physical object, which when selected by the user can generate a specific action. The mobile platform captures and displays a scene that includes an object and detects the object in the scene. A coordinate system is defined within the scene and used to track the object. A selectable region of interest is associated with one or more areas on the object in the scene. An indicator graphic is rendered for the selectable region of interest, where the indicator graphic identifies the selectable region of interest.08-02-2012
20120195459CLASSIFICATION OF TARGET OBJECTS IN MOTION - A method for classifying objects in motion that includes providing, to a processor, feature data for one or more classes of objects to be classified, wherein the feature data is indexed by object class, orientation, and sensor. The method also includes providing, to the processor, one or more representative models for characterizing one or more orientation motion profiles for the one or more classes of objects in motion. The method also includes acquiring, via a processor, feature data for a target object in motion from multiple sensors and/or for multiple times and trajectory of the target object in motion to classify the target object based on the feature data, the one or more orientation motion profiles and the trajectory of the target object in motion.08-02-2012
20100002910Method and Apparatus for Developing Synthetic Three-Dimensional Models from Imagery - A method and apparatus for modeling an object in software are disclosed. The method includes generating a three-dimensional geometry of the object from a plurality of points obtained from a plurality of images of the object, the images having been acquired from a plurality of perspectives; and generating a three-dimensional model from the three-dimensional geometry for integration into an object recognition system. The apparatus may be a program storage medium encoded with instructions that, when executed by a computer, perform such a method or a computer programmed to perform such a method.01-07-2010
20100002909Method and device for detecting in real time interactions between a user and an augmented reality scene - The invention consists of a system for real-time detection of interactions between a user and an augmented reality scene, the interactions resulting from the modification of the appearance of an object present in the image. After having created (01-07-2010
20100150401Target tracker - For a tracking of a target object in a time series of frames of image data, a tracking object designation acceptor accepts a designation of a tracking object, a target color setter sets a color of the designated tracking object as a target color, and a particle filter processor employs particles for measurements to determine color likelihoods by comparison between the target color and colors in the vicinities of the particles. When the color likelihoods meet a criterion, the processor estimates a region of the tracking object in a frame of image data in accordance with the results of the measurements; when the color likelihoods fail to meet the criterion, it uses particles for measurements to determine luminance likelihoods based on luminance differences between frames in the time series, and estimates the region of the tracking object in accordance with the results of those measurements. The processor updates the target color with a color in whichever region was estimated.06-17-2010
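The likelihood switch in this tracker can be sketched as follows: particles score a color likelihood against the target color; if even the best color likelihood fails the criterion (for instance when the target is occluded or discolored), the tracker falls back to a luminance-difference likelihood between frames. The Gaussian likelihood forms, the criterion value, and the tiny two-particle "frames" below are all illustrative assumptions.

```python
# Hedged sketch of a color-likelihood particle evaluation with a
# luminance-difference fallback, per the abstract above.
import math

def color_likelihood(target_color, observed_color, sigma=20.0):
    d2 = sum((t - o) ** 2 for t, o in zip(target_color, observed_color))
    return math.exp(-d2 / (2 * sigma ** 2))

def luminance_likelihood(prev_lum, cur_lum, sigma=16.0):
    return math.exp(-((cur_lum - prev_lum) ** 2) / (2 * sigma ** 2))

def estimate(particles, target_color, prev_frame, cur_frame, criterion=0.5):
    color_scores = [color_likelihood(target_color, cur_frame[p]["color"])
                    for p in particles]
    if max(color_scores) >= criterion:
        scores = color_scores            # color cue meets the criterion
    else:                                # fall back to the luminance cue
        scores = [luminance_likelihood(prev_frame[p]["lum"],
                                       cur_frame[p]["lum"])
                  for p in particles]
    # Region estimate: the particle with the highest likelihood.
    return max(zip(particles, scores), key=lambda ps: ps[1])[0]

particles = [0, 1]
prev_frame = {0: {"lum": 100}, 1: {"lum": 100}}
cur_frame = {0: {"color": (200, 40, 40), "lum": 101},
             1: {"color": (50, 50, 200), "lum": 180}}
print(estimate(particles, (205, 45, 45), prev_frame, cur_frame))  # → 0
```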
20110228976PROXY TRAINING DATA FOR HUMAN BODY TRACKING - Synthesized body images are generated for a machine learning algorithm of a body joint tracking system. Frames from motion capture sequences are retargeted to several different body types, to leverage the motion capture sequences. To avoid providing redundant or similar frames to the machine learning algorithm, and to provide a compact yet highly variegated set of images, dissimilar frames can be identified using a similarity metric. The similarity metric is used to locate frames which are sufficiently distinct, according to a threshold distance. For realism, noise is added to the depth images based on noise sources which a real world depth camera would often experience. Other random variations can be introduced as well. For example, a degree of randomness can be added to retargeting. For each frame, the depth image and a corresponding classification image, with labeled body parts, are provided. 3-D scene elements can also be provided.09-22-2011
20100266160Image Sensing Apparatus And Data Structure Of Image File - An image sensing apparatus includes an image sensing portion which generates image data of an image by image sensing, and a record control portion which records image data of a main image generated by the image sensing portion together with main additional information obtained from the main image in a recording medium, in which the record control portion records sub additional information obtained from a sub image taken at a timing different from that of the main image in the recording medium in association with the image data of the main image and the main additional information.10-21-2010
20090316952GESTURE RECOGNITION INTERFACE SYSTEM WITH A LIGHT-DIFFUSIVE SCREEN - One embodiment of the invention includes a gesture recognition interface system. The interface system may comprise at least one light source positioned to illuminate a first side of a light-diffusive screen. The interface system may also comprise at least one camera positioned on a second side of the light-diffusive screen, the second side being opposite the first side, and configured to receive a plurality of images based on a brightness contrast difference between the light-diffusive screen and an input object. The interface system may further comprise a controller configured to determine a given input gesture based on changes in relative locations of the input object in the plurality of images. The controller may further be configured to initiate a device input associated with the given input gesture.12-24-2009
20090316953Adaptive match metric selection for automatic target recognition - An automatic target recognition system with adaptive metric selection. The novel system includes an adaptive metric selector for selecting a match metric based on the presence or absence of a particular feature in an image and a matcher for identifying a target in the image using the selected match metric. In an illustrative embodiment, the adaptive metric selector is designed to detect a shadow in the image and select a first metric if a shadow is detected and not cut off, and select a second metric otherwise. The system may also include an automatic target cuer for detecting targets in a full-scene image and outputting one or more target chips, each chip containing one target. The adaptive metric selector adaptively selects the match metric for each chip separately, and may also adaptively select an appropriate chip size such that a shadow in the chip is not unnecessarily cut off.12-24-2009
20090316955IMAGE PROCESSING SYSTEM, IMAGE PROCESSING METHOD, AND COMPUTER PROGRAM - An image processing system includes: an object detecting unit that detects a moving body object from image data of an image of a predetermined area; an object-occurrence-position detecting unit that detects an occurrence position of the object detected by the object detecting unit; and a valid-object determining unit that determines that the object detected by the object detecting unit is a valid object when the object is present in a mask area set as a non-detection target in the image of the predetermined area and the occurrence position of the object in the mask area detected by the object-occurrence-position detecting unit is outside the mask area.12-24-2009
20090316954INPUT APPARATUS AND IMAGE FORMING APPARATUS - An input apparatus for enabling a user to enter an instruction into a main apparatus has high durability and offers superior operability. The input apparatus includes a table device having a table with a variable size. An image of plural virtual keys that is adapted to the size of the table is projected by a projector unit onto the table. Position information about a finger of the user that is placed on the table is detected by a position detecting device contactlessly. One of the plural virtual keys that corresponds to the position of the finger of the user detected by the position detecting device is detected by a key detecting device based on information about the image of the plural virtual keys and a result of the detection made by the position detecting device.12-24-2009
20100150399APPARATUS AND METHOD FOR OPTICAL GESTURE RECOGNITION - An optical gesture recognition system is shown having a first light source and a first optical receiver configured to receive reflected light from an object when the first light source is activated and to output a first measured reflectance value corresponding to an amplitude of the reflected light. A processor is configured to receive the first measured reflectance value and to compare the first measured reflectance value at first and second points in time to track motion of the object and identify a gesture of the object corresponding to the tracked motion of the object.06-17-2010
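The comparison step in this entry can be sketched very simply: a rising reflectance amplitude between the two points in time suggests the object moved toward the sensor, a falling one suggests it moved away. The dead band and the gesture labels below are illustrative assumptions, not the patent's terminology.

```python
# Hypothetical reflectance-delta classifier for the two-sample
# comparison described above.

def classify_gesture(reflectance_t1, reflectance_t2, dead_band=5):
    delta = reflectance_t2 - reflectance_t1
    if delta > dead_band:
        return "approach"   # amplitude rose: object moved closer
    if delta < -dead_band:
        return "retreat"    # amplitude fell: object moved away
    return "hold"           # change within the dead band: no gesture

print(classify_gesture(40, 80))  # → "approach"
print(classify_gesture(80, 40))  # → "retreat"
```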
20090116691METHOD FOR LOCATING AN OBJECT ASSOCIATED WITH A DEVICE TO BE CONTROLLED AND A METHOD FOR CONTROLLING THE DEVICE - The invention describes a method for locating an object (B05-07-2009
20120033857SELECTIVE AND ADAPTIVE ILLUMINATION OF A TARGET - There are provided a method and a system for illuminating one or more targets in a scene. An image of the scene is acquired using a sensing device that may use an infrared sensor, for example. From the image, an illumination controller determines an illumination figure, such that the illumination figure adaptively matches at least a position of the target in the image. The target is then selectively illuminated using an illumination device, according to the illumination figure.02-09-2012
20090092285METHOD OF LOCAL TRACING OF CONNECTIVITY AND SCHEMATIC REPRESENTATIONS PRODUCED THEREFROM - A schematic diagram detailing a circuit that was reverse engineered from a plurality of images taken of the circuit is provided. The schematic diagram includes at least one circuit element that was represented as an object in at least one of the plurality of images, such that signal continuity information was determined through local tracing of connectivity between a first image and a second image of the plurality of images. A method of tracing the connectivity within the plurality of images to produce the schematic diagram is also disclosed.04-09-2009
20120195462FLAME IDENTIFICATION METHOD AND DEVICE USING IMAGE ANALYSES IN HSI COLOR SPACE - In a flame identification method and device for identifying any flame image in a plurality of frames captured consecutively from a monitored area, for each image frame, intensity foreground pixels are obtained based on intensity values of pixels, a fire-like image region containing the intensity foreground pixels is defined when an intensity foreground area corresponding to the intensity foreground pixels is greater than a predetermined intensity foreground area threshold, and saturation foreground pixels are obtained from all pixels in the fire-like image region based on saturation values thereof to obtain a saturation foreground area corresponding to the saturation foreground pixels. Linear regression analyses are performed on two-dimensional coordinates each formed by the intensity and saturation pixel areas associated with a corresponding image frame to generate a determination coefficient. Whether a flame image exists in the image frames is determined based on the determination coefficient and a predetermined identification threshold.08-02-2012
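The decision step in this entry can be sketched numerically: each frame contributes an (intensity foreground area, saturation foreground area) pair, a linear regression over these two-dimensional points yields a determination coefficient R², and a flame is declared when R² clears an identification threshold. The sample areas and the 0.9 threshold below are illustrative assumptions.

```python
# Coefficient of determination (R²) of a simple least-squares line fit,
# applied to per-frame (intensity area, saturation area) pairs as in
# the abstract above.

def determination_coefficient(points):
    n = len(points)
    mx = sum(x for x, _ in points) / n
    my = sum(y for _, y in points) / n
    sxx = sum((x - mx) ** 2 for x, _ in points)
    sxy = sum((x - mx) * (y - my) for x, y in points)
    syy = sum((y - my) ** 2 for _, y in points)
    return (sxy * sxy) / (sxx * syy)  # R² of the least-squares fit

# Areas from consecutive frames of a flickering but correlated source:
frames = [(100, 52), (140, 70), (180, 95), (150, 76), (120, 60)]
r2 = determination_coefficient(frames)
print(r2 > 0.9)  # → True: treated as a flame candidate
```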
20120195465PERSONNEL SECURITY SCREENING SYSTEM WITH ENHANCED PRIVACY - The present invention is directed towards processing security images of people subjected to X-ray radiation. The present invention processes a generated image by dividing the generated image into at least two regions or mask images, separately processing the at least two regions of the image, and viewing the resultant processed region images either alone or as a combined image.08-02-2012
20120195464AUGMENTED REALITY SYSTEM AND METHOD FOR REMOTELY SHARING AUGMENTED REALITY SERVICE - An augmented reality (AR) system and method for remotely sharing an AR service is provided. The AR system includes a plurality of client devices and a host device. The AR system allows information related to a marker and information related to an AR object to be shared between client devices participating in an AR session, which may be separated by a reference distance, through a host device. Accordingly, an AR service may be shared between the client devices.08-02-2012
20100183193IMAGE PROCESSING DEVICE, IMAGE PROCESSING METHOD, AND INTEGRATED CIRCUIT FOR PROCESSING IMAGES - This image processing apparatus, for photographed images taken at a predetermined time interval and input sequentially, specifies an image area as the target of predetermined processing. The apparatus (i) has processing capability to generate, in accordance with a particular input photographed image, reduced images at K (K≧1) ratios within the predetermined time interval, (ii) selects, for each photographed image that is input, M (M≦K) or fewer ratios from among L (L>K) different ratios in accordance with ratios indicated for a photographed image input prior to the photographed image, (iii) compares each of the reduced images generated at the selected M or fewer ratios with template images, and (iv) in accordance with the comparison results, specifies the image area.07-22-2010
20100183192SYSTEM AND METHOD FOR OBJECT MOTION DETECTION BASED ON MULTIPLE 3D WARPING AND VEHICLE EQUIPPED WITH SUCH SYSTEM - The present invention relates to a technique for detecting dynamic (i.e., moving) objects using sensor signals with 3D information and can be deployed e.g. in driver assistance systems.07-22-2010
20120140984DRIVING SUPPORT SYSTEM, DRIVING SUPPORT PROGRAM, AND DRIVING SUPPORT METHOD - Provided is a driving support system that includes an image recognition unit that performs image recognition processing to recognize if a recognition object associated with any of the support processes is included in image data captured by an on-vehicle camera and a recognition area information storage unit that stores information regarding a set recognition area in the image data that is set depending on a recognition accuracy of the recognition object set for execution of the support process. A candidate process extraction unit is also included for extracting at least one execution candidate support process from the plurality of support processes and a support process execution management unit that allows execution of the extracted execution candidate support process on a condition that a position in the image data of the recognition object recognized by the image recognition processing is included in the set recognition area.06-07-2012
20120140985IMAGE PROCESSING APPARATUS AND CONTROL METHOD THEREFOR - A parameter for each of a plurality of images captured in time series is computed based on information obtained from the image, and a normal reference image (an image captured before an image targeted for processing) is stored. A degree of similarity between the image targeted for processing and the normal reference image is computed, and a parameter to be used in image processing applied to the image targeted for processing is computed by performing weighted addition such that, the higher the degree of similarity, the higher the weight of the parameter computed from the normal reference image relative to the parameter computed from the image targeted for processing.06-07-2012
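One plausible reading of the weighting rule in this entry is a linear blend: the higher the similarity to the normal reference image, the more the final parameter leans toward the one computed from the reference. The linear form and the numbers below are illustrative; the patent does not specify this exact formula.

```python
# Hypothetical similarity-weighted addition of two parameters.

def blend_parameter(param_target, param_reference, similarity):
    """similarity in [0, 1]; 1.0 means 'trust the reference fully'."""
    return (1.0 - similarity) * param_target + similarity * param_reference

print(blend_parameter(10.0, 20.0, 0.9))  # high similarity: ≈19.0
print(blend_parameter(10.0, 20.0, 0.1))  # low similarity:  ≈11.0
```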
20090116693IMAGE PROCESSING APPARATUS AND IMAGE PROCESSING METHOD - An image processing method is provided for an image processing apparatus which executes processing by allocating a plurality of weak discriminators to form a tree structure having branches corresponding to types of objects so as to detect objects included in image data. Each weak discriminator calculates a feature amount to be used in a calculation of an evaluation value of the image data, and discriminates whether or not the object is included in the image data by using the evaluation value. The weak discriminator allocated to a branch point in the tree structure further selects a branch destination using at least some of the feature amounts calculated by weak discriminators included in each branch destination.05-07-2009
20090296988CHARACTER INPUT APPARATUS AND CHARACTER INPUT METHOD - A character input apparatus includes a liquid crystal monitor 12-03-2009
20110110557Geo-locating an Object from Images or Videos - The present invention discloses a novel method, computer program product, and system for determining a spatial location of a target object from the selection of points in multiple images that correspond to the object location within the images. In one aspect, the method includes collecting location and orientation information of one or more image sensors producing the images; the collected location and orientation information is then used to determine the spatial location of the target object.05-12-2011
20100183195Method and Apparatus for Object Detection in an Image - A method and apparatus for detecting at least one of a location and a scale of an object in an image. The method comprises distinguishing the trailing and leading edges of a moving object in at least one portion of the image, applying a symmetry detection filter to at least a portion of the image to produce symmetry scores relating to the at least one portion of the image, identifying at least one location corresponding to locally maximal symmetry scores of the symmetry scores relating to the at least one portion of the image, and utilizing the at least one location of the locally maximal symmetry scores to detect at least one of a location and a scale of the object in the image, wherein the scale relates to the size of the symmetry detection filter.07-22-2010
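A toy one-dimensional analogue of the symmetry detection filter in this entry: the score at each position is high when the window around it is mirror-symmetric, and candidate locations sit at locally maximal scores. The window size, the scoring rule, and the signal below are illustrative assumptions.

```python
# 1-D symmetry scoring plus local-maximum picking, as a sketch of the
# symmetry-filter step described above.

def symmetry_score(signal, center, half_width):
    # Negative sum of |left - mirrored right|: 0 means perfect symmetry.
    return -sum(abs(signal[center - k] - signal[center + k])
                for k in range(1, half_width + 1))

def local_maxima(signal, half_width):
    scores = {c: symmetry_score(signal, c, half_width)
              for c in range(half_width, len(signal) - half_width)}
    return [c for c in scores
            if all(scores[c] >= scores.get(c + d, float("-inf"))
                   for d in (-1, 1))]

signal = [0, 1, 3, 9, 3, 1, 0, 0, 2, 0]
print(local_maxima(signal, 2))  # → [3, 6]; index 3 is the symmetric peak
```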
20110129120PROCESSING CAPTURED IMAGES HAVING GEOLOCATIONS06-02-2011
20110235862FIELD OF IMAGING - Embodiments of the present invention provide a computer-based method for providing image data of a region of a target object (09-29-2011
20100215215OBJECT DETECTING APPARATUS, INTERACTIVE SYSTEM, OBJECT DETECTING METHOD, INTERACTIVE SYSTEM REALIZING METHOD, AND RECORDING MEDIUM - The apparatus is provided with a plurality of retroreflective sheets, each of which is attached to a screen and retroreflectively reflects received light, an imaging unit which photographs the retroreflective sheets, and an MCU which analyzes a differential picture obtained by the photographing. The MCU detects, from the differential picture, a shade area corresponding to a part of the retroreflective sheet which is covered by a foot of a player. The detection of the shade area corresponds to the detection of the foot of the player, because when the foot is placed on the retroreflective sheet, the corresponding part is not captured in the differential picture and is present as a shade area. It is thus possible to detect a foot without attaching and fixing a reflecting sheet to the foot.08-26-2010
20080205701ENHANCED INPUT USING FLASHING ELECTROMAGNETIC RADIATION - Enhanced input using flashing electromagnetic radiation, in which first and second images, captured on a first side of a screen, of an object and an ambient electromagnetic radiation emitter disposed on a second side of the screen, are accessed. The first image being captured while the object is illuminated with projected electromagnetic radiation, and the second image being captured while the projected electromagnetic radiation is extinguished. A position of the object relative to the screen based on comparing the first and second images is determined. An application is controlled based on the determined position.08-28-2008
20090245572Control apparatus and method - The invention discloses a control apparatus for a user to control an electronic apparatus. The control apparatus of the invention includes a monitoring module, a sensing module, a first processing module, and a first transmitting module. The monitoring module is used to monitor the user's eyeball(s), and generates related eyeball-movement information. The sensing module is used to monitor a body portion of the user, and generates related body portion-movement information. The first processing module is connected to the monitoring module and the sensing module respectively, for calculating the control information in accordance with the eyeball-movement information and the body portion-movement information. Additionally, the first transmitting module is connected to the first processing module, for transmitting the control information to the electronic device, which can act according to the control information.10-01-2009
20090245571Digital video target moving object segmentation method and system - A digital video target moving object segmentation method and system is designed for processing a digital video stream for segmentation of every target moving object that appears in the video content. The proposed method and system is characterized by the operations of a multiple background imagery extraction process and a background imagery updating process for extracting characteristic background imagery whose content includes the motional background objects in addition to the static background scenes; and wherein the multiple background imagery extraction process is based on a background difference threshold comparison method, while the background imagery updating process is based on a background-matching and weight-counting method. This feature allows an object mask to be defined based on the characteristic background imagery, which can mask both the motional background objects as well as the static background scenes.10-01-2009
20100183194THREE-DIMENSIONAL MEASURING DEVICE - A three-dimensional measuring device includes an irradiation device configured to irradiate and switch among a multiplicity of light patterns having different periods and having a striped light intensity distribution on at least a measurement object, a camera having an imaging element capable of imaging reflected light from the measurement object irradiated by the light pattern, a rack configured to cause relative change in positional relationship between the imaging element and the measurement object, and a control device configured to perform three-dimensional measurements based on image data imaged by the camera. The control device performs the three-dimensional measurements by performing a phase shift method calculation of height data as a first height data for each pixel unit of image data based on a multiply phase-shifted image data obtained by irradiating on a first position a multiply phase-shifted first light pattern having a first period.07-22-2010
20100226531MAKEUP SIMULATION SYSTEM, MAKEUP SIMULATOR, MAKEUP SIMULATION METHOD, AND MAKEUP SIMULATION PROGRAM - According to the present invention, a makeup simulation system applying makeup to a video having an image of the face of a user captured thereon is characterized by image capturing means for capturing the image of the face of the user and outputting the video, control means for receiving the video output from the image capturing means, performing image processing on the video, and outputting the video; and display means for displaying the video output from the control means, wherein the control means includes face recognition means for recognizing the face of the user from the video based on predetermined tracking points; and makeup processing means for applying a predetermined makeup on the face of the user included in the video based on the tracking points and outputting the video to the display means.09-09-2010
20100226536VIDEO SIGNAL DISPLAY DEVICE, VIDEO SIGNAL DISPLAY METHOD, STORAGE MEDIUM, AND INTEGRATED CIRCUIT - A technical problem is to inhibit variation in the correction between frames of a moving image while maintaining a correction amount of the overall image. The video signal display device has an attraction point determination portion.09-09-2010
20100177930METHODS FOR DETERMINING A WAVEFRONT POSITION - The present disclosure relates to methods for determining a wavefront position of a liquid on a surface of an assay test strip, comprising placing a liquid on the surface of the test strip; acquiring one or more signals from the surface of the test strip at one or more times; and comparing the one or more acquired signals to a threshold, wherein the wavefront position is a position on the surface of the test strip where a signal is greater than or less than a threshold (e.g., a fixed or dynamic threshold). Such methods may be used to determine the wavefront velocity of a liquid on the surface of an assay test strip and the transit time of a liquid sample to traverse one or more positions on the surface of the assay test strip.07-15-2010
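The threshold comparison described in 20100177930 reduces, per read-out, to a first-crossing search along the strip, and velocity follows from two such positions. A minimal sketch, assuming a fixed threshold and signals ordered along the strip:

```python
def wavefront_position(signals, threshold):
    """Return the index of the first position whose signal exceeds the
    threshold, or None if the wavefront has not reached any position.
    'signals' is a list of readings ordered along the strip."""
    for pos, s in enumerate(signals):
        if s > threshold:
            return pos
    return None

def wavefront_velocity(pos_a, pos_b, dt):
    """Velocity from two wavefront positions observed dt apart."""
    return (pos_b - pos_a) / dt
```

A dynamic threshold would simply make `threshold` a per-position value; the comparison structure is unchanged.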
20100177932OBJECT DETECTION APPARATUS AND OBJECT DETECTION METHOD - An object detection apparatus includes an image acquisition unit that acquires image data, a reading unit that reads the acquired image data in a predetermined image area at predetermined resolution, an object area detection unit that detects an object area from first image data read by the reading unit, an object discrimination unit that discriminates a predetermined object from the object area detected by the object area detection unit, and a determination unit that determines an image area and resolution used to read second image data which is captured later than the first image data from the object area detected by the object area detection unit, wherein the reading unit reads the second image data from the image area at the resolution determined by the determination unit.07-15-2010
20110110558Apparatus, System, and Method for Automatic Airborne Contaminant Analysis - An apparatus, system, and method are disclosed for locating, classifying, and quantifying airborne contaminants. In one embodiment, the apparatus contains an air sampler, an imaging device, a processing module, and a user interface. The air sampler may contain at least one opening into which ambient air is flowable. The imaging device may produce images of the ambient air within an interior volume of the air sampler. The processing module may receive the images produced by the imaging device and may locate, classify, and quantify specific airborne contaminants, such as mold and pollen spores. Data concerning the airborne contaminants can be output to a user at a user interface.05-12-2011
20110110559Optical Positioning Apparatus And Positioning Method Thereof - An optical positioning apparatus and method are adapted for determining a position of an object in a three-dimensional coordinate system which has a first axis, a second axis and a third axis perpendicular to one another. The optical positioning apparatus includes a host device which has a first optical sensor and a second optical sensor located along the first axis with a first distance therebetween, and a processor connected with the optical sensors, and a calibrating device placed in the sensitivity range of the optical sensors with a second distance between an origin of the second axis and a coordinate of the calibrating device projected in the second axis. The optical sensors sense the calibrating device to make the processor execute a calibrating procedure, and then sense the object to make the processor execute a positioning procedure for determining the position of the object in the three-dimensional coordinate system.05-12-2011
20120170804Method and apparatus for tracking target object - A method and apparatus for tracking a target object are provided. A plurality of images is received, and one of the images is selected as a current image. A specific color of the current image is extracted, and the current image is compared with a template image to search for a target object in the current image. If the target object is not found in the current image, a previous image containing the target object is searched for among the images received before the current image, and the target object is then searched for in the current image according to an object feature of that previous image. The object feature and an object location are updated into a storage unit when the target object is found.07-05-2012
20100239121METHOD AND SYSTEM FOR ASCERTAINING THE POSITION AND ORIENTATION OF A CAMERA RELATIVE TO A REAL OBJECT - The invention relates to a method for ascertaining the position and orientation of a camera relative to a real object.09-23-2010
20120195467Object Information Derived from Object Images - Search terms are derived automatically from images captured by a camera equipped cell phone, PDA, or other image capturing device, submitted to a search engine to obtain information of interest, and at least a portion of the resulting information is transmitted back locally to, or nearby, the device that captured the image.08-02-2012
20120195463IMAGE PROCESSING DEVICE, THREE-DIMENSIONAL IMAGE PRINTING SYSTEM, AND IMAGE PROCESSING METHOD AND PROGRAM - The image processing device includes a three-dimensional image data input unit which enters three-dimensional image data representing a three-dimensional image, a subject extractor which extracts a subject from the three-dimensional image data, a spatial vector calculator which calculates a spatial vector of the subject from a plurality of planar image data having different viewpoints contained in the three-dimensional image data, and a three-dimensional image data recorder which records the spatial vector and the three-dimensional image data in association with each other.08-02-2012
20130216093WALKING ASSISTANCE SYSTEM AND METHOD - An example walking assistance method includes obtaining an image captured by a camera. The image includes distance information indicating distances between the camera and objects captured by the camera. Next, the method determines whether one or more objects appear in the captured image. If so, the method creates a 3D scene model according to the captured image and the distances between the camera and the captured objects. Next, the method determines whether one or more specific objects appear in the created 3D scene model, and determines that one or more obstacles are present when no specific object appears in the captured image. The method then creates an obstacle audio file based on the determined obstacles and outputs the created obstacle audio file through an audio output device, to alert the user that one or more obstacles lie ahead.08-22-2013
20130216095VERIFICATION OBJECT SPECIFYING APPARATUS, VERIFICATION OBJECT SPECIFYING PROGRAM, AND VERIFICATION OBJECT SPECIFYING METHOD - In a verification object specifying apparatus that specifies a verification object for biometric authentication, a biometric information acquisition unit acquires biometric information from a biometric information source part. An abnormality detection unit detects an abnormal portion in the biometric information source part based on the biometric information. A verification object specifying unit determines whether biometric information located in the abnormal portion is to be included in a verification object, and specifies biometric information to be used as the verification object based on the determination result. The verification object specifying apparatus causes a registration unit to register the biometric information as registration information when serving as a registration apparatus, and causes a verification unit to verify the biometric information against registration information when serving as a verification apparatus.08-22-2013
20130216097IMAGE-FEATURE DETECTION - An embodiment is a method for detecting image features, the method including extracting a stripe from a digital image, the stripe including a plurality of blocks; processing the plurality of blocks for localizing one or more keypoints; and detecting one or more image features based on the one or more localized keypoints.08-22-2013
20130216100OBJECT IDENTIFICATION USING SPARSE SPECTRAL COMPONENTS - One or more systems and/or techniques are provided to identify and/or classify objects of interest (e.g., potential granular objects) from a radiographic examination of the object. Image data of the object is transformed using a spectral transformation, such as a Fourier transformation, to generate image data in a spectral domain. Using the image data in the spectral domain, one or more one-dimensional spectral signatures can be generated and features of the signatures can be extracted and compared to features of one or more known objects. If one or more features of the signatures correspond (e.g., within a predetermined tolerance) to the features of a known object to which the feature(s) is compared, the object of interest may be identified and/or classified based upon the correspondence.08-22-2013
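The one-dimensional spectral signatures of 20130216100 can be illustrated with a naive DFT over one row of image data. The bin-by-bin tolerance comparison below is a stand-in for the patent's feature extraction and is an assumption for illustration.

```python
import cmath

def spectral_signature(samples):
    """1-D magnitude spectrum of a row of image data (naive O(n^2) DFT)."""
    n = len(samples)
    return [
        abs(sum(samples[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                for t in range(n)))
        for k in range(n)
    ]

def matches(signature, reference, tolerance):
    """Compare a signature to a known object's signature bin by bin,
    within a predetermined tolerance."""
    return all(abs(a - b) <= tolerance for a, b in zip(signature, reference))
```

A constant row concentrates all energy in bin 0; granular (periodic) texture would instead show peaks at the corresponding spatial frequencies.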
20100239123METHODS AND SYSTEMS FOR PROCESSING OF VIDEO DATA09-23-2010
20100226533METHOD OF IMAGE PROCESSING - The present invention relates to a method of identifying a target object in an image using image processing. It further relates to apparatus and computer software implementing the method. The method includes storing template data representing a template orientation field indicative of an orientation of each of a plurality of features of a template object; receiving image data representing the image; processing the image data to generate an image orientation field indicating an orientation corresponding to the plurality of image features; processing the image orientation field using the template orientation field to generate a match metric indicative of an extent of matching between at least part of the template orientation field and at least part of the image orientation field; and using the match metric to determine whether or not the target object has been identified in the image. Image and/or template confidence data is used to generate the match metric.09-09-2010
20100226532Object Detection Apparatus, Method and Program - An object detection apparatus for detecting an object from an image obtained by taking a front view picture of a road in a traveling direction of a vehicle includes a camera unit for taking the front view picture of the road and inputting the image; a dictionary modeling the object; a search unit for searching the image with a search window; a histogram production unit for producing a histogram by comparing the image in the search window with the dictionary and counting a detection frequency in a direction parallel to the road plane; and a detection unit for detecting the object by detecting a unimodal distribution in the histogram.09-09-2010
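The histogram production and unimodal-distribution test of 20100226532 can be sketched as counting detection positions along the road-parallel axis and checking for a single peak. The bin layout is an illustrative assumption.

```python
def detection_histogram(detections, num_bins, width):
    """Count detection x-positions into bins along the road-parallel axis."""
    hist = [0] * num_bins
    for x in detections:
        idx = min(int(x * num_bins / width), num_bins - 1)
        hist[idx] += 1
    return hist

def is_unimodal(hist):
    """True if the histogram rises to a single peak and then only falls."""
    rising = True
    for prev, cur in zip(hist, hist[1:]):
        if rising:
            if cur < prev:
                rising = False   # past the peak
        elif cur > prev:
            return False         # a second rise means a second mode
    return True
```

A cluster of overlapping search-window hits over a real object yields one peak; scattered false positives tend to produce multiple modes and fail the test.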
20100226538OBJECT DETECTION APPARATUS AND METHOD THEREFOR - An image processing apparatus includes a moving image input unit configured to input a moving image, an object likelihood information storage unit configured to store object likelihood information in association with a corresponding position in an image for each object size in each frame included in the moving image, a determination unit configured to determine a pattern clipping position where a pattern is clipped out based on the object likelihood information stored in the object likelihood information storage unit, and an object detection unit configured to detect an object in an image based on the object likelihood information of the pattern clipped out at the pattern clipping position determined by the determination unit.09-09-2010
20100226534FUSION FOR AUTOMATED TARGET RECOGNITION - A method of predicting a target type in a set of target types from at least one image is provided. At least one image is obtained. A first and second set of confidence values and associated azimuth angles are determined for each target type in the set of target types from the at least one image. The first and second set of confidence values are fused for each of the azimuth angles to produce a fused curve for each target type in the set of target types. When multiple images are obtained, first and second set of possible detections are compiled corresponding to regions of interest in the multiple images. The possible detections are associated by regions of interest. The fused curves are produced for every region of interest. In the embodiments, the target type is predicted from the set of target types based on criteria concerning the fused curve.09-09-2010
20100226537DETECTION AND TRACKING OF INTERVENTIONAL TOOLS - The present invention relates to minimally invasive X-ray guided interventions, in particular to an image processing and rendering system and a method for improving visibility and supporting automatic detection and tracking of interventional tools that are used in electrophysiological procedures. According to the invention, this is accomplished by calculating differences between 2D projected image data of a preoperatively acquired 3D voxel volume showing a specific anatomical region of interest or a pathological abnormality (e.g. an intracranial arterial stenosis, an aneurysm of a cerebral, pulmonary or coronary artery branch, a gastric carcinoma or sarcoma, etc.) in a tissue of a patient's body and intraoperatively recorded 2D fluoroscopic images showing the aforementioned objects in the interior of said patient's body, wherein said 3D voxel volume has been generated in the scope of a computed tomography, magnetic resonance imaging or 3D rotational angiography based image acquisition procedure and said 2D fluoroscopic images have been co-registered with the 2D projected image data. After registration of the projected 3D data with each of said X-ray images, comparison of the 2D projected image data with the 2D fluoroscopic images, based on the resulting difference images, allows removing common patterns and thus enhancing the visibility of interventional instruments which are inserted into a pathological tissue region, a blood vessel segment or any other region of interest in the interior of the patient's body. Automatic image processing methods to detect and track those instruments are also made easier and more robust by this invention. Once the 2D-3D registration is completed for a given view, all the changes in the system geometry of an X-ray system used for generating said fluoroscopic images can be applied to a registration matrix. Hence, use of said method as claimed is not limited to the same X-ray view during the whole procedure.09-09-2010
20100226535AUGMENTING A FIELD OF VIEW IN CONNECTION WITH VISION-TRACKING - The claimed subject matter relates to an architecture that can employ vision-monitoring techniques to enhance an experience associated with elements of a local environment. In particular, the architecture can establish gaze- or eye-tracking attributes in connection with a user. In addition, a location and a head or face-based perspective of the user can also be obtained. By aggregating this information, the architecture can identify a current field of view of the user, and then map that field of view to a modeled view in connection with a geospatial model of the environment. In addition, the architecture can select additional content that relates to an entity in the view or a modeled entity in the modeled view, and further present the additional content to the user.09-09-2010
20110058709VISUAL TARGET TRACKING USING MODEL FITTING AND EXEMPLAR - A method of tracking a target includes receiving an observed depth image of the target from a source and analyzing the observed depth image with a prior-trained collection of known poses to find an exemplar pose that represents an observed pose of the target. The method further includes rasterizing a model of the target into a synthesized depth image having a rasterized pose and adjusting the rasterized pose of the model into a model-fitting pose based, at least in part, on differences between the observed depth image and the synthesized depth image. Either the exemplar pose or the model-fitting pose is then selected to represent the target.03-10-2011
20110058708OBJECT TRACKING APPARATUS AND OBJECT TRACKING METHOD - Candidate contour curves for a tracking object in the current frame are determined using a particle filter, based on the existence probability distribution of the tracking object in the frame one frame previous to the current frame. To match a candidate curve against a contour image of the current frame, the processing to search for the contour closest to the candidate curves is divided per knot constituting the candidate contour curve and executed in parallel by a plurality of processors. The image data for the search region of each knot to be processed are copied from a contour image stored in an image storage to the respective local memories.03-10-2011
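The particle-filter core behind entries such as 20110058708 is a predict-weight-resample cycle; a one-dimensional serial sketch follows. The per-knot parallel contour search is outside this illustration, and the Gaussian motion model and inverse-quadratic likelihood are assumptions.

```python
import random

def particle_filter_step(particles, observation, motion_noise=1.0):
    """One predict-weight-resample cycle for a scalar object position."""
    rng = random.Random(0)  # fixed seed so the sketch is repeatable
    # Predict: diffuse each particle with motion noise.
    predicted = [p + rng.gauss(0.0, motion_noise) for p in particles]
    # Weight: likelihood falls off with distance from the observation.
    weights = [1.0 / (1.0 + (p - observation) ** 2) for p in predicted]
    total = sum(weights)
    weights = [w / total for w in weights]
    # Resample: draw particles proportionally to their weights.
    return rng.choices(predicted, weights=weights, k=len(particles))
```

In a contour tracker the "observation" step would instead score each candidate curve against the frame's contour image; the cycle structure is the same.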
20100239119SYSTEM FOR IRIS DETECTION TRACKING AND RECOGNITION AT A DISTANCE - A stand-off-range, or at-a-distance, iris detection and tracking system for iris recognition, having a head/face/eye locator, a zoom-in iris capture mechanism and an iris recognition module. The system may obtain iris information of a subject with or without his or her knowledge or cooperation. This information may be sufficient for identification of the subject, verification of identity and/or storage in a database.09-23-2010
20100239124IMAGE PROCESSING APPARATUS AND METHOD - It is an object to accurately detect an image of an object from an image created by photographing. A computer 09-23-2010
20100239120IMAGE OBJECT-LOCATION DETECTION METHOD - An image object-location detection method includes dividing a target image into a plurality of image blocks, calculating a plurality of sharpness values respectively corresponding to the plurality of image blocks, and analyzing the plurality of sharpness values to accordingly select image blocks corresponding to object-locations in the target image from the plurality of image blocks.09-23-2010
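A per-block sharpness value, as in 20100239120, is often taken as gradient energy. The sketch below uses the mean squared horizontal difference; the specific sharpness metric is an assumption, since the abstract does not fix one.

```python
def block_sharpness(block):
    """Sharpness score for a 2D grayscale block: mean squared horizontal
    gradient. Focused regions have stronger gradients than blurred ones."""
    total, count = 0, 0
    for row in block:
        for a, b in zip(row, row[1:]):
            total += (b - a) ** 2
            count += 1
    return total / count if count else 0.0

def sharpest_blocks(blocks, top=1):
    """Indices of the 'top' blocks with the highest sharpness values,
    taken as candidate object locations in the target image."""
    scores = [(block_sharpness(b), i) for i, b in enumerate(blocks)]
    return [i for _, i in sorted(scores, reverse=True)[:top]]
```

This presumes the object of interest is the in-focus region; Laplacian variance or another focus measure could be substituted without changing the block-selection logic.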
20120140986PROVIDING IMAGE DATA - Embodiments of the present invention provide a method of providing image data for constructing an image of a region of a target object, comprising providing incident radiation from a radiation source at a target object, detecting, by at least one detector, a portion of radiation scattered by the target object with the incident radiation or an aperture at first and second positions, and providing image data via an iterative process responsive to the detected radiation, wherein in said iterative process image data is provided corresponding to a portion of radiation scattered by the target object and not detected by the detector.06-07-2012
20100135532IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, AND STORAGE MEDIUM FOR STORING PROGRAM - An image processing apparatus comprises an image capture unit configured to capture an image, a characteristic part detector configured to detect a characteristic part of a face from the image captured by the image capture unit, an outline generator configured to generate a pseudo outline of the face based on positions of the characteristic part detected by the characteristic part detector and a correction unit configured to correct the image based on the pseudo outline generated by the outline generator.06-03-2010
20120140987Methods and Systems for Discovering Styles Via Color and Pattern Co-Occurrence - Methods and systems for discovering styles via color and pattern co-occurrence are disclosed. According to one embodiment, a computer-implemented method comprises collecting a set of fashion images, selecting at least one subset within the set of fashion images, the subset comprising at least one image containing a fashion item, and computing a set of segments by segmenting the at least one image into at least one dress segment. Color and pattern representations of the set of segments are computed by using a color analysis method and a pattern analysis method respectively. A graph is created wherein each graph node corresponds to one of a color representation or a pattern representation computed for the set of segments. Weights of edges between nodes of the graph indicate a degree of how the corresponding colors or patterns complement each other in a fashion sense.06-07-2012
20120140981System and Method for Combining Visible and Hyperspectral Imaging with Pattern Recognition Techniques for Improved Detection of Threats - Systems and methods for detecting unknown samples, wherein pattern recognition algorithms are applied to a visible image of a first target area comprising a first unknown sample to thereby generate a first set of target data. If comparison of the first set of target data to reference data results in a match, the first unknown sample is identified, and a hyperspectral image of a second target area comprising a second unknown sample is obtained to generate a second set of test data. If comparison of the second set of test data to reference data results in a match, the second unknown sample is identified as a known material. Identification of an unknown through hyperspectral imaging can also trigger the visible camera to obtain an image. In addition, the visible and hyperspectral cameras can be run continuously to simultaneously obtain visible and hyperspectral images.06-07-2012
20100119111TIME EXPANSION FOR DISPLAYING PATH INFORMATION - Embodiments of the present invention provide systems and methods for displaying sequential information representing a path. The sequential information can include a number of tokens representing a path. A representation of the tokens and path of the sequential information can be displayed. An instruction to adjust the representation of the path of the sequential information can be received. For example, the instruction can comprise a user instruction, including but not limited to a user manipulation of a slider control of a user interface through which the representation of the sequence is displayed. The displayed representation of the path of the sequential information can be updated based on and corresponding to the instruction. So, for example, the user can click and drag or otherwise manipulate the slider control, and the displayed representation of the path can be expanded and/or contracted based on the user's movement of the slider control.05-13-2010
20100054535Video Object Classification - Techniques for classifying one or more objects in at least one video, wherein the at least one video comprises a plurality of frames are provided. One or more objects in the plurality of frames are tracked. A level of deformation is computed for each of the one or more tracked objects in accordance with at least one change in a plurality of histograms of oriented gradients for a corresponding tracked object. Each of the one or more tracked objects is classified in accordance with the computed level of deformation.03-04-2010
20090268943COMPOSITION DETERMINATION DEVICE, COMPOSITION DETERMINATION METHOD, AND PROGRAM - A composition determination device includes: a subject detection unit configured to detect a subject in an image based on acquired image data; an actual subject size detection unit configured to detect the actual size which can be viewed as being equivalent to actual measurements, for each subject detected by the subject detection unit; a subject distinguishing unit configured to distinguish relevant subjects from subjects detected by the subject detection unit, based on determination regarding whether or not the actual size detected by the actual subject size detection unit is an appropriate value corresponding to a relevant subject; and a composition determination unit configured to determine a composition with only relevant subjects, distinguished by the subject distinguishing unit, as objects.10-29-2009
20090041301FRAME OF REFERENCE REGISTRATION SYSTEM AND METHOD - A system for assisting in work carried out on a workpiece and having a frame of reference. The system includes a referencing arrangement to register the position of a first location in the frame of reference of the system; a tool holder for holding a tool to assist with the work; a data interface to receive image data relating to the workpiece; and a processing arrangement to register the image data within the frame of reference of the system. The position of the tool holder is known within the frame of reference of the system. The image data represents an image which is indexed by position relative to the first location. The processing arrangement utilizes the relative position of the image represented by the image data with respect to the first location and the position of the first location in the frame of reference of the system.02-12-2009
20090041298IMAGE CAPTURE SYSTEM AND METHOD - Video capture systems, methods and computer program products can be provided and configured to capture video sequences of one or more participants during an activity. The video capture system can be configured to include one or more video capture devices positioned at predetermined locations in an activity area; a tracking device configured to track a location of the participant during the activity; a content storage device communicatively coupled to the video capture devices and configured to store video content received from the video capture devices; and a content assembly device communicatively coupled to the content storage device and to the tracking device, and configured to use tracking information from the tracking device to retrieve video sequences of the participant from the tracking device and to assemble the retrieved video sequences into a composite participant video.02-12-2009
20110235858Grouping Digital Media Items Based on Shared Features - Methods, apparatuses, and systems for grouping digital media items based on shared features. Multiple digital images are received. Metadata about the digital images is obtained either by analyzing the digital images or by receiving metadata from a source separate from the digital images or both. The obtained metadata is analyzed by data processing apparatus to identify a common feature among two or more of the digital images. A grouping of the two or more images is formed by the data processing apparatus based on the identified common feature.09-29-2011
20090252373Method and System for detecting polygon Boundaries of structures in images as particle tracks through fields of corners and pixel gradients - A stochastic method and system for detecting polygon structures in images, by detecting a set of best matching corners of predetermined acuteness α of a polygon model from a set of similarity scores based on GDM features of corners, and tracking polygon boundaries as particle tracks using a sequential Monte Carlo approach. The tracking involves initializing polygon boundary tracking by selecting pairs of corners from the set of best matching corners to define a first side of a corresponding polygon boundary; tracking all intermediate sides of the polygon boundaries using a particle filter, and terminating polygon boundary tracking by determining the last side of the tracked polygon boundaries to close the polygon boundaries. The particle tracks are then blended to determine polygon matches, which may be made available, such as to a user, for ranking and inspection.10-08-2009
20090074246METHOD AND SYSTEM FOR THE AUTOMATIC DETECTION OF EVENTS IN SPORT FIELDS - The present invention addresses the problem of the automatic detection of events in sport fields, in particular Goal/NoGoal events, by signalling them to the match management, which can autonomously take the final decision upon the event. The system is not invasive for the field structures, nor does it require interrupting the game or modifying its rules; it aims only at objectively detecting the event occurrence and at supporting the referees' decisions by means of specific signalling of the detected events.03-19-2009
20090074247Obstacle detection method - A method is provided for detecting an obstacle in a road, in particular a pedestrian, within the range of view of an optical sensor attached to a movable carrier such as, in particular, a vehicle. A first image is taken by means of the optical sensor at a first time and a second image at a later second time; a first transformed image is produced by transforming the first taken image from the image plane of the optical sensor into the road plane; a further transformed image is produced from the first transformed image while taking account of the carrier movement in the time period between the first and second times; the further transformed image is transformed back from the road plane into the image plane; and an image stabilization is carried out based on the image transformed back into the image plane and on the second taken image.03-19-2009
20090074245Miniature autonomous agents for scene interpretation - A miniature autonomous apparatus for performing scene interpretation, comprising: image acquisition means, image processing means, memory means and communication means, the processing means comprising means for determining an initial parametric representation of the scene; means for updating the parametric representation according to predefined criteria; means for analyzing the image, comprising means for determining, for each pixel of the image, whether it is a hot pixel, according to predefined criteria; means for defining at least one target from the hot pixels; means for measuring predefined parameters for at least one target; and means for determining, for at least one target whether said target is of interest, according to application-specific criteria, and wherein said communication means are adapted to output the results of said analysis.03-19-2009
20090074244Wide luminance range colorimetrically accurate profile generation method - Generating a color profile for a digital input device. Color values for at least one color target positioned within a first scene are measured, the color target having multiple color patches. An image of the first scene is generated using the digital input device, the first scene including the color target(s). Color values from a portion of the image corresponding to the color target are extracted and a color profile is generated, based on the measured color values and the extracted color values. The generated color profile is used to transform the color values of an image of a second scene captured under the same lighting conditions as the first scene. Using this generated color profile to transform images is likely to result in more colorimetrically accurate transformations of images created under real-world lighting conditions.03-19-2009
20090324016MOVING TARGET DETECTING APPARATUS, MOVING TARGET DETECTING METHOD, AND COMPUTER READABLE STORAGE MEDIUM HAVING STORED THEREIN A PROGRAM CAUSING A COMPUTER TO FUNCTION AS THE MOVING TARGET DETECTING APPARATUS - To extract a target pixel that shows a moving target in an image containing a complicated background. An image storing section 12-31-2009
20090324010Neural network-controlled automatic tracking and recognizing system and method - A neural network-controlled automatic tracking and recognizing system includes a fixed field of view collection module, a full-functions variable field of view collection module, a video image recognition algorithm module, a neural network control module, a suspect object track-tracking module, a database comparison and alarm judgment module, a monitored characteristic recording and rule setting module, a light monitoring and control module, a backlight module, an alarm output/display/storage module, and security monitoring sensors. The invention also relates to the operating method of the system.12-31-2009
20120195466IMAGE-BASED SURFACE TRACKING - A method of image-tracking by using an image capturing device (08-02-2012
20110026767IMAGE PROCESSING APPARATUS AND IMAGE PROCESSING METHOD - An apparatus stores a luminance signal and a color signal extracted from a tracking area in image data and determines a correlation with the stored luminance signal, thereby extracting an area where a specified object exists in another image data to update the tracking area using the position information of the extracted area. If a sufficient correlation cannot be obtained from the luminance signal, the apparatus makes a comparison with the stored color signal to determine whether the specified object is lost. The apparatus updates the luminance signal every time the tracking area is updated, but does not update the color signal even if the tracking area is updated or updates the color signal at a period longer than a period at which the luminance signal is updated.02-03-2011
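The dual-rate update policy in this abstract (luminance refreshed on every tracking-area update, color refreshed rarely so it stays stable enough to decide whether the object is lost) can be sketched in a few lines. The class name, values, and COLOR_PERIOD below are illustrative assumptions, not taken from the patent.

```python
COLOR_PERIOD = 10  # assumed refresh period for the color signal

class DualRateTemplate:
    def __init__(self, luma, color):
        self.luma = luma      # luminance signal, updated every frame
        self.color = color    # color signal, updated at a longer period
        self._updates = 0

    def update(self, new_luma, new_color):
        # luminance follows the tracking area on every update
        self.luma = new_luma
        self._updates += 1
        # color only refreshes once per COLOR_PERIOD updates
        if self._updates % COLOR_PERIOD == 0:
            self.color = new_color

t = DualRateTemplate(luma=0.5, color=(120, 80))
for i in range(10):
    t.update(new_luma=0.5 + i * 0.01, new_color=(i, i))
```

Because the color signal lags the luminance signal, it can serve as a stable fallback reference when luminance correlation fails.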
20110026766MOVING IMAGE EXTRACTING APPARATUS, PROGRAM AND MOVING IMAGE EXTRACTING METHOD - There is provided a moving image extracting apparatus including a movement detecting unit which detects movement of an imaging apparatus at the time when imaging a moving image based on the moving image imaged by the imaging apparatus, an object detecting unit which detects an object from the moving image, a salient object selecting unit which selects an object detected by the object detecting unit over a period of predetermined length or longer as a salient object within a segment in which movement of the imaging apparatus is detected by the movement detecting unit, and an extracting unit which extracts a segment including the salient object selected by the salient object selecting unit from the moving image.02-03-2011
20110026765SYSTEMS AND METHODS FOR HAND GESTURE CONTROL OF AN ELECTRONIC DEVICE - Systems and methods of generating device commands based upon hand gesture commands are disclosed. An exemplary embodiment generates image information from a series of captured images, generates commands based upon hand gestures made by a user that emulate device commands generated by a remote control device, identifies a hand gesture made by the user from the received image information, determines a hand gesture command based upon the identified hand gesture, compares the determined hand gesture command with the plurality of predefined hand gesture commands to identify a corresponding matching hand gesture command from the plurality of predefined hand gesture commands, generates an emulated remote control device command based upon the identified matching hand gesture command, and controls the media device based upon the generated emulated remote control device command.02-03-2011
20090324018Efficient And Accurate 3D Object Tracking - A method of tracking an object in an input image stream, the method comprising iteratively applying the steps of: (a) rendering a three-dimensional object model according to a previously predicted state vector from a previous tracking loop or the state vector from an initialisation step; (b) extracting a series of point features from the rendered object; (c) localising corresponding point features in the input image stream; (d) deriving a new state vector from the point feature locations in the input image stream.12-31-2009
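Steps (a)-(d) form an analysis-by-synthesis loop: render the model at the predicted state, extract features, localise them in the image, and correct the state. A toy runnable skeleton follows, with the state vector reduced to a single 1-D translation; the render and localise stubs are assumptions standing in for the real renderer and feature matcher.

```python
TRUE_OFFSET = 7.0  # where the object actually is (simulation only)

def render(state):                 # (a) render model at the predicted state
    return [state + p for p in (0.0, 1.0, 2.0)]

def extract_features(rendered):    # (b) take point features off the render
    return rendered

def localise(features):            # (c) find the same points in the image
    return [TRUE_OFFSET + p for p in (0.0, 1.0, 2.0)]

def derive_state(features, located, state):  # (d) update the state vector
    residual = sum(l - f for f, l in zip(features, located)) / len(features)
    return state + 0.5 * residual  # damped correction toward the match

state = 0.0
for _ in range(20):                # the steps are applied iteratively
    feats = extract_features(render(state))
    state = derive_state(feats, localise(feats), state)
```

With a 0.5 damping factor the error halves each loop, so the state converges geometrically toward the true offset.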
20090324017CAPTURING AND PROCESSING FACIAL MOTION DATA - Capturing and processing facial motion data includes: coupling a plurality of sensors to target points on a facial surface of an actor; capturing frame by frame images of the plurality of sensors disposed on the facial surface of the actor using at least one motion capture camera disposed on a head-mounted system; performing, in the head-mounted system, a tracking function on the frame by frame images of the plurality of sensors to accurately map the plurality of sensors for each frame; and generating, in the head-mounted system, a modeled surface representing the facial surface of the actor.12-31-2009
20090324013Image processing apparatus and image processing method - An image processing apparatus, a feature point tracking method and a feature point tracking program, which enable efficient feature point tracking by taking the easiness of convergence of a displacement amount according to the image pattern into account in a hierarchical gradient method, are provided. A displacement calculating unit reads a hierarchical tier image with the smallest image size from each of a reference pyramid py12-31-2009
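The coarse-to-fine scheme described here reads the tier with the smallest image size first and propagates the displacement down the pyramid. A 1-D sketch follows; the pair-averaging pyramid, SAD cost, and small exhaustive search are illustrative stand-ins for the patented gradient step.

```python
def downsample(sig):
    return [(sig[i] + sig[i + 1]) / 2 for i in range(0, len(sig) - 1, 2)]

def build_pyramid(sig, tiers):
    pyr = [sig]
    for _ in range(tiers - 1):
        pyr.append(downsample(pyr[-1]))
    return pyr                      # pyr[-1] is the smallest tier

def shift(sig, d):
    # shifted copy of sig, padded with edge values
    return [sig[max(0, min(len(sig) - 1, i - d))] for i in range(len(sig))]

def sad(a, b):                      # sum of absolute differences
    return sum(abs(x - y) for x, y in zip(a, b))

def refine(ref, cur, guess, radius=1):
    # small local search around the displacement propagated from above
    return min(range(guess - radius, guess + radius + 1),
               key=lambda d: sad(shift(ref, d), cur))

TRUE_D = 6
ref = [max(0.0, 8.0 - abs(i - 24)) for i in range(64)]  # tent-shaped signal
cur = shift(ref, TRUE_D)

ref_pyr, cur_pyr = build_pyramid(ref, 3), build_pyramid(cur, 3)
d = 0
for tier in range(2, -1, -1):       # smallest image size first
    d = refine(ref_pyr[tier], cur_pyr[tier], d)
    if tier:
        d *= 2                      # scale displacement to the finer tier
```

Each tier only needs a tiny search window because the coarser tier already supplies a good initial guess; this is what makes hierarchical tracking efficient.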
20090324015EMITTER TRACKING SYSTEM - An improved emitter tracking system. In aspects of the present teachings, the presence of a desired emitter may be established by a relatively low-power emitter detection module, before images of the emitter and/or its surroundings are captured with a relatively high-power imaging module. Capturing images of the emitter may be synchronized with flashes of the emitter, to increase the signal-to-noise ratio of the captured images.12-31-2009
20090324014RETRIEVING SCENES FROM MOVING IMAGE DATA - A computer system, method and computer program that retrieves, from at least one piece of moving image data, at least one scene that includes moving image content to be retrieved. The computer system includes a storage unit that stores a locus of a model of the moving image to be retrieved and velocity variation of the model; a first calculation unit that calculates a first vector including the locus and the velocity variation of the model; a second calculation unit that calculates a second vector regarding the moving image content to be retrieved included in the at least one piece of moving image data; a third calculation unit that calculates a degree of similarity between the first and second vectors; and a selection unit that selects, at least one scene which includes the moving image content to be retrieved, on the basis of the degree of similarity.12-31-2009
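The degree-of-similarity step between the first and second vectors is commonly a cosine similarity; the sketch below assumes that measure and a made-up vector layout (locus samples followed by a velocity-variation term), neither of which is specified by the abstract.

```python
import math

def cosine_similarity(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

# model vector: locus samples followed by velocity variation (assumed layout)
model = [0.0, 1.0, 2.0, 3.0, 0.5]
scenes = {
    "scene_a": [0.1, 1.1, 2.0, 2.9, 0.5],   # similar motion
    "scene_b": [3.0, 2.0, 1.0, 0.0, 2.0],   # opposite motion
}
# the selection unit keeps scenes whose similarity clears a threshold
selected = [name for name, vec in scenes.items()
            if cosine_similarity(model, vec) > 0.95]
```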
20090324009Method and system for the determination of object positions in a volume - A method or a system embodiment determines positional information about a moveable object to which is affixed a pattern of stripes having reference lines. A method determines image lines of stripe images of each stripe within at least two video frames, uses the image lines to prescribe planes having lines of intersection, and determines a transformation mapping reference lines to lines of intersection. Position information about the object may be derived from the transformation. A system embodiment comprises a pattern of stripes in a known fixed relationship to an object, reference lines characterizing the stripes, two or more cameras at known locations, a digital computer adapted to receive video frames from the pixel arrays of the cameras, and a program stored in the computer's memory. The program performs some or all of the method. When there are two or more moveable objects, an embodiment may further determine the position information about a first object to be transformed to a local coordinate system fixed with respect to a second object.12-31-2009
20100220891AUGMENTED REALITY METHOD AND DEVICES USING A REAL TIME AUTOMATIC TRACKING OF MARKER-FREE TEXTURED PLANAR GEOMETRICAL OBJECTS IN A VIDEO STREAM - The invention relates to a method and to devices for the real-time tracking of one or more substantially planar geometrical objects of a real scene in at least two images of a video stream for an augmented-reality application. After receiving a first image of the video stream (09-02-2010
20090141939Systems and Methods for Analysis of Video Content, Event Notification, and Video Content Provision - A method for remote event notification over a data network is disclosed. The method includes receiving video data from any source, analyzing the video data with reference to a profile to select a segment of interest associated with an event of significance, encoding the segment of interest, and sending to a user a representation of the segment of interest for display at a user display device. A further method for sharing video data based on content according to a user-defined profile over a data network is disclosed. The method includes receiving the video data, analyzing the video data for relevant content according to the profile, consulting a profile to determine a treatment of the relevant content, and sending data representative of the relevant content according to the treatment.06-04-2009
20090147994TORO: TRACKING AND OBSERVING ROBOT - The present invention provides a method for tracking entities, such as people, in an environment over long time periods. A region-based model is generated to model beliefs about entity locations. Each region corresponds to a discrete area representing a location where an entity is likely to be found. Each region includes one or more positions which more precisely specify the location of an entity within the region so that the region defines a probability distribution of the entity residing at different positions within the region. A region-based particle filtering method is applied to entities within the regions so that the probability distribution of each region is updated to indicate the likelihood of the entity residing in a particular region as the entity moves.06-11-2009
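The region-based belief update can be sketched as a particle filter over discrete regions: each particle carries a region hypothesis, observations reweight the particles, and resampling concentrates belief where the entity is likely to be. Region names, the likelihood model, and the particle count are assumptions for illustration.

```python
import random

random.seed(0)

REGIONS = ["desk", "printer", "kitchen"]
# each particle is a region index; start with a uniform belief
particles = [random.randrange(len(REGIONS)) for _ in range(300)]

def likelihood(region, observed_region):
    # assumed sensor model: observations usually agree with the true region
    return 0.8 if region == observed_region else 0.1

def update(particles, observed_region):
    weights = [likelihood(r, observed_region) for r in particles]
    total = sum(weights)
    # importance resampling concentrates particles in likely regions
    return random.choices(particles, [w / total for w in weights],
                          k=len(particles))

for _ in range(5):                  # entity repeatedly seen at the printer
    particles = update(particles, observed_region=1)

belief = particles.count(1) / len(particles)
```

After a few consistent observations the belief mass collapses onto the observed region, which is how the probability distribution over regions tracks a moving entity.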
200901360893D inspection of an object using x-rays - A method is presented for a 3D inspection of an object or bag in order to check for explosives or contraband. The method is applicable to Computed Tomography, Laminography or any other method that can be used to produce images of slices through the object. According to this method, it is not necessary to reconstruct the slice image with a high resolution as is required for visual display, but it is sufficient to reconstruct the image at only a sample or a set of points or pixels that are sparsely distributed within the reconstructed slice. The properties of the object are then analyzed only at these sparsely distributed pixels within the slice to make a determination for the presence or absence of explosives or contraband. This process of image reconstruction and analysis is repeated over several slices spaced through the volume of the object. In another embodiment of this invention, the set of points or pixels at which the image is reconstructed are offset spatially with respect to the set of pixels in the adjacent or neighboring slice. This invention greatly reduces the computational burden, hence simplifies the hardware and software design, speeds up the scanning process and allows for a more complete and uniform inspection of the entire volume of the object.05-28-2009
20090110241IMAGE PROCESSING APPARATUS AND METHOD FOR OBTAINING POSITION AND ORIENTATION OF IMAGING APPARATUS - An image processing apparatus obtains location information of each image feature in a captured image based on image coordinates of the image feature in the captured image. The image processing apparatus selects location information usable to calculate a position and an orientation of the imaging apparatus among the obtained location information. The image processing apparatus obtains the position and the orientation of the imaging apparatus based on the selected location information and an image feature corresponding to the selected location information among the image features included in the captured image.04-30-2009
20090110235SYSTEM AND METHOD FOR SELECTION OF AN OBJECT OF INTEREST DURING PHYSICAL BROWSING BY FINGER FRAMING - A system and method for selecting an object from a plurality of objects in a physical environment are disclosed. The method may include framing an object located in a physical environment by positioning an aperture at a selected distance from a user's eye, the position of the aperture being selected such that the aperture substantially encompasses the object as viewed from the user's perspective, detecting the aperture by analyzing image data including the aperture and the physical environment, and selecting the object substantially encompassed by the detected aperture. The method may further include identifying the selected object based on its geolocation, collecting and merging data about the identified object from a plurality of data sources, and displaying the collected and merged data.04-30-2009
20110235857DEVICE AND METHOD FOR CONTROLLING STREETLIGHTS - A method for controlling streetlights located at a streetlight control area using a streetlight power control system controls an image capturing device to capture digital images of at least one route section of the streetlight control area at a predetermined interval. Light of a streetlight corresponding to the streetlight power controller is automatically adjusted by turning on or off the streetlight and by increasing or decreasing the intensity of the streetlight.09-29-2011
20130129141Methods and Apparatus for Facial Feature Replacement - Three dimensional models corresponding to a target image and a reference image are selected based on a set of feature points defining facial features in the target image and the reference image. The set of feature points defining the facial features in the target image and the reference image are associated with corresponding 3-dimensional models. A 3D motion flow between the 3-dimensional models is computed. The 3D motion flow is projected onto a 2D image plane to create a 2D optical flow field. The target image and the reference image are warped using the 2D optical flow field. A selected feature from the reference image is copied to the target image.05-23-2013
20130129142AUTOMATIC TAG GENERATION BASED ON IMAGE CONTENT - Automatic extraction of data from and tagging of a photo (or video) having an image of identifiable objects is provided. A combination of image recognition and extracted metadata, including geographical and date/time information, is used to find and recognize objects in a photo or video. Upon finding a matching identifier for a recognized object, the photo or video is automatically tagged with one or more keywords associated with and corresponding to the recognized objects.05-23-2013
20090110240METHOD FOR DETECTING A MOVING OBJECT IN AN IMAGE STREAM - The invention relates to a method for detecting a moving object in a stream of images taken at successive instants, of the type comprising, for each zone of a predefined set of zones of at least one pixel of the image constituting a current image, a step (04-30-2009
20090110239System and method for revealing occluded objects in an image dataset - Disclosed are a system and method for identifying objects in an image dataset that occlude other objects and for transforming the image dataset to reveal the occluded objects. In some cases, occluding objects are identified by processing the image dataset to determine the relative positions of visual objects. Occluded objects are then revealed by removing the occluding objects from the image dataset or by otherwise de-emphasizing the occluding objects so that the occluded objects are seen behind it. A visual object may be removed simply because it occludes another object, because of privacy concerns, or because it is transient. When an object is removed or de-emphasized, the objects that were behind it may need to be “cleaned up” so that they show up well. To do this, information from multiple images can be processed using interpolation techniques. The image dataset can be further transformed by adding objects to the images.04-30-2009
20090110236Method And System For Object Detection And Tracking - Disclosed is a method and system for object detection and tracking. Spatio-temporal information for a foreground/background appearance module is updated, based on a new input image and the accumulated previous appearance information and foreground/background information module labeling information over time. Object detection is performed according to the new input image and the updated spatio-temporal information and transmitted previous information over time, based on the labeling result generated by the object detection. The information for the foreground/background appearance module is repeatedly updated until a convergence condition is reached. The produced labeling result from object detection is considered as a new tracking measurement for further updating on a tracking prediction module. A final tracking result may be obtained through the updated tracking prediction module, which is determined by the current tracking measurement and the previous observed tracking results. The tracking object location at the next time is predicted. The returned predicted appearance information for the foreground/background object is used as the input for updating the foreground and background appearance module. The returned labeling information is used as the information over time for the object detection.04-30-2009
20110150274METHODS FOR AUTOMATIC SEGMENTATION AND TEMPORAL TRACKING - In one embodiment, a method of detecting a centerline of a vessel is provided. The method comprises steps of acquiring a 3D image volume, initializing a centerline, initializing a Kalman filter, predicting a next center point using the Kalman filter, checking validity of the prediction made using the Kalman filter, performing template matching, updating the Kalman filter based on the template matching and repeating the steps of predicting, checking, performing and updating for a predetermined number of times. Methods of automatic vessel segmentation and temporal tracking of the segmented vessel are further described with reference to the method of detecting centerline.06-23-2011
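The predict / validate / template-match / update cycle can be sketched with a scalar constant-velocity Kalman filter, where the template-match result plays the role of the measurement. The noise levels, gating threshold, and toy drift below are assumed values, not the patented configuration.

```python
x, v = 0.0, 0.0          # state: centre position and per-slice velocity
p = 1.0                  # scalar position uncertainty
Q, R = 0.01, 0.25        # assumed process / measurement noise

def predict(x, v, p):
    return x + v, p + Q                  # predicted next centre point

def update(x_pred, p_pred, z, x_prev):
    k = p_pred / (p_pred + R)            # Kalman gain
    x_new = x_pred + k * (z - x_pred)    # blend prediction and match result
    return x_new, x_new - x_prev, (1.0 - k) * p_pred

true_centres = [float(i) for i in range(1, 21)]  # vessel drifts 1 px/slice
for z in true_centres:       # z plays the role of the template-match result
    x_prev = x
    x_pred, p_pred = predict(x, v, p)
    if abs(z - x_pred) < 5.0:            # validity check (gating)
        x, v, p = update(x_pred, p_pred, z, x_prev)
```

The gating step rejects template matches that land implausibly far from the prediction, which keeps a single bad match from derailing the centerline.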
20110150277IMAGE PROCESSING APPARATUS AND CONTROL METHOD THEREOF - In an image included in a moving image, a specific area is registered as a reference area, and a specific hue range of the reference area is set as a first feature amount based on the distribution of hues of pixels in the reference area. When the occupation ratio of pixels having hues included in a second feature amount, obtained by expanding the hue range of the first feature amount in a surrounding area larger than the reference area, is smaller than a predetermined ratio, an area having a high degree of correlation is identified from an image using the second feature amount in the subsequent matching process. When the occupation ratio is equal to or larger than the predetermined ratio, an area having a high degree of correlation is identified from an image using the first feature amount in the subsequent matching process.06-23-2011
20110129117SYSTEM AND METHOD FOR IDENTIFYING PRODUCE - An apparatus, method and system are presented for identifying produce. Multiple images of a produce item are captured using five different types of illumination. The captured images are processed to determine parameters of the produce item and those parameters are compared to parameters of known produce to identify the produce item.06-02-2011
20120243738IMAGE PROCESSING DEVICE AND IMAGE PROCESSING METHOD - An image processing device comprises: a tracking area setting unit that sets a tracking area in an input moving image obtained by photographing an object; a following feature point setting unit that detects a feature point that exhibits a motion in correlation with the motion of the tracking area and sets the detected feature point as a following feature point; a motion detection unit that detects movement over time of the following feature point within the input image; and a clip area setting unit that sets a clip area of an image to be employed when a partial image including the tracking area is clipped out of the input image for either recording or displaying or both recording and displaying, and that sets a size and a position of the clip area on the basis of a motion detection result obtained by the motion detection unit.09-27-2012
20120243734Determining Detection Certainty In A Cascade Classifier - Disclosed are embodiments for determining detection certainty in a cascade classifier (09-27-2012
20120243736ADJUSTING PRINT FORMAT IN ELECTRONIC DEVICE - A print format adjustment system includes a receiving module, a visual condition determination module, a print format determination module, and a print control module. The receiving module receives content for printing in a first print format. The visual condition determination module establishes the sharpness of vision of a viewer in front of a display, at a predetermined view distance. The print format determination module determines a second print format based on both the first print format and the visual condition of the viewer. The print control module prints the content in the second print format.09-27-2012
20120243731IMAGE PROCESSING METHOD AND IMAGE PROCESSING APPARATUS FOR DETECTING AN OBJECT - An image processing method and an image processing apparatus for detecting an object are provided. The image processing method includes the following steps: partitioning an image into at least a first sub-image covering a first zone and a second sub-image covering a second zone according to a designed trait; and performing an image detection process upon the first sub-image for checking whether the object is within the first zone to generate a first detecting result. The object is a human face, and the image detection process is a face detection process.09-27-2012
20100296698MOTION OBJECT DETECTION METHOD USING ADAPTIVE BACKGROUND MODEL AND COMPUTER-READABLE STORAGE MEDIUM - A motion object detection method using an adaptive background model and a computer-readable storage medium are provided. In the motion object detection method, a background model establishing step is firstly performed to establish a background model to provide a plurality of background brightness reference values. Then, a foreground object detecting step is performed to use the background model to detect foreground objects. In the background model establishing step, a plurality of brightness weight values are firstly provided in accordance with the brightness of background pixels, wherein each of the brightness weight values is determined in accordance with the respective background pixel. Thereafter, the background brightness reference values are calculated based on the brightness of the background pixels and the brightness weight values. In addition, a computer can perform the motion object detection method after reading the computer-readable storage medium.11-25-2010
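The two steps above (brightness-dependent weights, then weighted background references used for thresholded foreground detection) might look like this on a 1-D pixel row; the weight function and threshold are invented for illustration.

```python
def brightness_weight(b):
    # assumed brightness-dependent weight: brighter pixels count more
    return 1.5 if b > 128 else 1.0

def build_background(frames):
    refs = []
    for i in range(len(frames[0])):
        ws = [brightness_weight(f[i]) for f in frames]
        # weighted average brightness per pixel = background reference
        refs.append(sum(w * f[i] for w, f in zip(ws, frames)) / sum(ws))
    return refs

def detect_foreground(frame, refs, threshold=30):
    # a pixel far from its background reference is flagged as foreground
    return [abs(p - r) > threshold for p, r in zip(frame, refs)]

background_frames = [[100, 200, 50], [102, 198, 52], [98, 202, 48]]
refs = build_background(background_frames)
mask = detect_foreground([101, 90, 50], refs)
```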
20100296701PERSON TRACKING METHOD, PERSON TRACKING APPARATUS, AND PERSON TRACKING PROGRAM STORAGE MEDIUM - A person tracking method capable of tracking movements of a person captured by a camera through lighter processing in comparison with tracking processing that employs a Kalman filter or the like is provided. The method includes: detecting a head on each frame image; calculating a feature quantity that features a person whose head is detected on the frame images; calculating a relevance ratio that represents a degree of agreement between a feature quantity on a past frame image and a feature quantity on a current frame image, which belong to each person whose head is detected on the current frame image; and determining that a head, which is a basis for calculation of a relevance ratio that represents a degree of agreement at or above a first threshold as well as being a maximum degree of agreement, is a head of the same person as the person having the head.11-25-2010
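The threshold-plus-maximum matching rule can be sketched as follows: a current head inherits the identity of the past person whose feature quantity agrees best, provided the agreement clears the first threshold. The feature quantities and the agreement measure (1 / (1 + L1 distance)) are illustrative assumptions.

```python
FIRST_THRESHOLD = 0.7              # assumed value of the first threshold

def agreement(past, current):
    # toy degree of agreement: 1 / (1 + L1 distance between features)
    return 1.0 / (1.0 + sum(abs(a - b) for a, b in zip(past, current)))

def match_identity(past_people, current_feature):
    best_id, best_score = None, 0.0
    for pid, past_feature in past_people.items():
        score = agreement(past_feature, current_feature)
        if score > best_score:     # keep the maximum degree of agreement
            best_id, best_score = pid, score
    # only accept the best match if it clears the first threshold
    return best_id if best_score >= FIRST_THRESHOLD else None

past = {"alice": [0.2, 0.8], "bob": [0.9, 0.1]}
same = match_identity(past, [0.25, 0.75])    # close to alice's features
new = match_identity(past, [0.5, 0.5])       # too far from everyone
```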
20100296702PERSON TRACKING METHOD, PERSON TRACKING APPARATUS, AND PERSON TRACKING PROGRAM STORAGE MEDIUM - A person tracking method capable of obtaining information representing a correspondence between a shot image and a three-dimensional real space, without actual measurement, thereby enabling lighter processing is provided. The method includes: calculating a statistically average correspondence between a size of a person's head and a position representing a height of the head on the shot image, the camera looking down on a measured space and imaging the measured space; detecting a position and a size of a head on each of measured frame images; calculating, based on positions and sizes of heads on plural past measured frame images and the correspondence, a movement feature quantity representing a possibility that a head on a current measured frame image is of the same person on the past measured frame images; and determining that the head on the current measured frame image is of the same person on the past measured frame images.11-25-2010
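The statistically average correspondence between head height and head size can be approximated by a least-squares line fitted to past detections, which then predicts the expected head size at any image row without measuring the real space. The sample numbers below are made up.

```python
def fit_line(ys, ss):
    # ordinary least squares for s = a * y + b
    n = len(ys)
    my, ms = sum(ys) / n, sum(ss) / n
    a = sum((y - my) * (s - ms) for y, s in zip(ys, ss)) / \
        sum((y - my) ** 2 for y in ys)
    return a, ms - a * my           # slope, intercept

ys = [100, 200, 300, 400]           # head position (image row)
ss = [12, 22, 33, 42]               # detected head size in pixels
a, b = fit_line(ys, ss)
expected_size = a * 250 + b         # expected head size at row 250
```

A detected head whose size roughly matches the expectation at its row is plausible, which is one ingredient of the movement feature quantity.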
20100296703METHOD AND DEVICE FOR DETECTING AND CLASSIFYING MOVING TARGETS - Horizontal velocity profile sensing techniques, methods and systems may be used to detect and classify moving targets, including but not limited to a person, an animal, or a vehicle, or any other object that lends itself to characterization. Such techniques, methods and systems may be implemented with an autonomous stand-alone device, for example, as an unattended ground sensor, or it may constitute part of a sensor system. An exemplary illustrative non-limiting implementation allows the device to be fixed to a location, while detecting and classifying moving targets. In another exemplary illustrative non-limiting implementation, the device may be placed on a moving or rotating platform and used to detect stationary objects.11-25-2010
20100296697OBJECT TRACKER AND OBJECT TRACKING METHOD - Referring to FIG. 11-25-2010
20100303298SELECTIVE SOUND SOURCE LISTENING IN CONJUNCTION WITH COMPUTER INTERACTIVE PROCESSING - A method and apparatus for capturing image and sound during interactivity with a computer program is provided. The apparatus includes an image capture unit that is configured to capture one or more image frames. Also provided is a sound capture unit. The sound capture unit is configured to identify one or more sound sources. The sound capture unit generates data capable of being analyzed to determine a zone of focus at which to process sound to the substantial exclusion of sounds outside of the zone of focus. In this manner, sound that is captured and processed for the zone of focus is used for interactivity with the computer program.12-02-2010
20100303293System and Method for Linking Real-World Objects and Object Representations by Pointing - A system and method are described for selecting and identifying a unique object or feature in the system user's three-dimensional (“3-D”) environment in a two-dimensional (“2-D”) virtual representation of the same object or feature in a virtual environment. The system and method may be incorporated in a mobile device that includes position and orientation sensors to determine the pointing device's position and pointing direction. The mobile device incorporating the present invention may be adapted for wireless communication with a computer-based system that represents static and dynamic objects and features that exist or are present in the system user's 3-D environment. The mobile device incorporating the present invention will also have the capability to process information regarding a system user's environment and to calculate specific measures for pointing accuracy and reliability.12-02-2010
20100310127SUBJECT TRACKING DEVICE AND CAMERA - A subject tracking device includes: an input unit that sequentially inputs input images; an arithmetic operation unit that calculates a first similarity level between an initial template image and a target image and a second similarity level between an update template image and the target image; a position determining unit that determines a subject position based upon at least one of the first and the second similarity level; a decision-making unit that decides whether or not to update the update template image based upon the first and the second similarity level; and an update unit that generates a new update template image based upon the initial template image multiplied by a first weighting coefficient and the target image multiplied by a second weighting coefficient, and updates the update template image with the newly generated update template image, if the update template image is decided to be updated.12-09-2010
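The update rule for the new update template image reads as a per-pixel weighted blend of the initial template and the target image; the weights below are assumed values, not taken from the patent.

```python
W_INITIAL, W_TARGET = 0.3, 0.7     # assumed first and second weights

def update_template(initial, target):
    # per-pixel blend of the initial template and the current target image
    return [W_INITIAL * i + W_TARGET * t for i, t in zip(initial, target)]

initial_template = [10.0, 20.0, 30.0]
target_image = [12.0, 18.0, 36.0]
updated_template = update_template(initial_template, target_image)
```

Keeping a share of the initial template in every update anchors the tracker to the original appearance, which limits the drift that pure target-driven updates cause.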
20090060270Image Detection Method - An image detection method is performed by a computer to determine whether or not an image in a region shot by a camera changes. According to the method, consecutive images shot by the camera are captured, and at least one anchored frame for the consecutive images is set. Whether the images in the anchored frame should change is determined, and a signal is transmitted to indicate whether the detected region is normal. Then, a notification signal is transmitted automatically to remind supervisors to closely observe the detected region.03-05-2009
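A minimal change check on an anchored frame (region) can be sketched as a mean absolute difference between consecutive captures of the region compared against a threshold; the threshold and pixel values are invented for illustration.

```python
THRESHOLD = 10.0                   # assumed mean-difference threshold

def region_changed(prev_region, cur_region):
    # mean absolute pixel difference over the anchored frame
    diff = sum(abs(a - b) for a, b in zip(prev_region, cur_region))
    return diff / len(prev_region) > THRESHOLD

frames = [[50, 50, 50, 50], [51, 49, 50, 50], [120, 130, 125, 118]]
# one change flag per consecutive pair of captures
notifications = [region_changed(a, b) for a, b in zip(frames, frames[1:])]
```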
20100303297COLOR CALIBRATION FOR OBJECT TRACKING - To calibrate a tracking system a computing device locates an object in one or more images taken by an optical sensor. The computing device determines environment colors included in the image, the environment colors being colors in the one or more images that are not emitted by the object. The computing device determines one or more trackable colors that, if assumed by the object, will enable the computing device to track the object.12-02-2010
20100303295X-Ray Monitoring - Apparatus for monitoring in real time the movement of a plurality of substances in a mixture, such as oil, water and air flowing through a pipe, comprises an X-ray scanner arranged to make a plurality of scans of the mixture over a monitoring period to produce a plurality of scan data sets, and control means arranged to analyze the data sets to identify volumes of each of the substances and to measure their movement. By identifying volumes of each of the substances in each of a number of layers and for each of a number of scans, real time analysis and imaging of the substance can be achieved.12-02-2010
20100303291Virtual Object - An image of a scene may be observed, received, or captured. The image may then be scanned to determine one or more signals emitted or reflected by an indicator that belongs to an input object. Upon determining the one or more signals, the signals may be grouped together into a cluster that may be used to generate a first vector that may indicate the orientation of the input object in the captured scene. The first vector may then be tracked, a virtual object and/or an avatar associated with the first vector may be rendered, and/or controls to perform in an application executing on the computer environment may be determined based on the first vector.12-02-2010
20130136300Tracking Three-Dimensional Objects - Method and apparatus for tracking three-dimensional (3D) objects are disclosed. In one embodiment, a method of tracking a 3D object includes constructing a database to store a set of two-dimensional (2D) images of the 3D object using a tracking background, where the tracking background includes at least one known pattern, receiving a tracking image, determining whether the tracking image matches at least one image in the database in accordance with feature points of the tracking image, and providing information about the tracking image in response to the tracking image matching the at least one image in the database. The method of constructing a database also includes capturing the set of 2D images of the 3D object with the tracking background, extracting a set of feature points from each 2D image, and storing the set of feature points in the database.05-30-2013
20130136301METHOD FOR CALIBRATION OF A SENSOR UNIT AND ACCESSORY COMPRISING THE SAME - Method, means, portable terminal accessory and system for calibrating a sensor device comprising a positioning unit detecting the position of the electronic device, an image capturing unit capturing an image of the environment around the electronic device, a processing unit detecting the presence of at least one identifiable object in the image captured and from a comparison of the position of the object in relation to the position of the user determining the heading of a user of the electronic device. Once the heading of the user is determined, it is used to calibrate one or more sensor devices or sensor functionalities in the electronic device.05-30-2013
20130136302APPARATUS AND METHOD FOR CALCULATING THREE DIMENSIONAL (3D) POSITIONS OF FEATURE POINTS - An apparatus for calculating spatial coordinates is disclosed. The apparatus may extract a plurality of feature points from an input image, calculate a direction vector associated with the feature points, and calculate spatial coordinates of the feature points based on a distance between the feature points and the direction vector.05-30-2013
20130136305Pattern generation using diffractive optical elements - Apparatus05-30-2013
20130136306OBJECT IDENTIFICATION DEVICE - An object identification device identifying an image region of an identification target includes an imaging unit receiving two polarization lights and imaging respective polarization images, a brightness calculation unit dividing the two polarization images into processing regions and calculating a brightness sum value between the two polarization images for each processing region, a differential polarization degree calculation unit calculating a differential polarization degree for each processing region, a selecting condition determination unit determining whether the differential polarization degree satisfies a predetermined selecting condition, and an object identification processing unit specifying the processing region based on the differential polarization degree or the brightness sum value depending on whether the predetermined selecting condition is satisfied and identifying plural processing regions that are specified as the processing regions as the image region of the identification target.05-30-2013
20130136307METHOD FOR COUNTING OBJECTS AND APPARATUS USING A PLURALITY OF SENSORS - According to one embodiment of the present invention, a method for counting objects involves using an image sensor and a depth sensor, and comprises the steps of: acquiring an image from the image sensor and acquiring a depth map from the depth sensor, the depth map indicating depth information on the subject in the image; acquiring boundary information on objects in the image; applying the boundary information to the depth map to generate a corrected depth map; identifying the depth pattern of the objects from the corrected depth map; and counting the identified objects.05-30-2013
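The counting pipeline in the preceding abstract (apply boundary information to the depth map, then count the resulting regions) can be sketched with a flood fill over a corrected foreground mask; the data layout and parameter names are my own illustrative assumptions, not the patent's:

```python
def count_objects(depth_map, boundary_mask, floor_depth, min_pixels=2):
    """Count connected foreground regions after excluding depth values
    on object boundaries (the 'corrected depth map' step)."""
    h, w = len(depth_map), len(depth_map[0])
    fg = [[depth_map[y][x] < floor_depth and not boundary_mask[y][x]
           for x in range(w)] for y in range(h)]
    seen = [[False] * w for _ in range(h)]
    count = 0
    for y in range(h):
        for x in range(w):
            if fg[y][x] and not seen[y][x]:
                stack, size = [(y, x)], 0
                while stack:  # iterative 4-connected flood fill
                    cy, cx = stack.pop()
                    if 0 <= cy < h and 0 <= cx < w and fg[cy][cx] and not seen[cy][cx]:
                        seen[cy][cx] = True
                        size += 1
                        stack += [(cy + 1, cx), (cy - 1, cx), (cy, cx + 1), (cy, cx - 1)]
                if size >= min_pixels:
                    count += 1
    return count

# Two adjacent objects at depth 120 over a floor at depth 300; the
# boundary mask splits what would otherwise merge into one region.
D = [[300, 120, 120, 120, 120, 120],
     [300, 120, 120, 120, 120, 120]]
B = [[False] * 6 for _ in range(2)]
B[0][3] = B[1][3] = True  # boundary column from the image sensor
n = count_objects(D, B, floor_depth=200)
```

Without the boundary correction the two objects would touch and be counted as one, which is exactly the failure mode the abstract's corrected depth map addresses.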
20100303289DEVICE FOR IDENTIFYING AND TRACKING MULTIPLE HUMANS OVER TIME - A system recognizes human beings in their natural environment, without special sensing devices attached to the subjects, uniquely identifies them and tracks them in three dimensional space. The resulting representation is presented directly to applications as a multi-point skeletal model delivered in real-time. The device efficiently tracks humans and their natural movements by understanding the natural mechanics and capabilities of the human muscular-skeletal system. The device also uniquely recognizes individuals in order to allow multiple people to interact with the system via natural movements of their limbs and body as well as voice commands/responses.12-02-2010
20100303290Systems And Methods For Tracking A Model - An image such as a depth image of a scene may be received, observed, or captured by a device and a model of a user in the depth image may be generated. The background of a received depth image may be removed to isolate a human target in the received depth image. A model may then be adjusted to fit within the isolated human target in the received depth image. To adjust the model, a joint or a bone may be magnetized to the closest pixel of the isolated human target. The joint or the bone may then be refined such that the joint or the bone may be further adjusted to a pixel equidistant between two edges of the body part of the isolated human target where the joint or bone may have been magnetized.12-02-2010
20100322478Restoration apparatus for weather-degraded image and driver assistance system - In a restoration apparatus, an estimating unit divides a captured original image into a plurality of local pixel blocks, and estimates a luminance level of airlight in each of the plurality of local pixel blocks. A calculating unit directly calculates, from a particle-affected luminance model, a luminance level of each pixel of each of the plurality of local pixel blocks in the original image to thereby generate, based on the luminance level of each pixel of each of the plurality of local pixel blocks, a restored image of the original image. The particle-affected luminance model expresses an intrinsic luminance of a target observed by the image pickup device as a function between the luminance level of airlight and an extinction coefficient. The extinction coefficient represents the concentration of particles in the atmosphere.12-23-2010
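A particle-affected luminance model with airlight and an extinction coefficient is conventionally written in the Koschmieder form I = J·t + A·(1 − t) with transmission t = exp(−β·d); a minimal sketch of inverting it per pixel, with symbol names that are mine rather than the patent's:

```python
import math

def restore_pixel(observed, airlight, extinction_coeff, distance):
    """Invert the haze model I = J*t + A*(1 - t), with transmission
    t = exp(-beta * d), to recover the intrinsic luminance J."""
    t = math.exp(-extinction_coeff * distance)
    return (observed - airlight * (1.0 - t)) / t

# Haze a pixel of intrinsic luminance J = 100, then recover it exactly.
t = math.exp(-0.5 * 2.0)
hazed = 100 * t + 255 * (1 - t)
recovered = restore_pixel(hazed, airlight=255, extinction_coeff=0.5, distance=2.0)
```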
20100322474DETECTING MULTIPLE MOVING OBJECTS IN CROWDED ENVIRONMENTS WITH COHERENT MOTION REGIONS - Coherent motion regions extend in time as well as space, enforcing consistency in detected objects over long time periods and making the algorithm robust to noisy or short point tracks. The algorithm enforces the constraint that selected coherent motion regions contain disjoint sets of tracks, defined in a three-dimensional space that includes a time dimension. The algorithm operates directly on raw, unconditioned low-level feature point tracks, and minimizes a global measure of the coherent motion regions. At least one discrete moving object is identified in a time series of video images based on trajectory similarity factors, each a measure of the maximum distance between a pair of feature point tracks.12-23-2010
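The trajectory similarity factor described above (maximum distance between a pair of feature point tracks) is simple to state directly; a sketch assuming tracks sampled at common time steps, which the abstract does not specify:

```python
def trajectory_similarity(track_a, track_b):
    """Maximum pointwise distance between two feature-point tracks
    sampled at the same time steps (smaller = more similar)."""
    return max(
        ((xa - xb) ** 2 + (ya - yb) ** 2) ** 0.5
        for (xa, ya), (xb, yb) in zip(track_a, track_b)
    )

# Two parallel tracks one unit apart are at most distance 1 anywhere.
a = [(0, 0), (1, 0), (2, 0)]
b = [(0, 1), (1, 1), (2, 1)]
d = trajectory_similarity(a, b)
```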
20100310123METHOD AND SYSTEM FOR ACTIVELY DETECTING AND RECOGNIZING PLACARDS - A method and a system for actively detecting and recognizing a placard are provided. In the present method, an image capturing device is moved according to a maneuver rule, wherein the image capturing device captures an image continuously during the movement. Then whether a placard exists in the image or not is determined. If a placard exists in the image, a content of the placard is identified and a corresponding action is executed. The method repeatedly processes the foregoing steps to further continuously move the image capturing device and determine whether the placard exists in a newly captured image so as to achieve a purpose of detecting and recognizing placards actively.12-09-2010
20100310122Method and Device for Detecting Stationary Targets - Techniques for detecting stationary targets in videos or frame images are described. According to one aspect of the present invention, a sequence of frame images is received from a video system. Each of the frame images is divided into a plurality of image blocks, and a background image is divided into a plurality of corresponding background image blocks. Characteristic values of the image blocks in each of the frame images are calculated. A plurality of characteristic value sequences is then formed, each comprising a predefined number of characteristic values for each of the image blocks in the frame images. A histogram of each of the characteristic value sequences is computed to determine whether one of the image blocks in one of the frame images contains a stationary target.12-09-2010
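One plausible reading of the histogram test above: a block holds a stationary target when its characteristic values over time pile up in a single histogram bin that differs from the background block's value. The characteristic value (block mean), bin width, and thresholds below are illustrative assumptions, not the patent's choices:

```python
from collections import Counter

def block_mean(block):
    """Characteristic value of a block: its mean pixel intensity."""
    flat = [p for row in block for p in row]
    return sum(flat) / len(flat)

def is_stationary(char_values, background_value, min_fraction=0.8, tol=5):
    """A block holds a stationary target if most of its characteristic
    values cluster in one histogram bin away from the background value."""
    bins = Counter(round(v / tol) for v in char_values)
    bin_key, count = bins.most_common(1)[0]
    dominant = count / len(char_values) >= min_fraction
    differs = abs(bin_key * tol - background_value) > tol
    return dominant and differs

# A bright object parked in the block for 10 frames over a dark background.
frames = [[[200, 200], [200, 200]] for _ in range(10)]
vals = [block_mean(f) for f in frames]
stationary = is_stationary(vals, background_value=50)
```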
20100310121System and method for passive automatic target recognition (ATR) - A passive automatic target recognition (ATR) system includes a range map processor configured to generate range-to-pixel map data based on digital elevation map data and parameters of a passive image sensor. The passive image sensor is configured to passively acquire image data. The passive ATR system also includes a detection processor configured to identify a region of interest (ROI) in the passively acquired sensor image data based on the range-to-pixel map data, and an ATR processor configured to generate an ATR decision for the ROI.12-09-2010
20100310120METHOD AND SYSTEM FOR TRACKING MOVING OBJECTS IN A SCENE - A method and system for tracking moving objects in a scene is described. One embodiment acquires a digital video signal corresponding to the scene; identifies in the digital video signal one or more candidate moving objects; locates at least one candidate moving object in the digital video signal subsequent to identification of the at least one candidate moving object; tracks candidate moving objects that, for at least a predetermined period after they have been identified, continue to be located in the digital video signal; assigns a score to each tracked candidate moving object in accordance with how long after passage of the predetermined period the tracked candidate moving object has continued to be located in the digital video signal; combines the respective scores of the tracked candidate moving objects to obtain an overall score for the scene; and indicates to a user whether the overall score satisfies a predetermined criterion.12-09-2010
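The scoring logic in the entry above (score each tracked object by how long it persisted past a predetermined period, then combine scores into a scene-level decision) can be sketched as follows; the grace period, threshold, and additive combination are illustrative assumptions, not taken from the patent:

```python
def object_score(track_duration, grace_period=2.0):
    """Score a tracked candidate by how long it stayed locatable
    after the predetermined grace period."""
    return max(0.0, track_duration - grace_period)

def scene_score(durations, threshold=5.0):
    """Combine per-object scores and test the predetermined criterion."""
    total = sum(object_score(d) for d in durations)
    return total, total >= threshold

# Three candidates tracked for 1, 4, and 6 seconds respectively.
total, alert = scene_score([1.0, 4.0, 6.0])
```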
20100322473DECENTRALIZED TRACKING OF PACKAGES ON A CONVEYOR - A decentralized tracking system is discussed herein. The decentralized tracking system can be comprised of two or more tracking elements and be used to track packages moving on a conveyor system. Each tracking element can operate independently, despite being highly sophisticated and dynamically coordinated with one or more other tracking elements. The conveyor system can be a modular and/or accumulation conveyor system that has sorting functionality. The decentralized tracking system can be used to divert packages for sortation by, for example, embedding a destination zone into the package's tracking data and/or preprogramming conveyor zones to sort specific packages based on a package identifier.12-23-2010
20130142383Scanned Image Projection System with Gesture Control Input - An imaging system06-06-2013
20130142386System And Method For Evaluating Focus Direction Under Various Lighting Conditions - A system and method for generating a direction confidence measure includes a camera sensor device that captures blur images of a photographic target. A depth estimator calculates matching errors for the blur images. The depth estimator then generates the direction confidence measure by utilizing the matching errors and a dynamic optimization constant that is selected depending upon image characteristics of the blur images.06-06-2013
20130142387Identifying a Target Object Using Optical Occlusion - Methods and apparatuses are described for identifying a target object using optical occlusion. A head-mounted display perceives a characteristic of a reference object. The head-mounted display detects a change of the perceived characteristic of the reference object and makes a determination that a detected object caused the change of the perceived characteristic. In response to making the determination, the head-mounted display identifies the detected object as the target object.06-06-2013
20130142389EYE STATE DETECTION APPARATUS AND METHOD OF DETECTING OPEN AND CLOSED STATES OF EYE - An eye state detection apparatus includes a camera, a first calculator, a memory, a second calculator, and a third calculator. The camera obtains a plurality of face images of a driver. The first calculator calculates an opening amount of an eye of the driver based on each face image. The memory stores the opening amounts calculated by the first calculator. The second calculator groups the opening amounts into a plurality of groups in a sequential manner, calculates a group distribution of each group, calculates an entire distribution of all of the opening amounts, and sets the entire distribution as a reference distribution when a difference among the group distributions is within a predetermined range. The third calculator calculates an opening degree of the eye based on the reference distribution of the opening amounts when the reference distribution of the opening amounts is calculated by the second calculator.06-06-2013
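The grouping step in the eye-state entry above (accept the overall distribution of opening amounts as the reference only when the per-group distributions agree) can be sketched as a variance-consistency check; group size and tolerance are illustrative assumptions, not the patent's values:

```python
def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def reference_distribution(opening_amounts, group_size=5, tol=0.5):
    """Group the opening amounts sequentially and accept the entire
    distribution as the reference only when the per-group variances
    agree to within a tolerance; otherwise return None."""
    groups = [opening_amounts[i:i + group_size]
              for i in range(0, len(opening_amounts), group_size)]
    group_vars = [variance(g) for g in groups]
    if max(group_vars) - min(group_vars) <= tol:
        return sum(opening_amounts) / len(opening_amounts), variance(opening_amounts)
    return None

# Two consecutive groups of opening amounts with matching spread.
amounts = [4.0, 5.0, 6.0, 5.0, 4.0, 5.0, 6.0, 4.0, 5.0, 6.0]
ref = reference_distribution(amounts)
```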
20130142391Face Recognition Performance Using Additional Image Features - A technique is provided for recognizing faces in an image stream using a digital image acquisition device. A first acquired image is received from an image stream. A first face region is detected within the first acquired image having a given size and a respective location within the first acquired image. First faceprint data uniquely identifying the first face region are extracted along with first peripheral region data around the first face region. The first faceprint and peripheral region data are stored, and the first peripheral region data are associated with the first face region. The first face region is tracked until a face lock is lost. A second face region is detected within a second acquired image from the image stream. Second peripheral region data around the second face region are extracted. The second face region is identified upon matching the first and second peripheral region data.06-06-2013
20100322471Motion invariant generalized hyperspectral targeting and identification methodology and apparatus therefor - The present disclosure relates to a method and system for enhancing the ability of nuclear, chemical, and biological (“NBC”) sensors, specifically mobile sensors, to detect, analyze, and identify NBC agents on a surface, in an aerosol, in a vapor cloud, or other similar environment. Embodiments include the use of a two-stage approach including targeting and identification of a contaminant. Spectral imaging sensors may be used for both wide-field detection (e.g., for scene classification) and narrow-field identification.12-23-2010
20100322480Systems and Methods for Remote Tagging and Tracking of Objects Using Hyperspectral Video Sensors - Detection and tracking of an object by exploiting its unique reflectance signature. This is done by examining every image pixel and computing how closely that pixel's spectrum matches a known object spectral signature. The measured radiance spectra of the object can be used to estimate its intrinsic reflectance properties that are invariant to a wide range of illumination effects. This is achieved by incorporating radiative transfer theory to compute the mapping between the observed radiance spectra to the object's reflectance spectra. The consistency of the reflectance spectra allows for object tracking through spatial and temporal gaps in coverage. Tracking an object then uses a prediction process followed by a correction process.12-23-2010
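Computing "how closely that pixel's spectrum matches a known object spectral signature", as the hyperspectral-tracking abstract above puts it, is commonly done with a cosine (spectral-angle) measure, which is invariant to overall brightness scaling; a sketch under that assumption, with invented band values:

```python
def spectral_match(pixel_spectrum, signature):
    """Cosine similarity between a pixel's reflectance spectrum and a
    known object signature; 1.0 means identical spectral shape."""
    dot = sum(p * s for p, s in zip(pixel_spectrum, signature))
    norm_p = sum(p * p for p in pixel_spectrum) ** 0.5
    norm_s = sum(s * s for s in signature) ** 0.5
    return dot / (norm_p * norm_s)

sig = [0.2, 0.5, 0.9, 0.4]
scaled = [2 * v for v in sig]    # same shape under brighter illumination
other = [0.9, 0.1, 0.1, 0.9]    # a different material
m1 = spectral_match(scaled, sig)
m2 = spectral_match(other, sig)
```

The brightness invariance is what lets the same signature match an object through illumination changes, echoing the abstract's point about reflectance being invariant to a wide range of illumination effects.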
20100322475OBJECT AREA DETECTING DEVICE, OBJECT AREA DETECTING SYSTEM, OBJECT AREA DETECTING METHOD AND PROGRAM - The aim is to enable detection of an overlying object distinctively even if a stationary object is overlaid with another stationary object or a moving object. A data processing device includes a first unit which detects an object area in a plurality of time-series continuous input images, a second unit which detects a stationary area in the object area from the plurality of continuous input images, a third unit which stores information of the stationary area as time-series background information, and a fourth unit which compares the time-series background information with the object area to thereby detect each object included in the object area.12-23-2010
20100322477DEVICE AND METHOD FOR DETECTING A PLANT - A device for detecting a plant includes a two-dimensional camera for detecting a two-dimensional image of a plant leaf having a high two-dimensional resolution, and a three-dimensional camera for detecting a three-dimensional image of the plant leaf having a high three-dimensional resolution. The two-dimensional camera is a conventional high-resolution color camera, for example, and the three-dimensional camera is a TOF camera, for example. A processor for merging the two-dimensional image and the three-dimensional image creates a three-dimensional result representation having a higher resolution than the three-dimensional image of the 3D camera, which may include, among other things, the border of a leaf. The three-dimensional result representation serves to characterize a plant leaf, such as to calculate the surface area of the leaf, the alignment of the leaf, or serves to identify the leaf.12-23-2010
20100322472OBJECT TRACKING IN COMPUTER VISION - A method and system for object tracking in computer vision. The tracked object is recognized from an image that has been acquired with the camera of the computer vision system. The image is processed by randomly generating samples in the search space and then computing fitness functions. Regions of high fitness attract more samples. The random selection may be based on standard deviation or other weights. Computations are stored into a tree structure. The tree structure can be used as prior information for next image.12-23-2010
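The sampling scheme in the tracking entry above (random samples in the search space, with regions of high fitness attracting more samples) can be sketched as iterated Gaussian sampling that tightens around the best point found so far; the shrink factor, sample counts, and 1-D search space are illustrative assumptions, and the patent's tree-structure storage is omitted:

```python
import random

def sample_search_space(fitness, bounds, n_rounds=30, n_samples=40, seed=1):
    """Randomly sample a 1-D search space; regions of high fitness
    attract later samples by shrinking the sampling spread around
    the best point found so far."""
    random.seed(seed)
    lo, hi = bounds
    best_x = random.uniform(lo, hi)
    std = (hi - lo) / 2
    for _ in range(n_rounds):
        xs = [min(hi, max(lo, random.gauss(best_x, std))) for _ in range(n_samples)]
        best_x = max(xs + [best_x], key=fitness)  # keep the incumbent best
        std *= 0.8  # concentrate samples in the high-fitness region
    return best_x

# The fitness surface peaks at x = 3, so sampling should settle near it.
found = sample_search_space(lambda x: -(x - 3.0) ** 2, (0.0, 10.0))
```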
20120020516SYSTEM AND METHOD FOR MONITORING MOTION OBJECT - A motion object monitoring system uses a time-of-flight (TOF) camera to capture an image of a scene and distance data between points in the scene and the camera. A 3D model of the scene is built according to the image of the scene and the distance data. The motion object monitoring system gives numbers to the monitored objects according to specific features of the monitored objects. The specific features of the monitored objects are obtained by detecting the built 3D model of the scene. Only one of the numbers of each of the monitored objects is stored, instead of repeatedly storing the numbers of the same motion objects. The motion object monitoring system analyzes the stored numbers, and displays an analysis result. The motion object monitoring system also determines a movement of each of the motion objects according to corresponding numbers of the motion objects.01-26-2012
20100322479SYSTEMS AND METHODS FOR 3-D TARGET LOCATION - A target is imaged in a three-dimensional real space using two or more video cameras. A three-dimensional image space combined from two video cameras of the two or more video cameras is displayed to a user using a stereoscopic display. A right eye and a left eye of the user are imaged as the user is observing the target in the stereoscopic video display, a right gaze line of the right eye and a left gaze line of the left eye are calculated in the three-dimensional image space, and a gazepoint in the three-dimensional image space is calculated as the intersection of the right gaze line and the left gaze line using a binocular eyetracker. A real target location is determined by translating the gazepoint in the three-dimensional image space to the real target location in the three-dimensional real space from the locations and the positions of the two video cameras using a processor.12-23-2010
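Calculating the gazepoint as "the intersection of the right gaze line and the left gaze line", as in the entry above, in practice means finding the closest approach of two 3D lines (they rarely intersect exactly); a sketch using the standard closest-point formulas, with eye positions and directions I invented for the example:

```python
def gazepoint(p1, d1, p2, d2):
    """Midpoint of the shortest segment between two gaze lines,
    each given by an eye position p and direction d."""
    def dot(a, b): return sum(x * y for x, y in zip(a, b))
    def sub(a, b): return [x - y for x, y in zip(a, b)]
    def add(a, b): return [x + y for x, y in zip(a, b)]
    def scale(a, k): return [x * k for x in a]
    r = sub(p1, p2)
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    d, e = dot(d1, r), dot(d2, r)
    denom = a * c - b * b  # zero only for parallel gaze lines
    t1 = (b * e - c * d) / denom
    t2 = (a * e - b * d) / denom
    q1 = add(p1, scale(d1, t1))  # closest point on line 1
    q2 = add(p2, scale(d2, t2))  # closest point on line 2
    return [(x + y) / 2 for x, y in zip(q1, q2)]

# Eyes at (-3,0,0) and (3,0,0), both looking at the point (0,0,10).
g = gazepoint([-3, 0, 0], [3, 0, 10], [3, 0, 0], [-3, 0, 10])
```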
20110038508SYSTEM AND METHOD FOR PERFORMING OPTICAL NAVIGATION USING PORTIONS OF CAPTURED FRAMES OF IMAGE DATA - A system and method for performing optical navigation selectively uses portions of captured frames of image data for cross-correlation for displacement estimation, which can reduce the power consumption and/or increase the tracking performance at higher speed usage.02-17-2011
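Cross-correlation for displacement estimation, as in the optical-navigation entry above, amounts to scoring candidate shifts and keeping the best; this brute-force sketch over a small shift window is my own illustration, not the patent's selective-portion scheme:

```python
def estimate_shift(prev, curr, max_shift=3):
    """Find the (dx, dy) that best aligns curr with prev by maximizing
    cross-correlation over a small window of candidate shifts."""
    h, w = len(prev), len(prev[0])
    best, best_shift = None, (0, 0)
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            score = 0
            for y in range(h):
                for x in range(w):
                    yy, xx = y + dy, x + dx
                    if 0 <= yy < h and 0 <= xx < w:
                        score += prev[y][x] * curr[yy][xx]
            if best is None or score > best:
                best, best_shift = score, (dx, dy)
    return best_shift

# A bright dot moved 2 pixels right and 1 pixel down between frames.
prev = [[0] * 8 for _ in range(8)]
curr = [[0] * 8 for _ in range(8)]
prev[3][3] = 255
curr[4][5] = 255
shift = estimate_shift(prev, curr)
```

Using only portions of each frame, as the patent proposes, would shrink the inner double loop and hence the power cost of each correlation.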
20110007943Registration Apparatus, Checking Apparatus, Data Structure, and Storage Medium (amended) - A registration apparatus, a checking apparatus, a data structure, and a storage medium that are capable of achieving an improved authentication accuracy are provided. The registration apparatus includes an image acquisition unit configured to acquire a venous image for a vein of a living body, an extraction unit configured to extract a parameter resistant to affine transformation from part of the venous image, and a registration unit configured to register the parameter extracted by the extraction unit in storage means. The part of the venous image is set as a target for extracting the parameter resistant to affine transformation.01-13-2011
20110007946UNIFIED SYSTEM AND METHOD FOR ANIMAL BEHAVIOR CHARACTERIZATION WITH TRAINING CAPABILITIES - In general, the present invention is directed to systems and methods for finding the position and shape of an object using video. The invention includes a system with a video camera coupled to a computer in which the computer is configured to automatically provide object segmentation and identification, object motion tracking (for moving objects), object position classification, and behavior identification. In a preferred embodiment, the present invention may use background subtraction for object identification and tracking, a probabilistic approach with expectation-maximization for motion tracking and object classification, and decision tree classification for behavior identification. Thus, the present invention is capable of automatically monitoring a video image to identify, track and classify the actions of various objects and the object's movements within the image. The image may be provided in real time or from storage. The invention is particularly useful for monitoring and classifying animal behavior for testing drugs and genetic mutations, but may be used in any of a number of other surveillance applications.01-13-2011
20110007941PRECISELY LOCATING FEATURES ON GEOSPATIAL IMAGERY - Methods for locating a feature on geospatial imagery and systems for performing those methods are disclosed. An accuracy level of each of a plurality of geospatial vector datasets available in a database can be determined. Each of the plurality of geospatial vector datasets corresponds to the same spatial region as the geospatial imagery. The geospatial vector dataset having the highest accuracy level may be selected. When the selected geospatial vector dataset and the geospatial imagery are misaligned, the selected geospatial vector dataset is aligned to the geospatial imagery. The location of the feature on the geospatial imagery is then determined based on the selected geospatial vector dataset and outputted via a display device.01-13-2011
20110007944SYSTEM AND METHOD FOR OCCUPANCY ESTIMATION - A system generates occupancy estimates based on a Kinetic-Motion (KM)-based model that predicts the movements of occupants through a region divided into a plurality of segments. The system includes a controller for executing an algorithm representing the KM-based model. The KM-based model includes state equations that define each of the plurality of segments as containing congested portions and uncongested portions. The state equations define the movement of occupants based, in part, on the distinctions made between congested and uncongested portions of each segment.01-13-2011
20090067674Monitoring device - The invention concerns a monitoring device with a multi-camera device and an object tracking device for the high resolution observation of moving objects. The object tracking device comprises an image integration device for generating a total image from the individual images of the multi-camera device, and a cut-out definition device for defining the cut-out to be observed, independently of the borders of the individual images.03-12-2009
20090067673METHOD AND APPARATUS FOR DETERMINING THE POSITION OF A VEHICLE, COMPUTER PROGRAM AND COMPUTER PROGRAM PRODUCT - The present invention relates to an apparatus and a method for determining the position of a vehicle moved along a path, with markers, particularly code carriers or barcodes, located along the path. The method is characterized in that the markers are detected with a digital camera placed on the vehicle, and that, by means of image processing, a position of the vehicle relative to the given marker or markers is determined from the position of at least one marker image in the detection or coverage range of the digital camera, both in the main vehicle movement direction along the path and in at least one direction at right angles to the main movement direction. The invention also relates to a computer program and a computer program product.03-12-2009
20110026768Tracking a Spatial Target - Apparatuses and methods for tracking a dermatological feature are disclosed. One method includes establishing an imaging reference proximate to an identified dermatological feature, wherein the imaging reference has a known color spectrum and known physical dimensions. A digital image sequence is obtained containing one or more images of the identified dermatological feature and the imaging reference. At least one trait of the identified dermatological feature is estimated using the imaging reference and at least one image of the digital image sequence.02-03-2011
20130148851KEY-FRAME SELECTION FOR PARALLEL TRACKING AND MAPPING - A method of selecting a first image from a plurality of images for constructing a coordinate system of an augmented reality system. A first image feature in the first image corresponding to the feature of the marker is determined. A second image feature in a second image is determined based on a second pose of a camera, said second image feature having a visual match to the first image feature. A reconstructed position of the feature of the marker in a three-dimensional (3D) space is determined based on positions of the first and second image features and the first and second camera poses. A reconstruction error is determined based on the reconstructed position of the feature of the marker and a pre-determined position of the marker.06-13-2013
20090141938ROBOT VISION SYSTEM AND DETECTION METHOD - A robot vision system for outputting a disparity map includes a stereo camera for receiving left and right images and outputting a disparity map between the two images; an encoder for encoding either the left image or the right image into a motion compensation-based video bit-stream; and a decoder for extracting an encoding type of an image block, a motion vector, and a DCT coefficient from the video bit-stream. Further, the system includes a person detector for detecting and labeling person blocks in the image using the disparity map between the left image and the right image, the block encoding type, and the motion vector, and detecting a distance from the labeled person to the camera; and an obstacle detector for detecting an obstacle closer to the camera than the person using the block encoding type, the motion vector, and the DCT coefficient extracted from the video bit-stream, and the disparity map.06-04-2009
20090141941IMAGE PROCESSING APPARATUS AND METHOD FOR ESTIMATING ORIENTATION - A method of estimating an orientation of one or more of a plurality of objects disposed on a plane, from one or more video images of a scene, which includes the objects on the plane produced from a view of the scene by a video camera. The method comprises receiving for each of the one or more objects, object tracking data, which provides a position of the object on the plane in the video images with respect to time, determining from the object tracking data a plurality of basis vectors associated with at least one of the objects, each basis vector corresponding to a factor, which can influence the orientation of the object and each basis vector being related to the movement or location of the one or more objects, and combining the basis vectors in accordance with a blending function to calculate an estimate of the orientation of the object on the plane, the blending function including blending coefficients which determine a relative magnitude of each basis vector used in the blending function.06-04-2009
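The blending function in the orientation-estimation entry above (combine per-factor basis vectors with blending coefficients that set each vector's relative magnitude) can be sketched as a weighted vector sum followed by normalization; the example factors and weights are invented for illustration:

```python
def estimate_orientation(basis_vectors, weights):
    """Blend per-factor 2-D basis vectors (e.g. velocity direction,
    direction toward another object) with blending coefficients,
    then normalize the result to a unit orientation vector."""
    x = sum(w * v[0] for v, w in zip(basis_vectors, weights))
    y = sum(w * v[1] for v, w in zip(basis_vectors, weights))
    norm = (x * x + y * y) ** 0.5
    return (x / norm, y / norm)

# One factor says "east", another says "north"; weight them 3:1.
orientation = estimate_orientation([(1, 0), (0, 1)], [0.75, 0.25])
```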
20110044501Systems and methods for personalized motion control - End users, unskilled in the art, generating motion recognizers from example motions, without substantial programming, without limitation to any fixed set of well-known gestures, and without limitation to motions that occur substantially in a plane, or are substantially predefined in scope. From example motions for each class of motion to be recognized, a system automatically generates motion recognizers using machine learning techniques. Those motion recognizers can be incorporated into an end-user application, with the effect that when a user of the application supplies a motion, those motion recognizers will recognize the motion as an example of one of the known classes of motion. Motion recognizers can be incorporated into an end-user application; tuned to improve recognition rates for subsequent motions to allow end-users to add new example motions.02-24-2011
20110044504INFORMATION PROCESSING DEVICE, INFORMATION PROCESSING METHOD AND PROGRAM - An information processing device, including: a three-dimensional information generating section for obtaining position and attitude of a moving camera or three-dimensional positions of feature points by successively receiving captured images from different viewpoints, and updating status data using observation information which includes tracking information of the feature points, the status data including three-dimensional positions of the feature points within the images and position and attitude information of the camera; and a submap generating section for generating submaps by dividing an area for which the three-dimensional position is to be calculated. The three-dimensional information generating section obtains position and attitude of the camera or three-dimensional positions of the feature points by generating status data corresponding to the submaps not including information about feature points outside of a submap area for each of the generated submaps and updating the generated status data corresponding to the submaps.02-24-2011
20110110561FACIAL MOTION CAPTURE USING MARKER PATTERNS THAT ACCOMMODATE FACIAL SURFACE - Capturing a facial surface using marker patterns laid out on the facial surface by adapting the marker patterns to contours of the facial surface and the motion range of the head, including: generating a facial action coding system (FACS) matrix by capturing FACS poses; generating a pattern to wrap over the facial surface using the FACS poses as a guide; capturing and tracking marker motions of the pattern; stabilizing the marker motions of the pattern using a head stabilization transform to remove head motions from the marker motions; and generating and applying a plurality of FACS matrix weights to the stabilized marker motions.05-12-2011
20110110560Real Time Hand Tracking, Pose Classification and Interface Control - A hand gesture from a camera input is detected using an image processing module of a consumer electronics device. The detected hand gesture is identified from a vocabulary of hand gestures. The electronics device is controlled in response to the identified hand gesture. This abstract is not to be considered limiting, since other embodiments may deviate from the features described in this abstract.05-12-2011
20110019874DEVICE AND METHOD FOR DETERMINING GAZE DIRECTION - An eye tracker device01-27-2011
20110243382X-Ray Inspection System and Method - The present specification discloses an X-ray system for processing X-ray data to determine an identity of an object under inspection. The X-ray system includes an X-ray source for transmitting X-rays, where the X-rays have a range of energies, through the object, a detector array for detecting the transmitted X-rays, where each detector outputs a signal proportional to an amount of energy deposited at the detector by a detected X-ray, and at least one processor that reconstructs an image from the signal, where each pixel within the image represents an associated mass attenuation coefficient of the object under inspection at a specific point in space and for a specific energy level; fits each pixel to a function to determine the mass attenuation coefficient of the object under inspection at that point in space; and uses the function to determine the identity of the object under inspection.10-06-2011
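The mass attenuation coefficient the X-ray entry above refers to enters through the Beer-Lambert law, I = I0·exp(−(μ/ρ)·ρ·t); a minimal round-trip sketch with symbols and values of my own choosing, not the patent's fitting procedure:

```python
import math

def transmitted_intensity(i0, mu_rho, density, thickness):
    """Beer-Lambert: I = I0 * exp(-(mu/rho) * rho * t), where mu/rho
    is the mass attenuation coefficient of the material."""
    return i0 * math.exp(-mu_rho * density * thickness)

def mass_attenuation(i0, i, density, thickness):
    """Invert the law to recover mu/rho from a measured intensity."""
    return math.log(i0 / i) / (density * thickness)

# Attenuate a beam through a slab, then recover mu/rho from the reading.
i = transmitted_intensity(1000.0, mu_rho=0.2, density=2.7, thickness=1.5)
recovered = mass_attenuation(1000.0, i, density=2.7, thickness=1.5)
```

Because μ/ρ varies with X-ray energy in a material-specific way, measuring it at several energies, as the multi-energy system above does, narrows down the material's identity.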
20110033087VIDEO CONTENT ANALYSIS - A video content analysis (VCA) system generates an output regarding a detected condition that provides an indication of a confidence level regarding the detected condition. One example VCA system determines whether a first characteristic of a detected object in a field of vision of the video content analysis system satisfies a first criterion. If so, a first signal is generated under selected conditions. The VCA system also determines whether a second characteristic of the detected object satisfies a corresponding second criterion. If so, a second, different signal is generated if the first and second criteria are satisfied. The first and second signals indicate respective, different confidence levels that an event has occurred. A disclosed example includes a VCA as part of a security system.02-10-2011
20110033084IMAGE CLASSIFICATION SYSTEM AND METHOD THEREOF - An image classification system configured to classify a target and method thereof is provided, wherein the system includes at least one light source configured to emit light with at least one line pattern towards the target, wherein at least a portion of the emitted light and line pattern is reflected by the target. The system further includes an imager configured to receive at least a portion of the reflected light and line pattern, such that an obtained 2-D line pattern is produced that is representative of at least a portion of the emitted light and line pattern reflected by the target, and a controller configured to compare the 2-D line pattern to at least one previously obtained 2-D line pattern stored in a database, such that the controller classifies the 2-D line pattern as a function of the comparison.02-10-2011
20090034793Fast Crowd Segmentation Using Shape Indexing - A method for performing crowd segmentation includes receiving video image data 02-05-2009
20090034796INCAPACITY MONITOR - A method of monitoring incapacity of a subject which includes the steps of continuously monitoring eye and eyelid movement of at least one eye of the subject; analyzing eye and eyelid movements to obtain measures of ocular quiescence and the duration of an interval of no eye or eyelid movement; and if the duration of ocular quiescence exceeds a predetermined value providing a potential incapacity warning and requesting a response within a predetermined period, and applying an emergency procedure if no response is made within a predetermined interval.02-05-2009
20090034794Conduct inference apparatus - In a conduct inference process, feature points are extracted from a capture image. The extracted feature points are collated with conduct inference models to select conduct inference models in each of which an accordance ratio between a target vector and a movement vector is within a tolerance. Among the selected conduct inference models, one conduct inference model in which a distance from a relative feature point to a return point is shortest is selected. Then, a specific conduct designated in the selected conduct inference model is tentatively determined as a specific conduct the driver intends to perform. Furthermore, based on the tentatively determined specific conduct, it is determined whether the specific conduct is probable. When it is determined that the specific conduct is probable, an alarm process is executed to output an alarm to the driver.02-05-2009
20110033085IMAGE PROCESSING APPARATUS AND IMAGE PROCESSING METHOD - An apparatus includes a storage unit configured to store an attribute of each pixel existing inside a tracking target area set on an image and an attribute of a pixel existing adjacent to the pixel, an allocation unit configured to allocate an evaluation value to a pixel to be evaluated according to a result of comparison between an attribute of the pixel to be evaluated and an attribute of a pixel existing inside the tracking target area and a result of comparison between an attribute of a pixel existing adjacent to the pixel to be evaluated and an attribute of a pixel existing adjacent to the pixel existing inside the tracking target area, and a changing unit configured to change the tracking target area based on the allocated evaluation value.02-10-2011
20110243379VEHICLE POSITION DETECTION SYSTEM - A system stores reference data generated by associating image feature point data with an image-capturing position and a recorded vehicle event. The system generates data for matching by extracting image feature points from an actually-captured image. The system generates information on an actual vehicle event, extracts first reference data whose image-capturing position is located in a vicinity of an estimated position of the vehicle, and extracts second reference data that includes a recorded vehicle event that matches the actual vehicle event. The system performs matching between at least one of the first reference data and the second reference data, and the data for matching, and determines a position of the vehicle based on the matching.10-06-2011
20110044505EQUIPMENT OPERATION SAFETY MONITORING SYSTEM AND METHOD AND COMPUTER-READABLE MEDIUM RECORDING PROGRAM FOR EXECUTING THE SAME - Provided are equipment operation safety monitoring system and method and computer-readable medium having a program recorded thereon, the program allowing a computer to execute the method. The equipment operation safety monitoring system includes an image input unit, an integrated image generation unit, a guideline generation unit, and an image output unit. The image input unit is mounted on heavy equipment and inputs a plurality of images acquired by photographing partitioned areas in all the directions around the heavy equipment. The integrated image generation unit generates an integrated image including the areas in all the directions around the heavy equipment by using the plurality of the images. The guideline generation unit generates a guideline indicating a position separated by a predetermined distance from the heavy equipment. The image output unit illustrates the guideline on the integrated image and outputs the integrated image.02-24-2011
20110044508APPARATUS AND METHOD FOR RAY TRACING USING PATH PREPROCESS - Disclosed is an apparatus and method for ray-tracing using a path preprocess. The method for ray-tracing including launching a ray from a transmitting point at angles with regular intervals, setting a first side of an object where the launched ray is projected as a reference patch, and searching predetermined preprocessed path data for a counterpart patch corresponding to a second side of another object, the second side being exposed to the projected ray reflected or diffracted from the set reference patch, and tracing a transmission path of the reflected or diffracted ray.02-24-2011
20110044506TARGET ANALYSIS APPARATUS, METHOD AND COMPUTER-READABLE MEDIUM - Provided is a target analysis apparatus, method and computer-readable medium based on a depth image and an intensity image of a target. The target analysis apparatus may include a body detection unit to detect a body of the target from the intensity image of the target, a foreground segmentation unit to calculate an intensity threshold value in accordance with intensity values from the detected body, to transform the intensity image into a binary image using the intensity threshold value, and to mask the depth image of the target using the binary image as a mask to thereby obtain a masked depth image, and an active portion detection unit to detect an active portion of the body of the target from the masked depth image.02-24-2011
20110044503VEHICLE TRAVEL SUPPORT DEVICE, VEHICLE, VEHICLE TRAVEL SUPPORT PROGRAM - A vehicle travel support device determines presence of a recognition inhibiting factor of a lane mark on a road on which a vehicle is traveling with high accuracy, irrespective of an imaging history by a vehicular camera from the same position. The vehicle travel support system generates an edge image by extracting an edge or actualizing an edge in an image obtained through the vehicular camera. When a Hough transform of the edge image is performed, votes for a specified vote value of a linear component are evaluated in a ρ-θ space (Hough space). Presence of a recognition inhibiting factor of a lane mark on a road is determined by checking whether or not the votes of a specified vote value, in a specified region denoting a standard travel lane of the vehicle in real space, are ≧ a threshold in the ρ-θ space.02-24-2011
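A minimal ρ-θ Hough voting sketch in the spirit of this abstract (not the patented evaluation): edge points vote into an accumulator, and a decision is made by comparing the peak vote count against a threshold. The grid resolution, threshold, and vertical-line test data are assumptions.

```python
import numpy as np

def hough_votes(points, n_theta=180, rho_max=200):
    """Accumulate rho-theta votes for a set of edge points."""
    thetas = np.deg2rad(np.arange(n_theta))
    acc = np.zeros((2 * rho_max, n_theta), dtype=int)
    for x, y in points:
        rhos = np.round(x * np.cos(thetas) + y * np.sin(thetas)).astype(int)
        acc[rhos + rho_max, np.arange(n_theta)] += 1   # one vote per angle
    return acc

# A vertical edge (x = 5): all 20 points vote for the same line at theta = 0.
pts = [(5, y) for y in range(20)]
acc = hough_votes(pts)
peak = int(acc.max())
lane_mark_plausible = peak >= 15    # vote threshold is an assumption
```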
20110044502MOTION DETECTION METHOD, APPARATUS AND SYSTEM - A motion detection method, apparatus and system are disclosed in the present invention, which relates to the video image processing field. The present invention can effectively overcome the influence of the background on motion detection and the problem of object “conglutination” to avoid false detection, thereby accomplishing object detection in complex scenes with a high precision. The motion detection method disclosed in embodiments of the present invention comprises: acquiring detection information of the background scene and detection information of the current scene, wherein the current scene is a scene comprising an object(s) to be detected and the same background scene; and calculating the object(s) to be detected according to the detection information of the background scene and the detection information of the current scene. The present invention is applicable to any scenes where moving objects need to be detected, e.g., automatic passenger flow statistical systems in railway, metro and bus sectors, and is particularly applicable to detection and calibration of objects in places where brightness varies greatly.02-24-2011
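The comparison of background-scene and current-scene detection information can be illustrated with the simplest possible version, frame differencing against a stored background; the threshold value below is an assumption, and real systems in complex scenes add much more on top of this.

```python
import numpy as np

def detect_foreground(background, current, threshold=25):
    """Boolean mask of pixels whose difference from the background exceeds the threshold."""
    diff = np.abs(current.astype(int) - background.astype(int))
    return diff > threshold

bg = np.zeros((4, 4), dtype=np.uint8)   # empty background scene
cur = bg.copy()
cur[1:3, 1:3] = 200                     # a bright object enters the scene
mask = detect_foreground(bg, cur)
```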
20110044497SYSTEM, METHOD AND PROGRAM PRODUCT FOR CAMERA-BASED OBJECT ANALYSIS - A system, method and program product for camera-based object analyses including object recognition, object detection, and/or object categorization. An exemplary embodiment of the computerized method for analyzing objects in images obtained from a camera system includes receiving image(s) having pixels from the camera system; calculating a pool of features for each pixel; then deriving either a pool of radial moment of features from the pool of features and a geometric center of the image(s) or a pool of central moments of features from the pool of features; then calculating a normalized descriptor, based on an area of the image(s) and either of the derived pool of moments of features; and then based on the normalized descriptor, a computer then either recognizes, detects, and/or categorizes an object(s) in the image(s).02-24-2011
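The "pool of central moments of features" normalized by image area can be sketched as follows, using raw intensity as a stand-in feature; the chosen moment orders and the normalization detail are assumptions, not the patented descriptor.

```python
import numpy as np

def central_moments(feature, orders=((2, 0), (0, 2), (1, 1))):
    """Area-normalized central moments of a per-pixel feature map."""
    h, w = feature.shape
    ys, xs = np.mgrid[0:h, 0:w]
    total = feature.sum()
    cy = (ys * feature).sum() / total    # feature-weighted centroid
    cx = (xs * feature).sum() / total
    area = h * w                         # normalize by image area
    return [((ys - cy) ** p * (xs - cx) ** q * feature).sum() / area
            for p, q in orders]

img = np.zeros((8, 8))
img[3:5, 3:5] = 1.0                      # centred square blob
desc = central_moments(img)
```

For the symmetric blob above, the two second-order moments are equal and the cross moment vanishes, which is the kind of invariance such descriptors exploit.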
20110116684SYSTEM AND METHOD FOR VISUALLY TRACKING WITH OCCLUSIONS - Described herein are tracking algorithm modifications to handle occlusions when processing a video stream including multiple image frames. Specifically, system and methods for handling both partial and full occlusions while tracking moving and non-moving targets are described. The occlusion handling embodiments described herein may be appropriate for a visual tracking system with supplementary range information.05-19-2011
20110044507METHOD AND ASSISTANCE SYSTEM FOR DETECTING OBJECTS IN THE SURROUNDING AREA OF A VEHICLE - A method for determining relevant objects in a vehicle moving on a roadway. An assistance function is executed in relation to a position of a relevant object, and the relevant objects are determined on the basis of an image evaluation of images of a surrounding area of the vehicle. The images are detected by way of camera sensors. By way of a radar sensor, positions of stationary objects in the surrounding area of the vehicle are determined. A profile of the roadway edge is determined using the positions of the stationary objects, and the image evaluation is carried out in relation to the determined roadway edge profile. A driver assistance system suitable for carrying out the method is also described.02-24-2011
20110044500Light Information Receiving Method, Unit and Method for Recognition of Light-Emitting Objects - A light information receiving method, a method and a unit for the recognition of light-emitting objects are provided. The light information receiving method includes the following steps. A light-emitting object array is captured to obtain a plurality of images, wherein the light-emitting object array includes at least one light-emitting object. A temporal filtering process is performed to the images to recognize a light-emitting object. A light-emitting status of the light-emitting object array is recognized according to the light-emitting object location. A decoding process is performed according to the light-emitting status to output an item of information.02-24-2011
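The temporal filtering and decoding steps can be illustrated with a toy example: per-frame brightness at a recognized light-emitting object's location is thresholded into on/off bits and decoded into an item of information. The threshold and the MSB-first bit ordering are assumptions.

```python
def decode_blink(brightness_per_frame, threshold=128):
    """Threshold per-frame brightness into bits and decode them MSB-first."""
    bits = [1 if b > threshold else 0 for b in brightness_per_frame]
    value = 0
    for bit in bits:
        value = (value << 1) | bit
    return bits, value

frames = [250, 10, 250, 250]    # bright, dark, bright, bright
bits, value = decode_blink(frames)
```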
20110044498VISUALIZING AND UPDATING LEARNED TRAJECTORIES IN VIDEO SURVEILLANCE SYSTEMS - Techniques are disclosed for visually conveying a trajectory map. The trajectory map provides users with a visualization of data observed by a machine-learning engine of a behavior recognition system. Further, the visualization may provide an interface used to guide system behavior. For example, the interface may be used to specify that the behavior recognition system should alert (or not alert) when a particular trajectory is observed to occur.02-24-2011
20110116685INFORMATION PROCESSING APPARATUS, SETTING CHANGING METHOD, AND SETTING CHANGING PROGRAM - Disclosed herein is an information processing apparatus including: a detection block configured to detect persons from an image; and a setting changing block configured such that if one of the persons detected by the detection block from the image is designated, then the setting changing block identifies a plurality of attributes of the designated person based on the image of the person, before changing user interface settings using attribute-specific setting information associated with a combination of the identified multiple attributes.05-19-2011
20110116683REDUCING MOTION ARTEFACTS IN MRI - The invention relates to motion correction in magnetic resonance imaging (MRI), implemented as an MRI apparatus or system, computer programs for such, and a method. A motion pattern of a region of interest (ROI) is estimated by: selecting a fixed point at an anatomical position that is pre-determined to be little or not affected by motion, and rotating a point in the ROI that is affected by motion on the basis of motion detected by a navigator or other methods. From the estimated motion pattern of the ROI, the field of view (FOV) may be adapted by adjusting the gradients and the bandwidth of the RF pulses of the MR system in the acquisition sequence to avoid or reduce motion artefacts. Alternatively, motion correction is carried out on the reconstructed images.05-19-2011
20110116682OBJECT DETECTION METHOD AND SYSTEM - An object detection method and an object detection system, suitable for detecting moving object information of a video stream having a plurality of images, are provided. The method performs a moving object foreground detection on each of the images, so as to obtain a first foreground detection image comprising a plurality of moving objects. The method also performs a texture object foreground detection on each of the images, so as to obtain a second foreground detection image comprising a plurality of texture objects. The moving objects in the first foreground detection image and the texture objects in the second foreground detection image are selected and filtered, and then the remaining moving objects or texture objects after the filtering are output as real moving object information.05-19-2011
20090052737Method and Apparatus for Detecting a Target in a Scene - A method of detecting a target in a scene is described that comprises the step of taking one or more data sets, each data set comprising a plurality of normalised data elements, each normalised data element corresponding to the return from a part of the scene normalised to a reference return for the same part of the scene. The method then involves thresholding 02-26-2009
20110019873PERIPHERY MONITORING DEVICE AND PERIPHERY MONITORING METHOD - A flow calculating section 01-27-2011
20110129118SYSTEMS AND METHODS FOR TRACKING NATURAL PLANAR SHAPES FOR AUGMENTED REALITY APPLICATIONS - The present system discloses systems and methods for tracking planar shapes for augmented-reality (AR) applications. Systems for real-time recognition and camera six-degrees-of-freedom pose estimation from planar shapes are disclosed. Recognizable shapes can be augmented with 3D content. Recognizable shapes can be in the form of a predefined library that is updated online using a network. Shapes can be added to the library when the user points to a shape and asks the system to start recognizing it. The systems perform shape recognition by analyzing contour structures and generating projective invariant signatures. Image features are further extracted for pose estimation and tracking. Sample points are matched by evolving an active contour in real time.06-02-2011
20090103778Composition determining apparatus, composition determining method, and program - A composition determining apparatus includes a subject detecting unit configured to detect one or more specific subjects in an image based on image data; a subject orientation detecting unit configured to detect subject orientation information indicating an orientation in the image of the subject detected by the subject detecting unit, the detection of the subject orientation information being performed for each of the detected subjects; and a composition determining unit configured to determine a composition based on the subject orientation information. When a plurality of subjects are detected by the subject detecting unit, the composition determining unit determines a composition based on a relationship among a plurality of pieces of the subject orientation information corresponding to the plurality of subjects.04-23-2009
20090080696Automated person identification and location for search applications - A “be on the look out” or BOLO device is an unsupervised device that can be deployed at a particular location to watch for a specific target or person. A camera produces scene images that the BOLO device analyzes to determine if they contain a pattern matching a target descriptor. If a matching pattern is found, then the BOLO device emits an alarm signal. The alarm signal can contain the BOLO device's location or identification. A location database can produce the device's location when given the device's identification. A target transmitter can supply new target descriptors to deployed BOLO devices.03-26-2009
20090080695Electro-optical Foveated Imaging and Tracking System - Conventional electro-optical imaging systems cannot achieve wide field of view (FOV) and high spatial resolution imaging simultaneously due to format size limitations of image sensor arrays. To implement wide field of regard imaging with high resolution, mechanical scanning mechanisms are typically used. Still, sensor data processing and communication speed are constrained due to the large amount of data if large format image sensor arrays are used. This invention describes an electro-optical imaging system that achieves wide FOV global imaging for suspect object detection and local high resolution for object recognition and tracking. It mimics the foveated imaging property of human eyes. There is no mechanical scanning for changing the region of interest (ROI). Two relatively small format image sensor arrays are used to respectively acquire a global low resolution image and a local high resolution image. The ROI is detected and located by analysis of the global image. A lens array along with an electronically addressed switch array and a magnification lens is used to pick out and magnify the local image. The global image and local image are processed by the processor, and can be fused for display. Three embodiments of the invention are described.03-26-2009
20110085701STRUCTURE DETECTION APPARATUS AND METHOD, AND COMPUTER-READABLE MEDIUM STORING PROGRAM THEREOF - A plurality of candidate points are extracted from image data. The plurality of candidate points are normalized, and a set of representative points composing a form model that is most similar to a set form is selected from the plurality of candidate points. Further, the candidate points and the form model are compared with each other, and correction is performed by adding a region forming the structure, by deleting a region, or the like. Accordingly, the structure is detected in the image data.04-14-2011
20090214081APPARATUS AND METHOD FOR DETECTING OBJECT - A disparity profile indicating a relation between a perpendicular position on time series images and a disparity on a target monitoring area, based on an arrangement of a camera, is calculated. Processing areas are set by setting a height of each processing area, using a length at the bottom of the image obtained by converting a reference value of an object height according to the profile, while setting a position of the bottom of each processing area on the image. An object having a height greater than a certain height with respect to the monitoring area is detected by unifying the object detection results of the processing areas according to the disparity of the object, so that the object is detected over the whole monitoring area. Position and speed are estimated for the object detected by the object primary detection unit.08-27-2009
20090214080METHODS AND APPARATUS FOR RUNWAY SEGMENTATION USING SENSOR ANALYSIS - Systems and methods for determining whether a region of interest (ROI) includes a runway are provided. One system includes a camera for capturing an image of the ROI, an analysis module for generating a binary large object (BLOB) of at least a portion of the ROI, and a synthetic vision system including a template of the runway. The system further includes a segmentation module for determining if the ROI includes the runway based on a comparison of the template and the BLOB. One method includes the steps of identifying a position for each corner on the BLOB and forming a polygon on the BLOB based on the position of each corner. The method further includes the step of determining that the BLOB represents the runway based on a comparison of the polygon and a template of the runway. Also provided are computer-readable mediums storing instructions for performing the above method.08-27-2009
20090214079SYSTEMS AND METHODS FOR RECOGNIZING A TARGET FROM A MOVING PLATFORM - Systems and methods for recognizing a location of a target are provided. One system includes a camera configured to generate first data representing an object resembling the target, a memory storing second data representing a template of the target, and a processor. The processor is configured to receive the first data and the second data, and determine that the object is the target if the object matches the template within a predetermined percentage error. A method includes receiving first data representing an object resembling the target, receiving second data representing a template of the target, and determining that the object is the target if the object matches the template within a predetermined percentage error. Also provided are computer-readable mediums including processor instructions for executing the above method.08-27-2009
20090214078Method for Handling Static Text and Logos in Stabilized Images - To handle static text and logos in stabilized images without destabilizing the static text and logos, a method of handling overlay subpictures in stabilized images includes detecting an overlay subpicture in an input image, separating the overlay subpicture from the input image, stabilizing the input image to form a stabilized image, and merging the overlay subpicture with the stabilized image to obtain an output image.08-27-2009
20090214077Method For Determining The Self-Motion Of A Vehicle - A method and a device for determining the self-motion of a vehicle in an environment are provided, in which at least part of the environment is recorded via snapshots by an imaging device mounted on the vehicle. At least two snapshots are analyzed for determining the optical flows of image points, reference points that seem to be stationary from the point of view of the imaging device being ascertained from the optical flows. The reference points are collected in an observed set, new reference points being dynamically added to the observed set with the aid of a first algorithm, and existing reference points being dynamically removed from the observed set with the aid of a second algorithm.08-27-2009
20110243389METHOD OF DETECTING PARTICLES BY DETECTING A VARIATION IN SCATTERED RADIATION - A smoke detecting method which uses a beam of radiation such as a laser 10-06-2011
20110243388IMAGE DISPLAY APPARATUS, IMAGE DISPLAY METHOD, AND PROGRAM - An image display apparatus may include a display section for presenting an image. The apparatus may also include a viewing angle calculation section for determining a viewing angle of a user relative to the display section. Additionally, the apparatus may include an image generation section for generating first image data representing a first image, and for supplying the first image data to the display section for presentation of the first image. The image generation section may generate the first image data based on the user's viewing angle, second image data representing a second image, and third image data representing a third image. The second image may include an object viewed from a first viewing angle and the third image may include the object viewed from a second viewing angle, the first viewing angle and the second viewing angle being different from each other and from the user's viewing angle.10-06-2011
20110243387Analysis of Radiographic Images - The present invention therefore provides a method for the analysis of radiographic images, comprising the steps of acquiring a plurality of projection images of a patient, acquiring a surrogate signal indicative of the location of a target structure in the patient, reconstructing a plurality of volumetric images of the patient from the projection images, each volumetric image being reconstructed from projection images having a like breathing phase, identifying the position of the target structure such as a tumour in each volumetric image, associating a surrogate signal with each of the projection images, and determining a relationship between the surrogate signal and the position of the target structure. Multiple projection images having a like breathing phase can be grouped for reconstruction, to provide sufficient numbers for reconstruction. The analysis of the multiple values of the surrogate associated with each breathing phase can be used to determine the mean surrogate value and its variation. Multiple values of the surrogate signal associated with the same nominal breathing phase can be used to determine a mean value of the surrogate signal for the target position associated with that phase and a variation of the value of the surrogate signal for the target position associated with that phase. The breathing phase of specific projection images can be obtained by analysis of one or more features in the images, such as the method we described in U.S. Pat. No. 7,356,112, or otherwise.10-06-2011
20110243386Method and System for Multiple Object Detection by Sequential Monte Carlo and Hierarchical Detection Network - A method and system for detecting multiple objects in an image is disclosed. A plurality of objects in an image is sequentially detected in an order specified by a trained hierarchical detection network. In the training of the hierarchical detection network, the order for object detection is automatically determined. The detection of each object in the image is performed by obtaining a plurality of sample poses for the object from a proposal distribution, weighting each of the plurality of sample poses based on an importance ratio, and estimating a posterior distribution for the object based on the weighted sample poses.10-06-2011
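The sequential Monte Carlo step this abstract names (sample poses from a proposal distribution, weight each sample by an importance ratio, estimate the posterior from the weighted samples) can be sketched in one dimension; the Gaussian likelihood, the proposal parameters, and the scalar "pose" are illustrative assumptions, not the hierarchical detection network itself.

```python
import numpy as np

rng = np.random.default_rng(0)

def smc_estimate(likelihood, proposal_mean, proposal_std, n=5000):
    """One importance-sampling step: sample poses, weight, posterior mean."""
    samples = rng.normal(proposal_mean, proposal_std, n)  # sample poses
    weights = likelihood(samples)                         # importance ratio
    weights = weights / weights.sum()
    return float((weights * samples).sum())               # posterior estimate

true_pose = 2.0
likelihood = lambda x: np.exp(-0.5 * ((x - true_pose) / 0.3) ** 2)
est = smc_estimate(likelihood, proposal_mean=0.0, proposal_std=3.0)
```

Even with a broad, poorly centred proposal, the weighted estimate concentrates near the pose the likelihood favours, which is what lets a detection network chain such estimates object by object.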
20110243385Moving object detection apparatus, moving object detection method, and program - Disclosed herein is a moving object detection apparatus including: an image input processing section configured to input an analysis image composed of an image taken by a camera in order to establish a designated region inside the analysis image; a first detection processing section configured to detect an image of a moving object which moves within the designated region established by the image input processing section and which is at a distance in a first range from the camera; and a second detection processing section configured to detect an image of the moving object which moves within the designated region established by the image input processing section and which is at a distance in a second range from the camera, the second range being farther than the first range.10-06-2011
20110243384IMAGE PROCESSING APPARATUS AND METHOD AND PROGRAM - There are provided an image processing apparatus, a method, and a program capable of appropriately adjusting the stereoscopic effect in a stereoscopic image with a person. The attention point serving as the provisional cross point position is set to a person's eye, and the cross point position is shifted backwards from the attention point as the percentage of the image occupied by the face increases, thereby adjusting the stereoscopic effect so as to increase an area of the object which is projected forward from the cross point. Regarding the calculation of the back shift amount, the back shift amount is set to increase as the percentage of the standard image occupied by the face increases, and the coefficient is set to be smaller as the number of pixels at positions nearer than the attention point increases, and the set coefficient kb is multiplied by the back shift amount.10-06-2011
20110243381METHODS FOR TRACKING OBJECTS USING RANDOM PROJECTIONS, DISTANCE LEARNING AND A HYBRID TEMPLATE LIBRARY AND APPARATUSES THEREOF - A method, non-transitory computer readable medium, and apparatus that tracks an object includes utilizing random projections to represent an object in a region of an initial frame in a transformed space with at least one less dimension. One of a plurality of regions in a subsequent frame with a closest similarity between the represented object and one or more of a plurality of templates is identified as a location for the object in the subsequent frame. A learned distance is applied for template matching, and techniques that incrementally update the distance metric online are utilized in order to model the appearance of the object and increase the discrimination between the object and the background. A hybrid template library, with stable templates and hybrid templates that contain appearances of the object during the initial stage of tracking as well as more recent ones, is utilized to achieve robustness with respect to pose variation and illumination changes.10-06-2011
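The random-projection idea (representing a patch in a transformed space with fewer dimensions, then picking the candidate region closest to a template) can be sketched as follows. The patch size, projected dimension, and plain Euclidean distance are assumptions; the learned distance metric and hybrid template library of the abstract are omitted here.

```python
import numpy as np

rng = np.random.default_rng(1)
DIM_IN, DIM_OUT = 64, 8                   # an 8x8 patch -> 8-D code
P = rng.normal(0.0, 1.0 / np.sqrt(DIM_OUT), (DIM_OUT, DIM_IN))

def project(patch):
    """Random projection of a patch into the lower-dimensional space."""
    return P @ patch.ravel()

def best_match(template_code, candidates):
    """Index of the candidate closest to the template in the projected space."""
    dists = [np.linalg.norm(project(c) - template_code) for c in candidates]
    return int(np.argmin(dists))

template = rng.normal(size=(8, 8))
noisy_copy = template + 0.01 * rng.normal(size=(8, 8))
distractors = [rng.normal(size=(8, 8)) for _ in range(5)]
idx = best_match(project(template), distractors + [noisy_copy])
```

Because random projections approximately preserve distances, the slightly perturbed copy of the template is still the closest candidate in the 8-D code space.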
20110243380COMPUTING DEVICE INTERFACE - A computing device configured for providing an interface is described. The computing device includes a processor and instructions stored in memory. The computing device projects a projected image from a projector. The computing device also captures an image including the projected image using a camera. The camera operates in a visible spectrum. The computing device calibrates itself, detects a hand and tracks the hand based on a tracking pattern in a search space. The computing device also performs an operation.10-06-2011
20110243378METHOD AND APPARATUS FOR OBJECT TRACKING AND LOITERING DETECTION - A method and apparatus for object tracking and loitering detection are provided. The method includes: wavelet-converting an input image by converting the input image into an image of a frequency domain to generate a frequency domain image and separating the frequency domain image according to a frequency band and a resolution; extracting object information including essential information about the input image from the frequency domain image; performing a fractal affine transform on the object information; and compensating for a difference between object information about a previous image and the object information about the input image by using a coefficient which is obtained by the fractal affine transform.10-06-2011
20110243377SYSTEM AND METHOD FOR PREDICTING OBJECT LOCATION - A system for predicting object location includes a video capture system for capturing a plurality of video frames, each of the video frames having a first area, an object isolation element for locating an object in each of the plurality of video frames, the object being located at a first actual position in a first video frame and being located at a second actual position in a second video frame, and a trajectory calculation element configured to analyze the first actual position and the second actual position to determine an object trajectory, the object trajectory comprising past trajectory and predicted future trajectory, wherein the predicted future trajectory is used to determine a second area in a subsequent video frame in which to search for the object, wherein the second area is different in size than the first area.10-06-2011
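The trajectory-prediction idea reduces, in its simplest form, to constant-velocity extrapolation from two actual positions, with the predicted position centring a smaller search area in the next frame; the window half-size here is an assumption, and the patent's trajectory model may be richer than linear.

```python
def predict_next(p1, p2):
    """Constant-velocity extrapolation one frame past p2."""
    vx, vy = p2[0] - p1[0], p2[1] - p1[1]
    return (p2[0] + vx, p2[1] + vy)

def search_window(center, half_size):
    """Axis-aligned search area (x0, y0, x1, y1) around the predicted point."""
    (cx, cy), s = center, half_size
    return (cx - s, cy - s, cx + s, cy + s)

pred = predict_next((10, 20), (14, 23))   # object moving +4, +3 per frame
win = search_window(pred, 8)              # half-size 8 is an assumption
```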
20110243376METHOD AND A DEVICE FOR DETECTING OBJECTS IN AN IMAGE - Detection of an object of a specified object category in an image. With the method, it is provided that: (1) at least two detectors are provided which are respectively set up for the purpose of detecting an object of the specified object category with a specified object size, wherein object sizes differ for the detectors, (2) the image is evaluated by the detectors in order to check whether an object of the specified object category is located in the image, and (3) an object of the specified object category is detected in the image when on the basis of the evaluation of the image by at least one of the detectors it is determined that an object of the specified object category is located in the image. A system suitable for implementing the method for detecting an object of a specified object category in an image is also described.10-06-2011
20120243740Scene Determination and Prediction - A system and method for scene determination is disclosed. The system comprises a communication interface, an object detector, a temporal pattern module and a scene determination module. The communication interface receives a video including at least one frame. The at least one frame includes information describing a scene. The object detector detects a presence of an object in the at least one frame and generates at least one detection result based at least in part on the detection. The temporal pattern module generates a temporal pattern associated with the object based at least in part on the at least one detection result. The scene determination module determines a type of the scene based at least in part on the temporal pattern.09-27-2012
20120243741Object Recognition For Security Screening and Long Range Video Surveillance - A method of detecting an object in image data that is deemed to be a threat includes annotating sections of at least one training image to indicate whether each section is a component of the object, encoding a pattern grammar describing the object using a plurality of first order logic based predicate rules, training distinct component detectors to each identify a corresponding one of the components based on the annotated training images, processing image data with the component detectors to identify at least one of the components, and executing the rules to detect the object based on the identified components.09-27-2012
20120243730COLLABORATIVE CAMERA SERVICES FOR DISTRIBUTED REAL-TIME OBJECT ANALYSIS - A collaborative object analysis capability is depicted and described herein. The collaborative object analysis capability enables a group of cameras to collaboratively analyze an object, even when the object is in motion. The analysis of an object may include one or more of identification of the object, tracking of the object while the object is in motion, analysis of one or more characteristics of the object, and the like. In general, a camera is configured to discover the camera capability information for one or more neighboring cameras, and to generate, on the basis of such camera capability information, one or more actions to be performed by one or more neighboring cameras to facilitate object analysis. The collaborative object analysis capability also enables additional functions related to object analysis, such as alerting functions, archiving functions (e.g., storing captured video, object tracking information, object recognition information, and so on), and the like.09-27-2012
20090324008METHOD, APPARATUS AND COMPUTER PROGRAM PRODUCT FOR PROVIDING GESTURE ANALYSIS - A method for providing gesture analysis may include analyzing image data using a skin detection model generated with respect to detecting skin of a specific user, tracking a portion of the image data correlating to a skin region, and performing a gesture recognition for the tracked portion of the image based on comparing features recognized in the skin region to stored features corresponding to a predefined gesture. An apparatus and computer program product corresponding to the method are also provided.12-31-2009
20110129121REAL-TIME FACE TRACKING IN A DIGITAL IMAGE ACQUISITION DEVICE - An image processing apparatus for tracking faces in an image stream iteratively receives an acquired image from the image stream potentially including one or more face regions. The acquired image is sub-sampled at a specified resolution to provide a sub-sampled image. An integral image is then calculated for at least a portion of the sub-sampled image. Fixed size face detection is applied to at least a portion of the integral image to provide a set of candidate face regions. Responsive to the set of candidate face regions produced and any previously detected candidate face regions, the resolution is adjusted for sub-sampling a subsequent acquired image.06-02-2011
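The integral image used for fixed-size face detection can be built in a single pass so that the sum over any rectangle costs four lookups, which is what makes Haar-style detectors fast. A minimal sketch:

```python
def integral_image(img):
    """Summed-area table: ii[i][j] holds the sum of img[0..i-1][0..j-1],
    padded with a zero row/column so box sums need exactly four lookups."""
    h, w = len(img), len(img[0])
    ii = [[0] * (w + 1) for _ in range(h + 1)]
    for i in range(h):
        row_sum = 0
        for j in range(w):
            row_sum += img[i][j]
            ii[i + 1][j + 1] = ii[i][j + 1] + row_sum
    return ii

def box_sum(ii, top, left, bottom, right):
    """Sum of img[top..bottom-1][left..right-1] in O(1)."""
    return ii[bottom][right] - ii[top][right] - ii[bottom][left] + ii[top][left]
```

Computing the table only for a portion of the sub-sampled image, as the abstract describes, limits the cost further.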
20090245579PROBABILITY DISTRIBUTION CONSTRUCTING METHOD, PROBABILITY DISTRIBUTION CONSTRUCTING APPARATUS, STORAGE MEDIUM OF PROBABILITY DISTRIBUTION CONSTRUCTING PROGRAM, SUBJECT DETECTING METHOD, SUBJECT DETECTING APPARATUS, AND STORAGE MEDIUM OF SUBJECT DETECTING PROGRAM - A probability distribution constructing method extracts a subject shape similar to a subject of a specific type repeatedly appearing in various sizes in plural images obtained by repeatedly photographing a field using a fixedly disposed camera, in accordance with a size of the similar subject shape and positional information of the camera on a view angle. Subsequently, the probability distribution constructing method determines the similar subject shape, calculates an appearance probability distribution of the size of the subject, and detects the subject using the appearance probability distribution.10-01-2009
20100239122METHOD FOR CREATING AND/OR UPDATING TEXTURES OF BACKGROUND OBJECT MODELS, VIDEO MONITORING SYSTEM FOR CARRYING OUT THE METHOD, AND COMPUTER PROGRAM - Video monitoring systems are used for camera-supported monitoring of relevant areas, and usually comprise a plurality of monitoring cameras placed in the relevant areas for recording monitoring scenes. The monitoring scenes may be, for example, parking lots, intersections, streets, plazas, but also regions within buildings, plants, hospitals, or the like. In order to simplify the analysis of the monitoring scenes by monitoring personnel, the invention proposes displaying at least the background of the monitoring scene on a monitor as a virtual reality in the form of a three-dimensional scene model using background object models. The invention proposes a method for creating and/or updating textures of background object models in the three-dimensional scene model, wherein a background image of the monitoring scene is formed from one or more camera images.09-23-2010
20090097709SIGNAL PROCESSING APPARATUS - A signal processing apparatus for displaying an input image in the state in which a part of the image is enlarged, displays an enlarged image obtained by enlarging a part of a designated object in the input image so that the enlarged image is superimposed at a position in accordance with the position of the designated object.04-16-2009
20110211729Method for Generating Visual Hulls for 3D Objects as Sets of Convex Polyhedra from Polygonal Silhouettes - A visual hull for a 3D object is generated by using a set of silhouettes extracted from a set of images. First, a set of convex polyhedra is generated as a coarse 3D model of the object. Then for each image, the convex polyhedra are refined by projecting them to the image and determining the intersections with the silhouette in the image. The visual hull of the object is represented as a union of the convex polyhedra.09-01-2011
20110249865APPARATUS, METHOD AND COMPUTER-READABLE MEDIUM PROVIDING MARKER-LESS MOTION CAPTURE OF HUMAN - Provided are an apparatus, method and computer-readable medium providing marker-less motion capture of a human. The apparatus may include a two-dimensional (2D) body part detection unit to detect, from input images, candidate 2D body part locations of candidate 2D body parts; a three-dimensional (3D) lower body part computation unit to compute 3D lower body parts using the detected candidate 2D body part locations; a 3D upper body computation unit to compute 3D upper body parts based on a body model; and a model rendering unit to render the model in accordance with a result of the computed 3D upper body parts.10-13-2011
20110085706DEVICE AND METHOD FOR LOCALIZING AN OBJECT OF INTEREST IN A SUBJECT - The present invention relates to a device, a method and a computer program which allow for the localization of an object of interest in a subject. The device includes a registration unit.04-14-2011
20110085703Method and apparatus for automatic object identification - A method and system for processing image data to identify objects in an image. Terrain types are identified in the image. A second image is generated identifying areas of the image which border regions of different intensities by identifying a gradient magnitude value for each pixel of the image. A filtered image is generated from the second image, the filtered image identifying potential objects which have a smaller radius than the size of a filter and a different brightness than background pixels surrounding the potential objects. The second image and the filtered image are compared to identify potential objects as an object. A potential object is identified as an object if the potential object has a gradient magnitude greater than a threshold gradient magnitude, and the threshold gradient magnitude is based on the terrain type identified in the portion of the image where the potential object is located.04-14-2011
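The per-pixel gradient-magnitude image described above can be sketched with 3x3 Sobel operators; the abstract does not name a particular derivative filter, so Sobel is an assumption here, and border pixels are simply left at zero.

```python
SOBEL_X = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
SOBEL_Y = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]

def gradient_magnitude(img):
    """Per-pixel gradient magnitude via 3x3 Sobel convolutions,
    highlighting areas that border regions of different intensity."""
    h, w = len(img), len(img[0])
    mag = [[0.0] * w for _ in range(h)]
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            gx = sum(SOBEL_X[a][b] * img[i-1+a][j-1+b]
                     for a in range(3) for b in range(3))
            gy = sum(SOBEL_Y[a][b] * img[i-1+a][j-1+b]
                     for a in range(3) for b in range(3))
            mag[i][j] = (gx * gx + gy * gy) ** 0.5
    return mag
```

Thresholding this magnitude against a terrain-dependent value, as the abstract describes, then separates candidate objects from background.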
20110085705DETECTION OF BODY AND PROPS - A system and method for detecting and tracking targets including body parts and props is described. In one aspect, the disclosed technology acquires one or more depth images, generates one or more classification maps associated with one or more body parts and one or more props, tracks the one or more body parts using a skeletal tracking system, tracks the one or more props using a prop tracking system, and reports metrics regarding the one or more body parts and the one or more props. In some embodiments, feedback may occur between the skeletal tracking system and the prop tracking system.04-14-2011
20110085702OBJECT TRACKING BY HIERARCHICAL ASSOCIATION OF DETECTION RESPONSES - Systems, methods, and computer readable storage media are described that can provide a multi-level hierarchical framework to progressively associate detection responses, in which different methods and models are adopted to improve tracking robustness. A modified transition matrix for the Hungarian algorithm can be used to solve the association problem that considers not only initialization, termination and transition of tracklets but also false alarm hypotheses. A Bayesian inference approach can be used to automatically estimate a scene structure model as the high-level knowledge for the long-range trajectory association.04-14-2011
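The modified transition matrix described above augments link costs between tracklets with explicit initialization and termination hypotheses before solving the assignment. The sketch below builds such an augmented square matrix but solves it by exhaustive search rather than the Hungarian algorithm, purely to keep the example dependency-free; the costs are illustrative assumptions.

```python
from itertools import permutations

INF = float("inf")

def associate_tracklets(link, term, init):
    """Minimum-cost association of tracklet ends to tracklet starts.
    link[i][j] is the cost of linking end i to start j; an unmatched
    end pays `term` (termination), an unmatched start pays `init`
    (initialization).  Dummy rows/columns make the matrix square."""
    n = len(link)
    m = len(link[0]) if link else 0
    size = n + m

    def cost(i, j):
        if i < n and j < m:
            return link[i][j]                    # link end i -> start j
        if i < n:
            return term if j - m == i else INF   # end i terminates
        if j < m:
            return init if i - n == j else INF   # start j initializes
        return 0.0                               # dummy-to-dummy, free

    best_cost, best_links = INF, None
    for perm in permutations(range(size)):       # Hungarian in practice
        total = sum(cost(i, perm[i]) for i in range(size))
        if total < best_cost:
            best_cost = total
            best_links = [(i, perm[i]) for i in range(n) if perm[i] < m]
    return best_cost, best_links
```

When a link is cheaper than terminating one tracklet and starting another, the tracklets are joined; otherwise the false-alarm/termination hypotheses win.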
20110085700Systems and Methods for Generating Bio-Sensory Metrics - Neuromarketing processing systems and methods are described that provide marketers with a window into the mind of the consumer with a scientifically validated, quantitatively-based means of bio-sensory measurement. The neuromarketing processing system generates, from bio-sensory inputs, quantitative models of consumers' responses to information in the consumer environment, under an embodiment. The quantitative models provide information including consumers' emotion, engagement, cognition, and feelings. The information in the consumer environment includes advertising, packaging, in-store marketing, and online marketing.04-14-2011
20110085699Method and apparatus for tracking image patch considering scale - A method and apparatus for tracking an image patch considering scale are provided. A registered image patch may be classified as a scale-invariant image patch or a scale-variant image patch according to a predetermined scale invariance index (SII). If a registered image patch within an image is a scale-invariant image patch, it is tracked by adjusting its position, while if the registered image patch is a scale-variant image patch, it is tracked by adjusting both its position and scale.04-14-2011
20090052741Subject tracking method, subject tracking device, and computer program product - A subject tracking method, includes: calculating a similarity factor indicating a level of similarity between an image contained in a search frame at each search frame position and a template image by shifting the search frame within a search target area set in each of individual frames of input images input in time sequence; determining a position of the search frame for which a highest similarity factor value has been calculated, within each input image to be a position (subject position) at which a subject is present; tracking the subject position thus determined through the individual frames of input images; calculating a difference between a highest similarity factor value and a second highest similarity factor value; and setting the search target area for a next frame based upon the calculated difference.02-26-2009
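The tracking step above, shifting a search frame, scoring each placement against a template, and using the gap between the highest and second-highest similarity to set the next search area, can be sketched as follows. Negative sum-of-absolute-differences as the similarity measure and the margin threshold are illustrative assumptions, not the patent's choices.

```python
def track_step(frame, template, area):
    """One subject-tracking step: score every candidate top-left position
    in `area` against the template, pick the best as the subject position,
    and derive a search radius for the next frame from the confidence
    margin between the best and second-best similarity."""
    th, tw = len(template), len(template[0])

    def similarity(y, x):
        # Negative SAD: higher means more similar.
        return -sum(abs(frame[y + i][x + j] - template[i][j])
                    for i in range(th) for j in range(tw))

    scored = sorted(((similarity(y, x), (y, x)) for y, x in area),
                    reverse=True)
    (best_score, best_pos), (second_score, _) = scored[0], scored[1]
    margin = best_score - second_score
    # A large margin means an unambiguous match, so the next search
    # area can shrink; the threshold 10 is an assumed value.
    radius = 1 if margin > 10 else 2
    return best_pos, margin, radius
```

A small margin signals lookalike distractors nearby, so the next search area is widened instead.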
20090052740MOVING OBJECT DETECTING DEVICE AND MOBILE ROBOT - A moving object detecting device measures a congestion degree of a space and utilizes the congestion degree for tracking. In performing the tracking, a direction measured by a laser range sensor is heavily weighted when the congestion degree is low. When the congestion degree is high, a sensor fusion is performed by heavily weighting a direction measured by image processing on a captured image to obtain a moving object estimating direction, and a distance is obtained by the laser range sensor in the moving object estimating direction.02-26-2009
20090052739HUMAN PURSUIT SYSTEM, HUMAN PURSUIT APPARATUS AND HUMAN PURSUIT PROGRAM - A human pursuit system includes a plurality of cameras installed on a ceiling with their shooting directions directed toward a floor. A parallax of an object reflected in an overlapping image domain is calculated on the basis of at least a portion of the overlapping image domain where images shot by the plurality of cameras overlap, an object whose parallax is equal to or greater than a predetermined threshold value is detected as a human, a pattern image including the detected human is extracted, and pattern matching is applied to the extracted pattern image and the image shot by the camera to thereby pursue the human movement trajectory.02-26-2009
20100034425METHOD, APPARATUS AND SYSTEM FOR GENERATING REGIONS OF INTEREST IN VIDEO CONTENT - A method, apparatus and system for generating regions of interest in a video content include identifying the program content of received video content, categorizing the scene content of the identified program content and defining at least one region of interest in at least one of the categorized scenes by identifying at least one of a location and an object of interest in the scenes. In one embodiment of the invention, a region of interest is defined using user preference information for the identified program content and the categorized scene content.02-11-2010
20090262981IMAGE PROCESSING APPARATUS AND METHOD THEREOF - An image processing apparatus estimates an estimated object region including an object on an input image on the basis of a stored object data, obtains a similarity distribution of the estimated object region and peripheral regions thereof by at least one classifier, and obtains an object region coordinate and a template image on the basis of the similarity distribution.10-22-2009
20100014709Super-resolving moving vehicles in an unregistered set of video frames - A method is provided for accurately determining the registration for a moving vehicle over a number of frames so that the vehicle can be super-resolved. Instead of causing artifacts in a super-resolved image, the moving vehicle can be specifically registered and super-resolved individually. This method is very accurate, as it uses a mathematical model that captures motion with a minimal number of parameters and uses all available image information to solve for those parameters. Methods are provided that implement the vehicle registration algorithm and super-resolve moving vehicles using the resulting vehicle registration. One advantage of this system is that better images of moving vehicles can be created without requiring costly new aerial surveillance equipment.01-21-2010
20100008540Method for Object Detection - A method for object detection from a visual image of a scene. The method includes: using a first order predicate logic formalism to specify a set of logical rules to encode contextual knowledge regarding the object to be detected; inserting the specified logical rules into a knowledge base; obtaining the visual image of the scene; applying specific object feature detectors to some or all pixels in the visual image of the scene to obtain responses at those locations; using the obtained responses to generate logical facts indicative of whether specific features or parts of the object are present or absent at that location in the visual image; inserting the generated logical facts into the knowledge base; and combining the logical facts with the set of logical rules to determine whether the object is present or absent at a particular location in the scene.01-14-2010
20100008542Object detection method and apparatus - An object detection method and apparatus is provided. When an object pixel having a target pixel value is found while an image including an object is scanned at intervals of a preset number of pixels, whether or not each pixel around the object pixel has the target pixel value is sequentially determined, while spreading to pixels around the object pixel, to find an entire pixel region constituting the object and position values of the found pixels are stored. This ensures that an entire pixel region of the object is simply, easily, quickly, and correctly found.01-14-2010
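The spreading step described above — finding one object pixel during a coarse scan, then testing the pixels around it and growing outward until the entire pixel region is found — is a breadth-first flood fill. A minimal sketch with 4-connectivity (the abstract does not specify the connectivity):

```python
from collections import deque

def grow_object(img, seed, target):
    """Grow the full pixel region of an object from a seed hit: spread
    to 4-connected neighbours that hold the target pixel value, storing
    the position of every found pixel."""
    h, w = len(img), len(img[0])
    region, frontier = set(), deque([seed])
    while frontier:
        y, x = frontier.popleft()
        if (y, x) in region or not (0 <= y < h and 0 <= x < w):
            continue
        if img[y][x] != target:
            continue
        region.add((y, x))
        frontier.extend([(y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)])
    return region
```

Scanning the image only at coarse intervals and growing from each hit, as the abstract describes, avoids testing every pixel while still recovering the whole object.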
20100008541Method for Presenting Images to Identify Target Objects - A method presents a set of images to a viewer. The images include objects, which can be either distractor objects or target objects. The prevalence of the target objects is substantially lower than that of the distractor objects. Each image is segmented into portions so that each portion includes one object. The portions are then combined into a combined image. The combined image is presented to a viewer so that the target objects can be accurately and rapidly identified. The combining of the portions can be random or ordered in either the spatial or temporal domain.01-14-2010
20100014710METHOD AND SYSTEM FOR TRACKING POSITIONS OF HUMAN EXTREMITIES - A method for tracking positions of human extremities is disclosed. A left image of a first extremity portion is retrieved using a first picturing device and an outline candidate position of the first extremity portion is obtained according to feature information of the left image. A right image of the first extremity portion is retrieved using a second picturing device and a depth candidate position of the first extremity portion is obtained according to depth information of the right image. Geometry relations between the outline candidate position and the depth candidate position and a second extremity portion of a second extremity position are calculated to determine whether a current extremity position of the first extremity portion is required to be updated.01-21-2010
20100014707Vehicle and road sign recognition device - A vehicle and road sign recognition device each include image capturing means.01-21-2010
20100054534SYSTEM AND METHOD FOR INTERACTING WITH A MEDIA DEVICE USING FACES AND PALMS OF VIDEO DISPLAY VIEWERS - Systems and methods that allow for user interaction with and control of televisions and other media devices are disclosed. A television set is provided with a face and/or palm detection device configured to identify faces and/or palms and map them into coordinates. The mapped coordinates may be translated into data inputs which may be used to interact with applications related to the television. In some embodiments, multiple faces and/or palms may be detected and inputs may be received from each of them. The inputs received by mapping the coordinates may include inputs for interactive television programs in which viewers are asked to vote or rank some aspect of the program.03-04-2010
20100061593Extrapolation system for solar access determination - An extrapolation system includes acquiring a first orientation-referenced image at a first position, acquiring a second orientation-referenced image at a second position having a vertical offset from the first position, and processing the first orientation-referenced image and the second orientation-referenced image to provide an output parameter extrapolated to a third position that has an offset from the first position and the second position.03-11-2010
20100054537VIDEO FINGERPRINTING - A method for fingerprinting video comprising: identifying motion in a video as a function of time; using the identified motion to create a motion fingerprint; and identifying peaks and/or troughs in the motion fingerprint and using these to create a reduced-size points-of-interest motion fingerprint. Reduced-size fingerprints for a plurality of known videos can be prepared and stored for later comparison with reduced-size fingerprints for unknown videos, thereby providing a mechanism for identifying the unknown videos.03-04-2010
20100061595INVENTORY MANAGEMENT SYSTEM - The location of objects in a building is recorded in the inventory management system. The objects are moved through the building with a vehicle. The vehicle transmits wireless messages indicating actions of the vehicle, such as loading or unloading of objects. A camera captures images of an area in which the vehicle moves. Positions of the vehicle are automatically detected from the captured images. The information about locations of objects is updated using the detected positions at time points indicated by the messages. In an embodiment the actions of the vehicle are signalled with light signals and picked up via the camera.03-11-2010
20100054536ESTIMATING A LOCATION OF AN OBJECT IN AN IMAGE - An implementation provides a method including forming a metric surface in a particle-based framework for tracking an object, the metric surface relating to a particular image in a sequence of digital images. Multiple hypotheses are formed of a location of the object in the particular image, based on the metric surface. The location of the object is estimated based on probabilities of the multiple hypotheses.03-04-2010
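The particle-based framework above scores each location hypothesis against the metric surface and estimates the object location from the hypotheses' probabilities. A minimal sketch, assuming an exponential (softmax-style) weighting of metric scores and a weighted-mean estimate — both assumptions, since the abstract does not fix either choice:

```python
import math

def estimate_location(particles, metric):
    """Estimate an object location from multiple hypotheses.  Each
    particle is a candidate (x, y); metric(x, y) reads the metric
    surface at that point.  Scores become normalized probabilities,
    and the estimate is the probability-weighted mean of hypotheses."""
    weights = [math.exp(metric(x, y)) for x, y in particles]
    total = sum(weights)
    probs = [w / total for w in weights]
    ex = sum(p * x for p, (x, y) in zip(probs, particles))
    ey = sum(p * y for p, (x, y) in zip(probs, particles))
    return (ex, ey), probs
```

With a flat metric surface all hypotheses are equally likely and the estimate sits at their centroid; a sharp peak pulls the estimate onto the winning hypothesis.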
20120170811METHOD AND APPARATUS FOR WHEEL ALIGNMENT - A vehicle wheel alignment method and system is provided. A three-dimensional target is attached to a vehicle wheel known to be in alignment. The three-dimensional target has multiple target elements thereon, each of which has known geometric characteristics and 3D spatial relationship with one another.07-05-2012
20100061594DETECTION OF MOTOR VEHICLE LIGHTS WITH A CAMERA - A method for detecting front headlights and tail lights of a motor vehicle with a colour camera sensor is presented. The colour camera sensor comprises a plurality of red pixels, i.e. image points which are only sensitive in the red spectral range, and a plurality of pixels of other colours. In a first evaluation stage, only the intensity of the red pixels in the image is analysed in order to select relevant points of light in the image.03-11-2010
20110176707IMAGE ANALYSIS BY OBJECT ADDITION AND RECOVERY - The invention described herein is generally directed to methods for analyzing an image. In particular, crowded field images may be analyzed for unidentified, unobserved objects based on an iterative analysis of modified images including artificial objects or removed real objects. The results can provide an estimate of the completeness of analysis of the image, an estimate of the number of objects that are unobserved in the image, and an assessment of the quality of other similar images.07-21-2011
20110081046METHOD OF IMPROVING THE RESOLUTION OF A MOVING OBJECT IN A DIGITAL IMAGE SEQUENCE - A method of improving the resolution of a small moving object in a digital image sequence is provided.04-07-2011
20090028385DETECTING AN OBJECT IN AN IMAGE USING EDGE DETECTION AND MORPHOLOGICAL PROCESSING - A representation of an object in a live event is detected in an image of the event. A location of the object in the live event is translated to an estimated location in the image based on camera sensor and/or registration data. A search area is determined around the estimated location in the image. A direction of motion of the object in the image is also determined. A representation of the object is identified in the search area by detecting edges of the object, e.g., perpendicular to the direction of motion and parallel to the direction of motion, performing morphological processing, and matching against a model or other template of the object. Based on the position of the representation of the object, the camera sensor and/or registration data can be updated, and a graphic can be located in the image substantially in real time.01-29-2009
20090028387Apparatus and method for recognizing position of mobile robot - Provided is an apparatus for recognizing the position of a mobile robot. The apparatus includes an image capturing unit which is loaded into a mobile robot and captures an image; an illuminance determining unit which determines illuminance at a position where an image is to be captured; a light-emitting unit which emits light toward the position; a light-emitting control unit which controls the light-emitting unit according to the determined illuminance; a driving control unit which controls the speed of the mobile robot according to the determined illuminance; and a position recognizing unit which recognizes the position of the mobile robot by comparing a pre-stored image to the captured image.01-29-2009
20090028386AUTOMATIC TRACKING APPARATUS AND AUTOMATIC TRACKING METHOD - An automatic tracking apparatus is provided that is capable of resolving failures occurring in an automatic tracking operation in connection with a zooming operation, and capable of tracking an object in a stable manner while a zooming-up or zooming-down operation is carried out at high speed.01-29-2009
20110150275MODEL-BASED PLAY FIELD REGISTRATION - A method, apparatus, and system are described for model-based playfield registration. An input video image is processed. The processing of the video image includes extracting key points relating to the video image. Further, whether enough key points relating to the video image were extracted is determined; a direct estimation of the video image is performed if enough key points have been extracted, and a homography matrix of the final video image is then generated based on the direct estimation.06-23-2011
20110085704Markerless motion capturing apparatus and method - A markerless motion capturing apparatus and method is provided. The markerless motion capturing apparatus may track a pose and a motion of a performer from an image input from a camera, without using a marker or a sensor, thereby extending the range of applications of the apparatus and the choice of capture locations.04-14-2011
20110249868LINE-OF-SIGHT DIRECTION DETERMINATION DEVICE AND LINE-OF-SIGHT DIRECTION DETERMINATION METHOD - Provided are a line-of-sight direction determination device and a line-of-sight direction determination method capable of highly precisely and accurately determining a line-of-sight direction from immediately after start of measurement, without indication of an object to be carefully observed and without adjustment work done in advance.10-13-2011
20110249866METHODS AND SYSTEMS FOR THREE DIMENSIONAL OPTICAL IMAGING, SENSING, PARTICLE LOCALIZATION AND MANIPULATION - Embodiments include methods, systems, and/or devices that may be used to image, obtain three-dimensional information from a scene, and/or locate multiple small particles and/or objects in three dimensions. A point spread function (PSF) with a predefined three dimensional shape may be implemented to obtain high Fisher information in 3D. The PSF may be generated via a phase mask, an amplitude mask, a hologram, or a diffractive optical element. The small particles may be imaged using the 3D PSF. The images may be used to find the precise location of the object using an estimation algorithm such as maximum likelihood estimation (MLE), expectation maximization, or Bayesian methods, for example. Calibration measurements can be used to improve the theoretical model of the optical system. Fiduciary particles/targets can also be used to compensate for drift and other type of movement of the sample relative to the detector.10-13-2011
20110249864MEASUREMENT OF THREE-DIMENSIONAL MOTION CHARACTERISTICS - A system for measurement of three-dimensional motion of an object is provided. The system includes a light projection means adapted for projecting, for distinct time intervals, light of at least two different colors with a cross-sectional pattern of fringe lines onto a surface of the object and also includes image acquisition means for capturing an image of the object during an exposure time, wherein the distinct time intervals are within the duration of the exposure time. The system further includes image processing means adapted for processing the image to obtain a different depth map for each color based on a projected pattern of fringe lines on the object as viewed from the position of the image acquisition means, to determine corresponding points on the depth maps of each color, and to determine a three-dimensional motion characteristic of the object based on the positions of corresponding points on the depth maps.10-13-2011
20110249863INFORMATION PROCESSING DEVICE, METHOD, AND PROGRAM - An information processing device includes a face detection unit that detects a face area from a target image, a feature point detection unit that detects a feature point of the detected face area, a determination unit that determines an attention area that is an area to which attention is paid in the face area based on the detected feature point, a reference color extraction unit that extracts a reference color that is color setting obtained from the target image in the determined attention area, an adjustment unit that adjusts the extracted reference color to a color setting for a modified image generated from the target image as a base, and a generation unit that generates the modified image from the target image by drawing the attention area using the color setting for the modified image.10-13-2011
20110249861CONTENT INFORMATION PROCESSING DEVICE, CONTENT INFORMATION PROCESSING METHOD, CONTENT INFORMATION PROCESSING PROGRAM, AND PERSONAL DIGITAL ASSISTANT - An information processing apparatus includes a reproduction unit to reproduce video content comprising a plurality of frames; a memory to store a table including object identification information identifying an object image, and frame identification information identifying a frame of the plurality of frames that includes the object image; and a processor to extract the frame including the object image from the video content and generate display data of a reduced image corresponding to the frame for display.10-13-2011
20110081047ELECTRONIC APPARATUS AND IMAGE DISPLAY METHOD - According to one embodiment, an electronic apparatus detects face images in a still image. The apparatus sets positions and sizes of display ranges on the still image such that the display ranges include the face images respectively, the display ranges being associated with display areas obtained by dividing a display screen. The apparatus displays partial images included in the display ranges on the display areas in order to display the face images on the display areas respectively, and changes the position and size of each of the display ranges such that a display mode of the display screen is caused to transit from a first display mode in which the face images are displayed on the display areas respectively to a second display mode in which an entire image of the still image is displayed on the display screen.04-07-2011
20110081045Systems And Methods For Tracking A Model - An image such as a depth image of a scene may be received, observed, or captured by a device. A grid of voxels may then be generated based on the depth image such that the depth image may be downsampled. A model may be adjusted based on a location or position of one or more extremities estimated or determined for a human target in the grid of voxels. The model may also be adjusted based on a default location or position of the model in a default pose such as a T-pose, a DaVinci pose, and/or a natural pose.04-07-2011
20110081048METHOD AND APPARATUS FOR TRACKING MULTIPLE OBJECTS AND STORAGE MEDIUM - The present invention relates to a method and an apparatus for tracking multiple objects and a storage medium. More particularly, the present invention relates to a method and an apparatus for tracking multiple objects that performs object detection for only one subset of objects per input camera image, regardless of the number N of objects to be tracked, and tracks all objects across images while the detection is performed, thereby tracking multiple objects in real time, and a storage medium. The method for tracking multiple objects according to the exemplary embodiment of the present invention includes: (a) performing object detection with respect to only objects of one subset among multiple objects with respect to an input image at a predetermined time; and (b) tracking all objects among images from an image of a time prior to the predetermined time with respect to all objects in the input image while step (a) is performed.04-07-2011
20110069869SYSTEM AND METHOD FOR DEFINING AN ACTIVATION AREA WITHIN A REPRESENTATION SCENERY OF A VIEWER INTERFACE - The invention describes a system (03-24-2011
20110075884Automatic Retrieval of Object Interaction Relationships - A method for automatically retrieving interaction information between objects, including: with a server, transforming a first image and a second image submitted to said server from a source into first and second sets of parameters, respectively; searching a database for an interaction relationship between the first and second images using the first and second sets of parameters; and returning a representation of the interaction relationship to the source.03-31-2011
20110069865METHOD AND APPARATUS FOR DETECTING OBJECT USING PERSPECTIVE PLANE - A method and apparatus for detecting an object using a perspective plane are disclosed. The method includes determining a perspective plane for a background scene, and determining a moving object within the background scene based upon the determined perspective plane. By using a visual surveillance device and an apparatus for detecting objects, the method and apparatus for detecting an object using a perspective plane are capable of efficiently detecting objects and tracking the movements of the corresponding objects.03-24-2011
20110069868SIGNAL PROCESSING SYSTEM AND SIGNAL PROCESSING PROGRAM - A dedicated base vector, based on a known spectral characteristic of a subject that is an identification target, and a spectral characteristic of an imaging system are acquired, the latter including a spectral characteristic of the color imaging system used for image acquisition of subjects including the identification target and a spectral characteristic of the illumination light used during image acquisition of the subjects by the color imaging system. A weighting factor for the dedicated base vector is calculated based on an image signal obtained by image acquisition of the subject by the color imaging system, the dedicated base vector, and the spectral characteristic of the imaging system. An identification result for the subject that is the identification target having the known spectral characteristic is calculated based on the weighting factor for the dedicated base vector and is output as an output signal.03-24-2011
20110069867TECHNIQUE FOR REGISTERING IMAGE DATA OF AN OBJECT - A technique of registering image data of an object 03-24-2011
20110069866Image processing apparatus and method - Provided is an image processing apparatus. The image processing apparatus may extract a three-dimensional (3D) silhouette image in an input color image and/or an input depth image. Motion capturing may be performed using the 3D silhouette image and 3D body modeling may be performed.03-24-2011
20120201422SIGNAL PROCESSING APPARATUS - A signal processing apparatus for displaying an input image in the state in which a part of the image is enlarged, displays an enlarged image obtained by enlarging a part of a designated object in the input image so that the enlarged image is superimposed at a position in accordance with the position of the designated object.08-09-2012
20120201420Object Recognition and Describing Structure of Graphical Objects - Methods for processing machine-readable forms or documents of non-fixed format are disclosed. The methods make use of, for example, a structural description of characteristics of document elements, a description of a logical structure of the document, and methods of searching for document elements by using the structural description. A structural description of the spatial and parametric characteristics of document elements and the logical connections between elements may include a hierarchical logical structure of the elements, specification of an algorithm of determining the search constraints, specification of characteristics of searched elements, and specification of a set of parameters for a compound element identified on the basis of the aggregate of its components. The method of describing the logical structure of a document and methods of searching for elements of a document may be based on the use of the structural description.08-09-2012
20120201419MAP INFORMATION DISPLAY APPARATUS, MAP INFORMATION DISPLAY METHOD, AND PROGRAM - A map information display apparatus for displaying map information on the basis of information on image-capturing times and image-capturing positions that are respectively associated with a plurality of captured images includes a captured image extraction unit configured to extract images captured within a predetermined time period that includes the image-capturing time of a predetermined captured image from among the plurality of captured images; a map area selection unit configured to select an area of a map so as to include the image-capturing positions of the captured images extracted by the captured image extraction unit by using as a reference the image-capturing position of the predetermined captured image; and a map information display unit configured to display map information in such a manner that the area of the map, which is selected by the map area selection unit, is displayed.08-09-2012
20120201418DIGITAL RIGHTS MANAGEMENT OF CAPTURED CONTENT BASED ON CAPTURE ASSOCIATED LOCATIONS - A certification is received from a user stating that captured content does not comprise a particular restricted element and a request from the user for an adjustment of a digital rights management rule identified for the captured content based on the captured content comprising the particular restricted element. At least one term of the digital rights management rule is adjusted to reflect that the captured content does not comprise the particular restricted element. The usage of the captured content by the user is monitored to determine whether the usage matches the certification statement.08-09-2012
20120201417APPARATUS AND METHOD FOR PROCESSING SENSORY EFFECT OF IMAGE DATA - A method and apparatus is capable of processing a sensory effect of image data. The apparatus includes an image analyzer that analyzes depth information and texture information about at least one object included in an image. A motion analyzer analyzes a motion of a user. An image matching processor matches the motion of the user to the image. An image output unit outputs the image to which the motion of the user is matched, and a sensory effect output unit outputs a texture of an object touched by the body of the user to the body of the user.08-09-2012
20110150283APPARATUS AND METHOD FOR PROVIDING ADVERTISING CONTENT - Disclosed herein are an apparatus and method for providing advertising content effectively. The apparatus for providing advertising content comprises: an image processing unit for extracting an object from a captured image; a long-distance analysis unit for creating long-distance analysis information obtained by analyzing the object at a first distance; a short-distance analysis unit for creating short-distance analysis information obtained by analyzing the object at a second distance that is shorter than the first distance; and a content selection unit for selecting advertising content using the long-distance analysis information and the short-distance analysis information.06-23-2011
20110150273METHOD AND SYSTEM FOR AUTOMATED SUBJECT IDENTIFICATION IN GROUP PHOTOS - A system to automatically attach subject descriptions to a digital image containing one or more subjects is described. The system comprises a camera, a set of remotely readable badges attached to the subjects, where each badge has a readable identification, a receiver to read the badges, where the receiver can determine both the identification of each badge and the location of each badge, and a processor to combine the digital image with the identification and location information. By accessing a database containing the subject identification associated with each badge identification, the processor can attach subject identification information to each subject in the image.06-23-2011
20110150272SYSTEMS AND METHODS OF TRACKING OBJECT PATHS - Systems and methods for tracking the path of a user configurable object are provided. The method includes displaying a video data stream of a monitored region, configuring an object in the video data stream, configuring a valid path of the object, tracking a path of the object, and providing an alert to a user when the object travels outside of the valid path.06-23-2011
20110044499INTER-TRAJECTORY ANOMALY DETECTION USING ADAPTIVE VOTING EXPERTS IN A VIDEO SURVEILLANCE SYSTEM - A sequence layer in a machine-learning engine configured to learn from the observations of a computer vision engine. In one embodiment, the machine-learning engine uses the voting experts to segment adaptive resonance theory (ART) network label sequences for different objects observed in a scene. The sequence layer may be configured to observe the ART label sequences and incrementally build, update, trim, and reorganize an ngram trie for those label sequences. The sequence layer computes the entropies for the nodes in the ngram trie and determines a sliding window length and vote count parameters. Once determined, the sequence layer may segment newly observed sequences to estimate the primitive events observed in the scene as well as issue alerts for inter-sequence and intra-sequence anomalies.02-24-2011
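The entropy-over-ngram-trie idea above can be illustrated with a minimal sketch in which a flat dictionary of n-gram continuation counts stands in for the trie; the function names and the toy label sequence are illustrative assumptions, not the patented implementation:

```python
import math
from collections import defaultdict

def build_ngram_counts(seq, n=3):
    """Count continuations for every context of length 1..n-1 (flat trie stand-in)."""
    counts = defaultdict(lambda: defaultdict(int))
    for i in range(len(seq) - 1):
        for k in range(1, n):
            if i - k + 1 < 0:
                break
            ctx = tuple(seq[i - k + 1:i + 1])
            counts[ctx][seq[i + 1]] += 1
    return counts

def entropy(dist):
    """Shannon entropy (bits) of a continuation-count distribution."""
    total = sum(dist.values())
    return -sum(c / total * math.log2(c / total) for c in dist.values())

counts = build_ngram_counts(list("abcabcabd"))
# After context ('a','b') the next label is 'c' twice and 'd' once.
print(dict(counts[("a", "b")]))                 # {'c': 2, 'd': 1}
print(round(entropy(counts[("a", "b")]), 3))    # 0.918
```

Low-entropy nodes mark predictable continuations; segment boundaries tend to fall where entropy spikes, which is what the voting experts exploit.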
20110033086IMAGE PROCESSING APPARATUS AND IMAGE PROCESSING METHOD - An apparatus includes a storage unit configured to classify pixels existing inside a tracking target area set on an image and pixels existing outside the tracking target area according to an attribute and to store a result of classification of the pixels on a storage medium, a first derivation unit configured to derive a first ratio of the pixels existing inside the tracking target area and having the attribute to the pixels existing outside the tracking target area and having the attribute, a second derivation unit configured to derive a second ratio of pixels whose first ratio is higher than a first predetermined value to all pixels existing inside the tracking target area, and a determination unit configured, if the second ratio is higher than a second predetermined value, to determine that the tracking target area can be tracked.02-10-2011
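The two-ratio test described above can be sketched as follows, assuming pixel attributes are simple hashable values (e.g. quantized hue bins); the thresholds and names are illustrative, not taken from the patent:

```python
from collections import Counter

def is_trackable(inside, outside, r1_min=2.0, r2_min=0.5):
    """Decide whether a tracking area is distinctive enough, per the two-ratio idea.

    inside/outside: lists of per-pixel attributes (e.g. quantized hue bins).
    """
    in_counts, out_counts = Counter(inside), Counter(outside)

    def first_ratio(attr):
        # Inside occurrences of the attribute vs. outside occurrences.
        return in_counts[attr] / max(out_counts[attr], 1)

    # Second ratio: share of inside pixels whose attribute is area-specific.
    distinctive = sum(c for a, c in in_counts.items() if first_ratio(a) > r1_min)
    return distinctive / len(inside) > r2_min

# Inside area dominated by hue bin 3, which is rare outside: trackable.
print(is_trackable([3] * 80 + [1] * 20, [1] * 90 + [3] * 10))  # True
```

If the area's dominant attributes also occur widely outside it, the second ratio stays low and the area is judged untrackable.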
20100303296MONITORING CAMERA SYSTEM, MONITORING CAMERA, AND MONITORING CAMERA CONTROL APPARATUS - A system includes a plurality of image capturing units configured to capture an object image to generate video data, a video coding unit configured to code each of the generated video data, a measurement unit configured to measure a recognition degree representing a feature of the object from each of the generated video data, and a control unit configured to control the video coding unit to code each of the video data based on the measured recognition degree.12-02-2010
20080253614METHOD AND APPARATUS FOR DISTRIBUTED ANALYSIS OF IMAGES - A method and apparatus for intelligent distributed analyses of images including capturing the images and analyzing the captured images, where feature information is extracted from the captured images. The extracted feature information is used in determining whether a predefined condition is met, and the extracted feature information is transmitted for further analysis when the predefined condition is met. The extracted feature information is stored and is used to generate statistical information related to the extracted feature information. Further, additional feature information is provided from other databases to implement further analysis including an event detection or recognition. Accordingly, distributed intelligent analyses of images are provided for analyzing captured images to efficiently and effectively implement event detection or recognition.10-16-2008
20130163816Prioritized Contact Transport Stream - A detection process, contact recognition process, classification process, and identification process are applied to raw sensor data to produce an identified contact record set containing one or more identified contact records. A prioritization process is applied to the identified contact record set to assign a contact priority to each contact record in the identified contact record set. Data are removed from the contact records in the identified contact record set based on the contact priorities assigned to those contact records. A first contact stream is produced from the resulting contact records. The first contact stream is streamed in a contact transport stream. The contact transport stream may include and stream additional contact streams. The contact transport stream may be varied dynamically over time based on parameters such as available bandwidth, contact priority, presence/absence of contacts, system state, and configuration parameters.06-27-2013
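The priority-based trimming step might look like the following sketch, where whole contact records are admitted highest-priority-first until a bandwidth budget is exhausted; the tuple layout, the budget model, and all names are assumptions for illustration:

```python
def build_stream(contacts, budget):
    """Select the highest-priority contact records that fit a bandwidth budget.

    contacts: (priority, size, record_id) tuples; a lower number means a
    higher priority. Records that do not fit are dropped from this stream.
    """
    chosen, used = [], 0
    for prio, size, rid in sorted(contacts):
        if used + size <= budget:
            chosen.append(rid)
            used += size
    return chosen

contacts = [(2, 40, "b"), (1, 30, "a"), (3, 50, "c")]
print(build_stream(contacts, 80))  # ['a', 'b']
```

Re-running the selection as bandwidth, priorities, or the contact set change is one way the transport stream could be varied dynamically over time.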
201301638153D RECONSTRUCTION OF TRAJECTORY - Disclosed is a method of determining a 3D trajectory of an object from at least two observed trajectories of the object in a scene. The observed trajectories are captured in a series of images by at least one camera, each of the images in the series being associated with a pose of the camera. First and second points of the object are selected from separate parallel planes of the scene. A first set of 2D capture locations corresponding to the first point and a second set of 2D capture locations corresponding to the second point are used to determine an approximated 3D trajectory of the object.06-27-2013
20100260378SYSTEM AND METHOD FOR DETECTING THE CONTOUR OF AN OBJECT ON A MOVING CONVEYOR BELT - A system for detecting the contour of an object situated on a surface includes an image acquisition assembly, wherein there is relative motion between the image acquisition assembly and the object. The image acquisition assembly includes a line detector, operable for scanning the surface line by line by virtue of the relative motion. Each line is scanned during a scan cycle, the line being transverse to the direction of the relative motion. A light source is operable for emitting light toward the line detector during active periods between idle periods, such that during each of the active periods the light is emitted for at least one cycle synchronized with the scan cycle, allowing the line detector to acquire a first group of at least one lit scan line. During each of the idle periods lasting for at least another cycle synchronized with the scan cycle, no light is emitted, allowing the line detector to acquire a second group of at least one unlit scan line. The object passes between the line detector and the light source by virtue of the relative motion. A processor is coupled with the image acquisition assembly and receives and analyzes scan lines acquired by the line detector. For each of the first group of at least one lit scan line and a successive one of the second group of at least one unlit scan line, the processor identifies a token pattern consisting of a lit segment of the first group adjoining an unlit segment of the second group. The processor searches along the first group and the successive second group for locations where the token pattern ends or reappears, thereby defining edges of the object, and combines the defined edges to produce a contour of the object.10-14-2010
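The lit/unlit comparison can be illustrated with a minimal sketch: where the object blocks the backlight, a lit scan line stays close to the adjacent unlit one, and the transitions between blocked and unblocked runs mark edge positions. The intensity values and threshold below are made up for illustration:

```python
def object_edges(lit, unlit, thresh=50):
    """Find edge positions along one scan-line pair (illustrative sketch).

    lit/unlit: per-pixel intensities with the backlight on and off. Where the
    object blocks the backlight, the lit reading stays close to the unlit one.
    """
    blocked = [l - u < thresh for l, u in zip(lit, unlit)]
    # Edges sit at transitions between blocked and unblocked runs.
    return [i for i in range(1, len(blocked)) if blocked[i] != blocked[i - 1]]

lit   = [200, 200, 30, 30, 30, 200, 200]   # shadow of the object in the middle
unlit = [10] * 7                           # ambient-only reference line
print(object_edges(lit, unlit))            # [2, 5]
```

Repeating this per scan cycle and stitching the per-line edges together, as the abstract describes, yields the object's contour.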
20100296704SYSTEM AND METHOD FOR ANALYZING VIDEO FROM NON-STATIC CAMERA - A novel system and method of treating the output of moving cameras, in particular ones that enable the application of conventional “static camera” algorithms, e.g., to enable the continuous vigilance of computer surveillance technology to be applied to moving cameras that cover a wide area. According to the invention, a single camera is deployed to cover an area that might require many static cameras and a corresponding number of processing units. A novel system for processing the main video sufficiently enables long-term change detection, particularly the observation that a static object has been moved or has appeared, for instance detecting the parking and departure of vehicles in a parking lot, the arrival of trains in stations, delivery of goods, arrival and dispersal of people, or any other application.11-25-2010
20100260380DEVICE FOR OPTICALLY MEASURING AND/OR TESTING OBLONG PRODUCTS - A device for optically measuring and/or testing oblong products moving in a longitudinal direction. The device includes a plurality of cameras arranged in a plane perpendicular to the longitudinal direction, and distributed around the longitudinal direction. Each of the cameras has a fixed focus. The device further includes a displacing device adapted to displace each of the cameras simultaneously and jointly over the same distance toward the surface of the oblong product to focus on the oblong product, wherein the device defines a center that is located in the plane.10-14-2010
20100260379Image Processing Apparatus And Image Sensing Apparatus - A tracking process portion includes a search area setting portion for setting a search area in the input image, an image analysis portion for analyzing an image in the search area, an auxiliary track value setting portion for setting an auxiliary track value based on a result of the analysis, a track value setting portion for setting a track value based on a result of the analysis and deciding whether the set track value is correct or not, and a track target detection portion for detecting a track object from the image in the search area based on the track value. If the set track value is incorrect, the track value setting portion performs a switching operation that adopts the auxiliary track value as the track value.10-14-2010
20100260377MOBILE DETECTOR, MOBILE DETECTING PROGRAM, AND MOBILE DETECTING METHOD - When a mobile is detected using an imaging device installed in a mobile, the image of a partial area is conventionally enlarged or reduced according to the variation in distance to the detection target mobile and then compared at a fixed scale, which increases the computation cost. In order to eliminate the need for an enlargement/reduction processing or a deformation correction processing every time collation is performed, an input image is converted into a virtual plane image in which the size and shape of a detection target mobile on the image do not vary with the distance between the mobiles. Using a pair of virtual plane images obtained at two different times, corresponding points are established and the mobile is detected based on the displacement of the corresponding points.10-14-2010
20100260376MAPPER COMPONENT FOR MULTIPLE ART NETWORKS IN A VIDEO ANALYSIS SYSTEM - Techniques are disclosed for detecting the occurrence of unusual events in a sequence of video frames. Importantly, what is determined as unusual need not be defined in advance, but can be determined over time by observing a stream of primitive events and a stream of context events. A mapper component may be configured to parse the event streams and supply input data sets to multiple adaptive resonance theory (ART) networks. Each individual ART network may generate clusters from the set of input data supplied to that ART network. Each cluster represents an observed statistical distribution of a particular thing or event observed by that ART network.10-14-2010
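The per-network clustering step can be illustrated with a toy online clusterer in the spirit of an ART network: a point joins the nearest existing cluster centre if it lies within a vigilance radius, otherwise it seeds a new cluster. This is a loose sketch under that assumption, not a faithful ART implementation:

```python
def art_cluster(points, vigilance=2.0):
    """Assign each 2D point a cluster label online (ART-flavoured sketch).

    A point within `vigilance` of an existing centre joins that cluster;
    otherwise it becomes the centre of a new cluster. Centres are not
    updated after creation, to keep the sketch minimal.
    """
    centres, labels = [], []
    for x, y in points:
        dists = [((x - cx) ** 2 + (y - cy) ** 2) ** 0.5 for cx, cy in centres]
        if dists and min(dists) <= vigilance:
            labels.append(dists.index(min(dists)))
        else:
            centres.append((x, y))
            labels.append(len(centres) - 1)
    return labels

print(art_cluster([(0, 0), (1, 0), (10, 10), (10, 11)]))  # [0, 0, 1, 1]
```

Clusters that keep absorbing inputs come to represent the statistical distribution of what that network observes, which is the role the abstract assigns to each ART network.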
20110249867DETECTION OF OBJECTS IN DIGITAL IMAGES - A system and method to detect objects in a digital image. At least one image representing at least one frame of a video sequence is received. A given color channel of the image is extracted. At least one blob that stands out from a background of the given color channel is identified. One or more features are extracted from the blob. The one or more features are provided to a plurality of pre-learned object models each including a set of pre-defined features associated with a pre-defined blob type. The one or more features are compared to the set of pre-defined features. The blob is determined to be of a type that substantially matches a pre-defined blob type associated with one of the pre-learned object models. At least a location of an object is visually indicated within the image that corresponds to the blob.10-13-2011
20100098294METHOD AND APPARATUS FOR DETECTING LANE - A method and an apparatus for detecting a lane are disclosed. The lane detecting apparatus includes: a region of ID setup setting a region of ID including a road region of a current lane in an acquired image; a road sign verifier verifying existence of a road sign within the set region of ID; an ROI setup calculating a difference value between a lane prediction result and previous lane information when there exists a road sign and setting an ROI based on the calculated difference value; and a lane detector detecting a lane by extracting lane markings based on the set ROI. Accordingly, a lane can be more accurately detected even in a road environment including a road sign by removing the road sign to extract only necessary lane markings.04-22-2010
20120033853INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, AND STORAGE MEDIUM - The present invention refers to an information processing apparatus comprising: an obtaining unit adapted to obtain an image of an object; a face region detection unit adapted to detect a face region of the object from the image; an eye region detection unit adapted to detect an eye region of the object; a generation unit adapted to generate a high-resolution image and low-resolution image of the face region detected by the face region detection unit; a first extraction unit adapted to extract a first feature amount indicating a direction of a face existing in the face region from the low-resolution image; a second extraction unit adapted to extract a second feature amount indicating a direction of an eye existing in the eye region from the high-resolution image; and an estimation unit adapted to estimate a gaze direction of the object from the first feature amount and the second feature amount.02-09-2012
20110158475Position Measuring Method And Position Measuring Instrument - The present invention provides a position measuring instrument, comprising a GPS position detecting device 06-30-2011
20100061591OBJECT RECOGNITION DEVICE - An object recognition device detects a position of a vehicle based on a running path obtained by GPS, vehicle speed, steering angle, etc., and also detects the position of the vehicle based on a result of recognition of an object obtained using a captured image of a camera. The device computes a positioning accuracy in detecting the vehicle position, an accuracy that generally deteriorates as the movement distance of the vehicle increases.03-11-2010
20100034424POINTING SYSTEM FOR LASER DESIGNATOR - A system for illuminating an object of interest includes a platform and a gimbaled sensor associated with an illuminator. The gimbaled sensor provides sensor data corresponding to a sensed condition associated with an area. The gimbaled sensor is configured to be articulated with respect to the platform. A first transceiver transceives communications to and from a ground control system. The ground system includes an operator control unit allowing a user to select and transmit to the first transceiver at least one image feature corresponding to the object of interest. An optical transmitter is configured to emit a signal operable to illuminate a portion of the sensed area proximal to the object of interest. A correction subsystem is configured to determine an illuminated-portion-to-object-of-interest error and, in response to the error determination, cause the signal to illuminate the object of interest.02-11-2010
20110150281METHOD AND DEVICE FOR DETERMINING THE ORIENTATION OF A CROSS-WOUND BOBBIN TUBE - A method and device for determining the orientation of a cross-wound bobbin tube (06-23-2011
20110150280SUBJECT TRACKING APPARATUS, SUBJECT REGION EXTRACTION APPARATUS, AND CONTROL METHODS THEREFOR - A subject tracking apparatus which performs subject tracking based on the degree of correlation between a reference image and an input image is disclosed. The degree of correlation between each of a plurality of reference images based on images input at different times, and the input image is obtained. If the maximum degree of correlation between a reference image based on a first input image among the plurality of reference images and the input image is equal to or higher than a threshold, a region with a maximum degree of correlation with a first reference image is determined as a subject region. Otherwise, a region with a maximum degree of correlation with a reference image based on an image input later than the first input image is determined as a subject region.06-23-2011
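The fallback logic between the older and newer reference images might be sketched as below; the threshold, tuple layout, and names are assumptions for illustration:

```python
def pick_subject(corr_first, corr_later, pos_first, pos_later, thresh=0.8):
    """Choose the subject region between two reference matches (sketch).

    Prefer the region matched against the oldest (first) reference image when
    its maximum correlation is still at or above the threshold; otherwise fall
    back to the region matched against the later reference image.
    """
    if corr_first >= thresh:
        return pos_first
    return pos_later

print(pick_subject(0.9, 0.95, (10, 20), (12, 22)))  # (10, 20)
print(pick_subject(0.5, 0.95, (10, 20), (12, 22)))  # (12, 22)
```

Preferring the oldest reference while it still matches guards against the template drifting onto the background, while the newer reference handles appearance change.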
20090022368MONITORING DEVICE, MONITORING METHOD, CONTROL DEVICE, CONTROL METHOD, AND PROGRAM - The present invention relates to a monitoring device, monitoring method, control device, control method, and program that use information on a face direction or gaze direction of a person to cause a device to perform processing in accordance with a movement or status of the person. A target detector 01-22-2009
20090022367THREE-DIMENSIONAL SHAPE DETECTING DEVICE AND THREE-DIMENSIONAL SHAPE DETECTING METHOD - A three-dimensional shape detection device which can detect a three-dimensional shape of an object to be picked up even in the case that an image pick-up part with a narrow dynamic range is used is disclosed. An image of an object to be picked up is picked up under a plurality of different exposure conditions in a state in which each of a plurality of kinds of patterned lights, alternately disposing bright and dark portions, is time-sequentially projected to the object to be picked up, and a plurality of brightness images are generated for the respective exposure conditions. Further, based on such a plurality of the brightness images, a coded image is formed for each exposure condition and a code edge position for a space code is obtained for every exposure condition. Based on a plurality of code edge positions for every exposure condition obtained in this manner, one code edge position for calculating a three-dimensional shape of the object to be picked up is determined such that the three-dimensional shape of the object to be picked up is calculated.01-22-2009
20090022366SYSTEM AND METHOD FOR ANALYZING VIDEO FROM NON-STATIC CAMERA - A novel system and method of treating the output of moving cameras, in particular ones that enable the application of conventional “static camera” algorithms, e.g., to enable the continuous vigilance of computer surveillance technology to be applied to moving cameras that cover a wide area. According to the invention, a single camera is deployed to cover an area that might require many static cameras and a corresponding number of processing units. A novel system for processing the main video sufficiently enables long-term change detection, particularly the observation that a static object has been moved or has appeared, for instance detecting the parking and departure of vehicles in a parking lot, the arrival of trains in stations, delivery of goods, arrival and dispersal of people, or any other application.01-22-2009
20090022364MULTI-POSE FACE TRACKING USING MULTIPLE APPEARANCE MODELS - A system and method are provided for tracking a face moving through multiple frames of a video sequence. A predicted position of a face in a video frame is obtained. Similarity matching for both a color model and an edge model are performed to derive correlation values for each about the predicted position. The correlation values are then combined to determine a best position and scale match to track a face in the video.01-22-2009
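Combining the two models' correlation values could be as simple as a convex combination over candidate positions; the weight and the candidate scores below are illustrative assumptions, not the patented scheme:

```python
def combined_score(color_corr, edge_corr, w_color=0.5):
    """Fuse colour-model and edge-model correlations into one match score
    (simple convex combination; the weight is an illustrative assumption)."""
    return w_color * color_corr + (1 - w_color) * edge_corr

# Candidate positions with hypothetical (colour, edge) correlation values.
candidates = {(5, 5): (0.9, 0.4), (8, 8): (0.6, 0.9)}
best = max(candidates, key=lambda p: combined_score(*candidates[p]))
print(best)  # (8, 8)
```

Searching the same way over scales as well as positions gives the best position-and-scale match the abstract describes.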
20090310822Feedback object detection method and system - A feedback object detection method and system. The system includes an object segmentation element, an object tracking element and an object prediction element. The object segmentation element extracts the object from an image according to prediction information of the object provided by the object prediction element. Then, the object tracking element tracks the extracted object to generate motion information of the object like moving speed and moving direction. The object prediction element generates the prediction information such as predicted position and predicted size of the object according to the motion information. The feedback of the prediction information to the object segmentation element facilitates accurately extracting foreground pixels from the image.12-17-2009
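The prediction fed back to the segmentation element could be as simple as a constant-velocity extrapolation from the last two tracked positions; this is an illustrative assumption, not the patented predictor:

```python
def predict(track):
    """Predict the next object position from the last two tracked positions,
    assuming constant velocity between frames (names are illustrative)."""
    (x0, y0), (x1, y1) = track[-2], track[-1]
    # Velocity per frame is (x1-x0, y1-y0); step forward once more.
    return (2 * x1 - x0, 2 * y1 - y0)

print(predict([(0, 0), (3, 4)]))  # (6, 8)
```

Feeding this predicted position (and similarly a predicted size) back into segmentation narrows the search region, which is what makes the foreground extraction more accurate.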
20110150282BACKGROUND IMAGE AND MASK ESTIMATION FOR ACCURATE SHIFT-ESTIMATION FOR VIDEO OBJECT DETECTION IN PRESENCE OF MISALIGNMENT - Disclosed herein are a method, system, and computer program product for aligning an input video frame from a video sequence with a background model associated with said video sequence. The background model includes a plurality of model blocks (06-23-2011
20110150276Identifying a characteristic of an individual utilizing facial recognition and providing a display for the individual - A method may include automatically remotely identifying at least one characteristic of an individual via facial recognition; and providing a display for the individual, the display having a content at least partially based on the identified at least one characteristic of the individual. A system may include a facial recognition module configured for automatically remotely identifying at least one characteristic of an individual via facial recognition; and a display module coupled with the facial recognition module, the display module configured for providing a display for the individual, the display having a content at least partially based on the identified at least one characteristic of the individual.06-23-2011
20090296986IMAGE PROCESSING DEVICE AND IMAGE PROCESSING METHOD AND PROGRAM - An image processing device includes: a tracking unit to track a predetermined point on an image as a tracking point, in accordance with an operation of a user; a display control unit to display tracking point candidates, which are greater in number than the objects moving on the image and fewer than the number of pixels of the image, on the image; and a setting unit to set the tracking point candidates as the tracking points on the next frame of the tracking unit, in accordance with an operation by a user.12-03-2009
20090252375Position Detection System, Position Detection Method, Program, Object Determination System and Object Determination Method - There is provided a position detection system including an imaging unit to capture an image of a projection plane of an electromagnetic wave, an electromagnetic wave emission unit to emit the electromagnetic wave to the projection plane, a control unit to control emission of the electromagnetic wave by the electromagnetic wave emission unit, and a position detection unit including a projected image detection section to detect a projected image of an object existing between the electromagnetic wave emission unit and the projection plane based on a difference between an image of the projection plane captured during emission of the electromagnetic wave by the electromagnetic wave emission unit and an image of the projection plane captured during no emission of the electromagnetic wave, and a position detection section to detect a position of the object based on a position of the projected image of the object.10-08-2009
20110150285LIGHT EMITTING DEVICE AND METHOD FOR TRACKING OBJECT - A technique and a light emitting device that can smoothly read out data while tracking a position of the light emitting device (an object). The light emitting device expresses data with “a change in the change of a color (switching of changes)”. The light emitting device specifies an object and the position thereof with a first primary change and thereafter expresses data with, so to speak, a secondary change (switching of the primary change). The primary change means that G and B alternately turn on (indicated by G*B) and so on. The secondary change means a change from the condition (G*B), in which G and B alternately turn on, to the condition (B*R) in which B and R alternately turn on. Thus, since data is expressed by the change of color condition changes, it is easier to freely express data while the position of an object is specified.06-23-2011
20090103780Hand-Gesture Recognition Method - One embodiment of the invention includes a method of providing device inputs. The method includes illuminating hand gestures performed via a bare hand of a user in a foreground of a background surface with at least one infrared (IR) light source. The method also includes generating a first plurality of silhouette images associated with the bare hand based on an IR light contrast between the bare hand and the background surface and generating a second plurality of silhouette images associated with the bare hand based on an IR light contrast between the bare hand and the background surface. The method also includes determining a plurality of three-dimensional features of the bare hand relative to the background surface based on a parallax separation of the bare hand in the first plurality of silhouette images relative to the second plurality of silhouette images. The method also includes determining a provided input gesture based on the plurality of three-dimensional features of the bare hand and comparing the provided input gesture with a plurality of predefined gesture inputs in a gesture library. The method further includes providing at least one device input corresponding to interaction with displayed visual content based on the provided input gesture corresponding to one of the plurality of predefined gesture inputs.04-23-2009
20090147993HEAD-TRACKING SYSTEM - A head-tracking system and a method for operating a head-tracking system in which a stationary reference point is detected are provided. A detector for detecting the position of a head is calibrated based on the detected stationary reference point. In one example implementation, the detection of the stationary reference point is used to determine the position of the head.06-11-2009
20110158473DETECTING METHOD FOR DETECTING MOTION DIRECTION OF PORTABLE ELECTRONIC DEVICE - A detecting method is provided for detecting the motion direction of a portable electronic device. The portable electronic device senses a plurality of continuous images in time sequence via an image sensing unit. The differences among the plurality of images are analyzed by a process unit. Consequently, the process unit determines the motion direction of the portable electronic device, generates motion data based on the differences, and sends a control signal corresponding to the motion direction of the device and the motion data.06-30-2011
20090110238Automatic correlation modeling of an internal target - A method and apparatus to automatically control the timing of an image acquisition by an imaging system in developing a correlation model of movement of a target within a patient.04-30-2009
20090080702Method for the recognition of obstacles - A method is provided for the recognition of an obstacle, in particular a pedestrian, located in the travel path of a movable carrier such as in particular a motor vehicle, in the environment in the range of view of an optical sensor attached to the movable carrier, wherein a first image is taken by means of the optical sensor at a first time and a second image is taken at a later second time, wherein a first transformed lower part image is generated by a projection of an image section of the first taken image lying below the horizon from the image plane of the optical sensor into the ground plane, wherein a first transformed upper part image is generated by a projection of an image section of the first taken image lying above the horizon from the image plane of the optical sensor into a virtual plane parallel to the ground plane, wherein a second transformed lower part image is generated by a projection of an image section of the second taken image lying below the horizon from the image plane of the optical sensor into the ground plane, wherein a second transformed upper part image is generated by a projection of an image section of the second taken image lying above the horizon from the image plane of the optical sensor into a virtual plane parallel to the ground plane, wherein a lower difference part image is determined from the first and second transformed lower part images, an upper difference part image is determined from the first and second upper part images and it is determined by evaluation of the lower difference part image and of the upper difference part image whether an obstacle is located in the travel path of the movable carrier.03-26-2009
20090052738SYSTEM AND METHOD FOR COUNTING FOLLICULAR UNITS - A system and method for counting follicular units using an automated system comprises acquiring an image of a body surface having skin and follicular units, filtering the image to remove skin components in the image, processing the resulting image to segment it, and filtering noise to eliminate all elements other than hair follicles of interest so that hair follicles in an area of interest can be counted. The system may comprise an image acquisition device and an image processor for performing the method. In another aspect, the system and method also classifies the follicular units based on the number of hairs in the follicular unit.02-26-2009
20110176708Task-Based Imaging Systems - A task-based imaging system for obtaining data regarding a scene for use in a task includes an image data capturing arrangement for (a) imaging a wavefront of electromagnetic energy from the scene to an intermediate image over a range of spatial frequencies, (b) modifying phase of the wavefront, (c) detecting the intermediate image, and (d) generating image data over the range of spatial frequencies. The task-based imaging system also includes an image data processing arrangement for processing the image data and performing the task. The image data capturing and image data processing arrangements cooperate so that signal-to-noise ratio (SNR) of the task-based imaging system is greater than SNR of the task-based imaging system without phase modification of the wavefront over the range of spatial frequencies.07-21-2011
20110158476ROBOT AND METHOD FOR RECOGNIZING HUMAN FACES AND GESTURES THEREOF - A robot and a method for recognizing human faces and gestures are provided, and the method is applicable to a robot. In the method, a plurality of face regions within an image sequence captured by the robot are processed by a first classifier, so as to locate a current position of a specific user from the face regions. Changes of the current position of the specific user are tracked to move the robot accordingly. While the current position of the specific user is tracked, a gesture feature of the specific user is extracted by analyzing the image sequence. An operating instruction corresponding to the gesture feature is recognized by processing the gesture feature through a second classifier, and the robot is controlled to execute a relevant action according to the operating instruction.06-30-2011
20080317281MEDICAL MARKER TRACKING WITH MARKER PROPERTY DETERMINATION - A method for tracking at least one medical marker is provided, wherein actual properties of the at least one marker are compared with nominal properties of the at least one marker. A basis for subsequent use of information obtained from the at least one marker is formed based on the comparison.12-25-2008
20080267453METHOD FOR ESTIMATING THE POSE OF A PTZ CAMERA - Provided is an iterative method of estimating the pose of a moving PTZ camera. The first step is to use an image registration method on a reference image and a current image to calculate a matrix that estimates the motion of sets of points corresponding to the same object in both images. Information about the absolute camera pose, embedded in the matrix obtained in the first step, is used to simultaneously recalculate both the starting positions in the reference image and the motion estimate. The recalculated starting positions and motion estimate are used to determine the pose of the camera in the current image. The current image is taken as a new reference image, a new current image is selected and the process is repeated in order to determine the pose of the camera in the new current image. The entire process is repeated until the camera stops moving.10-30-2008
20110255745IMAGE ANALYSIS PLATFORM FOR IDENTIFYING ARTIFACTS IN SAMPLES AND LABORATORY CONSUMABLES - A High-resolution Image Acquisition and Processing Instrument (HIAPI) performs at least five simultaneous measurements in a noninvasive fashion, namely: (a) determining the volume of a liquid sample in wells (or microtubes) containing liquid sample; (b) detection of precipitate, objects or artifacts within microtiter plate wells; (c) classification of colored samples in microtiter plate wells or microtubes; (d) determination of contaminant (e.g. water concentration); (e) air bubbles; (f) problems with the actual plate. Remediation of contaminant is also possible.10-20-2011
20110255743OBJECT RECOGNITION USING HAAR FEATURES AND HISTOGRAMS OF ORIENTED GRADIENTS - A system and method to detect objects in a digital image. At least one image representing at least one frame of a video sequence is received. A sliding window of different window sizes is placed at different locations in the image. A cascaded classifier including a plurality of increasingly accurate layers is applied at each window size and each location. Each layer includes a plurality of classifiers. The area of the image within the current sliding window is evaluated using one or more weak classifiers in the plurality of classifiers based on at least one of Haar features and Histograms of Oriented Gradients features. The output of each weak classifier is a weak decision as to whether the area of the image includes an instance of an object of the desired object type. The locations of zero or more image areas associated with the desired object type are identified.10-20-2011
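The early-rejecting cascade this abstract describes can be sketched minimally: cheap layers run first and discard most windows, and only survivors reach the more accurate layers. The weak classifiers below are toy stand-ins for Haar/HOG feature tests, and the layer thresholds are invented for illustration:

```python
def run_cascade(window, layers):
    """layers: list of (weak_classifiers, threshold). Each weak classifier
    maps a window to a score; a window must pass every layer in turn."""
    for weak_classifiers, threshold in layers:
        score = sum(clf(window) for clf in weak_classifiers)
        if score < threshold:
            return False          # early rejection: cheap layers run first
    return True                   # survived all layers: report a detection

# Illustrative weak classifiers on a 1-D "window" of pixel values
# (stand-ins for Haar rectangle differences / HOG bin tests):
mean_bright = lambda w: 1.0 if sum(w) / len(w) > 0.5 else 0.0
left_right  = lambda w: 1.0 if sum(w[: len(w) // 2]) > sum(w[len(w) // 2 :]) else 0.0

layers = [([mean_bright], 1.0),               # fast, coarse layer
          ([mean_bright, left_right], 2.0)]   # slower, more accurate layer

print(run_cascade([0.9, 0.8, 0.3, 0.2], layers))  # passes both layers
print(run_cascade([0.1, 0.1, 0.1, 0.1], layers))  # rejected by the first layer
```

In a full detector this loop would run once per window position and size, which is why the early-rejection ordering dominates the runtime.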
20080253611Analyst cueing in guided data extraction - The Analyst Cueing method addresses the issues of locating desired targets of interest from among very large datasets in a timely and efficient manner. The combination of computer aided methods for classifying targets and cueing a prioritized list for an analyst produces a robust system for generalized human-guided data mining. Incorporating analyst feedback adaptively trains the computerized portion of the system in the identification and labeling of targets and regions of interest. This system dramatically improves analyst efficiency and effectiveness in processing data captured from a wide range of deployed sensor types.10-16-2008
20080253612Method and an Arrangement for Locating and Picking Up Objects From a Carrier - The invention relates to a method for locating and picking up objects that are placed on a carrier. A scanning operation is performed over the carrier. The scanning is performed by a line laser scanner whose results are used to generate a virtual surface that represents the area that has been scanned. The virtual surface is compared to a pre-defined virtual object corresponding to an object to be picked from the carrier, whereby a part of the virtual surface that matches the pre-defined virtual object is identified. A robot arm is then caused to move to a location corresponding to the identified part of the virtual surface and pick up an object from the carrier at this location.10-16-2008
20080253613System and Method for Cooperative Remote Vehicle Behavior - A method for facilitating cooperation between humans and remote vehicles comprises creating image data, detecting humans within the image data, extracting gesture information from the image data, mapping the gesture information to a remote vehicle behavior, and activating the remote vehicle behavior. Alternatively, voice commands can be used to activate the remote vehicle behavior.10-16-2008
20080253609TRACKING WORKFLOW IN MANIPULATING MEDIA ITEMS - A computer-implemented method is described including receiving input specifying an image frame from among a series of image frames, and automatically detecting one or more points in the specified image frame that would be suitable for tracking a point in the series of image frames. In addition, a computer-implemented method is described including choosing a first position of a point on a first image frame of a plurality of image frames, and displaying in a bounded region on the first image frame content relating to a second image frame of the plurality of image frames, wherein the content displayed in the bounded region includes a second position of the point at a different time than the first position of the point.10-16-2008
20080253610Three dimensional shape reconstitution device and estimation device - A face model providing portion provides a stored average face model to an estimation portion estimating an affine parameter for obtaining a head pose. An individual face model learning portion obtains a result of tracking feature points by the estimation portion and learns an individual face model. The individual face model learning portion terminates the learning when the free energy of the individual face model exceeds the free energy of the average face model, and switches the face model provided to the estimation portion from the average face model to the individual face model. While learning the individual face model, an observation matrix is factorized using a reliability matrix showing the reliability of each observation value forming the observation matrix, with emphasis on the feature points having higher reliability.10-16-2008
20080205703Methods and Apparatus for Automatically Tracking Moving Entities Entering and Exiting a Specified Region - Techniques for tracking entities using a single overhead camera are provided. A foreground region is detected in a video frame of the single overhead camera corresponding to one or more entities. It is determined if the foreground region is associated with an existing tracker. It is determined whether the detected foreground region is the result of at least one of a merger of two or more smaller foreground regions having corresponding existing trackers and a split of a larger foreground region having a corresponding existing tracker when the detected foreground region is not associated with an existing tracker. The detected foreground region is tracked via at least one existing tracker when the foreground region is associated with an existing tracker or the foreground region is the result of at least one of a merger and a split.08-28-2008
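The association test at the heart of this abstract — is a detected foreground region already tracked, the merger of several tracked regions, or something new — can be sketched with bounding-box overlap. Boxes are `(x1, y1, x2, y2)` tuples; the overlap rule and the omission of the split case are illustrative simplifications, not the patented logic:

```python
def overlaps(a, b):
    """Axis-aligned boxes (x1, y1, x2, y2) intersect with nonzero area."""
    return a[0] < b[2] and b[0] < a[2] and a[1] < b[3] and b[1] < a[3]

def classify_region(region, trackers):
    """Associate a detected foreground region with existing trackers."""
    hits = [t for t in trackers if overlaps(region, t)]
    if len(hits) == 1:
        return "tracked"   # region belongs to exactly one existing tracker
    if len(hits) >= 2:
        return "merger"    # two or more trackers fell into one blob
    return "new"           # no tracker matches: start a fresh one

trackers = [(0, 0, 10, 10), (12, 0, 20, 10)]
print(classify_region((2, 2, 8, 8), trackers))     # inside the first tracker
print(classify_region((5, 0, 15, 10), trackers))   # spans both: a merger
print(classify_region((30, 30, 40, 40), trackers)) # matches nothing: new
```

The symmetric split case (one tracker overlapping several small unmatched regions) would be the same test run in the opposite direction.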
20110164788METHOD AND DEVICE FOR DETERMINING LEAN ANGLE OF BODY AND POSE ESTIMATION METHOD AND DEVICE - Provided are a method and device for determining a lean angle of a body and a pose estimation method and device. The method for determining a lean angle of a body of the present invention includes: a head-position obtaining step for obtaining a position of a head; a search region determination step for determining a plurality of search regions spaced at angles around the head; an energy function calculating step for calculating a value of an energy function for each search region; and a lean angle determining step for determining the lean angle of the search region with the largest or smallest value of the energy function as the lean angle of the body. The pose estimation method of the present invention includes a body lean-angle obtaining step for obtaining a lean angle of a body, and a pose estimation step for performing a pose estimation based on the lean angle of the body.07-07-2011
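The angle search above can be sketched as an argmax over candidate directions fanned out from the head. The energy function used here (sum of foreground pixels along a ray) is an illustrative stand-in; the patent does not specify its energy function:

```python
import math

def region_energy(mask, head, angle, length=6):
    """Assumed energy: count foreground pixels along a ray from the head."""
    hx, hy = head
    total = 0
    for step in range(1, length + 1):
        x = int(round(hx + step * math.cos(angle)))
        y = int(round(hy + step * math.sin(angle)))
        if 0 <= y < len(mask) and 0 <= x < len(mask[0]):
            total += mask[y][x]
    return total

def lean_angle(mask, head, candidates):
    """Lean angle = candidate angle whose search region maximizes the energy."""
    return max(candidates, key=lambda a: region_energy(mask, head, a))

# Synthetic body mask: a vertical column of foreground pixels below the head.
mask = [[0] * 11 for _ in range(11)]
for y in range(2, 9):
    mask[y][5] = 1
head = (5, 2)
candidates = [math.pi / 2, math.pi / 4, 0.0]   # down, diagonal, right
print(lean_angle(mask, head, candidates))      # the straight-down ray wins
```

A finer angular grid and a smarter energy (e.g. silhouette fit) would refine this, but the select-the-extremal-region structure is the same.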
20110164787METHOD AND SYSTEM FOR APPLYING COSMETIC AND/OR ACCESSORIAL ENHANCEMENTS TO DIGITAL IMAGES - A method for creating a virtual makeover includes inputting an initial digital image into, and initiating a virtual makeover at, a local processor. Instructions are transmitted from the main server to the local processor. Positions of facial features are isolated within the digital image at the local processor. Facial regions within the digital image are defined based on the positions of the facial features at the local processor. After receiving input, cosmetic or accessorial enhancements are applied to the digital image at the local processor. A final digital image is generated including the enhancements. The final digital image is then displayed. At least the defining, applying, and generating steps include instructions written in a non-flash format for execution in a flash-based wrapper.07-07-2011
20110164785TUNABLE WAVELET TARGET EXTRACTION PREPROCESSOR SYSTEM - The present invention is a target tracking system for enhanced target identification, target acquisition and track performance that is significantly superior to other methods. Specifically, the target tracking system incorporates an intelligent Tunable Wavelet Target Extraction Preprocessor (TWTEP). The TWTEP, which defines target characteristics in the presence of noise and clutter, 1) enhances and augments the target within the video scene to provide a better tracking source for the externally provided Track Process, 2) implements a tunable target definition from the video image to provide a highly resolved target delineation and selection, 3) utilizes a weighted pseudo-covariance technique to define the target area for shape determination and extraction, 4) implements a target definition and extraction process, and 5) defines methodologies for presentation of filtered video and images for external processing.07-07-2011
20120063643Methods, Systems, and Products for Gesture-Activation - Methods, systems, and products are disclosed for recognizing gestures. A sequence of images is captured by a camera and compared to a stored sequence of images in memory. A gesture is then recognized in the stored sequence of images.03-15-2012
20120063641SYSTEMS AND METHODS FOR DETECTING ANOMALIES FROM DATA - The present disclosure concerns methods and/or systems for processing, detecting and/or notifying of the presence of anomalies or infrequent events from data. Some of the disclosed methods and/or systems may be used on large-scale data sets. Certain applications are directed to analyzing sensor surveillance records to identify aberrant behavior. The sensor data may be from a number of sensor types including video and/or audio. Certain applications are directed to methods and/or systems that use compressive sensing. Certain applications may be performed in substantially real time.03-15-2012
20100215213TARGETING METHOD, TARGETING DEVICE, COMPUTER READABLE MEDIUM AND PROGRAM ELEMENT - This invention introduces a fast and effective target approach planning method, preferably for needle-guided percutaneous interventions using a rotational X-ray device. According to an exemplary embodiment, a targeting method for targeting a first object in an object under examination is provided, wherein the method comprises selecting a first two-dimensional image of a three-dimensional data volume representing the object under examination, determining a target point in the first two-dimensional image, and displaying an image of the three-dimensional data volume with the selected target point. Furthermore, the method comprises positioning the said image of the three-dimensional data volume by scrolling and/or rotating such that a suitable path of approach crossing the target point has a first direction parallel to an actual viewing direction of the said image of the three-dimensional data volume, and generating a second two-dimensional image out of the three-dimensional data volume, wherein a normal of the plane of the second two-dimensional image is oriented parallel to the first direction and crosses the target point.08-26-2010
20100284565Method and apparatus for fingerprint motion tracking using an in-line array - A fingerprint motion tracking method and system is provided for sensing features of a fingerprint along an axis of finger motion, where a linear sensor array has a plurality of substantially contiguous sensing elements configured to capture substantially contiguous overlapping segments of image data. A processing element is configured to receive segments of image data captured by the linear sensor array and to generate fingerprint motion data. Multiple sensor arrays may be included for generating directional data. The motion tracking data may be used in conjunction with a fingerprint image sensor to reconstruct a fingerprint image using the motion data either alone or together with the directional data.11-11-2010
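The shift-estimation step behind this kind of motion tracking — how far did the finger move between two overlapping segment captures — can be sketched as a search for the lag with the highest normalized correlation. The segment lengths, the ±8-sample search range, and the synthetic test signal are illustrative assumptions:

```python
import numpy as np

def estimate_shift(prev_seg, curr_seg, max_shift=8):
    """Return the integer lag aligning curr_seg with prev_seg, chosen by
    maximizing the normalized cross-correlation over candidate lags."""
    prev = np.asarray(prev_seg, dtype=float)
    curr = np.asarray(curr_seg, dtype=float)
    best_shift, best_score = 0, -np.inf
    for s in range(-max_shift, max_shift + 1):
        if s >= 0:
            a, b = prev[s:], curr[:len(curr) - s]
        else:
            a, b = prev[:len(prev) + s], curr[-s:]
        n = min(len(a), len(b))
        a, b = a[:n] - a[:n].mean(), b[:n] - b[:n].mean()
        denom = np.linalg.norm(a) * np.linalg.norm(b)
        if denom > 0 and np.dot(a, b) / denom > best_score:
            best_score, best_shift = np.dot(a, b) / denom, s
    return best_shift

# Two overlapping segments of the same synthetic ridge signal, offset by 3.
rng = np.random.default_rng(0)
signal = rng.standard_normal(64)
shift = estimate_shift(signal[:32], signal[3:35])
print(shift)  # the second segment starts 3 samples into the first
```

Running the same estimator on a second, perpendicular sensor array would give the directional data the abstract mentions.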
20100284566PICTURE DATA MANAGEMENT APPARATUS AND PICTURE DATA MANAGEMENT METHOD - A landmark used as a key for organizing images captured by, e.g., a digital camera is adequately selected. An association degree adding section (11-11-2010
20100284569LANE RECOGNITION SYSTEM, LANE RECOGNITION METHOD, AND LANE RECOGNITION PROGRAM - To provide a lane recognition system which can improve the lane recognition accuracy by suppressing noises that are likely to be generated respectively in an original image and a bird's-eye image. The lane recognition system recognizes a lane based on an image. The system includes: a synthesized bird's-eye image creation module which creates a synthesized bird's-eye image by connecting a plurality of bird's-eye images that are obtained by transforming respective partial regions of original images picked up at a plurality of different times into bird's-eye images; a lane line candidate extraction module which detects a lane line candidate by using information of the original images or the bird's-eye images created from the original images, and the synthesized bird's-eye image; and a lane line position estimation module which estimates a lane line position based on information of the lane line candidate.11-11-2010
20100284568OBJECT RECOGNITION APPARATUS AND OBJECT RECOGNITION METHOD - An object recognition apparatus recognizes an object from video data generated by a camera over a predetermined time period, analyzes the recognition result, and determines a minimum size and moving speed of the faces recognized in the received frame images. Then, the object recognition apparatus determines a lower limit value of the frame rate and resolution from the determined minimum size and moving speed of the faces.11-11-2010
20100246888IMAGING APPARATUS, IMAGING METHOD AND COMPUTER PROGRAM FOR DETERMINING AN IMAGE OF A REGION OF INTEREST - The present invention relates to an imaging apparatus for determining an image of a region of interest, wherein a motion generation unit (09-30-2010
20100246887METHOD AND APPARATUS FOR OBJECT TRACKING - There is described an apparatus and method for tracking objects in video. In particular, there is described a method and apparatus that improves the realism of the object in the captured scene. This improvement is effected by identifying a first and last frame in a video and subjecting the detected path of the object to a correcting function which improves the output positional data.09-30-2010
20100329512METHOD FOR REALTIME TARGET DETECTION BASED ON REDUCED COMPLEXITY HYPERSPECTRAL PROCESSING - There is provided a method for real-time target detection comprising detecting a preprocessed pixel as a target and/or a background, based on a library, and refining the library by extracting a sample from the target or the background.12-30-2010
20100329510METHOD AND DEVICE FOR DISPLAYING THE SURROUNDINGS OF A VEHICLE - In a method for displaying on a display device the surroundings of a vehicle, the surroundings are detected by at least one detection sensor as an image of the surroundings while the vehicle is traveling or at a standstill. A surroundings image from a given surrounding area is ascertained by the detection sensor in different vehicle positions, and/or at least one surroundings image from the given surrounding area is ascertained by each of at least two detection sensors situated at a distance from one another, and in each case a composite surroundings image is obtained from the surroundings images and displayed by the display device.12-30-2010
20100329509METHOD AND SYSTEM FOR GESTURE RECOGNITION - A method and a system for gesture recognition are provided for recognizing a gesture performed by a user in front of an electronic product having a video camera. In the present method, an image containing the upper body of the user is captured and a hand area in the image is obtained. The hand area is fully scanned by a first couple of concentric circles. During the scanning, a proportion of a number of skin color pixels on an inner circumference of the first couple of concentric circles and a proportion of a number of skin color pixels on an outer circumference of the first couple of concentric circles are used to determine a number of fingertips in the hand area. The gesture is recognized by the number of fingertips and an operation function of the electronic product is executed according to an operating instruction corresponding to the recognized gesture.12-30-2010
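The concentric-circle fingertip test this abstract describes — inner circumference mostly skin, outer circumference mostly not — can be sketched on a binary skin mask. The radii, thresholds, and synthetic masks below are assumptions for illustration, not the patent's exact rule:

```python
import math

def skin_ratio(mask, cx, cy, r, samples=64):
    """Proportion of skin pixels sampled on the circle of radius r."""
    hits = 0
    for k in range(samples):
        a = 2 * math.pi * k / samples
        x = int(round(cx + r * math.cos(a)))
        y = int(round(cy + r * math.sin(a)))
        if 0 <= y < len(mask) and 0 <= x < len(mask[0]) and mask[y][x]:
            hits += 1
    return hits / samples

def is_fingertip(mask, cx, cy, r_in=2, r_out=5):
    """Fingertip heuristic: the inner circle lies mostly on skin while the
    outer circle mostly misses it. Thresholds are assumed."""
    return skin_ratio(mask, cx, cy, r_in) > 0.8 and skin_ratio(mask, cx, cy, r_out) < 0.4

def disk_mask(size, cx, cy, radius):
    """Synthetic skin mask: a filled disk of 'skin' pixels."""
    return [[(x - cx) ** 2 + (y - cy) ** 2 <= radius ** 2 for x in range(size)]
            for y in range(size)]

tip = disk_mask(21, 10, 10, 3)    # narrow blob: reads as a fingertip
palm = disk_mask(21, 10, 10, 10)  # wide blob: both circles land on skin
print(is_fingertip(tip, 10, 10), is_fingertip(palm, 10, 10))
```

Scanning this test across the hand area and counting the positives gives the fingertip count that selects the gesture.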
20100246886MOVING OBJECT IMAGE TRACKING APPARATUS AND METHOD - An apparatus includes a first computation unit computing first angular velocity instruction values for driving first and second rotation units to track a moving object, using a detected tracking error and detected angles, when the moving object exists in a first range separated from the zenith by at least a preset distance; a second computation unit computing second angular velocity instruction values for driving the first and second rotation units to track the moving object and avoid a zenith singular point, using the detected angles, the detected tracking error and an estimated traveling direction; and a control unit controlling the first and second rotation units to eliminate differences between the first angular velocity instruction values and the angular velocities when the moving object exists in the first range, and controlling the first and second rotation units to eliminate differences between the second angular velocity instruction values and the angular velocities when the moving object exists in a second range within the preset distance from the zenith.09-30-2010
20100246885SYSTEM AND METHOD FOR MONITORING MOTION OBJECT - A motion object monitoring system captures images of monitored objects in a monitored area, and gives numbers to the monitored objects according to specific features of the monitored objects. The specific features of the monitored objects are obtained by detecting the captured images. Only one number for each of the monitored objects is stored, instead of repeatedly storing the numbers of the same motion objects. The motion object monitoring system analyzes the stored numbers, and displays an analysis result. The motion object monitoring system also determines a movement of each of the motion objects according to the corresponding numbers of the motion objects.09-30-2010
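The store-each-number-once bookkeeping above amounts to a feature-keyed map. The string features below stand in for whatever descriptors the detector extracts; the numbering scheme is an assumption:

```python
class MonitorLog:
    """Assign each distinct monitored object one number, stored once."""

    def __init__(self):
        self.numbers = {}      # feature -> assigned number
        self.next_number = 1

    def observe(self, feature):
        if feature not in self.numbers:   # store each object's number once
            self.numbers[feature] = self.next_number
            self.next_number += 1
        return self.numbers[feature]      # repeat sightings reuse the number

    def analyze(self):
        """Analysis result: how many distinct objects were seen."""
        return len(self.numbers)

log = MonitorLog()
sightings = ["red-car", "blue-car", "red-car", "pedestrian", "blue-car"]
ids = [log.observe(s) for s in sightings]
print(ids, log.analyze())  # repeated objects keep their original numbers
```

Tracking movement then reduces to recording, per number, where that object was seen over time.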
20100246884METHOD AND SYSTEM FOR DIAGNOSTICS SUPPORT - A method for displaying a diagnostic image acquires the diagnostic digital image and applies one or more pattern recognition algorithms to the acquired diagnostic digital image, detecting at least one feature within the acquired diagnostic digital image. At least a portion of the acquired diagnostic digital image displays with a marking at the location of the at least one detected feature. At least one detected feature displays under a first set of image display settings for a first interval, then under at least a second set of image display settings for a second interval.09-30-2010
20120121135POSITION AND ORIENTATION CALIBRATION METHOD AND APPARATUS - A position and orientation measuring apparatus calculates a difference between an image feature of a two-dimensional image of an object and a projected image of a three-dimensional model in a stored position and orientation of the object projected on the two-dimensional image. The position and orientation measuring apparatus further calculates a difference between three-dimensional coordinate information and a three-dimensional model in the stored position and orientation of the object. The position and orientation measuring apparatus then converts a dimension of the first difference and/or the second difference to cause the first difference and the second difference to have an equivalent dimension and corrects the stored position and orientation.05-17-2012
20120121134CONTROL APPARATUS, CONTROL METHOD, AND PROGRAM - The present invention relates to a control apparatus, a control method, and a program in which, when performing automatic image-recording, the frequency with which image-recording is performed can be changed so that the recording frequency can be suitably changed in accordance with, for example, a user's intention or the state of an imaging apparatus.05-17-2012
20120121133SYSTEM FOR DETECTING VARIATIONS IN THE FACE AND INTELLIGENT SYSTEM USING THE DETECTION OF VARIATIONS IN THE FACE - A face change detection system is provided, comprising an image input unit acquiring a plurality of input images, a face extraction unit extracting a face region of the input images, and a face change extraction unit detecting a face change in the input images by calculating an amount of change in the face region.05-17-2012
20120121132OBJECT RECOGNITION METHOD, OBJECT RECOGNITION APPARATUS, AND AUTONOMOUS MOBILE ROBOT - To carry out satisfactory object recognition in a short time. An object recognition method in accordance with an exemplary aspect of the present invention is an object recognition method for recognizing a target object by using a preliminarily-created object model. The object recognition method generates a range image of an observed scene, detects interest points from the range image, extracts first features, the first features being features of an area containing the interest points, carries out a matching process between the first features and second features, the second features being features of an area in the range image of the object model, calculates a transformation matrix based on a result of the matching process, the transformation matrix being for projecting the second features on a coordinate system of the observed scene, and recognizes the target object with respect to the object model based on the transformation matrix.05-17-2012
20120121131METHOD AND APPARATUS FOR ESTIMATING POSITION OF MOVING VEHICLE SUCH AS MOBILE ROBOT - An apparatus of estimating a position of a moving vehicle such as a robot includes a feature point matching unit which generates vectors connecting feature points of a previous image frame and feature points of a current image frame, corresponding to the feature points of the previous image frame, and determines spatial correlations between the feature points of the current image frame, a clustering unit which configures at least one motion cluster by grouping at least one vector among the vectors based on the spatial correlations in a feature space, and a noise removal unit removing noise from each motion cluster, wherein the position of the moving vehicle is estimated based on the at least one motion cluster.05-17-2012
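A minimal sketch of the vector-generation and grouping idea in 20120121131: displacement vectors are built between matched feature points of consecutive frames, then grouped into motion clusters. The greedy distance-threshold grouping below is an illustrative stand-in for whatever clustering the patent actually uses:

```python
def motion_vectors(prev_pts, curr_pts):
    """Displacement vectors between matched feature points of the
    previous frame and the current frame."""
    return [(c[0] - p[0], c[1] - p[1]) for p, c in zip(prev_pts, curr_pts)]

def cluster_vectors(vectors, tol=1.0):
    """Greedy grouping: vectors closer than `tol` (Euclidean distance in
    the displacement feature space) join the same motion cluster."""
    clusters = []
    for v in vectors:
        for c in clusters:
            rep = c[0]  # first member acts as the cluster representative
            if ((v[0] - rep[0]) ** 2 + (v[1] - rep[1]) ** 2) ** 0.5 <= tol:
                c.append(v)
                break
        else:
            clusters.append([v])
    return clusters
```

Vectors caused by ego-motion of the vehicle tend to fall into one large cluster, while independently moving objects (and noise) form separate clusters.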
20120121127IMAGE PROCESSING APPARATUS AND NON-TRANSITORY STORAGE MEDIUM STORING IMAGE PROCESSING PROGRAM - An image processing apparatus executes acquiring, on a first image having a pattern having first areas and second areas that have a different color from the first areas, center position of the pattern where the first areas and the second areas cross, acquiring boundary positions between the first and second area, converting the first image to a second image having its image distortion corrected by using the center position and the boundary positions, acquiring, by scanning on the second image, expectation values which are areas including the point where the first and second areas cross excluding the center position, acquiring a intersection position of the intersection on the second image based on the expectation values, acquiring the center position and the positions on the first image corresponding to the intersection position by inverting the second image to the first image, determining the points corresponding to the acquired positions as features.05-17-2012
20120121126METHOD AND APPARATUS FOR ESTIMATING FACE POSITION IN 3 DIMENSIONS - An apparatus and method for estimating a three-dimensional face position. The method of estimating the three-dimensional face position includes acquiring two-dimensional image information from a single camera, detecting a face region of a user from the two-dimensional image information, calculating the size of the detected face region, estimating a distance between the single camera and the user's face using the calculated size of the face region, and obtaining positional information of the user's face in a three-dimensional coordinate system using the estimated distance between the single camera and the user's face. Accordingly, it is possible to estimate the distance between the user and the single camera using the size of the face region of the user in the image information acquired by the single camera so as to acquire the three-dimensional position coordinates of the user.05-17-2012
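The distance-from-face-size step in 20120121126 can be sketched with a pinhole-camera model. The focal length (in pixels) and the assumed average real face width below are illustrative values, not figures from the patent:

```python
def estimate_face_distance(face_width_px, focal_length_px=800.0,
                           real_face_width_m=0.16):
    """Pinhole-camera estimate: distance = f * W_real / w_pixels."""
    return focal_length_px * real_face_width_m / face_width_px

def face_position_3d(face_center_px, image_center_px, face_width_px,
                     focal_length_px=800.0, real_face_width_m=0.16):
    """Back-project the 2D face centre to 3D camera coordinates using
    the estimated distance as the Z coordinate."""
    z = estimate_face_distance(face_width_px, focal_length_px,
                               real_face_width_m)
    x = (face_center_px[0] - image_center_px[0]) * z / focal_length_px
    y = (face_center_px[1] - image_center_px[1]) * z / focal_length_px
    return (x, y, z)
```

A 160-pixel-wide face under these assumptions comes out at 0.8 m from the camera; the accuracy of the whole scheme rests on how close the user's face is to the assumed average width.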
20120121125METHODS AND SYSTEMS FOR SOLAR SHADE ANALYSIS - A device for performing solar shade analysis combines a spherical reflective dome and a ball compass mounted on a platform, with a compass alignment mark and four dots in the corners of the platform. A user may place the device on a surface of a roof, or in another location where solar shading analysis is required. A user, while standing above the device can take a photo of the device. The photographs can then be used in order to evaluate solar capacity and perform shade analysis for potential sites for solar photovoltaic systems. By using the device in conjunction with a mobile device having a camera, photographs may be taken and uploaded, to be analyzed and processed to determine a shading percentage. For example, the solar shade analysis system may calculate the percentage of time that the solar photovoltaic system might be shaded for each month of the year. These measurements and data, or similar measurements and data, may be valuable when applying for solar rebates or solar installation permits.05-17-2012
20110255748ARTICULATED OBJECT REGION DETECTION APPARATUS AND METHOD OF THE SAME - An articulated object region detection apparatus includes: a subclass classification unit which classifies trajectories into subclasses; a distance calculating unit which calculates, for each of the subclasses, a point-to-point distance and a geodetic distance between the subclass and another subclass; and a region detection unit which detects, as a region having an articulated motion, two subclasses to which trajectories corresponding to two regions connected via the same articulation and indicating the articulated motion belong, based on a temporal change in the point-to-point distance and a temporal change in the geodetic distance between two given subclasses.10-20-2011
20110255746SYSTEM FOR USING THREE-DIMENSIONAL MODELS TO ENABLE IMAGE COMPARISONS INDEPENDENT OF IMAGE SOURCE - A method for identifying an object based at least in part on a reference database including two-dimensional images of objects includes the following steps: (a) providing a three-dimensional model reference database containing a plurality of estimated three-dimensional models, wherein each estimated three-dimensional model is derived from a corresponding two-dimensional image from the two-dimensional reference database; (b) sampling at least one image of an object to be identified; (c) implementing at least one identification process to identify the object, the identification process employing data from the three-dimensional model reference database.10-20-2011
20110255739IMAGE CAPTURING DEVICE AND METHOD WITH OBJECT TRACKING - A method for dynamically tracking a specific object in a monitored area obtains an image of the monitored area by one of a plurality of image capturing devices in the monitored area, and detects the specific object in the obtained image. The method further determines adjacent image capturing devices in the monitored area according to a path table upon the condition that the specific object is detected, and adjusts a detection sensitivity of each of the adjacent image capturing devices.10-20-2011
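The camera-handoff logic in 20110255739 can be sketched with a simple adjacency lookup. The dictionary-shaped path table, camera IDs, and the fixed sensitivity increment below are all illustrative assumptions:

```python
def adjacent_cameras(path_table, camera_id):
    """Look up which cameras are adjacent to `camera_id` in the
    monitored area's path table (dict: camera -> list of neighbours)."""
    return path_table.get(camera_id, [])

def boost_sensitivity(sensitivities, path_table, detecting_camera,
                      delta=0.2):
    """Raise the detection sensitivity of every camera adjacent to the
    one that spotted the target, clamped to 1.0; other cameras keep
    their current sensitivity."""
    out = dict(sensitivities)
    for cam in adjacent_cameras(path_table, detecting_camera):
        out[cam] = min(1.0, out.get(cam, 0.0) + delta)
    return out
```

The idea is that cameras the object could reach next are primed to detect it, while distant cameras avoid the false-alarm cost of a uniformly high sensitivity.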
20110255742INFORMATION PROCESSING DEVICE, INFORMATION PROCESSING SYSTEM, INFORMATION PROCESSING METHOD, AND INFORMATION STORAGE MEDIUM - A situation data obtaining unit obtains situation data describing a situation of an image capturing target of which image is captured by an image capturing device for producing an image to be output. Based on the situation data, a simulation process executing unit carries out a simulation process for simulating a behavior of the image capturing target after the situation of the image capturing target, described by the situation data. A combined screen image output unit outputs a result of the simulation process by the simulation process executing unit. The simulation process executing unit changes the behavior of the image capturing target in the simulation process in response to an operation received from a user.10-20-2011
20110255741METHOD AND APPARATUS FOR REAL-TIME PEDESTRIAN DETECTION FOR URBAN DRIVING - A computer implemented method for detecting the presence of one or more pedestrians in the vicinity of the vehicle is disclosed. Imagery of a scene is received from at least one image capturing device. A depth map is derived from the imagery. A plurality of pedestrian candidate regions of interest (ROIs) is detected from the depth map by matching each of the plurality of ROIs with a 3D human shape model. At least a portion of the candidate ROIs is classified by employing a cascade of classifiers tuned for a plurality of depth bands and trained on a filtered representation of data within the portion of candidate ROIs to determine whether at least one pedestrian is proximal to the vehicle.10-20-2011
20110135148METHOD FOR MOVING OBJECT DETECTION AND HAND GESTURE CONTROL METHOD BASED ON THE METHOD FOR MOVING OBJECT DETECTION - A method for moving object detection includes the steps: obtaining successive images of the moving object and dividing the successive images into blocks; selecting one block, calculating color feature values of the block at a current time point and a following time point; according to the color feature values, obtaining an active part of the selected block; comparing the color feature value of the selected block at the current time point with that of the other blocks at the following time point to obtain a similarity relating to each of the other blocks, and defining a maximum similarity as a local correlation part; obtaining a motion-energy patch of the block according to the active part and the local correlation part; repeating the steps to obtain all motion-energy patches to form a motion-energy map; and acquiring the moving object at the current time point in the motion-energy map.06-09-2011
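The per-block colour-feature comparison in 20110135148 can be sketched as below. Using the mean intensity as the colour feature and picking the most similar block as the "local correlation part" is an illustrative simplification of the patent's scheme:

```python
def block_mean(block):
    """Colour feature of a block: mean value of its pixels."""
    flat = [p for row in block for p in row]
    return sum(flat) / len(flat)

def local_correlation_part(ref_feature, candidate_features):
    """Index of the candidate block (at the following time point) whose
    feature is most similar to the selected block's feature at the
    current time point, i.e. the maximum-similarity block."""
    sims = [-abs(ref_feature - f) for f in candidate_features]
    return max(range(len(sims)), key=sims.__getitem__)
```

Repeating this per block yields the motion-energy patches that are assembled into the motion-energy map described in the abstract.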
20110135154LOCATION-BASED SIGNATURE SELECTION FOR MULTI-CAMERA OBJECT TRACKING - Disclosed herein are a method, system, and computer program product for determining a correspondence between a first object […]06-09-2011
20110135152INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, AND PROGRAM - An information processing apparatus includes: a detection unit detecting the faces of persons from frames of moving-image contents; a first specifying unit specifying the persons corresponding to the detected faces by extracting feature amounts of the detected faces and verifying the extracted feature amounts in a first database in which the feature amounts of the faces are registered in correspondence with person identifying information; a voice analysis unit analyzing the voices acquired when the faces of the persons are detected from the frames of the moving-image contents and generating voice information; and a second specifying unit specifying the persons corresponding to the detected faces by verifying the voice information corresponding to the face of a person which is not specified by the first specifying unit in a second database in which the voice information is registered in correspondence with the person identifying information.06-09-2011
20110135151METHOD AND APPARATUS FOR SELECTIVELY SUPPORTING RAW FORMAT IN DIGITAL IMAGE PROCESSOR - A digital image processing apparatus and method for supporting a RAW format (a sensor data format before image processing is performed) selectively supports a user-desired region of a captured image in a RAW format. A method of supporting a RAW format in a digital image processing apparatus includes setting at least one portion of an image displayed in a live-view mode as a region of interest (ROI), storing the ROI in a RAW format, storing a non-ROI of the displayed image, which is a portion of the image other than the ROI, in a compression format, and compositing the stored ROI with the stored non-ROI.06-09-2011
20110135149Systems and Methods for Tracking Objects Under Occlusion - A method for tracking objects in a scene may include receiving visual-based information of the scene with a vision-based tracking system and telemetry-based information of the scene with a real time locating system (RTLS)-based tracking system. The method may also include determining a location and identity of a first object in the scene using a combination of the visual-based information and the telemetry-based information. Another method for tracking objects in a scene may include detecting a location and identity of a first object and determining a telemetry-based measurement between the first object and a second object using the RTLS-based tracking system. The method may further include determining a location and identity of the second object based on the detected location of the first object and the determined measurement. A system for tracking objects in a scene may include visual-based and telemetry-based information receivers and an object tracker.06-09-2011
20110262010ARRANGEMENT AND METHOD RELATING TO AN IMAGE RECORDING DEVICE - An input system for a digital camera may include a portion for taking at least one image to be used as a control image; and a controller to control at least one operation of the digital camera based on a control command recognized from the control image, the control command controlling a function of the camera.10-27-2011
20110262005OBJECT DETECTING METHOD AND NON-TRANSITORY COMPUTER-READABLE RECORDING MEDIUM STORING AN OBJECT DETECTION PROGRAM - An object detecting method includes dividing a standard pattern into two or more areas radially from a central point; selecting, in each divided area of the standard pattern, a standard pattern pixel position at the maximum distance from the area dividing central point as a standard pattern representative point; dividing a determined pattern into two or more areas; selecting, in each divided area of the determined pattern, a determined pattern pixel position at the maximum distance from the area dividing central point as a determined pattern representative point; determining a positional difference between the standard pattern representative point and the determined pattern representative point in the corresponding divided areas; and determining the determined pattern as a target object when the positional differences in all of the divided areas are within a predetermined range.10-27-2011
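The radial division and representative-point selection in 20110262005 can be sketched as follows; the angular-sector division around the central point is one straightforward reading of "dividing … into two or more areas radially", used here as an assumption:

```python
import math

def representative_points(pixels, center, n_areas=4):
    """Divide pattern pixels into n_areas angular sectors around
    `center` and keep, per sector, the pixel farthest from the center
    (the sector's representative point; None if the sector is empty)."""
    best = [None] * n_areas
    best_d = [-1.0] * n_areas
    sector_width = 2 * math.pi / n_areas
    for (x, y) in pixels:
        ang = math.atan2(y - center[1], x - center[0]) % (2 * math.pi)
        sector = min(int(ang / sector_width), n_areas - 1)
        d = math.hypot(x - center[0], y - center[1])
        if d > best_d[sector]:
            best_d[sector], best[sector] = d, (x, y)
    return best
```

Matching is then a per-sector comparison: the determined pattern is accepted as the target object only if every sector's representative point lies within the allowed positional difference of the standard pattern's.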
20110262003OBJECT LEARNING METHOD, OBJECT TRACKING METHOD USING THE SAME, AND OBJECT LEARNING AND TRACKING SYSTEM - The present invention relates to an object learning method that minimizes time required for learning an object, an object tracking method using the object learning method, and an object learning and tracking system. The object learning method includes: receiving an image to be learned through a camera to generate a front image by a terminal; generating m view points used for object learning and generating first images obtained when viewing the object from the m view points using the front image; generating second images by performing radial blur on the first images; separating an area used for learning from the second images to obtain reference patches; and storing pixel values of the reference patches.10-27-2011
20120148099SYSTEM AND METHOD FOR MEASURING FLIGHT INFORMATION OF A SPHERICAL OBJECT WITH HIGH-SPEED STEREO CAMERA - Disclosed is a method for automatically extracting centroids and features of a spherical object required to measure a flight speed, a flight direction, a rotation speed, and a rotation axis of the spherical object in a system for measuring flight information of the spherical object with a high-speed stereo camera.06-14-2012
20120148094IMAGE BASED DETECTING SYSTEM AND METHOD FOR TRAFFIC PARAMETERS AND COMPUTER PROGRAM PRODUCT THEREOF - An image-based detecting system for traffic parameters first sets a range of a vehicle lane for monitoring control, and sets an entry detection window and an exit detection window in the vehicle lane. When the entry detection window detects an event of a vehicle passing by using the image information captured at the entry detection window, a plurality of feature points are detected in the entry detection window, and will be tracked hereafter. Then, the feature points belonging to the same vehicle are grouped to obtain at least a location tracking result of single vehicle. When the tracked single vehicle moves to the exit detection window, according to the location tracking result and the time correlation through estimating the information captured at the entry detection window and the exit detection window, at least a traffic parameter is estimated.06-14-2012
20120148092AUTOMATIC TRAFFIC VIOLATION DETECTION SYSTEM AND METHOD OF THE SAME - Disclosed herein are a system and method for the automatic detection of traffic and parking violations. Camera input is digitally analyzed for vehicle type and location. This information is then processed against local traffic and parking regulations to detect violations. Detectable driving offenses include, but are not limited to: no scooters, buses only, and scooters only lane violations. Detectable parking offenses include, but are not limited to: parking or loitering in bus stops, parking next to fire hydrants, and parking in no-parking zones. Camera input, detected vehicle information, and violations can be stored for later search and retrieval. The system may be configured to signal the authorities or other automated analysis systems about specific violations. When coupled with automatic license plate recognition, vehicles may be automatically matched against a registration database and reported or ticketed.06-14-2012
20110255744Image Capture and Identification System and Process - A digital image of an object is captured and the object is recognized from among a plurality of objects in a database. An information address corresponding to the object is then used to access information and initiate communication pertinent to the object.10-20-2011
20100329511Apparatus and method for detecting hands of subject in real time - An apparatus and method can effectively detect both hands and hand shape of a user from images input through cameras. A skin image detecting skin regions from one of the input images and a stereoscopic distance image are used. For hand detection, background and noise are eliminated from a combined image of the skin image and the distance image and regions corresponding to both actual hands are detected from effective images having a high probability of containing hands. For hand shape detection, a non-skin region is eliminated from the skin image based on the stereoscopic distance information, hand shape candidate regions are detected from the remaining region after elimination, and finally a hand shape is determined.12-30-2010
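The core combination step in 20100329511 amounts to intersecting a skin-colour mask with a depth-range mask so only pixels that are both skin-coloured and at plausible hand distance survive. A minimal sketch, with binary 0/1 masks assumed for illustration:

```python
def combine_masks(skin_mask, near_mask):
    """Keep only pixels that are both skin-coloured and within the
    stereo-depth range of interest (logical AND of two binary masks)."""
    return [[s & n for s, n in zip(srow, nrow)]
            for srow, nrow in zip(skin_mask, near_mask)]
```

Background skin-like colours (wood, faces far from the camera) drop out because their depth disagrees, which is the abstract's "background and noise are eliminated" step in its simplest form.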
20090208055Efficient detection of broken line segments in a scanned image - Systems and methods are presented for detecting and repairing broken lines within an image from a plurality of edge segments comprising a plurality of pixels and having associated first and second endpoints. A characteristic angle is determined for each edge segment. A normal distance is determined for each edge segment according to the distance of closest approach to a reference point for a line defined by the first and second endpoints of each edge segment. At least one line within the scanned image is located according to the determined characteristic angles and the determined normal distance for the plurality of edge segments.08-20-2009
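The two per-segment quantities in 20090208055 are standard line parameters; collinear fragments of one broken line share (approximately) the same characteristic angle and normal distance, which is what makes them groupable. A minimal sketch, assuming the image origin as the reference point:

```python
import math

def characteristic_angle(p1, p2):
    """Angle of the segment p1->p2, folded into [0, pi) so that a
    segment and its reverse get the same angle."""
    return math.atan2(p2[1] - p1[1], p2[0] - p1[0]) % math.pi

def normal_distance(p1, p2, ref=(0.0, 0.0)):
    """Distance of closest approach from `ref` to the infinite line
    through endpoints p1 and p2 (perpendicular-foot distance)."""
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    # magnitude of the 2D cross product of (p1->p2) with (p1->ref),
    # normalised by the segment length
    return abs(dx * (ref[1] - p1[1]) - dy * (ref[0] - p1[0])) / math.hypot(dx, dy)
```

Two edge segments whose (angle, distance) pairs agree within tolerance are candidates for the same underlying line and can be bridged.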
20080219507Passive Touch System And Method Of Detecting User Input - A method of tracking an object of interest preferably includes (i) acquiring a first image and a second image representing different viewpoints of the object of interest; (ii) processing the first image into a first image data set and the second image into a second image data set; (iii) processing the first image data set and the second image data set to generate a background data set associated with a background; (iv) generating a first difference map by determining differences between the first image data set and the background data set and a second difference map by determining differences between the second image data set and the background data set; (v) detecting a first relative position of the object of interest in the first difference map and a second relative position of the object of interest in the second difference map; and (vi) producing an absolute position of the object of interest from the first and second relative positions of the object of interest.09-11-2008
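Steps (iv) and (v) of 20080219507 — differencing each view against its background model and locating the object in the resulting difference map — can be sketched as below. Grayscale images as lists of pixel rows and a centroid-based position are illustrative simplifications:

```python
def difference_map(image, background, threshold=30):
    """Per-pixel absolute difference against the background model,
    thresholded to a binary foreground mask (step iv)."""
    return [[1 if abs(p - b) > threshold else 0
             for p, b in zip(img_row, bg_row)]
            for img_row, bg_row in zip(image, background)]

def relative_position(diff_map):
    """Relative position of the object in one view: centroid of the
    foreground pixels (step v); None if nothing differs."""
    pts = [(x, y) for y, row in enumerate(diff_map)
           for x, v in enumerate(row) if v]
    if not pts:
        return None
    return (sum(x for x, _ in pts) / len(pts),
            sum(y for _, y in pts) / len(pts))
```

Running this on both viewpoints yields the two relative positions that step (vi) triangulates into an absolute position.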
20080219506Method and apparatus for automatic object identification - A method and system for processing image data to identify objects in an image. The method and system operate using various resolutions of the image to identify the objects. Information obtained while processing the image at one resolution is employed when processing the image at another resolution.09-11-2008
20080219504AUTOMATIC MEASUREMENT OF ADVERTISING EFFECTIVENESS - An automated system for measuring information about a target image in a video is described. One embodiment includes receiving a set of one or more video images for the video, automatically finding the target image in at least a subset of the video images, determining one or more statistics regarding the target image being in the video, and reporting the one or more statistics.09-11-2008
20110255747MOVING OBJECT DETECTION APPARATUS AND MOVING OBJECT DETECTION METHOD - A moving object detection apparatus includes: an image input unit which receives a plurality of pictures included in video; a trajectory calculating unit which calculates a plurality of trajectories from the pictures; a subclass classification unit which classifies the trajectories into a plurality of subclasses; an inter-subclass approximate geodetic distance calculating unit which calculates, for each of the subclasses, an inter-subclass approximate geodetic distance representing similarity between the subclass and another subclass, using an inter-subclass distance that is a distance including a minimum value of a linear distance between each of trajectories belonging to the subclass and one of trajectories belonging to the other subclass; and a segmentation unit which performs segmentation by determining, based on the calculated inter-subclass approximate geodetic distance, a set of subclasses including similar trajectories as one class.10-20-2011
20080219502TRACKING BIMANUAL MOVEMENTS - Hands may be tracked before, during, and after occlusion, and a gesture may be recognized. Movement of two occluded hands may be tracked as a unit during an occlusion period. A type of synchronization characterizing the two occluded hands during the occlusion period may be determined based on the tracked movement of the occluded hands. Based on the determined type of synchronization, it may be determined whether directions of travel for each of the two occluded hands change during the occlusion period. Implementations may determine that a first hand and a second hand are occluded during an occlusion period, the first hand having come from a first direction and the second hand having come from a second direction. The first hand may be distinguished from the second hand after the occlusion period based on a determined type of synchronization characterizing the two hands, and a behavior of the two hands.09-11-2008
20080219503MEANS FOR USING MICROSTRUCTURE OF MATERIALS SURFACE AS A UNIQUE IDENTIFIER - A method and apparatus for the visual identification of materials for tracking an object comprises parameter setting, acquisition and identification phases. The parameter setting phase comprises the steps of defining acquisition parameters for the objects. The acquisition phase comprises the steps of digitally acquiring two-dimensional template image of an object, applying a flattening function and generating downsampled template version of the flattened template and storing it in a reference database with the flattened template. The identification phase comprises the steps of digitally acquiring a snapshot image, applying the flattening function and generating one downsampled version, cross-correlating the downsampled version of the flattened snapshot with the corresponding downsampled templates of the reference database, and selecting templates according to the value of the signal to noise ratio, for the selected templates, cross-correlating the flattened snapshot image with the reference flattened template, and identifying the object by finding the best corresponding template.09-11-2008
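The identification phase of 20080219503 cross-correlates a downsampled snapshot against downsampled templates before the full-resolution check. A minimal sketch of the two ingredients, box downsampling and normalised cross-correlation on equal-size grayscale patches (the flattening function itself is omitted):

```python
import math

def downsample(img, factor=2):
    """Box-downsample a 2D grayscale image by an integer factor."""
    h, w = len(img) // factor, len(img[0]) // factor
    out = []
    for y in range(h):
        row = []
        for x in range(w):
            vals = [img[y * factor + dy][x * factor + dx]
                    for dy in range(factor) for dx in range(factor)]
            row.append(sum(vals) / len(vals))
        out.append(row)
    return out

def ncc(a, b):
    """Normalised cross-correlation between two equal-size patches;
    1.0 for a perfect match, near 0 for unrelated patches."""
    fa = [p for row in a for p in row]
    fb = [p for row in b for p in row]
    ma, mb = sum(fa) / len(fa), sum(fb) / len(fb)
    num = sum((x - ma) * (y - mb) for x, y in zip(fa, fb))
    da = math.sqrt(sum((x - ma) ** 2 for x in fa))
    db = math.sqrt(sum((y - mb) ** 2 for y in fb))
    return num / (da * db) if da and db else 0.0
```

The coarse pass on downsampled versions cheaply prunes the reference database; only templates whose correlation peak clears the signal-to-noise criterion proceed to full-resolution correlation.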
20110019875IMAGE DISPLAY DEVICE - On a table type image display device A, a display […]01-27-2011
20110150284METHOD AND TERMINAL FOR DETECTING AND TRACKING MOVING OBJECT USING REAL-TIME CAMERA MOTION - A method is provided for detecting and tracking a moving object using real-time camera motion estimation, including generating a feature map representing a change in an input pattern in an input image, extracting feature information of the image, estimating a global motion for recognizing a motion of a camera using the extracted feature information, correcting the input image by reflecting the estimated global motion, and detecting a moving object using the corrected image.06-23-2011
20110150279IMAGE PROCESSING APPARATUS, PROCESSING METHOD THEREFOR, AND NON-TRANSITORY COMPUTER-READABLE STORAGE MEDIUM - An image processing apparatus comprising: an input unit configured to input a plurality of images obtained by capturing a target object from different viewpoints; a detection unit configured to detect a plurality of line segments from each of the plurality of input images; a setting unit configured to set, for each of the plurality of detected line segments, a reference line which intersects with the line segment; an array derivation unit configured to obtain a pattern array in which a plurality of pixel value change patterns on the set reference line are aligned; and a decision unit configured to decide association of the detected line segments between the plurality of images by comparing the pixel value change patterns, contained in the obtained pattern array, between the plurality of images.06-23-2011
200901296303D TEXTURED OBJECTS FOR VIRTUAL VIEWPOINT ANIMATIONS - 3D textured objects are provided for virtual viewpoint animations. In one aspect, an image of an event is obtained from a camera and an object in the image is automatically detected. For example, the event may be a sports event and the object may be a stationary object which is detected based on a known location, color and shape. A 3D model of the object is combined with a textured 3D model of the event to depict a virtual viewpoint which differs from a viewpoint of the camera. The textured 3D model of the event has texture applied from an image of the event, while the 3D model of the object does not have such texture applied, in one approach. In another aspect, an object in the image such as a participant in a sporting event is represented in the virtual viewpoint by a textured 3D kinematics model.05-21-2009
20110096956VEHICLE PERIPHERY MONITORING DEVICE - A vehicle periphery monitoring device is operable to report a high contact possibility between a vehicle and an object at an appropriate time or frequency according to the type of the object. When the object is determined to be a human being and the position of the object in real space is contained in a first contact determination area, a high contact possibility between the vehicle and the object is reported. On the other hand, when the object is determined to be a quadruped animal and the real spatial position of the object is contained in a second contact determination area, the corresponding report is made. The second contact determination area has an overlapped area that overlaps with the first contact determination area, and an overflowed area that has at least a part thereof overflowing from the first contact determination area.04-28-2011
20110096955SECURE ITEM IDENTIFICATION AND AUTHENTICATION SYSTEM AND METHOD BASED ON UNCLONABLE FEATURES - The present invention is a method and apparatus for protection of various items against counterfeiting using physical unclonable features of item microstructure images. The protection is based on the proposed identification and authentication protocols coupled with portable devices. In both cases a special transform is applied to data that provides a unique representation in the secure key-dependent domain of reduced dimensionality that also simultaneously resolves performance-security-complexity and memory storage requirement trade-offs. The enrolled database needed for the identification can be stored in the public domain without any risk to be used by the counterfeiters. Additionally, it can be easily transportable to various portable devices due to its small size. Notably, the proposed transformations are chosen in such a way to guarantee the best possible performance in terms of identification accuracy with respect to the identification in the raw data domain. The authentication protocol is based on the proposed transform jointly with the distributed source coding. Finally, the extensions of the described techniques to the protection of artworks and secure key exchange and extraction are disclosed in the invention.04-28-2011
20110096954OBJECT AND MOVEMENT DETECTION - Motions, positions or configurations of, for example, a human hand can be recognised by transmitting a plurality of transmit signals in respective time frames; receiving a plurality of receive signals; determining a plurality of channel impulse responses using the transmit and receive signals; defining a matrix of impulse responses, with impulse responses for adjacent time frames adjacent each other; and analysing the matrix for patterns […]04-28-2011
20100215217Method and System of Tracking and Stabilizing an Image Transmitted Using Video Telephony - Herein described is a system and method that tracks the face of a person engaged in a videophone conversation. In addition to performing facial tracking, the invention provides stabilization of facial images that are transmitted during the videophone conversation. The face is tracked by employing one or more algorithms that correlate videophone captured facial images against a stored facial image. The face may be better identified by way of employing one or more voice recognition algorithms. The one or more voice recognition algorithms may correlate utterances of the person engaged in a conversation to one or more stored utterances. The identified utterances are subsequently mapped to a stored facial image. In a representative embodiment, the system used for performing facial tracking and image stabilization comprises an image sensor, a lens, an actuator, and a controller/processor.08-26-2010
20110052008System and Method for Image Based Sensor Calibration - Apparatus and methods are disclosed for the calibration of a tracked imaging probe for use in image-guided surgical systems. The invention uses actual image data collected from an easily constructed calibration jig to provide data for the calibration algorithm. The calibration algorithm analytically develops a geometric relationship between the probe and the image so objects appearing in the collected image can be accurately described with reference to the probe. The invention can be used with either two or three dimensional image data-sets. The invention also has the ability to automatically determine the image scale factor when two dimensional data-sets are used.03-03-2011
20100166262MULTI-MODAL OBJECT SIGNATURE - Disclosed herein are a method and system for appearance-invariant tracking of an object in an image sequence. A track is associated with the image sequence, wherein the track has an associated track signature comprising at least one mode. The method detects the object in a frame of the image sequence […]07-01-2010
20100166257METHOD AND APPARATUS FOR DETECTING SEMI-TRANSPARENCIES IN VIDEO - A method and apparatus for detecting semi-transparencies in video are disclosed.07-01-2010
20100166259OBJECT ENUMERATING APPARATUS AND OBJECT ENUMERATING METHOD - An object enumerating apparatus comprises means for generating and binarizing inter-frame differential data from moving image data representative of a photographed object under detection, means for extracting feature data from a plurality of the inter-frame binary differential data directly adjacent to each other on a pixel-by-pixel basis through cubic higher-order local auto-correlation, means for calculating a coefficient of each factor vector from a factor matrix comprised of a plurality of factor vectors previously generated through learning using a factor analysis and arranged for one object under detection, and the feature data, and means for adding a plurality of the coefficients for one object under detection, and rounding off the sum to the closest integer representative of a quantity. Owing to small fluctuations in the sum of coefficients and accurate matching with the quantity of objects intended for recognition, recognition can be accomplished with robustness to differences in scale and speed of objects and to dynamic changes thereof.07-01-2010
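The final counting step in 20100166259 — summing the per-factor coefficients for one object class and rounding to the nearest integer — is simple enough to sketch directly (the feature extraction and factor analysis that produce the coefficients are not reproduced here):

```python
def count_objects(coefficients):
    """Sum the factor coefficients obtained for one object class and
    round the total to the nearest integer to get the object count."""
    total = sum(coefficients)
    # round-half-up for positive totals, symmetric for negative ones
    return int(total + 0.5) if total >= 0 else -int(-total + 0.5)
```

Because each well-matched object contributes a coefficient near 1, a scene with three objects yields a sum near 3.0, and rounding absorbs the small per-object fluctuations the abstract mentions.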
20100166261SUBJECT TRACKING APPARATUS AND CONTROL METHOD THEREFOR, IMAGE CAPTURING APPARATUS, AND DISPLAY APPARATUS - A subject tracking apparatus extracts a subject region which is similar to a reference image on the basis of a degree of correlation with the reference image for tracking a predetermined subject from images supplied in a time series manner. Further, the subject tracking apparatus detects the position of the predetermined subject in the subject region on the basis of the distribution of characteristic pixels representing the predetermined subject contained in the subject region, and corrects the subject region so as to reduce a shift in position of the predetermined subject in the subject region. Moreover, the corrected subject region is taken as the result of tracking the predetermined subject, and the reference image is updated with the corrected subject region as the reference image to be used for the next supplied image.07-01-2010
20100166260METHOD FOR AUTOMATIC DETECTION AND TRACKING OF MULTIPLE TARGETS WITH MULTIPLE CAMERAS AND SYSTEM THEREFOR - A method for automatically detecting and tracking multiple targets in a multi-camera surveillance zone, and a system therefor. In each camera view of the system only a simple object detection algorithm is needed. The detection results from multiple cameras are fused into a posterior distribution, named TDP, based on the Bayesian rule. This TDP distribution represents the likelihood of presence of moving targets on the ground plane. To properly handle the tracking of multiple moving targets over time, a sample-based framework which combines Markov Chain Monte Carlo (MCMC), Sequential Monte Carlo (SMC), and Mean-Shift Clustering is provided. The detection and tracking accuracy is evaluated on both synthesized videos and real videos. The experimental results show that this method and system can accurately track a varying number of targets.07-01-2010
20100166256Method and apparatus for identification and position determination of planar objects in images - A method of identifying a planar object in source images is disclosed. In at least one embodiment, the method includes: retrieving a first source image obtained by a first terrestrial based camera; retrieving a second source image obtained by a second terrestrial based camera; retrieving position data associated with the first and second source image; retrieving orientation data associated with the first and second source image; performing a looking axis rotation transformation on the first and second source image by use of the associated position data and orientation data to obtain first and second intermediate images, wherein the first and second intermediate images have an identical looking axis; performing a radial logarithmic space transformation on the first and second intermediate images to obtain first and second radial logarithmic data images; detecting an area in the first image potentially being a planar object; comparing the potential planar object with an area in the second radial logarithmic data image having similar dimensions and similar RGB characteristics; and finally, identifying the area as a planar object and determining its position. At least one embodiment of the method enables very efficient detection of planar perpendicular objects in subsequent images.07-01-2010
20110052007GESTURE RECOGNITION METHOD AND INTERACTIVE SYSTEM USING THE SAME - A gesture recognition method for an interactive system includes the steps of: capturing image windows with an image sensor; obtaining information of object images associated with at least one pointer in the image windows; calculating a position coordinate of the pointer relative to the interactive system according to the position of the object images in the image windows when a single pointer is identified according to the information of object images; and performing gesture recognition according to a relation between the object images in the image window when a plurality of pointers are identified according to the information of object images. The present invention further provides an interactive system.03-03-2011
20110052006EXTRACTION OF SKELETONS FROM 3D MAPS - A method for processing data includes receiving a temporal sequence of depth maps of a scene containing a humanoid form having a head. The depth maps include a matrix of pixels having respective pixel depth values. A digital processor processes at least one of the depth maps so as to find a location of the head and estimates dimensions of the humanoid form based on the location. The processor tracks movements of the humanoid form over the sequence using the estimated dimensions.03-03-2011
20110052005Designation of a Characteristic of a Physical Capability by Motion Analysis, Systems and Methods - Motion Analysis is used to classify or rate human capability in a physical domain via a minimized movement and data collection protocol producing a discrete, overall figure of merit of the selected physical capability. The minimal protocol is determined by data mining of a more extensive movement and data collection. Protocols are relevant in medical, sports and occupational applications. Kinematic, kinetic, body type, Electromyography (EMG), Ground Reactive Force (GRF), demographic, and psychological data are encompassed. Resulting protocols are capable of transforming raw data representing specific human motions into an objective rating of a skill or capability related to those motions.03-03-2011
20110052004CAMERA DEVICE AND IDENTITY RECOGNITION METHOD UTILIZING THE SAME - A camera device includes an image capturing module, a face detection module, a light detection and ranging (LIDAR) system, a storage module, and a microprocessor. The image capturing module continuously captures images of a determined field. The face detection module detects the images to obtain a face to be tested, and records coordinates of the face in the image. The LIDAR system scans the face to be tested in the determined field according to the coordinates, thereby obtaining three-dimensional information of the face to be tested. The storage module stores three-dimensional information of a determined face. The microprocessor compares the three-dimensional information of the face to be tested with the three-dimensional information of the determined face, and then outputs a recognition signal.03-03-2011
20110052003FOREGROUND OBJECT DETECTION IN A VIDEO SURVEILLANCE SYSTEM - Techniques are disclosed for detecting foreground objects in a scene captured by a surveillance system and tracking the detected foreground objects from frame to frame in real time. A motion flow field is used to validate the foreground object(s) that are extracted from the background model of a scene. Spurious foreground objects are filtered before the detected foreground objects are provided to the tracking stage. The motion flow field is also used by the tracking stage to improve the performance of the tracking as needed for real time surveillance applications.03-03-2011
20110052002FOREGROUND OBJECT TRACKING - Techniques are disclosed for detecting foreground objects in a scene captured by a surveillance system and tracking the detected foreground objects from frame to frame in real time. A motion flow field is used to validate the foreground object(s) that are extracted from the background model of a scene. Spurious foreground objects are filtered before the foreground objects are provided to the tracking stage. The motion flow field is also used by the tracking stage to improve the performance of the tracking as needed for real time surveillance applications.03-03-2011
20110052000DETECTING ANOMALOUS TRAJECTORIES IN A VIDEO SURVEILLANCE SYSTEM - Techniques are disclosed for determining anomalous trajectories of objects tracked over a sequence of video frames. In one embodiment, a symbol trajectory may be derived from observing an object moving through a scene. The symbol trajectory represents semantic concepts extracted from the trajectory of the object. Whether the symbol trajectory is anomalous may be determined, based on previously observed symbol trajectories. A user may be alerted upon determining that the symbol trajectory is anomalous.03-03-2011
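For illustration, the anomaly test over symbol trajectories described in the entry above might be sketched with a simple smoothed bigram frequency model; the symbol alphabet, Laplace smoothing, and threshold below are illustrative assumptions, not details from the patent:

```python
from collections import Counter

def train_bigram_model(trajectories):
    """Count symbol transitions over previously observed trajectories."""
    counts, total = Counter(), 0
    for traj in trajectories:
        for a, b in zip(traj, traj[1:]):
            counts[(a, b)] += 1
            total += 1
    return counts, total

def is_anomalous(traj, counts, total, threshold=0.05, alpha=1.0):
    """Flag a trajectory whose mean smoothed bigram probability is low.
    Laplace smoothing (alpha) keeps unseen transitions from zeroing the score."""
    vocab = len(counts) + 1
    probs = [(counts[(a, b)] + alpha) / (total + alpha * vocab)
             for a, b in zip(traj, traj[1:])]
    return sum(probs) / len(probs) < threshold

# Symbols might stand for semantic scene regions ("gate", "lot", ...).
history = [list("ABCD")] * 20 + [list("ABCE")] * 5
counts, total = train_bigram_model(history)
print(is_anomalous(list("ABCD"), counts, total))  # familiar path -> False
print(is_anomalous(list("AXYD"), counts, total))  # unseen transitions -> True
```

A trajectory whose transitions were rarely or never observed scores below the threshold and would trigger the alert.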
20110051999Device and method for detecting targets in images based on user-defined classifiers - A device and method for detecting targets of interest in an image, such as people or objects of a certain type. Targets are detected based on an optimized strong classifier descriptor that can be based on a combination of weak classifier descriptors. The weak classifier descriptors can include a user-defined weak classifier descriptor that is defined by a user to represent a shape or appearance attribute that is characteristic of parts of the target of interest. The strong classifier descriptor can be optimized by selecting a subset of weak classifier descriptors that exhibit improved performance in detecting targets in training images.03-03-2011
20100158314METHOD AND APPARATUS FOR MONITORING TREE GROWTH - A system for identifying forest stands within an area of interest that are exhibiting abnormal growth determines a relationship between vegetation index (VI) values determined from a first and a second image of the area of interest. From the relationship, an expected or predicted VI value for each forest stand is determined and compared with the actual VI value computed for the forest stand from the first image. Those forest stands with a difference between the actual and predicted VI values that exceed a threshold are identified as exhibiting abnormal growth.06-24-2010
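A minimal sketch of the stand-level comparison described above, assuming a simple least-squares line as the relationship between the two images' vegetation-index values; the data and deviation threshold are invented for illustration:

```python
def fit_line(xs, ys):
    """Least-squares fit y = m*x + b over per-stand VI values."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    m = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return m, my - m * mx

def abnormal_stands(vi_img1, vi_img2, threshold=0.1):
    """Indices of stands whose actual VI (from the first image) deviates from
    the VI predicted from the second image by more than the threshold."""
    m, b = fit_line(vi_img2, vi_img1)
    return [i for i, (v1, v2) in enumerate(zip(vi_img1, vi_img2))
            if abs(v1 - (m * v2 + b)) > threshold]

# Eight stands; the last one has lost vigor relative to the fitted trend.
flagged = abnormal_stands(
    vi_img1=[0.45, 0.50, 0.55, 0.60, 0.65, 0.70, 0.75, 0.60],
    vi_img2=[0.40, 0.45, 0.50, 0.55, 0.60, 0.65, 0.70, 0.75])
print(flagged)  # -> [7]
```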
20100158316Action estimating apparatus, method for updating estimation model, and program - A storage unit stores a model defining a position or a locus of a feature point of an occupant in each specific action. An action estimation unit compares the feature point with each of the models to detect an estimated action. A detecting unit detects that a specific action is being performed as a definite action. A first generating unit generates a new definite model corresponding to the definite action by modifying a position or a locus of the feature point according to an in-action feature point when the definite action is being performed. A second generating unit generates a new non-definite model using the in-action feature point according to a correspondence between the feature point in the definite action and the feature point of a non-definite model other than the definite model. An update unit updates the definite action model and the non-definite action model.06-24-2010
20100158315SPORTING EVENT IMAGE CAPTURE, PROCESSING AND PUBLICATION - Systems, methods and software are disclosed for capturing and/or importing and processing media items such as digital images or video (…)06-24-2010
20100158312Method for tracking and processing image - The invention relates to a method for image processing which can be used to calibrate the background quickly. When the external environment changes, for example due to lights being switched, the color of the background is calibrated quickly and the background is updated accordingly. The method is used not only to update the background but also to eliminate the convergence of the background.06-24-2010
20100195869VISUAL TARGET TRACKING - A visual target tracking method includes representing a human target with a machine-readable model configured for adjustment into a plurality of different poses and receiving an observed depth image of the human target from a source. The observed depth image is compared to the model. A refine-z force vector is then applied to one or more force-receiving locations of the model to move a portion of the model towards a corresponding portion of the observed depth image if that portion of the model is Z-shifted from that corresponding portion of the observed depth image.08-05-2010
20100195868TARGET-LOCKING ACQUISITION WITH REAL-TIME CONFOCAL (TARC) MICROSCOPY - Presented herein is a real-time target-locking confocal microscope that follows an object moving along an arbitrary path, even as it simultaneously changes its shape, size and orientation. This Target-locking Acquisition with Realtime Confocal (TARC) microscopy system integrates fast image processing and rapid image acquisition using, for example, a Nipkow spinning-disk confocal microscope. The system acquires a 3D stack of images, performs a full structural analysis to locate a feature of interest, moves the sample in response, and then collects the next 3D image stack. In this way, data collection is dynamically adjusted to keep a moving object centered in the field of view. The system's capabilities are demonstrated by target-locking freely-diffusing clusters of attractive colloidal particles, and actively-transported quantum dots (QDs) endocytosed into live cells free to move in three dimensions for several hours. During this time, both the colloidal clusters and live cells move distances several times the length of the imaging volume. Embodiments may be applied to other applications, such as manufacturing, open water observation of marine life, aerial observation of flying animals, or medical devices, such as tumor removal.08-05-2010
20100195870TRACKING METHOD AND DEVICE ADOPTING A SERIES OF OBSERVATION MODELS WITH DIFFERENT LIFE SPANS - The present invention relates to a tracking method and a tracking device adopting multiple observation models with different life spans. The tracking method is suitable for tracking an object in a low frame rate video or with abrupt motion, and uses three observation models with different life spans to track and detect a specific subject in frame images of a video sequence. An observation model I performs online learning with one frame image prior to the current image, an observation model II performs online learning with five frames prior to the current image, and an observation model III is offline trained. The three observation models are combined by a cascade particle filter so that the specific subject in the low frame rate video or the object with abrupt motion can be tracked quickly and accurately.08-05-2010
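A toy sketch of combining several observation models in a particle filter, weighting particles by the product of the models' likelihoods; the Gaussian observation models, seeds, and numbers are illustrative assumptions, and the patent's cascade structure and online learning of the models are not reproduced here:

```python
import math
import random

def gaussian(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def particle_filter_step(particles, observations, motion_sigma=1.0, seed=0):
    """One predict/weight/resample cycle over scalar positions.  Each entry of
    `observations` is an (observed_value, noise_sigma) pair standing in for one
    observation model; their likelihoods are simply multiplied here."""
    rng = random.Random(seed)
    predicted = [p + rng.gauss(0.0, motion_sigma) for p in particles]  # predict
    weights = [math.prod(gaussian(p, z, s) for z, s in observations)
               for p in predicted]                                     # weight
    total = sum(weights)
    weights = [w / total for w in weights]
    return rng.choices(predicted, weights=weights, k=len(predicted))   # resample

rng = random.Random(1)
particles = [rng.uniform(0.0, 20.0) for _ in range(500)]
for step in range(10):
    particles = particle_filter_step(particles, [(12.0, 1.0), (12.5, 2.0)], seed=step)
estimate = sum(particles) / len(particles)
```

After a few cycles the particle cloud concentrates near the value jointly favored by both observation models (around 12 here).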
20100195867VISUAL TARGET TRACKING USING MODEL FITTING AND EXEMPLAR - A method of tracking a target includes receiving an observed depth image of the target from a source and analyzing the observed depth image with a prior-trained collection of known poses to find an exemplar pose that represents an observed pose of the target. The method further includes rasterizing a model of the target into a synthesized depth image having a rasterized pose and adjusting the rasterized pose of the model into a model-fitting pose based, at least in part, on differences between the observed depth image and the synthesized depth image. Either the exemplar pose or the model-fitting pose is then selected to represent the target.08-05-2010
20100067738IMAGE ANALYSIS USING A PRE-CALIBRATED PATTERN OF RADIATION - A system and method of image content analysis using a pattern generator that emits a regular and pre-calibrated pattern of non-visible electromagnetic radiation from a surface in range of a camera adapted to perceive the pattern. The camera captures images of the perceived pattern and other objects within the camera's range, and outputs image data. The image data is analyzed to determine attributes of the objects and area within the camera's range. The pattern provides a known background, which enables an improved and simplified image analysis.03-18-2010
20100067741Real-time tracking of non-rigid objects in image sequences for which the background may be changing - A method and apparatus are disclosed for tracking an arbitrarily moving object in a sequence of images where the background may be changing. The tracking is based on visual features, such as color or texture, where regions of images (such as those which represent the object being tracked or the background) can be characterized by statistical distributions of feature values. The method improves on the prior art by incorporating a means whereby characterizations of the background can be rapidly re-learned for each successive image frame. This makes the method robust against the scene changes that occur when the image capturing device moves. It also provides robustness in difficult tracking situations, such as when the tracked object passes in front of backgrounds with which it shares similar colors or other features. Furthermore, a method is disclosed for automatically detecting and correcting certain kinds of errors which may occur when employing this or other tracking methods.03-18-2010
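One way the re-learned background characterization might look in miniature: quantized color histograms and a per-pixel likelihood ratio, where the background histogram would be recomputed from each new frame. The colors, bin count, and epsilon are illustrative assumptions; the patent's feature distributions and error-correction scheme are not reproduced:

```python
from collections import Counter

def color_hist(pixels, bins=8):
    """Quantized RGB histogram normalized to a probability distribution."""
    q = Counter((r * bins // 256, g * bins // 256, b * bins // 256)
                for r, g, b in pixels)
    n = sum(q.values())
    return {k: v / n for k, v in q.items()}

def object_likelihood(pixel, obj_hist, bg_hist, bins=8, eps=1e-6):
    """Ratio of object to background probability for one pixel; the background
    histogram would be re-learned for every successive frame."""
    key = tuple(c * bins // 256 for c in pixel)
    return obj_hist.get(key, eps) / bg_hist.get(key, eps)

obj_hist = color_hist([(250, 10, 10)] * 50)                        # red object
bg_hist = color_hist([(10, 10, 250)] * 90 + [(250, 10, 10)] * 10)  # mostly blue scene
print(object_likelihood((255, 0, 0), obj_hist, bg_hist) > 1)  # -> True
print(object_likelihood((0, 0, 255), obj_hist, bg_hist) > 1)  # -> False
```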
20100067740Pedestrian Detection Device and Pedestrian Detection Method - A near-infrared night vision device to which a pedestrian detection device is applied includes a near-infrared projector, a near-infrared camera, a display and an ECU. By executing programs, the ECU constitutes a pedestrian candidate extraction portion and a determination portion. The pedestrian candidate extraction portion extracts pedestrian candidate regions from near-infrared images. The determination portion normalizes the sizes and the brightnesses of the pedestrian candidates extracted by the pedestrian candidate extraction portion, and then computes the degrees of similarity between the normalized pedestrian candidates. The determination portion determines that a pedestrian candidate having two or more other pedestrian candidates whose degree of similarity with the pedestrian candidate is greater than or equal to a predetermined value is not a pedestrian.03-18-2010
20100067739Sequential Stereo Imaging for Estimating Trajectory and Monitoring Target Position - A method for determining a position of a target includes obtaining a first image of the target, obtaining a second image of the target, wherein the first and the second images have different image planes and are generated at different times, processing the first and second images to determine whether the target in the first image corresponds spatially with the target in the second image, and determining the position of the target based on a result of the act of processing. Systems and computer products for performing the method are also described.03-18-2010
20100067744Method and Single Laser Device for Detecting Magnifying Optical Systems - The invention comprises illuminating a scene where said magnifying optical system (OP) may occur with at least one pulse generated by a first laser transmitter (E). The laser transmitter (E) and a first detector of the scene thus illuminated (D…)03-18-2010
20100067743SYSTEM AND METHOD FOR TRACKING AN ELECTRONIC DEVICE - A system for tracking a spatially manipulated user controlling object using a camera associated with a processor. While the user spatially manipulates the controlling object, an image of the controlling object is picked up via a video camera, and the camera image is analyzed to isolate the part of the image pertaining to the controlling object for mapping the position and orientation of the device in a two-dimensional space. Robust data processing systems and a computerized method employ calibration and tracking algorithms such that minimal user intervention is required for achieving and maintaining successful tracking of the controlling object under changing backgrounds and lighting conditions.03-18-2010
20100067742OBJECT DETECTING DEVICE, IMAGING APPARATUS, OBJECT DETECTING METHOD, AND PROGRAM - An object detecting device includes a calculating unit configured to calculate gradient intensity and gradient orientation of luminance for a plurality of regions in an image and calculate a frequency distribution of the luminance gradient intensity as to the calculated luminance gradient orientation for each of the regions, and a determining unit configured to determine whether or not an identified object is included in the image by comparing a plurality of frequency distributions calculated for each of the regions.03-18-2010
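The per-region frequency distribution of gradient intensity over gradient orientation described above resembles a histogram-of-oriented-gradients computation; here is a minimal sketch over a 2D luminance array, where the bin count and region format are illustrative assumptions:

```python
import math

def gradient_histograms(image, region, n_bins=8):
    """Frequency distribution of gradient intensity over gradient orientation
    for one region; `image` is a 2D luminance list, `region` is (y0, x0, y1, x1)."""
    y0, x0, y1, x1 = region
    hist = [0.0] * n_bins
    for y in range(max(y0, 1), min(y1, len(image) - 1)):
        for x in range(max(x0, 1), min(x1, len(image[0]) - 1)):
            gx = image[y][x + 1] - image[y][x - 1]      # central differences
            gy = image[y + 1][x] - image[y - 1][x]
            theta = math.atan2(gy, gx) % (2 * math.pi)  # orientation in [0, 2*pi)
            hist[int(theta / (2 * math.pi) * n_bins) % n_bins] += math.hypot(gx, gy)
    return hist

# A vertical edge: all gradients point along +x, so only bin 0 accumulates.
img = [[0] * 4 + [100] * 4 for _ in range(8)]
h = gradient_histograms(img, (0, 0, 8, 8))
print(h[0], sum(h[1:]))  # -> 1200.0 0.0
```

A determining stage could then compare such histograms against those of a reference object.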
20120269388ONLINE REFERENCE PATCH GENERATION AND POSE ESTIMATION FOR AUGMENTED REALITY - A reference patch of an unknown environment is generated on the fly for positioning and tracking. The reference patch is generated using a captured image of a planar object with two perpendicular sets of parallel lines. The planar object is detected in the image and axes of the world coordinate system are defined using the vanishing points for the two sets of parallel lines. The camera rotation is recovered based on the defined axes, and the reference patch of at least a portion of the image of the planar object is generated using the recovered camera rotation. The reference patch can then be used for vision based detection and tracking. The planar object may be detected in the image as sets of parallel lines or as a rectangle.10-25-2012
20120269389INFORMATION PROCESSING APPARATUS, METHOD OF CONTROLLING INFORMATION PROCESSING APPARATUS, AND STORAGE MEDIUM - An information processing apparatus comprising: an obtaining unit configured to obtain image data; a detection unit configured to detect an object from the image data; an attribute determination unit configured to determine an attribute indicating a characteristic of the object detected by the detection unit; a registration unit configured to register the image data in at least one of a plurality of dictionaries based on the attribute determined by the attribute determination unit; and an adding unit configured to add, when the image data is registered in not less than two dictionaries, link information concerning the image data registered in the other dictionary to the image data registered in one dictionary.10-25-2012
20090175502Methods for discriminating moving objects in motion image sequences - In an exemplary embodiment of the present invention, an automated, computerized method is provided for classifying pixel values in a motion sequence of images. According to a feature of the present invention, the method comprises the steps of determining spectral information relevant to the sequence of images, and utilizing the spectral information to classify a pixel as one of background, shadow and object.07-09-2009
20090175501Imaging control apparatus and imaging control method - An imaging control apparatus includes preset information management means, operation screen display control means, and drive control means. The preset information management means holds and manages unit preset information that includes positional information indicative of the position of an imaging field changing mechanism, which changes the imaging field of view of an imaging unit, together with reference image data. As a registration process performed in response to a registration instruction, the preset information management means produces and holds unit preset information including positional information indicative of the position of the imaging field changing mechanism when the instruction is issued, and reference image data related to that positional information and produced from an image signal obtained through imaging performed by the imaging unit at that time. The operation screen display control means controls display of an operation image used to select among preset items that correspond to the respective sets of held unit preset information, and displays, for each preset item, the reference image data contained in the corresponding unit preset information on the operation screen. The drive control means carries out drive control such that, when a preset item is selected and entered on the operation screen, the imaging field changing mechanism is positioned as indicated by the positional information in the corresponding unit preset information.07-09-2009
20090175499Systems and methods for identifying objects and providing information related to identified objects - Systems and methods for identifying an object and presenting additional information about the identified object are provided. The techniques of the present invention can allow the user to specify modes to help with identifying objects. Furthermore, the additional information can be provided with different levels of detail depending on user selection. Apparatus for presenting a user with a log of the identified objects is also provided. The user can customize the log by, for example, creating a multi-media album.07-09-2009
20090175498LOCATION MEASURING DEVICE AND METHOD - To realize high speed and high precision in a device and method of three-dimensional measurement by applying an estimating process to points corresponding to feature points in a plurality of motion frame images. With the device and method, location information is calculated through the processes of choosing a stereo pair, relative orientation, and bundle adjustment, using corresponding points of feature points extracted from the respective motion frame images; each process is made up of two stages. To the first process section (stages: …)07-09-2009
20090175497LOCATION MEASURING DEVICE AND METHOD - With an apparatus and method for measuring in three dimensions by applying an estimating process to points corresponding to feature points in a plurality of motion image frames, high speed and high accuracy are realized. The apparatus comprises a first track determining section (…)07-09-2009
20090175496Image processing device and method, recording medium, and program - The present invention relates to an image processing apparatus and method, a recording medium, and a program for providing reliable tracking of a tracking point. When a right eye …07-09-2009
20110142286DETECTIVE INFORMATION REGISTRATION DEVICE, TARGET OBJECT DETECTION DEVICE, ELECTRONIC DEVICE, METHOD OF CONTROLLING DETECTIVE INFORMATION REGISTRATION DEVICE, METHOD OF CONTROLLING TARGET OBJECT DETECTION DEVICE, CONTROL PROGRAM FOR DETECTIVE INFORMATION REGISTRATION DEVICE, AND CONTROL PROGRAM FOR TARGET OBJECT DETECTION DEVICE - A digital camera (…)06-16-2011
20110135147SYSTEM AND METHOD FOR OBSTACLE DETECTION USING FUSION OF COLOR SPACE INFORMATION - A method comprises receiving an image of the area, the image representing the area in a first color space; converting the received image to at least one second color space to produce a plurality of converted images, each converted image corresponding to one of a plurality of color sub-spaces in the at least one second color space; calculating upper and lower thresholds for at least two of the plurality of color sub-spaces; applying the calculated upper and lower thresholds to the converted images corresponding to the at least two color sub-spaces to segment the corresponding converted images; fusing the segmented converted images corresponding to the at least two color sub-spaces to segment the received image; and updating the segmentation of the received image based on edge density data in the received image.06-09-2011
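A toy sketch of the threshold-and-fuse idea above, using Python's `colorsys` for the color-space conversion and mean ± k·σ as the upper and lower thresholds per sub-space; the choice of HSV, the two channels used, and k are illustrative assumptions, not the patent's parameters:

```python
import colorsys
import statistics

def channel_mask(values, k=1.5):
    """True where a value lies within mean +/- k population standard deviations."""
    mu, sd = statistics.mean(values), statistics.pstdev(values)
    return [mu - k * sd <= v <= mu + k * sd for v in values]

def segment(rgb_pixels, k=1.5):
    """Threshold the S and V sub-spaces separately, then fuse the masks:
    a pixel is foreground when it falls outside the thresholds in either channel."""
    hsv = [colorsys.rgb_to_hsv(r / 255, g / 255, b / 255) for r, g, b in rgb_pixels]
    s_mask = channel_mask([s for _, s, _ in hsv], k)
    v_mask = channel_mask([v for _, _, v in hsv], k)
    return [not (sm and vm) for sm, vm in zip(s_mask, v_mask)]

# Mostly uniform green "ground" pixels plus one bright outlier (an obstacle).
pixels = [(30, 120, 30)] * 20 + [(250, 250, 250)]
fg = segment(pixels)
print(fg.count(True), fg[-1])  # -> 1 True
```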
20110164786CLOSE-UP SHOT DETECTING APPARATUS AND METHOD, ELECTRONIC APPARATUS AND COMPUTER PROGRAM - A close-up shot detection device includes motion detection element that calculates the amount of motion between at least two frames or fields constituting a video image every predetermined unit which is composed of one pixel or a plurality of adjacent pixels constituting the frame or field; binarization element that binarizes the calculated amount of motion; large-area specifying element that specifies, as a large area, a connected area in which the number of units is equal to or larger than a predetermined threshold, among connected areas which are obtained by connecting a predetermined number of units having the same binarized amount of motion; and close-up shot specifying element that, when at least one of preset criteria for the specified large area satisfies a predetermined condition, specifies a frame or field having the specified large area as a close-up shot. Consequently, a close-up shot can be easily and rapidly detected.07-07-2011
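The motion-binarization and large-connected-area test above can be sketched as follows, with single-pixel units, a 4-connected flood fill for the connected areas, and illustrative thresholds:

```python
def close_up_shot(frame_a, frame_b, motion_thresh=10, area_thresh=6):
    """Binarize per-pixel motion between two frames, then report whether any
    4-connected region of moving pixels reaches the large-area threshold."""
    h, w = len(frame_a), len(frame_a[0])
    moving = [[abs(frame_a[y][x] - frame_b[y][x]) >= motion_thresh
               for x in range(w)] for y in range(h)]
    seen = [[False] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            if moving[y][x] and not seen[y][x]:
                stack, area = [(y, x)], 0          # flood-fill one connected area
                seen[y][x] = True
                while stack:
                    cy, cx = stack.pop()
                    area += 1
                    for ny, nx in ((cy + 1, cx), (cy - 1, cx), (cy, cx + 1), (cy, cx - 1)):
                        if 0 <= ny < h and 0 <= nx < w and moving[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            stack.append((ny, nx))
                if area >= area_thresh:
                    return True
    return False

static = [[0] * 6 for _ in range(6)]
zoomed = [[80 if 1 <= y <= 3 and 1 <= x <= 3 else 0 for x in range(6)] for y in range(6)]
noisy = [[80 if (y, x) in ((0, 0), (5, 5)) else 0 for x in range(6)] for y in range(6)]
print(close_up_shot(static, zoomed), close_up_shot(static, noisy))  # -> True False
```

A large coherent moving area marks the frame as a close-up candidate, while scattered single-pixel motion does not.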
20110170746CAMERA BASED SENSING IN HANDHELD, MOBILE, GAMING OR OTHER DEVICES - Method and apparatus are disclosed to enable rapid TV camera and computer based sensing in many practical applications, including, but not limited to, handheld devices, cars, and video games. Several unique forms of social video games are disclosed.07-14-2011
20110188705IMAGE PROCESSING DEVICE, IMAGE PROCESSING METHOD, AND IMAGE PROCESSING PROGRAM - A frequency component of noise that is included in both images, and a frequency component of a first image that does not include that noise, are estimated based on first image data obtained by imaging, with an imaging device, a first image that includes a specific image pattern, and based on second image data obtained by imaging, with the imaging device, a second image that does not include the specific image pattern. Based on the estimated individual frequency components, weighting relative to frequencies is controlled when calculating a correlation between the first image data and third image data obtained by imaging a third image with the imaging device.08-04-2011
20100027844MOVING OBJECT RECOGNIZING APPARATUS - Provided is a moving object recognizing apparatus capable of effectively showing the reliability of the result of image processing involved in moving object recognition and issuing alarms in an appropriate manner when needed. The moving object recognizing apparatus includes a data acquisition unit (…)02-04-2010
20100027840System and method for bullet tracking and shooter localization - A system and method of processing infrared imagery to determine projectile trajectories and the locations of shooters with a high degree of accuracy. The method includes image processing infrared image data to reduce noise and identify streak-shaped image features, using a Kalman filter to estimate optimal projectile trajectories, updating the Kalman filter with new image data, determining projectile source locations by solving a combinatorial least-squares solution for all optimal projectile trajectories, and displaying all of the projectile source locations. Such a shooter-localization system is of great interest for military and law enforcement applications to determine sniper locations, especially in urban combat scenarios.02-04-2010
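The Kalman-filtering step above, reduced to one dimension for illustration: a constant-velocity filter over scalar position measurements. The full system estimates 3D streak trajectories and solves a combinatorial least-squares problem for the source; the state model and noise parameters here are invented for the sketch:

```python
def kalman_cv(measurements, dt=1.0, q=0.01, r=1.0):
    """Constant-velocity Kalman filter over scalar position measurements;
    returns the final filtered (position, velocity)."""
    x, v = measurements[0], 0.0          # state estimate
    P = [[1.0, 0.0], [0.0, 1.0]]         # state covariance
    for z in measurements[1:]:
        # Predict: x' = F x with F = [[1, dt], [0, 1]]; P' = F P F^T + Q.
        x, v = x + dt * v, v
        P = [[P[0][0] + dt * (P[0][1] + P[1][0]) + dt * dt * P[1][1] + q,
              P[0][1] + dt * P[1][1]],
             [P[1][0] + dt * P[1][1], P[1][1] + q]]
        # Update with position measurement z (H = [1, 0]).
        S = P[0][0] + r
        k0, k1 = P[0][0] / S, P[1][0] / S
        resid = z - x
        x, v = x + k0 * resid, v + k1 * resid
        P = [[(1 - k0) * P[0][0], (1 - k0) * P[0][1]],
             [P[1][0] - k1 * P[0][0], P[1][1] - k1 * P[0][1]]]
    return x, v

pos, vel = kalman_cv([2.0 * t for t in range(20)])  # noiseless track, 2 units/frame
```

On consistent data the estimated velocity converges to the true slope, which is what lets the filter extrapolate a projectile path back toward its source.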
20100027839SYSTEM AND METHOD FOR TRACKING MOVEMENT OF JOINTS - A first image is obtained. At least one moving object indicated by the at least one image is selected. At least one joint that is associated with the at least one moving object is identified. At least one second image including the at least one moving object with the at least one joint is obtained and the movement of the at least one joint is tracked in a three-dimensional space.02-04-2010
20100021009METHOD FOR MOVING TARGETS TRACKING AND NUMBER COUNTING - The invention discloses a method for moving target tracking and number counting, comprising the steps of: a) continuously acquiring video images comprising moving targets; b) acquiring the video image of a current frame, and pre-processing the video image of the current frame; c) segmenting the target region of the processed image, and extracting the target region; d) matching the target region of the current frame obtained in step c) with that of the previous frame based on online feature selection to establish a match tracking link; and e) determining the number of targets corresponding to each match tracking link based on the target region tracks recorded by the match tracking link. Under normal application conditions, the invention can solve the problem of low precision in the number statistics caused by a harsh environment, such as a spatially very non-uniform illumination distribution, complicated changes over time, and significant changes in people's posture as they pass by.01-28-2010
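Step d), matching the current frame's target regions against the previous frame's to build match tracking links, might be sketched with greedy intersection-over-union matching; the IoU criterion and threshold are illustrative stand-ins for the patent's online feature selection:

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x0, y0, x1, y1)."""
    ix0, iy0 = max(a[0], b[0]), max(a[1], b[1])
    ix1, iy1 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix1 - ix0) * max(0, iy1 - iy0)
    area = lambda box: (box[2] - box[0]) * (box[3] - box[1])
    return inter / (area(a) + area(b) - inter) if inter else 0.0

def match_regions(prev_boxes, curr_boxes, min_iou=0.3):
    """Greedy one-to-one matching of current regions to previous regions,
    highest overlap first; returns (prev_index, curr_index) links."""
    pairs = sorted(((iou(p, c), i, j)
                    for i, p in enumerate(prev_boxes)
                    for j, c in enumerate(curr_boxes)), reverse=True)
    used_p, used_c, links = set(), set(), []
    for score, i, j in pairs:
        if score >= min_iou and i not in used_p and j not in used_c:
            links.append((i, j))
            used_p.add(i)
            used_c.add(j)
    return sorted(links)

prev = [(0, 0, 10, 10), (20, 0, 30, 10)]   # two targets in the previous frame
curr = [(21, 0, 31, 10), (1, 0, 11, 10)]   # both moved slightly, order swapped
print(match_regions(prev, curr))  # -> [(0, 1), (1, 0)]
```

Accumulating such links frame after frame yields the region tracks from which step e) counts targets.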
20100021008System and Method for Face Tracking - Improved face tracking is provided during determination of an image by an imaging device using a low power face tracking unit. In one embodiment, image data associated with a frame and one or more face detection windows from a face detection unit may be received by the face tracking unit. The face detection windows are associated with the image data of the frame. A face list may be determined based on the face detection windows and one or more faces may be selected from the face list to generate an output face list. The output face list may then be provided to a processor of an imaging device for the detection of an image based on at least one of coordinate and scale values of the one or more faces on the output face list.01-28-2010
20100021007RECONSTRUCTION OF DATA PAGE FROM IMAGED DATA - The present invention relates to an electronic device (01-28-2010
20100021005Time Managing Device of a Computer System and Related Method - A time managing device of a computer system including a graphic user interface capable of displaying application windows is disclosed. The time managing device includes an image capturing device, a sight-line detecting unit and a reminding unit. The image capturing device is used for capturing a user image corresponding to a user. The sight-line detecting unit is coupled to the image capturing device and used for analyzing a user sight-line state according to the user image to generate a sight-line detection result. The reminding unit is coupled to the sight-line detecting unit and the graphic user interface, and used for performing a reminder to a predetermined application window displayed on the graphic user interface according to a predetermined time and the sight-line detection result.01-28-2010
20110216943IMAGE-CAPTURING APPARATUS AND METHOD, EXPRESSION EVALUATION APPARATUS, AND PROGRAM - An image-capturing apparatus for capturing an image by using a solid-state image-capturing device may include a face detector configured to detect a face of a human being on the basis of an image signal in a period until an image signal obtained by image capturing is recorded on a recording medium; an expression evaluation section configured to evaluate the expression of the detected face and to compute an expression evaluation value indicating the degree to which the detected face is close to a specific expression in relation to expressions other than the specific expression; and a notification section configured to notify notification information corresponding to the computed expression evaluation value to an image-captured person.09-08-2011
20110216942IMAGE-CAPTURING APPARATUS AND METHOD, EXPRESSION EVALUATION APPARATUS, AND PROGRAM - An image-capturing apparatus for capturing an image by using a solid-state image-capturing device may include a face detector configured to detect a face of a human being on the basis of an image signal in a period until an image signal obtained by image capturing is recorded on a recording medium; an expression evaluation section configured to evaluate the expression of the detected face and to compute an expression evaluation value indicating the degree to which the detected face is close to a specific expression in relation to expressions other than the specific expression; and a notification section configured to notify notification information corresponding to the computed expression evaluation value to an image-captured person.09-08-2011
20110216940TARGET DETECTION DEVICE AND TARGET DETECTION METHOD - Disclosed is a target detection device which can match a moving object in a captured image to an identifier when a plurality of identifiers begin to be received in a short time, or when the number of received identifiers is larger than the number of detected position histories. The device (09-08-2011
20110216939APPARATUS AND METHOD FOR TRACKING TARGET - A target tracking apparatus and method according to an exemplary embodiment of the present invention may quickly and accurately perform target detection and tracking in a photographed image given as consecutive frames by acquiring at least one target candidate image most similar to a photographed image of a previous frame among prepared reference target images, determining one of the target candidate images as a target confirmation image based on the photographed image, calculating a homography between the determined target confirmation image and the photographed image, searching the photographed image of the previous frame for feature points according to the calculated homography, and tracking the inter-frame change of the found feature points from the previous frame to the current frame.09-08-2011
20110216938Apparatus for detecting lane-marking on road - The image processing ECU periodically acquires road-surface images and extracts edge points in each acquired road-surface image. Subsequently, the ECU determines the operating mode and extracts the edge line when the operating mode is either a dotted mode or a frame-accumulation mode. The edge points are transformed, e.g. by a Hough transform, to extract the edge line that most frequently passes through the edge points. The extracted edge line denotes the lane marking. The ECU outputs a signal to activate a buzzer alert when it determines that the vehicle may depart from the lane.09-08-2011
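The Hough-transform step named in the abstract above can be illustrated with a minimal accumulator over the (theta, rho) line parameterization. This is a generic textbook sketch, not the ECU's implementation; the bin counts and step sizes are assumptions.

```python
# Minimal Hough sketch: vote each edge point into (theta, rho) bins and
# return the line with the most votes. Discretization values are assumed.
import math
from collections import Counter

def hough_dominant_line(points, n_theta=180, rho_step=1.0):
    """Return (theta, rho) of the line passing through the most edge points."""
    votes = Counter()
    for x, y in points:
        for i in range(n_theta):
            theta = math.pi * i / n_theta
            rho = x * math.cos(theta) + y * math.sin(theta)
            votes[(i, round(rho / rho_step))] += 1
    (i, rho_bin), _ = votes.most_common(1)[0]
    return math.pi * i / n_theta, rho_bin * rho_step

# Collinear points on the vertical line x = 3 (plus one outlier).
pts = [(3, y) for y in range(10)] + [(7, 1)]
theta, rho = hough_dominant_line(pts)
```

The ten collinear points out-vote the outlier, so the returned line is (approximately) the vertical line x = 3.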
20090074248GESTURE-CONTROLLED INTERFACES FOR SELF-SERVICE MACHINES AND OTHER APPLICATIONS - A gesture recognition interface for use in controlling self-service machines and other devices is disclosed. A gesture is defined as motions and kinematic poses generated by humans, animals, or machines. Specific body features are tracked, and static and motion gestures are interpreted. Motion gestures are defined as a family of parametrically delimited oscillatory motions, modeled as a linear-in-parameters dynamic system with added geometric constraints to allow for real-time recognition using a small amount of memory and processing time. A linear least squares method is preferably used to determine the parameters which represent each gesture. Feature position measurements are used in conjunction with a bank of predictor bins seeded with the gesture parameters, and the system determines which bin best fits the observed motion. Recognizing static pose gestures is preferably performed by localizing the body/object from the rest of the image, describing that object, and identifying that description. The disclosure details methods for gesture recognition, as well as the overall architecture for using gesture recognition to control devices, including self-service machines.03-19-2009
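The "linear-in-parameters" least-squares fit mentioned in the abstract above can be sketched with a toy oscillator model. The specific model form x[k+1] = p1*x[k] + p2*x[k-1] is an assumption chosen for illustration, not the patent's parameterization.

```python
# Hedged sketch: least-squares fit of a linear-in-parameters oscillator
# model to an observed feature trajectory. The model form is assumed.
import numpy as np

def fit_oscillator(xs):
    """Least-squares estimate of (p1, p2) for x[k+1] = p1*x[k] + p2*x[k-1]."""
    A = np.column_stack([xs[1:-1], xs[:-2]])   # regressors x[k], x[k-1]
    b = xs[2:]                                 # targets x[k+1]
    params, *_ = np.linalg.lstsq(A, b, rcond=None)
    return params

t = np.arange(50)
xs = np.cos(0.3 * t)   # pure oscillation: satisfies p1 = 2*cos(0.3), p2 = -1
p1, p2 = fit_oscillator(xs)
```

For a pure cosine trajectory the recovered parameters match the trigonometric identity cos(w(k+1)) = 2cos(w)cos(wk) - cos(w(k-1)); in the patent's setting such parameters would seed the bank of predictor bins.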
20120148093Blob Representation in Video Processing - A method of processing a video sequence is provided that includes receiving a frame of the video sequence, identifying a plurality of blobs in the frame, computing at least one interior point of each blob of the plurality of blobs, and using the interior points in further processing of the video sequence. The interior points may be used, for example, in object tracking.06-14-2012
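One simple way to compute an interior point of a blob, as described in the abstract above, is to take the blob pixel nearest the centroid; unlike the centroid itself, it stays inside even for concave blobs. This particular choice is an illustrative assumption, not necessarily the patented computation.

```python
# Assumed approach: the blob pixel nearest the centroid is always a member
# of the blob, so it serves as an interior point even for concave shapes.
def interior_point(pixels):
    cx = sum(x for x, _ in pixels) / len(pixels)
    cy = sum(y for _, y in pixels) / len(pixels)
    return min(pixels, key=lambda p: (p[0] - cx) ** 2 + (p[1] - cy) ** 2)

# A U-shaped (concave) blob whose centroid falls outside the blob pixels.
blob = [(0, y) for y in range(3)] + [(2, y) for y in range(3)] + [(1, 0)]
pt = interior_point(blob)
```

Such interior points give a stable anchor for tracking a blob across frames, since they move with the blob rather than jumping outside it.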
20110052001AUTOMATIC ERROR DETECTION FOR INVENTORY TRACKING AND MANAGEMENT SYSTEMS USED AT A SHIPPING CONTAINER YARD - A method automatically detects errors in a container inventory database associated with a container inventory tracking system of a container storage facility. A processor in the inventory tracking system performs a method that: obtains a first data record, identifies an event (e.g., pickup, drop-off, or movement) associated with the first record, provides a list of error types based on the identified event, and determines whether a data error has occurred through a checking process. In each of the checking steps, the processor selects an error type from the list of error types, determines a search criterion based on the selected error type and the first data record, queries the database using the search criterion, compares query results with the first data record to detect data conflicts between them, and upon the detection of the data conflicts, reports that a data error of the selected error type has been detected.03-03-2011
20120308079IMAGE PROCESSING DEVICE AND DROWSINESS ASSESSMENT DEVICE - An object of the present invention is to reduce false detection of an eyelid from a face image. According to the present invention, it is determined whether the amount of the change in the position of an eyelid outline candidate line during blinking matches the normal movement of an eyelid. When it is determined that the amount of the change in the position of the eyelid outline candidate line does not match the normal movement of the eyelid during blinking, the eyelid outline candidate line is not set as an eyelid outline. Therefore, it is possible to reduce false detection of the eyelid from the face image.12-06-2012
20120308081POSITION INFORMATION ACQUIRING APPARATUS, POSITION INFORMATION ACQUIRING APPARATUS CONTROL METHOD, AND STORAGE MEDIUM - A position information acquiring apparatus comprises: a first acquiring unit configured to acquire first position information of the position information acquiring apparatus upon image capturing; a first storage unit configured to store image data generated by the image capturing and the first position information in a memory in association with each other; a second acquiring unit configured to acquire second position information of the position information acquiring apparatus upon image capturing; and a second storage unit configured to store the second position information in the memory in association with the image data when the second position information higher in accuracy than the first position information is acquired after the first storage unit stores the image data and the first position information in association with each other.12-06-2012
20120308082RECOGNITION OBJECT DETECTING APPARATUS - A recognition object detecting apparatus is provided which includes an imaging unit which generates image data representing a taken image, and a detection unit which detects a recognition object from the image represented by the image data. The imaging unit has a characteristic in which a relation between luminance and output pixel values varies depending on a luminance range. The detection unit binarizes the output pixel values of the image represented by the image data by using a plurality of threshold values to generate a plurality of binary images, and detects the recognition object based on the plurality of binary images.12-06-2012
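The multi-threshold binarization in the abstract above can be sketched in a few lines: the same grayscale image is thresholded at several levels, and each resulting binary image can then be scanned for the recognition object. The threshold values here are assumptions.

```python
# Minimal sketch of multi-threshold binarization; thresholds are assumed.
def binarize(image, threshold):
    """Return a binary image: 1 where the pixel value exceeds the threshold."""
    return [[1 if px > threshold else 0 for px in row] for row in image]

image = [[10, 200, 90],
         [40, 120, 250]]
binaries = [binarize(image, t) for t in (50, 150)]
```

Using several thresholds compensates for the imaging unit's luminance-dependent response: an object washed out at one threshold may still separate cleanly at another.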
20120308080IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, AND PROGRAM - An image processing apparatus includes a motion vector detector configured to detect, based on a first image and a second image different from the first image among a plurality of images, motion vectors representing a movement of an object on the second image with respect to an object on the first image; a first calculation unit configured to calculate an acceleration of the object on the image based on the motion vectors; a second calculation unit configured to calculate an object position representing a position of an object on an interpolation image interpolated between the images adjacent in a time direction among the images based on the acceleration, and an interpolation processing unit configured to interpolate the interpolation image on which the object is drawn at the object position.12-06-2012
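The interpolation scheme in the abstract above can be sketched as follows: acceleration is estimated from the change between two successive motion vectors, and the object position on an interpolated frame is then predicted under constant acceleration. The exact formulation is an assumption for illustration.

```python
# Hedged sketch: acceleration from successive motion vectors, then a
# constant-acceleration prediction of the object position on an
# interpolation frame at fraction t between the two source frames.
def interpolated_position(p0, v_prev, v_curr, t=0.5):
    """Position on the interpolation frame at fraction t between frames."""
    ax = v_curr[0] - v_prev[0]   # per-frame acceleration from the change
    ay = v_curr[1] - v_prev[1]   # in the two motion vectors
    x = p0[0] + v_curr[0] * t + 0.5 * ax * t * t
    y = p0[1] + v_curr[1] * t + 0.5 * ay * t * t
    return x, y

# Object at (10, 20), accelerating rightward: previous vector (2, 0),
# current vector (4, 0); predict its position on the half-way frame.
pos = interpolated_position((10.0, 20.0), (2.0, 0.0), (4.0, 0.0))
```

Accounting for acceleration places the interpolated object slightly ahead of where a constant-velocity prediction would put it, which is the point of the second calculation unit in the abstract.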
20120308077Computer-Vision-Assisted Location Check-In - In one embodiment, an uploaded multimedia object comprising a photo image or video is subjected to computer vision algorithms to detect and isolate objects within the multimedia object, and the isolated object is searched against a photographic location database containing images of a plurality of locations. Upon detecting a matching object, the location information associated with the photograph in the database containing the matching object may be leveraged to automatically check the user in to the associated location.12-06-2012
20120308078STORAGE MEDIUM STORING IMAGE PROCESSING PROGRAM, IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD AND IMAGE PROCESSING SYSTEM - An image processing apparatus writes a virtual space image obtained by imaging a virtual space in which objects are arranged from a virtual camera to an output area. When a pointer image representing a positional relationship between a referential position and an arrangement position of the object is depicted on the virtual space image stored in the output area, the pointer image to be depicted is changed in correspondence with conditions, such as the height of the virtual camera and the attribute of the object.12-06-2012
20120308076APPARATUS AND METHODS FOR TEMPORALLY PROXIMATE OBJECT RECOGNITION - Object recognition apparatus and methods useful for extracting information from an input signal. In one embodiment, the input signal is representative of an element of an image, and the extracted information is encoded into patterns of pulses. The patterns of pulses are directed via transmission channels to a plurality of detector nodes configured to generate an output pulse upon detecting an object of interest. Upon detecting a particular object, a given detector node elevates its sensitivity to that particular object when processing subsequent inputs. In one implementation, one or more of the detector nodes are also configured to prevent adjacent detector nodes from generating detection signals in response to the same object representation. The object recognition apparatus modulates properties of the transmission channels by promoting contributions from channels carrying information used in object recognition.12-06-2012
20120148103METHOD AND SYSTEM FOR AUTOMATIC OBJECT DETECTION AND SUBSEQUENT OBJECT TRACKING IN ACCORDANCE WITH THE OBJECT SHAPE - A method and system for automatic object detection and subsequent object tracking in accordance with the object shape in digital video systems having at least one camera for recording and transmitting video sequences. In accordance with the method and system, an object detection algorithm based on a Gaussian mixture model and expanded object tracking based on Mean-Shift are combined with each other in object detection. The object detection is expanded in accordance with a model of the background by improved removal of shadows, the binary mask generated in this way is used to create an asymmetric filter core, and then the actual algorithm for the shape-adaptive object tracking, expanded by a segmentation step for adapting the shape, is initialized, and therefore a determination at least of the object shape or object contour or the orientation of the object in space is made possible.06-14-2012
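The Mean-Shift component named in the abstract above can be illustrated in one dimension: the tracking window repeatedly moves to the weighted mean of the foreground-mask points inside it until it converges on the object. This is a simplified 1-D illustration with assumed data, not the patented shape-adaptive algorithm.

```python
# Simplified 1-D Mean-Shift sketch over foreground-mask points; the
# window radius and the point set are illustrative assumptions.
def mean_shift_step(mask_points, centre, radius):
    """One Mean-Shift iteration: move the centre to the local mean."""
    inside = [p for p in mask_points if abs(p - centre) <= radius]
    return sum(inside) / len(inside) if inside else centre

points = [4, 5, 6, 7, 8, 20]   # foreground pixels; 20 is a stray detection
c = 5.0
for _ in range(10):
    c = mean_shift_step(points, c, radius=3.0)
```

The window converges on the dense cluster at 4..8 and ignores the stray point at 20; the patented method further adapts the window to the object shape via an asymmetric filter core and a segmentation step.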
20090041300HEADLIGHT SYSTEM FOR VEHICLES, PREFERABLY FOR MOTOR VEHICLES - Headlight system for vehicles, preferably for motor vehicles02-12-2009
20090103775Multi-Tracking of Video Objects - An inventive method for video object tracking includes the steps of selecting an object; choosing an object type for the object, and enabling one of multiple object tracking processes responsive to the object type chosen. In a preferred embodiment selecting the object includes one of segmenting the object by using a region, selecting points on the boundary of an object, aggregating regions or combining a selected region and selected points on a boundary of an object. The object tracking processes can be expanded to include tracking processes adapted to newly created object types.04-23-2009
20100278385FACIAL EXPRESSION RECOGNITION APPARATUS AND FACIAL EXPRESSION RECOGNITION METHOD THEREOF - A facial expression recognition apparatus and a facial expression recognition method thereof are provided. The facial expression recognition apparatus comprises a gray image generating unit, a face edge detection unit, a motion skin extraction unit, a face contour generating unit and a facial expression recognition unit. The gray image generating unit generates a gray image according to an original image. The face edge detection unit outputs a face edge detection result according to the gray image. The motion skin extraction unit generates a motion skin extraction result according to the original image, and generates a face and background division result according to the motion skin extraction result. The face contour generating unit outputs a face contour according to the gray image, the face edge detection result and the face and background division result. The facial expression recognition unit outputs a facial expression recognition result according to the face contour.11-04-2010
20110150278INFORMATION PROCESSING APPARATUS, PROCESSING METHOD THEREOF, AND NON-TRANSITORY STORAGE MEDIUM - An information processing apparatus comprising: a storage unit configured to store image features of multiple targets and mutual relationship information of the multiple targets; an input unit configured to input an image; a detection unit configured to detect a region of a target from the input image; an identification unit configured to, based on the stored image features and image features of the detected region, identify the target of the region; and an estimation unit configured to, in the case where both a first region in which a target was identified and a second region in which a target could not be identified are present in the input image, estimate a candidate for the target in the second region based on the mutual relationship information and the target in the first region.06-23-2011
20110150271MOTION DETECTION USING DEPTH IMAGES - A sensor system creates a sequence of depth images that are used to detect and track motion of objects within range of the sensor system. A reference image is created and updated based on a moving average (or other function) of a set of depth images. A new depth image is compared to the reference image to create a motion image, which is an image file (or other data structure) with data representing motion. The new depth image is also used to update the reference image. The data in the motion image is grouped and associated with one or more objects being tracked. The tracking of the objects is updated by the grouped data in the motion image. The new positions of the objects are used to update an application. For example, a video game system will update the position of images displayed in the video based on the new positions of the objects. In one implementation, avatars can be moved based on movement of the user in front of a camera.06-23-2011
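The reference-image scheme in the abstract above can be sketched with an exponential moving average: each new depth image is blended into the reference, and pixels that differ enough from the reference are marked in the motion image. The blend factor and motion threshold are assumed values.

```python
# Minimal sketch (assumed parameters): the reference is an exponential
# moving average of depth images; the motion image marks changed pixels.
def update_reference(reference, depth, alpha=0.2):
    """Blend the new depth image into the moving-average reference."""
    return [[(1 - alpha) * r + alpha * d for r, d in zip(rrow, drow)]
            for rrow, drow in zip(reference, depth)]

def motion_image(reference, depth, threshold=10.0):
    """1 where the new depth differs enough from the reference."""
    return [[1 if abs(d - r) > threshold else 0 for r, d in zip(rrow, drow)]
            for rrow, drow in zip(reference, depth)]

reference = [[100.0, 100.0], [100.0, 100.0]]
depth = [[100.0, 160.0], [100.0, 100.0]]   # one pixel changed depth
motion = motion_image(reference, depth)
reference = update_reference(reference, depth)
```

Updating the reference after each comparison lets slow scene changes be absorbed into the background while fast depth changes keep registering as motion.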
20090087028Hand Washing Monitoring System - A hand washing monitoring system (04-02-2009
20120121129IMAGE PROCESSING APPARATUS - An image processing apparatus includes a first searcher. The first searcher searches for, from a designated image, one or at least two first partial images each of which represents a face portion. A second searcher searches for, from the designated image, one or at least two second partial images each of which represents a rear of a head. A first setter sets a region corresponding to the one or at least two first partial images detected by the first searcher as a reference region for an image quality adjustment. A second setter sets a region different from a region corresponding to the one or at least two second partial images detected by the second searcher as the reference region. A start-up controller selectively starts up the first setter and the second setter so that the first setter has priority over the second setter.05-17-2012
20120121128OBJECT TRACKING SYSTEM - The present invention provides a system, method and computer program product for tracking the movement of a plurality of targets, wherein the detected movement is used for the modification of an interactive environment. The system comprises one or more imaging devices configured to capture two or more images of at least some of a plurality of target identifiers with one or more of a plurality of targets. The system further comprises a processing module which is operatively coupled to the one or more imaging devices, and configured to receive and process the two or more images. During the processing a first location parameter and a second location parameter for a predetermined region are determined. The one or more movement parameters are at least in part determined from the first and second location parameters and used for the modification of the interactive environment.05-17-2012
20110305366Adaptive Action Detection - Described is providing an action model (classifier) for automatically detecting actions in video clips, in which unlabeled data of a target dataset is used to adaptively train the action model based upon similar actions in a labeled source dataset. The target dataset comprising unlabeled video data is processed into a background model. The action model is generated from the background model using a source dataset comprising labeled data for an action of interest. The action model is iteratively refined, generally by fixing a current instance of the action model and using the current instance of the action model to search for a set of detected regions (subvolumes), and then fixing the set of subvolumes and updating the current instance of the action model based upon the set of subvolumes, and so on, for a plurality of iterations.12-15-2011
20110305369PORTABLE WIRELESS MOBILE DEVICE MOTION CAPTURE AND ANALYSIS SYSTEM AND METHOD - Portable wireless mobile device motion capture and analysis system and method configured to display motion capture/analysis data on a mobile device. The system obtains data from motion capture elements and analyzes the data. It enables unique displays associated with the user, such as 3D overlays onto images of the user to visually depict the captured motion data. Ratings associated with the captured motion can also be displayed. Predicted ball flight path data can be calculated and displayed. Data shown on a timeline can also be displayed to show the relative peaks of velocity for various parts of the user's body. Based on the display of data, the user can determine the equipment that fits best and immediately purchase it via the mobile device. Custom equipment may be ordered through an interface on the mobile device from a vendor that can assemble-to-order custom-built equipment and ship it. The system includes active and passive golf shot count capabilities.12-15-2011
20110305368STORAGE MEDIUM HAVING IMAGE RECOGNITION PROGRAM STORED THEREIN, IMAGE RECOGNITION APPARATUS, IMAGE RECOGNITION SYSTEM, AND IMAGE RECOGNITION METHOD - A game apparatus detects a predetermined image object including a first graphic pattern with a plurality of inner graphic patterns drawn therein from a captured image captured by an image-capturing section. The game apparatus first obtains the captured image captured by the image-capturing section, and detects an area of the first graphic pattern from the captured image. Then, the game apparatus detects the plurality of inner graphic patterns from within the detected area, and calculates center positions of the inner graphic patterns so as to detect the position of the predetermined image object.12-15-2011
20110305367STORAGE MEDIUM HAVING IMAGE RECOGNITION PROGRAM STORED THEREIN, IMAGE RECOGNITION APPARATUS, IMAGE RECOGNITION SYSTEM, AND IMAGE RECOGNITION METHOD - A game apparatus obtains a captured image captured by a camera. First, the game apparatus detects an object area of the captured image that includes a predetermined image object based on pixel values obtained at a first pitch across the captured image. Then, the game apparatus detects a predetermined image object from an image of the object area based on pixel values obtained at a second pitch smaller than the first pitch across the object area of the captured image.12-15-2011
20110142284Method and Apparatus for Acquiring Accurate Background Infrared Signature Data on Moving Targets - A method for measuring an infrared signature of a moving target includes: tracking the moving target with a tracking system along a path from a start position to an end position, measuring infrared radiation data of the moving target along the path, repositioning the tracking system to the start position, retracing the path to measure the infrared radiation data of the background, and determining the infrared signature of the moving target by comparing the infrared radiation data of the moving object with the infrared radiation data of the background without the moving object.06-16-2011
20130170699Techniques for Context-Enhanced Confidence Adjustment for Gesture - Techniques are provided for a gesture device to detect a series of gestures performed by a user and execute corresponding electronic commands associated with the gestures. The gesture device detects a gesture constituting movements from a user in three-dimensional space and generates a confidence score value for the gesture. The gesture device selects an electronic command associated with the gesture and compares the electronic command with a prior electronic command associated with a prior gesture previously detected by the gesture device in order to determine a compatibility metric between the electronic command and the prior electronic command. The gesture device then adjusts the confidence score value based on the compatibility metric to obtain a modified confidence score value. The electronic command is executed by the gesture device when the modified confidence score value is greater than a predetermined threshold confidence score value.07-04-2013
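The confidence adjustment in the abstract above can be sketched as scaling the raw gesture confidence by a compatibility metric between the current command and the previously executed one, then comparing against a threshold. The compatibility table, command names, and threshold here are all illustrative assumptions.

```python
# Hypothetical sketch of context-enhanced confidence adjustment; the
# compatibility values, command names, and threshold are assumed.
COMPATIBILITY = {
    ("volume_up", "volume_up"): 1.2,   # repeating a command is plausible
    ("volume_up", "power_off"): 0.5,   # an unlikely follow-up
}

def should_execute(confidence, command, prior_command, threshold=0.6):
    """Adjust the confidence by context and decide whether to execute."""
    metric = COMPATIBILITY.get((prior_command, command), 1.0)
    adjusted = min(1.0, confidence * metric)
    return adjusted > threshold, adjusted

ok_repeat, c1 = should_execute(0.55, "volume_up", "volume_up")
ok_odd, c2 = should_execute(0.55, "power_off", "volume_up")
```

The same raw confidence (0.55) clears the threshold when the gesture is a plausible follow-up and fails it when the command sequence is incompatible, which is the behavior the abstract describes.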
20130170704IMAGE PROCESSING APPARATUS AND IMAGE MANAGEMENT METHOD - Provided is an image processing apparatus comprising: an acquisition unit that acquires location information indicating a photographed point and date/time information indicating a photographed date/time for each of a plurality of images representing image data obtained by photographing; a determination unit that determines whether the photographed point of each image is a main photographed point or a sub-photographed point on the basis of the location information and the date/time information; and a recording unit that, if the photographed point of the image is the main photographed point, records information indicating the location of the main photographed point in association with the image data of the image, and that, if the photographed point of the image is the sub-photographed point, records information indicating the locations of the sub-photographed point and of the main photographed point in association with the image data of the image.07-04-2013
20130170706GUIDANCE DEVICE, GUIDANCE METHOD, AND GUIDANCE PROGRAM - Image recognition is performed based on a surrounding image and a recognition template used for the image recognition of a marker object, and a recognition confidence level used for determining if the marker object can be recognized in the surrounding image is calculated. A determination is made as to whether the recognition confidence level has increased as compared with the recognition confidence level calculated based on the surrounding image acquired at the guidance output point. If it is determined that the recognition confidence level has increased, the image of the marker object, generated based on the surrounding image acquired at the guidance output point, is stored as a new template to be used for the image recognition of the marker object. This increases the possibility of recognizing the marker object based on the new template, thus increasing the recognition accuracy of the marker object.07-04-2013
20100290668LONG DISTANCE MULTIMODAL BIOMETRIC SYSTEM AND METHOD - A system for multimodal biometric identification has a first imaging system that detects one or more subjects in a first field of view, including a targeted subject having a first biometric characteristic and a second biometric characteristic; a second imaging system that captures a first image of the first biometric characteristic according to first photons, where the first biometric characteristic is positioned in a second field of view smaller than the first field of view, and the first image includes first data for biometric identification; a third imaging system that captures a second image of the second biometric characteristic according to second photons, where the second biometric characteristic is positioned in a third field of view which is smaller than the first and second fields of view, and the second image includes second data for biometric identification. At least one active illumination source emits the second photons.11-18-2010
20100232648IMAGING APPARATUS, MOBILE BODY DETECTING METHOD, MOBILE BODY DETECTING CIRCUIT AND PROGRAM - An imaging apparatus includes: a moving body detecting section that detects whether an object in an image is a moving body which makes a motion between frames; and an attribute determining section that determines a similarity indicating whether or not the object detected as the moving body is similar among a plurality of frames, and a change in luminance of the object based on a texture and luminance of the object, and, when determining that the change is a light/shadow-originated change in luminance, adds attribute information indicating the light/shadow-originated change in luminance to the object detected as the moving body.09-16-2010
20120039508TARGET DETECTING METHOD AND APPARATUS - Target detecting method and apparatus are disclosed. In the target detecting method, edges in a first direction in an input image may be detected to obtain an edge image comprising a plurality of edges in the first direction; and one or more candidate targets may be generated according to the plurality of edges in the first direction, a region between any two of the plurality of edges in the first direction in the input image corresponding to one of the candidate targets.02-16-2012
20120039506METHOD FOR IDENTIFYING AN OBJECT IN A VIDEO ARCHIVE - The invention concerns a method for identifying an object in a video archive including multiple images acquired in a network of cameras including a phase of characterisation of the object to be identified and a phase of searching for the said object in the said archive, where the said characterisation phase consists in defining for the said object at least one semantic characteristic capable of being extracted, even in low-resolution images, from the said video archive.02-16-2012
20100177929ENHANCED SAFETY DURING LASER PROJECTION - The present invention is directed to systems and methods that provide enhanced eye safety for image projection systems. In particular, the instant invention provides enhanced eye safety for long throw laser projection systems.07-15-2010
20120148095IMAGE PROCESSING APPARATUS - An image processing apparatus includes a detector that detects one or more object images, each of which is coincident with a dictionary image, from each of K (K: an integer of two or more) continuously shot images. A classifier executes, on the K continuously shot images, a process of classifying the detected object images according to a common object. A determiner determines an attribute of up to K object images belonging to each of the one or more classified object image groups. A first excluder excludes, based on the determined result, any continuously shot image satisfying an error condition from the K continuously shot images. A selector selects a part of the one or more continuously shot images remaining after the exclusion as a specific image.06-14-2012
20120148096APPARATUS AND METHOD FOR CONTROLLING IMAGE USING MOBILE PROJECTOR - Disclosed is an image control system using a mobile projector, including a first apparatus configured to determine, when a first picture is projected and a user input for a specific image is received, whether the projected first picture is projected onto the specific image, and if so, control the specific image to perform an operation corresponding to the user input, and a second apparatus configured to, receive the user input from the first apparatus, determine whether the first picture is projected onto the specific image, and if so, perform an operation corresponding to the user input.06-14-2012
20120148102MOBILE BODY TRACK IDENTIFICATION SYSTEM - There is provided a mobile body track identification system that determines with high precision which mobile body matches which detected track, irrespective of frequent interruption of the tracks of mobile bodies detected in a tracking area. Hypotheses are generated by use of sets of track-coupling candidate/identification pairs, each of which combines a track-coupling candidate, joining tracks of a mobile body detected over a predetermined past period, with an identification of the mobile body, and satisfies a predetermined condition. Next, identification likelihoods are calculated as the likelihoods of detecting identifications in connection with the tracks indicated by the track-coupling candidates included in the pairs ascribed to each selected hypothesis. The identification likelihoods are integrated per track-coupling candidate/identification pair, thus calculating an identification likelihood for the selected hypothesis. A most-probable hypothesis is estimated based on the identification likelihoods of the hypotheses.06-14-2012
20120148100POSITION AND ORIENTATION MEASUREMENT DEVICE AND POSITION AND ORIENTATION MEASUREMENT METHOD - A position and orientation measurement device includes a grayscale image input unit that inputs a grayscale image of an object, a distance image input unit that inputs a distance image of the object, an approximate position and orientation input unit that inputs an approximate position and orientation of the object with respect to the position and orientation measurement device, and a position and orientation calculator that updates the approximate position and orientation. The position and orientation calculator calculates a first position and orientation so that an object image on an image plane and a projection image of the three-dimensional shape model overlap each other, associates the three-dimensional shape model with the image features of the grayscale image and the distance image, and calculates a second position and orientation on the basis of a result of the association.06-14-2012
20120148098ELECTRONIC CAMERA - An electronic camera includes an imager that outputs an electronic image corresponding to an optical image captured on an imaging surface. A first generator generates a first notification forward of the imaging surface. A searcher searches the electronic image outputted from the imager for one or more face images each having a size exceeding a reference. A controller controls the generation manner of the first generator with reference to an attribute of each of the one or more face images detected by the searcher.06-14-2012
201201480973D MOTION RECOGNITION METHOD AND APPARATUS - Disclosed are a three-dimensional motion recognition method and an apparatus using a motion template method and an optical flow tracking method of feature points. The three dimensional (3D) motion recognition method through feature-based stereo matching according to an exemplary embodiment of the present disclosure includes: obtaining a plurality of images from a plurality of cameras; extracting feature points from a single reference image; and comparing and tracking the feature points of the reference image and another comparison image photographed at the same time using an optical flow method.06-14-2012
20090220124AUTOMATED SCORING SYSTEM FOR ATHLETICS - Disclosed are methods and systems for utilizing motion capture techniques, for example, video based motion capture techniques, for capturing and modeling the captured 3D movement of an athlete through a defined space. The model is then compared with an intended motion pattern in order to identify deviations and/or form breaks that, in turn, may be used in combination with a scoring algorithm to quantify the athlete's execution of the intended motion pattern to produce an objective score. It is anticipated that these methods and systems will be particularly useful for training and judging in those sports that have struggled with the vagaries introduced by the subjective nature of human scoring.09-03-2009
20120039510SYSTEM AND METHOD FOR REMOTELY MONITORING AND/OR VIEWING IMAGES FROM A CAMERA OR VIDEO DEVICE - A system and method are provided for remotely monitoring images from an image capturing device. Image data from an image capturing component is received, where the image data represents images of a scene in a field of view of the image capturing component. The image data may be analyzed to determine that the scene has changed. In response to a determination that the scene has changed, a communication may be transmitted to a designated device, recipient or network location. The communication may indicate that a scene change or event occurred, and may be in the form of a notification or an actual image or series of images of the scene after the change or event.02-16-2012
20120039509INFORMATION-INPUTTING DEVICE INPUTTING CONTACT POINT OF OBJECT ON RECORDING SURFACE AS INFORMATION - Structure and function for inputting information preferably include a display device having two cameras in respective corners thereof. At least one computer readable medium preferably has program instructions configured to cause at least one processing structure to: (i) extract an object located on a plane of the display device from an image that includes the plane of the object, (ii) determine whether the object is a writing implement by determining, when a plurality of objects are extracted from the image, that one of the plurality of objects that satisfies a prescribed condition is the writing implement, (iii) calculate a position of a contact point between the writing implement and the plane as information to be input if the object has been determined to be the writing implement, and (iv) input the information representing a position on the plane indicated by the object.02-16-2012
20120039511Information Processing Apparatus, Information Processing Method, and Computer Program - An information processing apparatus that executes processing for creating an environmental map includes a camera that photographs an image, a self-position detecting unit that detects a position and a posture of the camera on the basis of the image, an image-recognition processing unit that detects an object from the image, a data constructing unit that receives information concerning the position and the posture of the camera and information concerning the object and executes processing for creating or updating the environmental map, and a dictionary-data storing unit having stored therein dictionary data in which object information is registered. The image-recognition processing unit detects objects from the image acquired by the camera with reference to the dictionary data. The data constructing unit applies the three-dimensional shape data registered in the dictionary data to the environmental map and arranges objects on the environmental map.02-16-2012
20120039507Information Processing Device And Information Processing Method - An image acquisition unit of an information processing device acquires data for moving image including an image of a user and captured by an image capturing device. An initial processing unit determines correspondence between an amount of movement of the user and a parameter defining an image to be ultimately output in a conversion information storage unit. A tracking processing unit uses a particle filter to perform visual tracking in the moving image so as to estimate the magnification and translation amount of the user's head contour. The input value conversion unit converts the amount of movement of the user into the parameter defining an image using the magnification and the translation amount as parameters. The output data generation unit generates an image based on the parameter. The output control unit controls the generated image so as to be displayed on a display device.02-16-2012
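Particle filtering recurs throughout this subclass (20130044911, 20120039507, 20120057751). A minimal one-dimensional sketch of the predict-weight-estimate-resample cycle follows; the Gaussian random-walk motion model, the scalar state, and the caller-supplied likelihood are illustrative assumptions, not any one patent's method.

```python
import random

def particle_filter_step(particles, weights, observe, noise=1.0):
    """One predict-weight-estimate-resample cycle of a minimal 1-D particle filter.

    particles: list of candidate states (here, scalar positions)
    observe:   likelihood function mapping a state to a non-negative weight
    """
    # Predict: diffuse each particle with random-walk motion noise.
    particles = [p + random.gauss(0.0, noise) for p in particles]
    # Weight: score each particle against the current observation.
    weights = [observe(p) for p in particles]
    total = sum(weights)
    if total <= 0.0:
        weights = [1.0 / len(particles)] * len(particles)
    else:
        weights = [w / total for w in weights]
    # Estimate: weighted mean of the particle set.
    estimate = sum(p * w for p, w in zip(particles, weights))
    # Resample: draw a new set proportional to the weights.
    particles = random.choices(particles, weights=weights, k=len(particles))
    return particles, weights, estimate
```

In the visual-tracking setting of the entries above, the state would instead be a contour parameter vector (translation and magnification of a head contour) and the likelihood an image-based observation model.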
20120039505DYNAMICALLY RESIZING TEXT AREA ON A DISPLAY DEVICE - Dynamically resizing a text area in which text is displayed on a display device. A camera device periodically captures snapshots of a user's gaze point and head position while reading text, and the captured snapshots are used to detect movement of the user's head. Head movement suggests that the text area is too wide for comfortable viewing. Accordingly, the width of the text area is automatically resized, responsive to detecting head movement. Preferably, the resized width is set to the position of the user's gaze point prior to the detected head movement. The text is then preferably reflowed within the resized text area. Optionally, the user may be prompted to confirm whether the resizing will be performed.02-16-2012
20080232642System and method for 3-D recursive search motion estimation - A method for 3-D recursive search motion estimation is provided to estimate a motion vector for a current block in a current frame. The method includes the following steps. First, provide a spatial prediction by selecting at least one motion vector for at least one neighboring block in the current frame. Then, provide a temporal prediction. After that, estimate the motion vector for the current block based on the spatial prediction and the temporal prediction. The temporal prediction is obtained by selecting at least one most frequent motion vector from a plurality of motion vectors for a plurality of blocks in a corresponding region of a previous frame, wherein the corresponding region encloses a previous block whose location corresponds to that of the current block in the current frame.09-25-2008
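The temporal prediction in this entry amounts to a mode filter over the motion vectors of a previous-frame region, followed by picking whichever candidate minimizes a matching cost. A rough sketch, in which the candidate layout and the cost function (e.g. a SAD) are left to the caller as assumptions:

```python
from collections import Counter

def temporal_prediction(prev_frame_mvs):
    """Return the most frequent motion vector among the blocks in the
    corresponding region of the previous frame (the region encloses the
    block co-located with the current block)."""
    counts = Counter(prev_frame_mvs)
    mv, _ = counts.most_common(1)[0]
    return mv

def estimate_mv(spatial_candidates, temporal_candidate, cost):
    """Pick the candidate motion vector minimizing a matching cost."""
    candidates = list(spatial_candidates) + [temporal_candidate]
    return min(candidates, key=cost)
```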
20120148104PEDESTRIAN-CROSSING MARKING DETECTING METHOD AND PEDESTRIAN-CROSSING MARKING DETECTING DEVICE - Provided are a pedestrian-crossing marking detecting method and a pedestrian-crossing marking detecting device, wherein the existence of pedestrian crossing markings and the positions thereof can be detected accurately from within a picked-up image, even when detection of the intensity edges of painted sections is difficult.06-14-2012
20110064271METHOD FOR DETERMINING A THREE-DIMENSIONAL REPRESENTATION OF AN OBJECT USING A SEQUENCE OF CROSS-SECTION IMAGES, COMPUTER PROGRAM PRODUCT, AND CORRESPONDING METHOD FOR ANALYZING AN OBJECT AND IMAGING SYSTEM - The method comprises, for each cross-section image, determining the position of the object (O) in relation to the cross-section plane at the moment the cross-section image is captured, and determining a three-dimensional representation (V) of the object (O) using the cross-section images.03-17-2011
20110064268VIDEO SURVEILLANCE SYSTEM CONFIGURED TO ANALYZE COMPLEX BEHAVIORS USING ALTERNATING LAYERS OF CLUSTERING AND SEQUENCING - Techniques are disclosed for a video surveillance system to learn to recognize complex behaviors by analyzing pixel data using alternating layers of clustering and sequencing. A video surveillance system may be configured to observe a scene (as depicted in a sequence of video frames) and, over time, develop hierarchies of concepts including classes of objects, actions and behaviors. That is, the video surveillance system may develop models at progressively more complex levels of abstraction used to identify what events and behaviors are common and which are unusual. When the models have matured, the video surveillance system issues alerts on unusual events.03-17-2011
20110064267CLASSIFIER ANOMALIES FOR OBSERVED BEHAVIORS IN A VIDEO SURVEILLANCE SYSTEM - Techniques are disclosed for a video surveillance system to learn to recognize complex behaviors by analyzing pixel data using alternating layers of clustering and sequencing. A combination of a self organizing map (SOM) and an adaptive resonance theory (ART) network may be used to identify a variety of different anomalous inputs at each cluster layer. As progressively higher layers of the cortex model component represent progressively higher levels of abstraction, anomalies occurring in the higher levels of the cortex model represent observations of behavioral anomalies corresponding to progressively complex patterns of behavior.03-17-2011
20110064269OBJECT POSITION TRACKING SYSTEM AND METHOD - A method of tracking an object is provided. The method includes obtaining sensed positions of the object at a plurality of time instants and predicting a future position of the object by applying fuzzy predictive rules to the sensed positions of the object obtained from at least two previous time instants.03-17-2011
20110064270OPTICAL TRACKING DEVICE AND POSITIONING METHOD THEREOF - The present invention discloses an optical tracking device and a positioning method thereof. The optical tracking device comprises several light-emitting units, several image tracking units, an image processing unit, an analysis unit, and a calculation unit. First, the light-emitting units are correspondingly disposed on a carrier in geometric distribution and provide light sources. Secondly, the image tracking units track the plurality of light sources and capture images. The images are subjected to image processing by the image processing unit to obtain light source images corresponding to the light sources from each image. Then the analysis unit analyzes the light source images to obtain positions and colors corresponding to the light-emitting units. Lastly, the calculation unit establishes three-dimensional coordinates corresponding to the light-emitting units based on the positions and colors and calculates the position of the carrier based on the three-dimensional coordinates.03-17-2011
20110064272Method and apparatus for three-dimensional tracking of infra-red beacons - A method for processing data includes identifying a time signature of an infra-red (IR) beacon. Image data associated with the IR beacon is identified using the time signature.03-17-2011
20110170739Automated Acquisition of Facial Images - Described is a technology by which medical patient facial images are acquired and maintained for associating with a patient's records and/or other items. A video camera may provide video frames, such as captured when a patient is being admitted to a hospital. Face detection may be employed to clip the facial part from the frame. Multiple images of a patient's face may be displayed on a user interface to allow selection of a representative image. Also described is obtaining the patient images by processing electronic documents (e.g., patient records) to look for a face pictured therein.07-14-2011
20110091074MOVING OBJECT DETECTION METHOD AND MOVING OBJECT DETECTION APPARATUS - A moving object detection method includes: extracting NL long-term trajectories (NL≧2) over TL pictures (TL≧3) and NS short-term trajectories (NS>NL) over TS pictures (TL>TS≧2), using movement trajectories; and calculating a geodetic distance between the NL long-term trajectories and a geodetic distance between the NS short-term trajectories.04-21-2011
20110091073MOVING OBJECT DETECTION APPARATUS AND MOVING OBJECT DETECTION METHOD - To provide a moving object detection apparatus which accurately performs region extraction, regardless of the pose or size of a moving object. The moving object detection apparatus includes: an image receiving unit receiving the video sequence; a motion analysis unit calculating movement trajectories based on motions of the image; a segmentation unit performing segmentation so as to divide the movement trajectories into subsets, and setting a part of the movement trajectories as common points shared by the subsets; a distance calculation unit calculating a distance representing a similarity between a pair of movement trajectories, for each of the subsets; a geodesic distance calculation unit transforming the calculated distance into a geodesic distance; an approximate geodesic distance calculation unit calculating an approximate geodesic distance bridging over the subsets, by integrating geodesic distances including the common points; and a region extraction unit performing clustering on the calculated approximate geodesic distance.04-21-2011
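Both of these moving-object-detection entries transform pairwise trajectory distances into geodesic distances. One common way to realize such a transform is to sparsify the distance matrix to each point's nearest neighbours and run all-pairs shortest paths; the k-nearest-neighbour rule and the Floyd-Warshall algorithm here are assumptions, since the abstracts describe the transform only in general terms.

```python
def geodesic_distances(dist, k):
    """Turn a pairwise (linear) distance matrix into geodesic distances:
    keep only each point's k nearest neighbours as graph edges, then run
    Floyd-Warshall shortest paths over the resulting graph.

    dist: symmetric n x n matrix of linear distances (dist[i][i] == 0)
    """
    n = len(dist)
    INF = float("inf")
    g = [[INF] * n for _ in range(n)]
    for i in range(n):
        g[i][i] = 0.0
        # Connect i to its k nearest neighbours (index 0 is i itself).
        for j in sorted(range(n), key=lambda j: dist[i][j])[1:k + 1]:
            g[i][j] = g[j][i] = dist[i][j]
    # Floyd-Warshall: geodesic distance is the shortest path along edges.
    for m in range(n):
        for i in range(n):
            for j in range(n):
                if g[i][m] + g[m][j] < g[i][j]:
                    g[i][j] = g[i][m] + g[m][j]
    return g
```

On points that lie on a curved manifold, this replaces the straight-line distance between far-apart trajectories with the accumulated distance along neighbouring ones, which is what makes the subsequent clustering pose-insensitive.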
20110091072IMAGE PROCESSING APPARATUS, IMAGE PROCESSING SYSTEM, AND CONTROL METHOD FOR IMAGE PROCESSING APPARATUS - An image processing apparatus capable of communicating with a plurality of servers stores image data including an object of recognition, and a plurality of recognition dictionaries. The image processing apparatus establishes communication with one of the servers to receive, from the server with which the communication has been established, designation information designating a recognition dictionary for recognizing the object of recognition included in the image data. The image processing apparatus identifies the recognition dictionary designated in the received designation information from among the stored recognition dictionaries and uses the identified recognition dictionary to recognize the object of recognition included in the image data.04-21-2011
20110091071INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, AND PROGRAM - An information processing apparatus including an image acquisition unit that acquires a target image; a face part extraction unit that extracts a face region including a face part from the target image; an identification unit that identifies a model face part by comparing the face part to a plurality of model face parts stored in a storage unit; and an illustration image determination unit that determines an illustration image corresponding to the identified model face part.04-21-2011
20110091070COMBINING MULTI-SENSORY INPUTS FOR DIGITAL ANIMATION - Animating digital characters based on motion captured performances, including: receiving sensory data collected using a variety of collection techniques including optical video, electro-oculography, and at least one of optical, infrared, and inertial motion capture; and managing and combining the collected sensory data to aid cleaning, tracking, labeling, and re-targeting processes. Keywords include Optical Video Data and Inertial Motion Capture.04-21-2011
20110091069INFORMATION PROCESSING APPARATUS AND METHOD, AND COMPUTER-READABLE STORAGE MEDIUM - An information processing apparatus comprises: an extraction unit configured to extract a person from a video obtained by capturing a real space; a holding unit configured to hold a movement estimation rule corresponding to a partial region specified in the video; a determination unit configured to determine whether a region where the person has disappeared from the video or appeared in the video corresponds to the partial region; and an estimation unit configured to estimate, based on the movement estimation rule corresponding to the partial region determined to correspond, a movement of the person after the person has disappeared from the video or before the person has appeared in the video.04-21-2011
20110091068Secure Tracking Of Tablets - A method of tracking and tracing tablets, in particular pharmaceutical tablets, includes reading, i.e. detecting, code structure from the tablet, reading additional information from the package on an information sheet, and then comparing the readings to verify authenticity. The code structure may be two-dimensional or three-dimensional. The detected code may further be compared with information stored in a database.04-21-2011
20110317876Optical Control System for Heliostats - A method of aligning a reflector with a target includes receiving, at a first reflector, light from a light source. The first reflector is configured to reflect light from the light source onto a target, illuminating the target in a first target region. A first image of the target is captured, using an imaging device. The first reflector is configured to reflect light from the light source onto the target, illuminating the target in a second target region. A second image of the target is captured, using the imaging device. The differences between the first image and the second image are compared to determine the alignment of the first reflector with respect to at least one of the light source and the target.12-29-2011
20110317874Information Processing Device And Information Processing Method - An image acquisition unit of an information processing device acquires data for a moving image including an image of a user and captured by an image capturing device. A tracking processing unit uses a particle filter to perform visual tracking in the moving image so as to estimate a head contour of the user. A gesture detection unit identifies a facial region in an area inside the head contour, acquires a parameter indicating the orientation of the face, and keeps a history of the parameters. When time-dependent change in the orientation of the face meets a predetermined criterion, it is determined that a gesture is made. The output data generation unit generates output data dependent on a result of detecting a gesture. The output control unit controls the generated output data so as to display the data on the display, for example.12-29-2011
20110317877METHOD OF MOTION DETECTION AND AUTONOMOUS MOTION TRACKING USING DYNAMIC SENSITIVITY MASKS IN A PAN-TILT CAMERA - A method of identifying motion within a field of view includes capturing at least two sequential images within the field of view. Each of the images includes a respective array of pixel values. An array of difference values between corresponding ones of the pixel values in the sequential images is calculated. A sensitivity region map corresponding to the field of view is provided. The sensitivity region map includes a plurality of regions having different threshold values. A presence of motion is determined by comparing the difference values to corresponding ones of the threshold values.12-29-2011
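The per-region thresholding that this entry describes, comparing inter-frame pixel differences against a sensitivity map, can be sketched as below; the pure-Python nested loop and the per-pixel map representation are illustrative assumptions only.

```python
def detect_motion(frame_a, frame_b, threshold_map):
    """Flag motion where the inter-frame pixel difference exceeds the
    region-specific threshold taken from the sensitivity map.

    frame_a, frame_b: 2-D lists of pixel values (same shape)
    threshold_map:    2-D list of per-pixel (per-region) thresholds
    Returns True if any pixel's difference exceeds its threshold.
    """
    for row_a, row_b, row_t in zip(frame_a, frame_b, threshold_map):
        for a, b, t in zip(row_a, row_b, row_t):
            if abs(a - b) > t:
                return True
    return False
```

Regions expected to change harmlessly (foliage, roads) would carry high thresholds in the map, so only differences in sensitive regions trigger tracking.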
20110317875Identifying and Redressing Shadows in Connection with Digital Watermarking and Fingerprinting - The present disclosure relates generally to cell phones and cameras, and to shadow detection in images captured by such cell phones and cameras. One claim recites a method comprising: identifying a shadow cast by a camera on a subject being imaged; and using a programmed electronic processor, redressing the shadow in connection with: i) reading a digital watermark from imagery captured of the subject, or ii) calculating a fingerprint from the imagery captured of the subject. Another claim recites a method comprising: identifying a shadow cast by a cell phone on a subject being imaged by a camera included in the cell phone; and using a programmed electronic processor, determining a proximity of the camera to the subject based on an analysis of the shadow. Of course, other claims and combinations are provided too.12-29-2011
20110317873Image Capture and Identification System and Process - A digital image of the object is captured and the object is recognized from a plurality of objects in a database. An information address corresponding to the object is then used to access information and initiate communication pertinent to the object.12-29-2011
20110317872Low Threshold Face Recognition - Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, are disclosed for reducing the impact of lighting conditions and biometric distortions, while providing a low-computation solution for reasonably effective (low threshold) face recognition. In one aspect, the methods include processing a captured image of a face of a user seeking to access a resource by conforming a subset of the captured face image to a reference model. The reference model corresponds to a high information portion of human faces. The methods further include comparing the processed captured image to at least one target profile corresponding to a user associated with the resource, and selectively recognizing the user seeking access to the resource based on a result of said comparing.12-29-2011
20110317871SKELETAL JOINT RECOGNITION AND TRACKING SYSTEM - A system and method are disclosed for recognizing and tracking a user's skeletal joints with a NUI system and further, for recognizing and tracking only some skeletal joints, such as for example a user's upper body. The system may include a limb identification engine which may use various methods to evaluate, identify and track positions of body parts of one or more users in a scene. In examples, further processing efficiency may be achieved by segmenting the field of view in smaller zones, and focusing on one zone at a time. Moreover, each zone may have its own set of predefined gestures which are recognized.12-29-2011
20120045096MONITORING CAMERA TERMINAL - A monitoring camera terminal has an imaging portion for imaging a monitoring target area allocated to an own-terminal, an object extraction portion for processing a frame image imaged by the imaging portion to extract an imaged object, an ID addition portion for adding an ID to the object extracted by the object extraction portion, an object map creation portion for creating, for each object extracted by the object extraction portion, an object map associating the ID added to the object with a coordinate position in the frame image, and a tracing portion for tracing an object in the monitoring target area allocated to the own-terminal using the object maps created by the object map creation portion.02-23-2012
20120045090MULTI-MODE VIDEO EVENT INDEXING - Multi-mode video event indexing includes determining a quality of object distinctiveness with respect to images from a video stream input. A high-quality analytic mode is selected from multiple modes and applied to the video input images via a hardware device to determine object activity within the images if the determined quality of object distinctiveness meets a threshold level; otherwise, a low-quality analytic mode, different from the high-quality analytic mode, is selected and applied to the video input images to determine object activity.02-23-2012
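The mode-switching logic described in this entry reduces to a threshold test plus a dispatch; in the sketch below the mode names and the `analytics` dispatch table are hypothetical stand-ins for the hardware-applied analytics.

```python
def select_analytic_mode(distinctiveness, threshold):
    """Run the expensive high-quality analytic only when object
    distinctiveness meets the threshold; otherwise fall back."""
    return "high" if distinctiveness >= threshold else "low"

def index_events(frames, distinctiveness, threshold, analytics):
    """Dispatch the frames to the analytic chosen for this quality level.

    analytics: dict mapping mode name -> callable over the frames
    (both mode names are illustrative).
    """
    mode = select_analytic_mode(distinctiveness, threshold)
    return mode, analytics[mode](frames)
```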
20120045098ARCHITECTURES AND METHODS FOR CREATING AND REPRESENTING TIME-DEPENDENT IMAGERY - The present invention pertains to geographical image processing of time-dependent imagery. Various assets acquired at different times are stored and processing according to acquisition date in order to generate one or more image tiles for a geographical region of interest. The different image tiles are sorted based on asset acquisition date. Multiple image tiles for the same region of interest may be available. In response to a user request for imagery as of a certain date, one or more image tiles associated with assets from prior to that date are used to generate a time-based geographical image for the user.02-23-2012
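Serving imagery "as of" a date, as this entry describes, amounts to selecting the most recent tile acquired on or before the requested date. A sketch, assuming (this is an assumption, not the patent's data model) tiles keyed by comparable acquisition dates such as ISO-8601 strings:

```python
from bisect import bisect_right

def tiles_as_of(tiles, query_date):
    """Return the most recent tile acquired on or before query_date.

    tiles: list of (acquisition_date, tile_id) pairs; dates must be
    mutually comparable (e.g. ISO-8601 strings or date objects).
    """
    tiles = sorted(tiles)                 # sort by acquisition date
    dates = [d for d, _ in tiles]
    i = bisect_right(dates, query_date)   # first tile strictly after the date
    if i == 0:
        return None                       # no imagery exists before the date
    return tiles[i - 1][1]
```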
20120045097HIGH ACCURACY BEAM PLACEMENT FOR LOCAL AREA NAVIGATION - An improved method of high accuracy beam placement for local area navigation in the field of semiconductor chip manufacturing. This invention demonstrates a method where high accuracy navigation to the site of interest within a relatively large local area (e.g. an area 200 μm×200 μm) is possible even where the stage/navigation system is not normally capable of such high accuracy navigation. The combination of large area, high-resolution scanning, digital zoom and registration of the image to an idealized coordinate system enables navigation around a local area without relying on stage movements. Once the image is acquired any sample or beam drift will not affect the alignment. Preferred embodiments thus allow accurate navigation to a site on a sample with sub-100 nm accuracy, even without a high-accuracy stage/navigation system.02-23-2012
20120045095IMAGE PROCESSING APPARATUS, METHOD THEREOF, PROGRAM, AND IMAGE CAPTURING APPARATUS - An image processing apparatus stores model information representing a subject model belonging to a specific category, detects the subject from an input image by referring to the model information, determines a region for which an image correction is to be performed within a region occupied by the detected subject in the input image, stores, for a local region of the image, a plurality of correction data sets representing correspondence between a feature vector representing a feature before correction and a feature vector representing a feature after correction, selects at least one of the correction data sets to be used to correct a local region included in the region determined to undergo the image correction, and corrects the region determined to undergo the image correction using the selected correction data sets.02-23-2012
20120045094TRACKING APPARATUS, TRACKING METHOD, AND COMPUTER-READABLE STORAGE MEDIUM - The present invention provides a tracking apparatus for tracking a target designated on an image which is captured by an image sensing element, including a calculation unit configured to calculate, for each of feature candidate colors, a first area of a pixel group which includes a pixel of a feature candidate color of interest and in which pixels of colors similar to the feature candidate color of interest continuously appear, a second area of pixels of colors similar to the feature candidate color of interest in the plurality of pixels, and a ratio of the first area to the second area, and an extraction unit configured to extract a feature candidate color having the smallest first area as a feature color of the target from feature candidate colors for each of which the ratio of the first area to the second area is higher than a predetermined reference ratio.02-23-2012
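The feature-color extraction in this entry selects, among candidate colors whose connected-area to total-area ratio exceeds a reference, the candidate with the smallest connected area. A sketch over precomputed areas (the dict-based input format is an assumption; computing the areas themselves would require a connected-component pass not shown here):

```python
def select_feature_color(candidates, reference_ratio):
    """Pick the feature color of the tracking target.

    candidates: dict mapping color -> (connected_area, total_area), where
    connected_area is the contiguous similar-color region around the target
    and total_area counts all similar-color pixels in the image.
    Returns the color with the smallest connected_area among candidates
    whose connected/total ratio exceeds reference_ratio, or None.
    """
    eligible = [
        (area, color)
        for color, (area, total) in candidates.items()
        if total and area / total > reference_ratio
    ]
    return min(eligible)[1] if eligible else None
```

A high ratio means most similar-color pixels sit on the target itself, so the chosen color is both distinctive and compact, which is what makes it a reliable tracking feature.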
20120045093METHOD AND APPARATUS FOR RECOGNIZING OBJECTS IN MEDIA CONTENT - An approach is provided for recognizing objects in media content. The capture manager determines to detect, at a device, one or more objects in a content stream. Next, the capture manager determines to capture one or more representations of the one or more objects in the content stream. Then, the capture manager associates the one or more representations with one or more instances of the content stream.02-23-2012
20120045092Hierarchical Video Sub-volume Search - Described is a technology by which video, which may be relatively high-resolution video, is efficiently processed to determine whether the video contains a specified action. The video corresponds to a spatial-temporal volume. The volume is searched with a top-k search that finds a plurality of the most likely sub-volumes simultaneously in a single search round. The score volumes of larger spatial resolution videos may be down-sampled into lower-resolution score volumes prior to searching.02-23-2012
20120045091System and Method for 3D Wireframe Reconstruction from Video - In one or more aspects of the present disclosure, a method, a computer program product and a system for reconstructing scene features of an object in 3D space using structure-from-motion feature-tracking includes acquiring a first camera frame at a first camera position; extracting image features from the first camera frame; initializing a first set of 3D points from the extracted image features; acquiring a second camera frame at a second camera position; predicting a second set of 3D points by converting their positions and variances to the second camera position; projecting the predicted 3D positions to an image plane of the second camera to obtain 2D predictions of the image features; measuring an innovation of the predicted 2D image features; and updating estimates of 3D points based on the measured innovation to reconstruct scene features of the object image in 3D space.02-23-2012
20100215214IMAGE PROCESSING METHOD - A method and apparatus for localizing an area in relative movement and for determining the speed and direction thereof in real time is disclosed. Each pixel of an image is smoothed using its own time constant. A binary value corresponding to the existence of a significant variation in the amplitude of the smoothed pixel from the prior frame, and the amplitude of the variation, are determined, and the time constant for the pixel is updated. For each particular pixel, two matrices are formed that include a subset of the pixels spatially related to the particular pixel. The first matrix contains the binary values of the subset of pixels. The second matrix contains the amplitude of the variation of the subset of pixels. In the first matrix, it is determined whether the pixels along an oriented direction relative to the particular pixel have binary values representative of significant variation, and, for such pixels, it is determined in the second matrix whether the amplitude of these pixels varies in a known manner indicating movement in the oriented direction. In each of several domains, a histogram of the values in the first and second matrices falling in that domain is formed. Using the histograms, it is determined whether there is an area having the characteristics of the particular domain. The domains include luminance, hue, saturation, speed (V), and oriented direction (D).08-26-2010
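The first stage of this method, per-pixel smoothing with an individual time constant plus a binary significant-change flag, might be sketched as follows; the specific adaptation rule (decrement the time constant on change, increment it otherwise) is an assumption for illustration.

```python
def update_pixel(smoothed, value, time_constant, threshold):
    """Per-pixel temporal smoothing with an individual, adaptive time
    constant, following S_t = S_{t-1} + (x_t - S_{t-1}) / C.

    Returns (new_smoothed, significant, amplitude, new_time_constant):
    significant flags a variation exceeding the threshold, and amplitude
    is the magnitude of that variation.
    """
    diff = value - smoothed
    significant = abs(diff) > threshold
    smoothed += diff / time_constant
    # Adapt (assumed rule): changing pixels get a faster (smaller) time
    # constant; static pixels get a slower (larger) one.
    if significant:
        time_constant = max(1.0, time_constant - 1.0)
    else:
        time_constant = time_constant + 1.0
    return smoothed, significant, abs(diff), time_constant
```

The binary flags and amplitudes produced here are the values gathered into the two spatial matrices that the rest of the method analyzes.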
20120207348VEHICLE DETECTION APPARATUS - A vehicle detection apparatus includes a lamp candidate extraction unit and a grouping unit. The lamp candidate extraction unit extracts, as a lamp candidate, a pixel region that may correspond to a tail lamp of a vehicle from the pixel regions that an integration processing unit creates by extracting and integrating pixels of an image. The grouping unit first regroups those groups, among the groups generated by grouping position data detected by a position detection unit, that contain the lamp candidate, and then regroups all groups. In the regrouping processing, the threshold used for regrouping groups containing the lamp candidate is set so that regrouping occurs more readily than with the threshold used for subsequently regrouping all groups.08-16-2012
20120207345TOUCHLESS HUMAN MACHINE INTERFACE - A system and method for receiving input from a user is provided. The system includes at least one camera configured to receive an image of a hand of the user and a controller configured to analyze the image and issue a command based on the analysis of the image.08-16-2012
20120057754IMAGE SELECTION BASED ON IMAGE CONTENT - An image capture system comprises an image input and processing unit. The image input obtains image information which is then passed to the processing unit. The processing unit is coupled to the image input for determining image metrics on the image information. The processing unit initiates a capture sequence when the image metrics meet a predetermined condition. The capture sequence may store one or more images, or it may indicate that one or more images have been detected. In one embodiment, the image input is a CMOS or CCD sensor.03-08-2012
20120057751Particle Tracking Methods - A method for tracking an object in video data comprises the steps of determining a plurality of particles for estimating a location of the object in the video data, determining a weight for each of the plurality of particles, wherein the weights of two or more particles are determined substantially in parallel, and estimating the location of the object in the video data based upon the determined particle weights.03-08-2012
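The parallel weight computation described above maps naturally onto vectorized array operations, where all per-particle likelihoods are evaluated in one pass. The Gaussian observation model and the weighted-mean estimator are assumed choices for illustration:

```python
import numpy as np

def particle_weights(particles, observation, sigma=5.0):
    """Compute weights for all particles at once (vectorized, so the
    per-particle likelihoods are evaluated substantially in parallel).

    particles: (N, 2) candidate object locations; observation: (2,) measured
    location. The Gaussian likelihood is an illustrative observation model."""
    d2 = np.sum((particles - observation) ** 2, axis=1)
    w = np.exp(-d2 / (2.0 * sigma ** 2))
    return w / w.sum()                     # normalized weights

def estimate_location(particles, weights):
    """Weighted-mean estimate of the object location."""
    return weights @ particles
```

Particles near the observation receive higher weight, so the estimate is pulled toward the measured location.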
20120002840METHOD OF AND ARRANGEMENT FOR LINKING IMAGE COORDINATES TO COORDINATES OF REFERENCE MODEL - A method of linking image coordinates to coordinates in a reference model is disclosed. The method includes acquiring a 2½D or 3D input image representing a body of a living being and including at least two image boundaries of at least two parts within said body, acquiring a 3D reference model representative of a reference living being describing in a reference model coordinate system at least two reference boundaries of the at least two parts within said body, and overlaying the reference model and the input image. The method further includes adjusting at least a portion of one of the reference boundaries and/or at least one of the image boundaries such that this reference boundary and this image boundary substantially coincide, while the adjusted reference boundary does not intersect with the remaining reference boundaries and/or the adjusted image boundary does not intersect with the remaining image boundaries.01-05-2012
20120002842DEVICE AND METHOD FOR DETECTING MOVEMENT OF OBJECT - A device for detecting a movement of an object includes: an image shooting unit to generate a first image and a second image by continuous shooting; a detection unit to detect a movement region based on a difference between the first and second images; an edge detection unit to detect an edge region in the first image; a deletion unit to delete the edge region from the movement region; and a decision unit to determine a degree of object movement in accordance with the movement region from which the edge region has been deleted by the deletion unit.01-05-2012
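The difference-minus-edges pipeline above can be sketched in a few lines. The gradient-magnitude edge test, the shared threshold, and the "fraction of moving pixels" degree are assumed simplifications of the described units:

```python
import numpy as np

def movement_degree(img1, img2, threshold=20):
    """Frame-difference movement map with edge suppression (a sketch).

    Edge pixels of the first image are deleted from the movement region so
    that strong static contours are not counted as motion; the returned
    degree is the fraction of remaining movement pixels."""
    moving = np.abs(img1.astype(int) - img2.astype(int)) > threshold
    # Crude gradient magnitude as the edge detector (assumed choice).
    gy, gx = np.gradient(img1.astype(float))
    edges = np.hypot(gx, gy) > threshold
    moving &= ~edges                # delete edge region from movement region
    return moving.mean(), moving
```

On a flat background, only the pixels that actually changed between the two shots survive into the movement region.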
20120002841INFORMATION PROCESSING APPARATUS, THREE-DIMENSIONAL POSITION CALCULATION METHOD, AND PROGRAM - An information processing apparatus includes a region segmentation unit configured to segment each of a plurality of images shot by an imaging apparatus for shooting an object from a plurality of viewpoints, into a plurality of regions based on colors of the object, an attribute determination unit configured to determine, based on regions in proximity to intersections between scanning lines set on the each image and boundary lines of the regions segmented by the region segmentation unit in the each image, attributes of the intersections, a correspondence processing unit configured to obtain corresponding points between the images based on the determined intersections' attributes, and a three-dimensional position calculation unit configured to calculate a three-dimensional position of the object based on the obtained corresponding points.01-05-2012
20110026769PRESENTATION DEVICE - A presentation device comprises an image capture portion for capturing an image of a subject and generating a raw image thereof; a detection portion adapted to analyze whether a first marker is present in the raw image, and if the first marker is present in the raw image, to detect an existing position of the first marker within the raw image; a storage portion for storing a positional relationship of a synthesis position at which a mask image for masking at least a portion of the raw image is synthesized with the raw image relative to the existing position of the first marker; a synthesized image generation portion adapted to determine the synthesis position according to the positional relationship with the detected existing position, and to synthesize the mask image at the determined synthesis position within the raw image to generate a synthesized image; and an output portion for outputting the synthesized image.02-03-2011
20080285802TAILGATING AND REVERSE ENTRY DETECTION, ALARM, RECORDING AND PREVENTION USING MACHINE VISION - Unauthorized entry into controlled access areas using tailgating or reverse entry methods is detected using machine vision methods. Camera images of the controlled area are processed to identify and track objects in the controlled area. In a preferred embodiment, this processing includes 3D surface analysis to distinguish and classify objects in the field of view. Feature extraction, color analysis, and pattern recognition may also be used for identification and tracking of objects. Integration with security monitoring and control systems provides notification when a tailgating or reverse entry event has occurred. More reliable operation in practical circumstances is thus obtained, such as when multiple people are using an entrance or exit under variable light and shadow conditions. Electronic access control systems may further be combined with the machine vision methods of the invention to more effectively prevent tailgating or reverse entry.11-20-2008
20120207351METHOD AND EXAMINATION APPARATUS FOR EXAMINING AN ITEM UNDER EXAMINATION IN THE FORM OF A PERSON AND/OR A CONTAINER - An examination apparatus examines an item including a person or a container and has a determination unit for determining a relevance level which can be assigned to the item under examination, in particular a hazard level, and an image capture unit for capturing an image of the item under examination. The examination apparatus has a database, an automated evaluation unit for automatically evaluating at least one section of the image using the database, an evaluation unit operated by a user for the visual evaluation of a section of the image by the user, and an input unit for inputting at least one evaluation input by the user, and a database processing unit for processing the database. The database processing unit processes a database entry using the evaluation input in conjunction with the determination of the relevance level.08-16-2012
20120008828TARGET-LINKED RADIATION IMAGING SYSTEM - An imaging detection system includes at least one location detection device configured to determine coordinates of a target, at least one detector configured to detect events from a source associated with the target, and a processor coupled in communication with the at least one location detection device and the at least one detector. The processor is configured to receive the coordinates from the at least one location detection device and the events from the at least one detector, translate the events using the coordinates acquired from the at least one location detection device to compensate for a relative motion between the source and the at least one detector, and output a processed data set having the events translated based on the coordinates.01-12-2012
20120008831OBJECT POSITION CORRECTION APPARATUS, OBJECT POSITION CORRECTION METHOD, AND OBJECT POSITION CORRECTION PROGRAM - An object position correction apparatus is provided with an observing device that detects an object to be observed to obtain an observed value, an observation history data base that records an observation history of the object, a position estimation history data base that records the estimated history of the position of the object, a prediction distribution forming unit that forms a prediction distribution that represents an existence probability at the position of the object, an object position estimation unit that estimates the ID and the position of the object, a center-of-gravity position calculation unit that calculates the center-of-gravity position of the observed values, an object position correction unit that carries out a correction on the estimated position of the object, and a display unit that displays the corrected position of the object.01-12-2012
20120008829METHOD, DEVICE, AND COMPUTER-READABLE MEDIUM FOR DETECTING OBJECT IN DISPLAY AREA - Disclosed are a method and a device for detecting an object in a display area. The method comprises a step of generating a first image prepared to be displayed; a step of displaying the generated first image on a screen; a step of capturing a second image of the screen including the display area; and a step of comparing the generated first image with the captured second image so as to detect the object in the display area.01-12-2012
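The compare-rendered-with-captured step above can be sketched as a thresholded difference followed by a bounding box over the disagreeing pixels. The thresholds and the bounding-box output are assumptions; a real system would also need to align and color-correct the camera view of the screen first:

```python
import numpy as np

def detect_object(rendered, captured, threshold=30, min_pixels=5):
    """Compare the first image (sent to the display) with the second image
    (captured from the display area); a localized difference suggests an
    object occluding part of the display. Thresholds are assumptions."""
    diff = np.abs(rendered.astype(int) - captured.astype(int)) > threshold
    ys, xs = np.nonzero(diff)
    if len(ys) < min_pixels:
        return None                 # nothing detected in the display area
    # Bounding box (x_min, y_min, x_max, y_max) of the differing region.
    return (xs.min(), ys.min(), xs.max(), ys.max())
```

When the captured image matches the rendered one everywhere, the function reports no object.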
20120008827METHOD AND DEVICE FOR IDENTIFYING OBJECTS AND FOR TRACKING OBJECTS IN A PRODUCTION PROCESS - An object (01-12-2012
20120008830INFORMATION PROCESSING APPARATUS, CONTROL METHOD THEREFOR, AND COMPUTER-READABLE STORAGE MEDIUM - An information processing apparatus for estimating a position and orientation of a target object in a three-dimensional space, inputs a plurality of captured images obtained by imaging the target object from a plurality of viewpoints, clips, for each of the input captured images, a partial image corresponding to a region occupied by a predetermined partial space in the three-dimensional space, from the captured image, extracts, from a plurality of partial images clipped from the plurality of captured images, feature information indicating a feature of the plurality of partial images, stores dictionary information indicating a position and orientation of an object in association with feature information of the object corresponding to the position and orientation, and estimates the position and orientation of the target object by comparing the feature information of the extracted target object and the feature information indicated in the dictionary information.01-12-2012
20120008825SYSTEM AND METHOD FOR DYNAMICALLY TRACKING AND INDICATING A PATH OF AN OBJECT - A system for dynamically tracking and indicating a path of an object comprises an object position system for generating three-dimensional object position data comprising an object trajectory, a software element for receiving the three-dimensional object position data, the software element also for determining whether the three-dimensional object position data indicates that an object has exceeded a boundary, and a graphics system for displaying the object trajectory.01-12-2012
20120008826METHOD, DEVICE AND COMPUTER PROGRAM PRODUCT FOR DETECTING OBJECTS IN DIGITAL IMAGES - Method, device, and computer program product for detecting an object in a digital image are provided. The method includes providing a detection window and determining at least one area of the object in the digital image by traversing the detection window by a first step size onto a set of pixels. Further, at each pixel, presence of at least one portion of the object in the detection window is detected. Upon detection of the presence of the object, the detection window is shifted by a second step size to neighbouring pixels. Further, the detection window is selected as an area of the object if the at least one portion of the object is present in at least a threshold number of detection windows at the neighbouring pixels. Thereafter, an object area representing the object in the digital image is selected based on the at least one area.01-12-2012
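One way to read the two-step-size scan above: traverse coarsely, and on a hit, probe the neighbouring pixels at the fine step and accept the location only if enough neighbouring windows also fire. The boolean `score_map` abstraction and the vote rule are assumptions layered over the abstract's description:

```python
import numpy as np

def detect_with_voting(score_map, coarse_step=4, fine_step=1, vote_thresh=3):
    """Coarse-to-fine detection-window scan with neighbour voting (a sketch).

    score_map[y, x] > 0 means "object portion present" for a detection
    window anchored at (y, x). The window moves by the first (coarse) step;
    on a hit it is shifted by the second (fine) step to neighbouring pixels,
    and the anchor is kept only if at least vote_thresh windows fire."""
    h, w = score_map.shape
    accepted = []
    for y in range(0, h, coarse_step):
        for x in range(0, w, coarse_step):
            if score_map[y, x] <= 0:
                continue
            y0, y1 = max(0, y - fine_step), min(h, y + fine_step + 1)
            x0, x1 = max(0, x - fine_step), min(w, x + fine_step + 1)
            votes = int((score_map[y0:y1, x0:x1] > 0).sum())
            if votes >= vote_thresh:
                accepted.append((y, x))
    return accepted
```

Requiring several agreeing neighbours suppresses isolated false positives from the coarse pass.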
20120008832REGION-OF-INTEREST VIDEO QUALITY ENHANCEMENT FOR OBJECT RECOGNITION - A video-based object recognition system and method provides selective, local enhancement of image data for improved object-based recognition. A frame of video data is analyzed to detect objects to receive further analysis, these local portions of the frame being referred to as a region of interest (ROI). A video quality metric (VQM) value is calculated locally for each ROI to assess the quality of the ROI. Based on the VQM value calculated with respect to the ROI, a particular video quality enhancement (VQE) function is selected and applied to the ROI to cure deficiencies in the quality of the ROI. Based on the enhanced ROI, objects within the defined region can be accurately identified.01-12-2012
20100074471Gesture Processing with Low Resolution Images with High Resolution Processing for Optical Character Recognition for a Reading Machine - A portable reading machine that operates in several modes and performs image preprocessing prior to optical character recognition. The portable reading machine receives a low resolution image and a high resolution image of a scene, processes the low resolution image to recognize a user-initiated gesture, made with a gesturing item, that indicates a command from the user to the reading machine, and processes the high resolution image to recognize text in the image of the scene according to that command.03-25-2010
20100166258METHOD, APPARATUS AND COMPUTER PROGRAM PRODUCT FOR PROVIDING HAND SEGMENTATION FOR GESTURE ANALYSIS - A method for providing hand segmentation for gesture analysis may include determining a target region based at least in part on depth range data corresponding to an intensity image. The intensity image may include data descriptive of a hand. The method may further include determining a point of interest of a hand portion of the target region, determining a shape corresponding to a palm region of the hand, and removing a selected portion of the target region to identify a portion of the target region corresponding to the hand. An apparatus and computer program product corresponding to the method are also provided.07-01-2010
20120014560METHOD FOR AUTOMATIC STORYTELLING FOR PHOTO ALBUMS USING SOCIAL NETWORK CONTEXT - A method for automatically selecting and organizing a subset of photos from a set of photos provided by a user, who has an account on at least one social network providing some context, for creating a summarized photo album with a storytelling structure. The method comprises: arranging the set of photos into a three level hierarchy, acts, scenes and shots; checking whether photos are photos with people or not; obtaining an aesthetic measure of the photos; creating and ranking face clusters; selecting the most aesthetic photo of each face cluster; selecting photos with people until a predefined number of photos of the summarized album is reached, picking the ones which optimize the function:01-19-2012
20120014562EFFICIENT METHOD FOR TRACKING PEOPLE - In accordance with one embodiment, a method to track persons includes generating a first and second set of facial coefficient vectors by: (i) providing a first and second image containing a plurality of persons; (ii) locating faces of persons in each image; and (iii) generating a facial coefficient vector for each face by extracting from the images coefficients sufficient to locally identify each face, then tracking the persons within the images, the tracking including comparing the first set of facial coefficient vectors to the second set of facial coefficient vectors to determine for each person in the first image if there is a corresponding person in the second image. Optionally, the method includes using estimated locations in combination with the vector distance between facial coefficient vectors to track persons.01-19-2012
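The vector-comparison step above amounts to matching each face in the first image to its nearest face in the second image by coefficient-vector distance. The greedy strategy, the Euclidean metric, and the distance threshold here are illustrative choices, not the patent's exact rule:

```python
import numpy as np

def match_faces(vecs_a, vecs_b, max_dist=0.5):
    """Greedy matching of facial coefficient vectors between two images.

    vecs_a, vecs_b: (Na, d) and (Nb, d) arrays, one row per detected face
    (e.g. PCA-style coefficients). Returns {index_in_a: index_in_b} for
    pairs whose vector distance is below max_dist."""
    matches, used = {}, set()
    for i, va in enumerate(vecs_a):
        dists = np.linalg.norm(vecs_b - va, axis=1)
        for j in np.argsort(dists):          # try closest candidates first
            if dists[j] <= max_dist and int(j) not in used:
                matches[i] = int(j)
                used.add(int(j))
                break
    return matches
```

Faces in the first image with no sufficiently close vector in the second image are simply left unmatched, which is how a person leaving the scene would appear.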
20120057746INFORMATION PROCESSING DEVICE AND INFORMATION PROCESSING METHOD - A processing device and method are provided. According to illustrative embodiments, the device and method are implemented by detecting a face region of an image, setting at least one action region according to the position of the face region, processing image data corresponding to the at least one action region to determine whether or not a predetermined action has been performed, and performing processing corresponding to the predetermined action when it is determined that the predetermined action has been performed.03-08-2012
20120057755METHOD AND SYSTEM FOR CONTROLLING LIGHTING - A method is provided to control the lighting ambience in a space by means of a plurality of controllable light sources (03-08-2012
20120057753SYSTEMS AND METHODS FOR TRACKING A MODEL - An image such as a depth image of a scene may be received, observed, or captured by a device. A grid of voxels may then be generated based on the depth image such that the depth image may be downsampled. A model may be adjusted based on a location or position of one or more extremities estimated or determined for a human target in the grid of voxels. The model may also be adjusted based on a default location or position of the model in a default pose such as a T-pose, a DaVinci pose, and/or a natural pose.03-08-2012
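The downsampling step above, building a grid of voxels from a depth image, can be sketched by averaging fixed-size blocks of depth values. Block averaging is one simple way to realize the described grid; the cell size and the averaging rule are assumptions:

```python
import numpy as np

def depth_to_voxel_grid(depth, cell=4):
    """Downsample a depth image into a coarse grid by averaging each
    cell x cell block of depth values (a sketch of the voxel-grid step).
    """
    h, w = depth.shape
    h2, w2 = h - h % cell, w - w % cell      # crop to a multiple of cell
    blocks = depth[:h2, :w2].reshape(h2 // cell, cell, w2 // cell, cell)
    return blocks.mean(axis=(1, 3))          # one value per voxel cell
```

Later stages (extremity estimation, model adjustment) then work on the much smaller grid instead of the full-resolution depth image.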
20120057752METHOD OF, AND APPARATUS AND COMPUTER SOFTWARE FOR, IMPLEMENTING IMAGE ANALYSIS PROTOCOLS - A computer-based method for the development of an image analysis protocol for analyzing image data, the image data containing images including image objects, in particular biological image objects such as biological cells. The image analysis protocol, once developed, is operable in an image analysis software system to report on one or more measurements conducted on selected ones of the image objects. The development process includes defining target identification settings to identify at least two different target sets of image objects, defining one or more pair-wise linking relationships between the target sets, and defining one or more measurements to be performed using said pair-wise linking relationship(s).03-08-2012
20120057750System And Method For Data Assisted Chrom-Keying - The invention illustrates a system and method of displaying a base image and an overlay image comprising: capturing a base image of a real event; receiving an instrumentation data based on the real event; identifying a visual segment within the base image based on the instrumentation data; and rendering an overlay image within the visual segment.03-08-2012
20120057748APPARATUS WHICH DETECTS MOVING OBJECT FROM IMAGE AND METHOD THEREOF - An image processing apparatus includes an input unit configured to input a plurality of time-sequential still images, a setting unit configured to set, in a still image among the plurality of still images, a candidate region that is a candidate of a region in which an object exists, and to acquire a likelihood of the candidate region, a motion acquisition unit configured to acquire motion information indicating a motion of the object based on the still image and another still image that is time-sequential to the still image, a calculation unit configured to calculate a weight corresponding to an appropriateness of the motion indicated by the motion information as a motion of the object, a correction unit configured to correct the likelihood based on the weight, and a detection unit configured to detect the object from the still image based on the corrected likelihood.03-08-2012
20120057747IMAGE PROCESSING SYSTEM AND IMAGE PROCESSING METHOD - An image processing system performs a position-matching operation on first and second images, which are obtained by photographing the same object a plurality of times. A plurality of shift points are detected in the second image. The shift points correspond to fixed points, which are dispersed throughout the whole of the first image. The second image is divided into a plurality of partial images, the vertices of which are positioned at the same coordinates as the fixed points in the first image. Each of the partial images is shifted to the shift points to transform the partial images so that corresponding transformed partial images are produced. The transformed partial images are combined to form a combined image.03-08-2012
20120057745DETECTION OF OBJECTS USING RANGE INFORMATION - A system and method for detecting objects and background in digital images using range information includes receiving the digital image representing a scene; identifying range information associated with the digital image and including distances of pixels in the scene from a known reference location; generating a cluster map based at least upon an analysis of the range information and the digital image, the cluster map grouping pixels of the digital image by their distances from a viewpoint; identifying objects in the digital image based at least upon an analysis of the cluster map and the digital image; and storing an indication of the identified objects in a processor-accessible memory system.03-08-2012
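The cluster map above groups pixels of the digital image by their distance from the viewpoint. A minimal sketch labels each pixel by the distance band it falls in; fixed band edges are an illustrative simplification of the analysis-driven clustering the abstract describes:

```python
import numpy as np

def cluster_map_from_range(range_image, edges):
    """Group pixels by distance from the viewpoint.

    range_image: per-pixel distances from the known reference location;
    edges: increasing distance boundaries, e.g. [1.0, 3.0, 10.0]. Pixels
    are labelled 0..len(edges) by the band they fall in."""
    return np.digitize(range_image, edges)
```

Connected regions sharing a label can then be treated as candidate objects, with the farthest band serving as background.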
20090034792REDUCING LATENCY IN A DETECTION SYSTEM - A first multi-dimensional digital image of a scan region is generated. The scan region is included in a materials-detection apparatus and is configured to receive and move containers through the materials-detection apparatus. A pre-defined background range of values is accessed, the background range of values representing a range of values associated with non-target materials and the background range of values being distinct from values associated with the target materials. A value of a voxel included in the multi-dimensional digital image is compared to the background range of values to determine whether the value of the voxel is within the background range of values. If the value of the voxel is within the background range of values, the voxel is identified as a voxel representing a low-density material. A second multi-dimensional digital image that disregards the identified voxel is generated to compress the first multi-dimensional digital image.02-05-2009
20130011015BIOMETRIC AUTHENTICATION DEVICE, BIOMETRIC AUTHENTICATION PROGRAM, AND BIOMETRIC AUTHENTICATION METHOD - A biometric authentication device that authenticates a user using biological features of the user, the biometric authentication device includes: an illumination unit configured to illuminate a target which represents the biological features; an image sensor configured to obtain a first captured image by capturing the target illuminated by the illumination unit, and obtain a second captured image by capturing the target not illuminated by the illumination unit; an acquisition unit configured to acquire from a storage unit a mask which has a target area approximating the shape of the target in the first and second captured images obtained by the image sensor; and a detection unit configured to detect light other than illumination light illuminated by the illumination unit based on the mask acquired by the acquisition unit and at least one of the first and second images.01-10-2013
20130011010THREE-DIMENSIONAL IMAGE PROCESSING DEVICE AND THREE DIMENSIONAL IMAGE PROCESSING METHOD - A 3D image processing device comprising: an object detecting unit, for detecting a first location for an object in a first image and a second location for the object in a second image; a disparity determining unit, coupled to the object detecting unit, for computing a disparity result for the object between the first image and the second image according to the first location and the second location; a displacement computing unit, coupled to the disparity determining unit, for computing a first displacement distance of the first image and a second displacement distance of the second image according to the disparity result; and a displacement unit, coupled to the displacement computing unit, for moving the first image and the second image to generate a first displaced image and a second displaced image, according to the first displacement distance and the second displacement distance.01-10-2013
20130011009RECOGNITION SYSTEM BASED ON AUGMENTED REALITY AND REMOTE COMPUTING AND RELATED METHOD THEREOF - A recognition system based on augmented reality and remote computing includes a terminal touch screen, a terminal processing unit, a remote database, and a remote computing unit. The terminal touch screen fetches a recognition characteristic of an object to be recognized. The terminal processing unit transmits the recognition characteristic and an icon of an application program required to be executed to the remote computing unit. The remote database stores data corresponding to the recognition characteristic of the object to be recognized. The remote computing unit receives the recognition characteristic and the icon, fetches name and address information from the remote database according to the recognition characteristic and the icon, and transmits the name and address information to the terminal processing unit to make a terminal module enter the application program.01-10-2013
20110081043USING VIDEO-BASED IMAGERY FOR AUTOMATED DETECTION, TRACKING, AND COUNTING OF MOVING OBJECTS, IN PARTICULAR THOSE OBJECTS HAVING IMAGE CHARACTERISTICS SIMILAR TO BACKGROUND - A system and method to automatically detect, track and count individual moving objects in a high density group without regard to background content, embodiments performing better than a trained human observer. Select embodiments employ thermal videography to detect and track even those moving objects having thermal signatures that are similar to a complex stationary background pattern. The method allows tracking an object that need not be identified in every frame of the video, that may change polarity in the imagery with respect to background, e.g., switching from relatively light to dark or relatively hot to cold and vice versa, or both. The methodology further provides a permanent record of an “episode” of objects in motion, permitting reprocessing with different parameters any number of times. Post-processing of the recorded tracks allows easy enumeration of the number of objects tracked within the FOV of the imager.04-07-2011
20120250939OBJECT DETECTION SYSTEM AND METHOD THEREFOR - In an object detection system with a first and a second image processing apparatus, the first image processing apparatus includes a reduction unit configured to reduce an input image, a first detection unit configured to detect a predetermined object from a reduction image reduced by the reduction unit, and a transmission unit configured to transmit the input image and a first detection result detected by the first detection unit to the second image processing apparatus, and the second image processing apparatus includes a reception unit configured to receive the input image and the first detection result from the first image processing apparatus, a second detection unit configured to detect the predetermined object from the input image, and an output unit configured to output the first detection result and a second detection result detected by the second detection unit.10-04-2012
20120250940TERMINAL DEVICE, OBJECT CONTROL METHOD, AND PROGRAM - An apparatus is disclosed comprising a memory storing instructions and a control unit executing the instructions to detect an object of interest within an image of real space, detect an orientation and a position of the object, and generate a modified image. The generating comprises determining a region of the image of real space based on the detected orientation and position. The instructions may further include instructions to display a virtual image of the object in the region, change the virtual image based on a detected user input, the changed virtual image being maintained within the region, and display the modified image.10-04-2012
20120250941SOUND REPRODUCTION PROGRAM AND SOUND REPRODUCTION DEVICE - A sound reproduction program is provided which improves the precision of musical score recognition when performing reading and sound reproduction of a musical score. The sound reproduction program is stored in a terminal including an image pickup unit and a display unit and makes a computer execute a function of reading a musical score image at every predetermined time as a sampling image by a camera device 10-04-2012
20120063637ARRAY OF SCANNING SENSORS - An array of image sensors is arranged to cover a field of view for an image capture system. Each sensor has a field of view segment which is adjacent to the field of view segment covered by another image sensor. The adjacent field of view (FOV) segments share an overlap area. Each image sensor comprises sets of light sensitive elements which capture image data using a scanning technique which proceeds in a sequence providing for image sensors sharing overlap areas to be exposed in the overlap area during the same time period. At least two of the image sensors capture image data in opposite directions of traversal for an overlap area. This sequencing provides closer spatial and temporal relationships between the data captured in the overlap area by the different image sensors. The closer spatial and temporal relationships reduce artifact effects at the stitching boundaries, and improve the performance of image processing techniques applied to improve image quality.03-15-2012
20120027255VEHICLE DETECTION APPARATUS - A vehicle detection apparatus comprises an other-vehicle detection module configured to detect points of light in an image captured by a vehicle to which the vehicle detection module is mounted and to detect other vehicles based on the points of light, a vehicle lane-line detection module configured to detect a vehicle lane-line in the captured image, and a region sectioning module configured to section the captured image based on the detected vehicle lane-line into an own vehicle lane region, an oncoming vehicle lane region, and a vehicle lane exterior region. Other vehicles are detected by the other-vehicle detection module by detecting points of light based on respective detection conditions set for each of the sectioned regions.02-02-2012
20120027251DEVICE WITH MARKINGS FOR CONFIGURATION - A device including a network interface is marked for determination of the position or orientation of the device. In particular, the markings can include a pattern and proportions that enable determination of at least one of a position and an orientation of the device relative to a station using appearance of the markings as observed from the station.02-02-2012
20120027259SYNCHRONIZATION OF TWO IMAGE SEQUENCES OF A PERIODICALLY MOVING OBJECT - A method and an apparatus for correlating two image sequences of a periodically moving object with respect to the periodicity is described. A first frame sequence of the object moving with the first periodicity is acquired. Therein the first frame sequence comprises at least one cycle of motion. A second frame sequence of the object moving with the second periodicity is acquired. Therein the second frame sequence comprises at least one cycle of motion. The first and the second frame sequences are synchronized with respect to the respective periodicity such that same phases of motion of the periodically moving object are correlated to be presented simultaneously. The present invention allows comparison of sequences representing a periodic motion even when each sequence contains a different number of frames for the same cycle of motion. Thereby, e.g. image sequences of a beating heart acquired before and after a therapy may be presented in a synchronised way and therefore may be easily compared.02-02-2012
20120027260ASSOCIATING A SENSOR POSITION WITH AN IMAGE POSITION - A system for associating a sensor position with an image position comprises position information means 02-02-2012
20120027256Automatic Media Sharing Via Shutter Click - A computer-implemented method for automatically sharing media between users is provided. Collections of images are received from different users, where each collection is associated with a particular user and the users may be associated with each other. The collections are grouped into one or more albums based on the content of the images in the collection, where each album is associated with a particular user. The albums from the different users are grouped into one or more event groups based on the content of the albums. The event groups are then shared automatically, without user intervention, between the different users based on their associations with each other and their individual sharing preferences.02-02-2012
20120027257METHOD AND AN APPARATUS FOR DISPLAYING A 3-DIMENSIONAL IMAGE - A three-dimensional (3D) image display device may display a perceived 3D image. A location tracking unit may determine a viewing distance from a screen to a viewer. An image processing unit may calculate a 3D image pixel period based on the determined viewing distance, may determine a color of at least one of pixels and sub-pixels displaying the 3D image based on the calculated 3D image pixel period, and may control the 3D image to be displayed based on the determined color.02-02-2012
20120027258OBJECT DETECTION DEVICE - An object detection device including: an imaging unit (02-02-2012
20120027253ILLUMINATION APPARATUS AND BRIGHTNESS ADJUSTING METHOD - An illumination apparatus comprises a control unit, an image capturing unit, a processor unit, a comparison unit, an adjustment unit and an illumination unit. The control unit generates a start signal at a predetermined time. The image capturing unit captures a plurality of images of the ambient road condition according to the start signal. The processor unit extracts vehicle edges from the captured images to obtain the current traffic volume. The adjustment unit generates different pulse voltages according to the volume of traffic. The illumination unit emits light according to the different pulse voltages.02-02-2012
20120027250DATA DIFFERENCE GUIDED IMAGE CAPTURING - Methods and apparatuses are disclosed. Previously stored images of one or more geographic areas may be viewed by online users. A new low-resolution image may be acquired and aspects of the new low-resolution image may be compared with a corresponding one of the previously stored images to determine an amount of change. A determination may be made regarding whether to acquire a new high-resolution image based on the determined amount of change and a freshness score associated with the one of the previously stored images. In another embodiment, a new image may be captured and corresponding location data may be obtained. A corresponding previously stored image may be obtained and compared with the new image to determine an amount of change. The new image may be uploaded to a remote computing device based on the determined amount of change and a freshness score of the previously stored image.02-02-2012
20120027249Multispectral Detection of Personal Attributes for Video Surveillance - Techniques for detecting an attribute in video surveillance include generating training sets of multispectral images, generating a group of multispectral box features comprising receiving input of a detector size of a width and height, a number of spectral bands in the multispectral images, and integer values representing a minimum and maximum width and height of multispectral box features, fixing a feature width and height, generating feature building blocks with the fixed width and height, placing a feature building block at a same location for each spectral band level, and enumerating combinations of the feature building blocks through each spectral level until all sizes within the integer values have been covered, wherein each combination determines a multispectral box feature, using the training sets to select multispectral box features to generate a multispectral attribute detector, and using the multispectral attribute detector to identify a location of an attribute in video surveillance.02-02-2012
20120027254Information Processing Apparatus and Information Processing Method for Drawing Image that Reacts to Input Information - In an information processing apparatus, an external-information acquisition unit acquires external information such as an image, a sound, textual information, and numerical information from an input apparatus. A field-image generation unit generates, as an image, a “field” that acts on a particle for a predetermined time step based on the external information. An intermediate-image memory unit stores an intermediate image that is generated in the process of generating a field image by the field-image generation unit. A field-image memory unit stores the field image generated by the field-image generation unit. A particle-image generation unit generates data of a particle image to be output finally by using the field image stored in the field-image memory unit.02-02-2012
20120027252HAND GESTURE DETECTION - A method for detecting presence of a hand gesture in video frames includes receiving video frames having an original resolution, downscaling the received video frames into video frames having a lower resolution, and detecting a motion corresponding to the predefined hand gesture in the downscaled video frames based on temporal motion information in the downscaled video frames. The method also includes detecting a hand shape corresponding to the predefined hand gesture in a candidate search window within one of the downscaled video frames using a binary classifier. The candidate search window corresponds to a motion region containing the detected motion. The method further includes determining whether the received video frames contain the predefined hand gesture based on the hand shape detection.02-02-2012
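The downscale-then-detect pipeline the hand gesture abstract describes can be sketched roughly as follows. This is an illustrative reconstruction, not the patented implementation; `downscale`, `motion_region`, the block-averaging scheme, and the thresholds are all hypothetical, and the shape-classification stage is omitted:

```python
def downscale(frame, factor=2):
    """Average each factor x factor block of a grayscale frame into one pixel."""
    h, w = len(frame), len(frame[0])
    out = []
    for y in range(0, h - h % factor, factor):
        row = []
        for x in range(0, w - w % factor, factor):
            block = [frame[y + dy][x + dx]
                     for dy in range(factor) for dx in range(factor)]
            row.append(sum(block) / len(block))
        out.append(row)
    return out

def motion_region(prev, curr, thresh=10):
    """Bounding box (ymin, xmin, ymax, xmax) of pixels that changed between frames."""
    ys, xs = [], []
    for y, (prev_row, curr_row) in enumerate(zip(prev, curr)):
        for x, (p, c) in enumerate(zip(prev_row, curr_row)):
            if abs(p - c) > thresh:
                ys.append(y)
                xs.append(x)
    if not ys:
        return None
    return min(ys), min(xs), max(ys), max(xs)

# Two 4x4 frames: a bright patch moves from top-left to bottom-right.
f0 = [[100, 100, 0, 0], [100, 100, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]]
f1 = [[0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 100, 100], [0, 0, 100, 100]]
small0, small1 = downscale(f0), downscale(f1)
print(motion_region(small0, small1))  # -> (0, 0, 1, 1)
```

The returned box would serve as the candidate search window in which a binary hand-shape classifier is then run on the downscaled frame.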
20090123029Display-and-image-pickup apparatus, object detection program and method of detecting an object - A display-and-image-pickup apparatus includes: a display-and-image-pickup panel having an image display function and an image pickup function; an image producing means for producing a predetermined processed image on the basis of a picked-up image of a proximity object obtained through the use of the display-and-image-pickup panel; an image processing means for obtaining information about the proximity object through selectively using one of two obtaining modes on the basis of at least one of the picked-up image and the processed image; and a switching means for switching processes so that, in the case where the parameter is increasing, one of the two obtaining modes is switched to the other obtaining mode when the parameter reaches a threshold value, and in the case where the parameter is decreasing, the other obtaining mode is switched to the one obtaining mode when the parameter reaches a smaller threshold value.05-14-2009
20120057749INATTENTION DETERMINING DEVICE - An inattention determining device includes a range changing unit and an inattention determining unit. When a curve detection result is output from a curve detector, the range changing unit changes a first predetermined range to a second predetermined range by a predetermined amount in the curve direction, before the turning direction of an acquisition result changes in the curve direction of the curve detection result. The inattention determining unit determines whether or not the driver is in an inattention state on the basis of the second predetermined range.03-08-2012
20120207347IMAGE ROTATION FROM LOCAL MOTION ESTIMATES - A measure of frame-to-frame rotation is determined. Integral projection vector gradients are determined and normalized for a pair of images. Locations of primary maximum and minimum peaks of the integral projection vector gradients are determined. Based on normalized distances between the primary maximum and minimum peaks, a global image rotation is determined.08-16-2012
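The integral-projection machinery the rotation abstract relies on can be shown in miniature. This is an illustrative sketch only; the function names and toy image are hypothetical, and the patented step of converting normalized peak distances into a global rotation angle is not reproduced:

```python
def integral_projection(img):
    """Horizontal integral projection vector: the sum of each row."""
    return [sum(row) for row in img]

def projection_gradient(vec):
    """Discrete gradient of a projection vector."""
    return [b - a for a, b in zip(vec, vec[1:])]

def primary_peaks(grad):
    """Locations of the primary maximum and minimum peaks of the gradient."""
    return grad.index(max(grad)), grad.index(min(grad))

# A horizontal bright band: the gradient peaks bracket the band's edges.
img = [[0, 0, 0], [9, 9, 9], [9, 9, 9], [0, 0, 0]]
vec = integral_projection(img)  # [0, 27, 27, 0]
hi, lo = primary_peaks(projection_gradient(vec))
print(hi, lo)  # -> 0 2
```

A frame-to-frame rotation measure would then compare such peak distances between the projection gradients of the two images, as the abstract describes.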
20120207355X-RAY CT APPARATUS AND IMAGE DISPLAY METHOD OF X-RAY CT APPARATUS - The X-ray CT apparatus which includes an X-ray generator and an X-ray detector for acquiring projection data of an object from plural angles and creates an arbitrary cross-sectional image of the object on the basis of the projection data includes: an extraction section which extracts a region, which includes a target organ moving periodically, from the cross-sectional image; a synchronous phase determination section which determines a synchronous phase, which is used when creating a synchronous cross-sectional image synchronized with periodic motion of the target organ, on the basis of continuity of the target organ in a direction perpendicular to the cross-sectional image; a synchronous cross-sectional image creating section which creates the synchronous cross-sectional image on the basis of projection data corresponding to the synchronous phase determined by the synchronous phase determination section; and a display unit which displays the synchronous cross-sectional image.08-16-2012
20120207354IMAGE SENSING APPARATUS AND METHOD FOR CONTROLLING THE SAME - Upon receiving an instruction from a user to start sensing a still image, an image sensing apparatus performs scene determination based on a scene-determination evaluation value from an image sensed immediately after the luminance of the image converges to a predetermined range around a target luminance. The image sensing apparatus can accurately determine the scene of the image even with an image sensor having a narrow dynamic range.08-16-2012
20120207352Image Capture and Identification System and Process - A digital image of an object is captured and the object is recognized from a plurality of objects in a database. An information address corresponding to the object is then used to access information and initiate communication pertinent to the object.08-16-2012
20120106782Detector for chemical, biological and/or radiological attacks - This specification generally relates to methods and algorithms for the detection of chemical, biological, and/or radiological attacks. The methods use one or more sensors that can have visual, audio, and/or thermal sensing abilities, and can use algorithms that determine, from the behavior patterns of people, whether there has been a chemical, biological, and/or radiological attack.05-03-2012
20120106796CREATING A CUSTOMIZED AVATAR THAT REFLECTS A USER'S DISTINGUISHABLE ATTRIBUTES - A capture system captures detectable attributes of a user. A differential system compares the detectable attributes with a normalized model of attributes, wherein the normalized model characterizes normal representative attribute values across a sample of a plurality of users, and generates differential attributes representing the differences between the detectable attributes and the normalized model of attributes. Multiple separate avatar creator systems receive the differential attributes and each apply the differential attributes to different base avatars to create custom avatars which reflect a selection of the detectable attributes of the user that are distinguishable from the normalized model of attributes.05-03-2012
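The "differential attributes" idea above (keeping only what deviates from the normalized model) might look like this in miniature. This is a hypothetical sketch; the attribute names, tolerance, and function name are all invented:

```python
def differential_attributes(detected, normalized, tolerance=0.05):
    """Return only the attributes that deviate noticeably from the norm."""
    return {name: detected[name] - normalized[name]
            for name in detected
            if name in normalized
            and abs(detected[name] - normalized[name]) > tolerance}

norm = {"height_m": 1.70, "eye_spacing": 0.50}
user = {"height_m": 1.90, "eye_spacing": 0.50}
diff = differential_attributes(user, norm)
# Only the distinguishable attribute survives; an avatar creator system
# would apply this delta to its own base avatar.
print(sorted(diff))  # -> ['height_m']
```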
20120106799TARGET DETECTION METHOD AND APPARATUS AND IMAGE ACQUISITION DEVICE - The present invention provides a target detection method comprising the following steps: controlling a modulated light emitting device to emit optical pulse signals with a first light intensity and a second light intensity toward a target to be detected and a background, wherein the target to be detected and the background reflect the light pulse signals differently; controlling an image sensor to acquire images of the target to be detected and the background, wherein the image sensor comprises a plurality of image acquisition regions and successively scans the same image acquisition region once at the first light intensity and once at the second light intensity to obtain a first light intensity image and a second light intensity image, which are stored at corresponding locations in a first frame image and a second frame image respectively; and distinguishing the target to be detected from the background using the first frame image and the second frame image. The present invention also provides a target detection apparatus and an image acquisition device. The invention can precisely detect targets, even moving targets, against a strong light background.05-03-2012
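The core differencing step implied by the two-intensity scheme (a target that reflects the pulses strongly changes more between the two frames than the background does) could be sketched as follows. This is illustrative only; the function name, threshold, and toy frames are hypothetical:

```python
def target_mask(frame_hi, frame_lo, thresh=20):
    """True where the pixel response differs strongly between the two
    illumination intensities, i.e. where the reflective target is."""
    return [[abs(h - l) > thresh for h, l in zip(row_hi, row_lo)]
            for row_hi, row_lo in zip(frame_hi, frame_lo)]

# One reflective target pixel (left) against a dim background pixel (right).
hi_frame = [[120, 12]]
lo_frame = [[40, 10]]
print(target_mask(hi_frame, lo_frame))  # -> [[True, False]]
```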
20120063639INFORMATION PROCESSING DEVICE, RECOGNITION METHOD THEREOF AND NON-TRANSITORY COMPUTER-READABLE STORAGE MEDIUM - An information processing device detects a background region from an image, extracts multiple partial regions from the image, sets multiple local regions for each of the multiple partial regions, selects a local region including a region other than the background region from among the multiple local regions and calculates a local feature amount from the selected local region, and determines a partial region that includes a recognition target object from among the multiple partial regions based on the calculated local feature amount.03-15-2012
20120063644DISTANCE-BASED POSITION TRACKING METHOD AND SYSTEM - A pre-operative stage of a distance-based position tracking method (03-15-2012
20120063642SIMILARITY ANALYZING DEVICE, IMAGE DISPLAY DEVICE, IMAGE DISPLAY PROGRAM STORAGE MEDIUM, AND IMAGE DISPLAY METHOD - A similarity analyzing device includes: an image acquisition section which acquires picked-up images with which image pick-up dates and/or times are associated; and an image registration section which registers a face image showing a picked-up face and with which an image pick-up date and/or time is associated. The device further includes: a degree of similarity calculation section which detects a face in each of picked-up images acquired by the image acquisition section and calculates the degree of similarity between the detected face and the face in the face image registered in the image registration section; and a degree of similarity reduction section in which the larger the difference between the image pick-up date and/or time associated with the picked-up image and that associated with the face image is, the more the degree of similarity of the face calculated by the degree of similarity calculation section is reduced.03-15-2012
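The date-dependent reduction in the similarity-analyzing abstract (the larger the gap between capture dates, the lower the usable similarity) could be modeled, for instance, as an exponential decay. The half-life, function name, and decay form are assumptions for illustration, not taken from the patent:

```python
from datetime import date

def adjusted_similarity(similarity, picked_up, registered, half_life_days=365.0):
    """Scale a face-similarity score down as the two capture dates drift apart."""
    gap_days = abs((picked_up - registered).days)
    return similarity * 0.5 ** (gap_days / half_life_days)

same_day = adjusted_similarity(0.9, date(2012, 3, 15), date(2012, 3, 15))
one_year = adjusted_similarity(0.9, date(2012, 3, 15), date(2011, 3, 16))
print(same_day)             # -> 0.9
print(one_year < same_day)  # -> True
```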
20120063640IMAGE PROCESSING APPARATUS, IMAGE FORMING SYSTEM, AND IMAGE FORMING METHOD - An upstream image processing apparatus determines, when geometric conversion is instructed, whether the result of downstream correction processing changes due to the geometric conversion, and if it changes, the apparatus changes the conversion to geometric conversion that does not cause a change in the correction result. Then, the geometric conversion is performed on a target image, and the resultant image is transmitted to a downstream image processing apparatus. Together therewith, instruction information indicating an instruction for correction processing and instruction information indicating geometric transformation processing for performing geometric transformation processing to the instructed degree are transmitted to the downstream image processing apparatus. The downstream image processing apparatus adds an instruction for image processing as appropriate, and thereafter transmits the resultant data to an image forming apparatus. The image forming apparatus forms an image by performing correction processing and geometric transformation processing that have been instructed.03-15-2012
20120170808Obstacle Detection Device - The present invention provides an obstacle detection device that enables stable obstacle detection with fewer misdetections even when a bright section and a dark section are present in an obstacle and a continuous contour of the obstacle extends across the bright section and the dark section. The obstacle detection device includes a processed image generating unit that generates a processed image for detecting an obstacle from a picked-up image, a small region dividing unit that divides the processed image into plural small regions, an edge threshold setting unit that sets an edge threshold for each of the small regions from the pixel values of the plural small regions and the processed image, an edge extracting unit that calculates a gray gradient value of each of the small regions from the plural small regions and the processed image and generates, using the edge threshold for the small region corresponding to the calculated gray gradient value, an edge image and a gradient direction image, and an obstacle recognizing unit that determines the presence or absence of an obstacle from the edge image in a matching determination region set in the edge image and the gradient direction image corresponding to the edge image. The small region dividing unit divides the processed image into the plural small regions on the basis of the illumination state outside the own vehicle.07-05-2012
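Setting a separate edge threshold per small region, as the obstacle-detection abstract describes, can be sketched as below. This is an illustrative stand-in (block size, scale factor, and the mean-intensity rule are hypothetical), showing only why bright and dark regions end up with different thresholds:

```python
def block_edge_thresholds(img, block=2, scale=0.5):
    """One edge threshold per small region, scaled from that region's mean
    intensity, so dark regions get a lower (more sensitive) threshold."""
    h, w = len(img), len(img[0])
    rows = []
    for y in range(0, h, block):
        row = []
        for x in range(0, w, block):
            vals = [img[yy][xx]
                    for yy in range(y, min(y + block, h))
                    for xx in range(x, min(x + block, w))]
            row.append(scale * sum(vals) / len(vals))
        rows.append(row)
    return rows

# Left half brightly lit, right half in shadow: each gets its own threshold,
# so a contour crossing the boundary can still be extracted on both sides.
img = [[200, 200, 10, 10],
       [200, 200, 10, 10]]
print(block_edge_thresholds(img))  # -> [[100.0, 5.0]]
```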
20120170805OBJECT DETECTION IN CROWDED SCENES - Methods and systems are provided for object detection. A method includes automatically collecting a set of training data images from a plurality of images. The method further includes generating occluded images. The method also includes storing in a memory the generated occluded images as part of the set of training data images, and training an object detector using the set of training data images stored in the memory. The method additionally includes detecting an object using the object detector, the object detector detecting the object based on the set of training data images stored in the memory.07-05-2012
20120300985AUTHENTICATION SYSTEM, AND METHOD FOR REGISTERING AND MATCHING AUTHENTICATION INFORMATION - A certain amount of unique data of a target is extracted from image information that was read, and it is determined whether or not the target is valid on the basis of the extracted unique data. Processes are executed by means of an image reading unit which extracts an image by scanning a target, an individual difference data calculating unit which calculates individual difference data from the obtained image, an individual difference data comparing unit which compares the calculated individual difference data, and a determination unit which determines whether or not to grant validation.11-29-2012
20120300978Device and Method for Determining the Orientation of an Eye - In a device or a method for determining the direction of vision of an eye, a starting point or a final point of a light beam reflected by a part of the eye and detected by a detector system, or of a light beam projected by a projection system onto or into the eye two-dimensionally, describes a pattern of a scanning movement in the eye. The inventive method uses a displacement device that guides the center of the pattern of movement into the pupil or macula center of the eye, and a determination device that uses the pattern of movement of the scanning movement to determine the pupil center or macula center.11-29-2012
20120300982IMAGE IDENTIFICATION DEVICE, IMAGE IDENTIFICATION METHOD, AND RECORDING MEDIUM - The invention provides an image identification device that classifies block images, obtained by dividing a target image, into predetermined categories using a separating plane, learning of which has been completed in advance for each of the categories. The image identification device includes a target image input unit that inputs the target image, a block image generation unit that divides the target image into blocks to generate the block images, a feature quantity computing unit that computes feature quantities of the block images, and a category determination unit that determines whether or not the block images are classified into one of the categories, using the separating plane and coordinate positions corresponding to the magnitudes of the feature quantities of the block images in a feature quantity space, wherein the feature quantity computing unit uses local feature quantities and a global feature quantity as the feature quantity of a given target block image.11-29-2012
20120300981METHOD FOR OBJECT DETECTION AND APPARATUS USING THE SAME - A method for object detection and an apparatus using the same are provided, and the method includes: An image is captured, in which the image includes a plurality of sampling-windows. A first-stage sub-classifier of a classifier is used to detect whether the sampling-windows contain an object therein. The classifier is rotated at least one time by a predetermined rotation angle and the first-stage sub-classifier of the classifier is used to detect whether the sampling-windows contain the object after each rotating, wherein when the object is detected within the sampling-windows, keep detecting whether the sampling-windows contain the object therein sequentially by a second-stage sub-classifier to an N11-29-2012
20120300980LEARNING DEVICE, LEARNING METHOD, AND PROGRAM - Disclosed is a learning device. A feature-quantity calculation unit extracts a feature quantity from each feature point of a learning image. An acquisition unit acquires a classifier already obtained by learning as a transfer classifier. A classifier generation unit substitutes feature quantities into weak classifiers constituting the transfer classifier, calculates error rates of the weak classifiers on the basis of classification results of the weak classifiers and a weight of the learning image, and iterates a process of selecting a weak classifier of which the error rate is minimized a plurality of times. In addition, the classifier generation unit generates a classifier for detecting a detection target by linearly coupling a plurality of selected weak classifiers.11-29-2012
20120300979PLANAR MAPPING AND TRACKING FOR MOBILE DEVICES - Real time tracking and mapping is performed using images of an unknown planar object. Multiple images of the planar object are captured. A new image is selected as a new keyframe. Homographies are estimated between the new keyframe and each of a plurality of spatially distributed previous keyframes for the planar object. A graph structure is generated using the new keyframe, each of the plurality of previous keyframes, and the homographies between the new keyframe and each of the plurality of previous keyframes. The graph structure is used to create a map of the planar object. The planar object is tracked based on the map and subsequently captured images.11-29-2012
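A small sketch of why a keyframe graph of pairwise homographies is useful: homographies compose by 3x3 matrix multiplication, so following a path through the graph yields the mapping between any two keyframes. This is illustrative only (pure translation homographies are used for clarity; the estimation step itself, e.g. from point correspondences, is not shown):

```python
def hmul(A, B):
    """Compose two 3x3 homographies (A applied after B)."""
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def happly(H, pt):
    """Apply a homography to a 2D point, with projective normalization."""
    x, y = pt
    u = H[0][0] * x + H[0][1] * y + H[0][2]
    v = H[1][0] * x + H[1][1] * y + H[1][2]
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return (u / w, v / w)

def translation(tx, ty):
    """A homography that simply shifts the plane by (tx, ty)."""
    return [[1, 0, tx], [0, 1, ty], [0, 0, 1]]

# Keyframe graph edges: H_01 maps keyframe 0 -> 1, H_12 maps 1 -> 2.
H_01 = translation(1, 2)
H_12 = translation(3, 4)
H_02 = hmul(H_12, H_01)      # path 0 -> 1 -> 2 through the graph
print(happly(H_02, (0, 0)))  # -> (4.0, 6.0)
```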
20120106792USER INTERFACE APPARATUS AND METHOD USING MOVEMENT RECOGNITION - A movement recognition method and a user interface are provided. A skin color is detected from a reference face area of an image. A movement-accumulated area, in which movements are accumulated, is detected from sequentially accumulated image frames. Movement information corresponding to the skin color is detected from the detected movement-accumulated area. A user interface screen is created and displayed using the detected movement information.05-03-2012
20120155704LOCALIZED WEATHER PREDICTION THROUGH UTILIZATION OF CAMERAS - Described herein are various technologies pertaining to predicting an amount of electrical power that is to be generated by a power system at a future point in time, wherein the power system utilizes a renewable energy resource to generate electrical power. A camera is positioned to capture an image of sky over a geographic region of interest. The image is analyzed to predict an amount of solar radiation that is to be received by the power source at a future point in time. The predicted solar radiation is used to predict an amount of electrical power that will be output by the power system at the future point in time. A computational resource of a data center that is powered by way of the power source is managed as a function of the predicted amount of power.06-21-2012
20120155710PAPER-SHEET HANDLING APPARATUS AND PAPER-SHEET HANDLING METHOD - A paper-sheet handling apparatus (06-21-2012
20120155709Detecting Orientation of Digital Images Using Face Detection Information - A method of automatically establishing the correct orientation of an image using facial information. This method is based on exploiting an inherent property of image recognition algorithms in general, and face detection in particular, in which recognition is based on criteria that are highly orientation-sensitive. By applying a detection algorithm to images in various orientations, or alternatively by rotating the classifiers, and comparing the number of faces successfully detected in each orientation, one may conclude which orientation is most likely correct. Such a method can be implemented as an automated or semi-automatic method to guide users in viewing, capturing or printing images.06-21-2012
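The try-all-orientations idea in the face-detection abstract can be mimicked with any detector function. This is a toy sketch; the lambda "detector" below stands in for a real, orientation-sensitive face detector and is purely hypothetical:

```python
def best_orientation(image, detect_faces):
    """Rotate through 0/90/180/270 degrees and keep the orientation
    in which the detector reports the most faces."""
    def rot90_cw(img):
        # Clockwise 90-degree rotation of a 2D list.
        return [list(row) for row in zip(*img[::-1])]
    best_angle, best_count = 0, -1
    for angle in (0, 90, 180, 270):
        hits = len(detect_faces(image))
        if hits > best_count:
            best_angle, best_count = angle, hits
        image = rot90_cw(image)
    return best_angle

# Toy detector: "sees a face" only when the marker lands at the top-left.
detector = lambda img: [1] if img[0][0] == "F" else []
print(best_orientation([["x", "F"], ["x", "x"]], detector))  # -> 270
```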
20120155707IMAGE PROCESSING APPARATUS AND METHOD OF PROCESSING IMAGE - An image processing apparatus includes a first detecting unit configured to detect an object in an image; a determining unit configured to determine a moving direction of the object detected by the first detecting unit; and a second detecting unit configured to perform detection processing of detecting whether the object detected by the first detecting unit is a specific object on the basis of the moving direction of the object determined by the determining unit.06-21-2012
20120155708APPARATUS AND METHOD FOR MEASURING TARGET POINT IN VIDEO - Disclosed are an apparatus and method for measuring a target point in a video. In the apparatus and method for measuring a target point in a video, a target point is recognized in a video including the target point set as a measuring target, information regarding the target point is extracted by using location information of the recognized target point and map information of the surroundings of the recognized target point, and the extracted target point is displayed in the video while providing detailed map information regarding the target point. Accordingly, a user can be quickly provided with detailed information regarding the location of the target point or an object present in a visual range and geo-spatial information of the surroundings.06-21-2012
20120155706RANGE IMAGE GENERATION APPARATUS, POSITION AND ORIENTATION MEASUREMENT APPARATUS, RANGE IMAGE PROCESSING APPARATUS, METHOD OF CONTROLLING RANGE IMAGE GENERATION APPARATUS, AND STORAGE MEDIUM - A range image generation apparatus comprises: a generation unit adapted to generate a first range image of a target measurement object at one of a predetermined in-plane resolution and a predetermined depth-direction range resolving power; an extraction unit adapted to extract range information from the first range image generated by the generation unit; and a decision unit adapted to decide, as a parameter based on the range information extracted by the extraction unit, one of an in-plane resolution and a depth-direction range resolving power of a second range image to be generated by the generation unit, wherein the generation unit generates the second range image using the parameter decided by the decision unit.06-21-2012
20120155705FIRST PERSON SHOOTER CONTROL WITH VIRTUAL SKELETON - A virtual skeleton includes a plurality of joints and provides a machine readable representation of a human target observed with a three dimensional depth camera. A relative position of a hand joint of the virtual skeleton is translated as a gestured aiming vector control, and a virtual weapon is aimed in proportion to the gestured aiming vector control.06-21-2012
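Translating a hand-joint position into a gestured aiming vector amounts to normalizing the offset from a reference joint. This is a minimal sketch under assumed conventions (the choice of shoulder as reference, the coordinate layout, and the function name are not from the patent):

```python
import math

def aiming_vector(reference_joint, hand_joint):
    """Unit vector from a reference joint (e.g. the shoulder) toward the hand,
    used to aim a virtual weapon in proportion to the gesture."""
    offset = [h - r for h, r in zip(hand_joint, reference_joint)]
    length = math.sqrt(sum(c * c for c in offset))
    if length == 0:
        raise ValueError("hand coincides with reference joint")
    return [c / length for c in offset]

# Hand held straight out along +z from the shoulder: aim straight ahead.
print(aiming_vector((0.0, 1.4, 0.0), (0.0, 1.4, 0.5)))  # -> [0.0, 0.0, 1.0]
```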
20120155702System and Method for Detecting Nuclear Material in Shipping Containers - A system and method for detecting metal contraband such as weapons related material in shipping containers where a container is scanned with at least one penetrating beam, preferably a tomographic x-ray beam, and at least one image is formed. The image can be analyzed by a pattern recognizer to find voids representing metal. The voids can be further classified with respect to their 2 or 3-dimensional geometric shapes. Container ID and contents or bill of lading information can be combined along with other parameters such as total container weight to allow a processor to generate a detection probability. The processor can use artificial intelligence methods to classify suspicious containers for manual inspection.06-21-2012
20120155703MICROPHONE ARRAY STEERING WITH IMAGE-BASED SOURCE LOCATION - Methods and systems for beam forming an audio signal based on a location of an object relative to the listening device, the location being determined from positional data deduced from an optical image including the object. In an embodiment, an object's position is tracked based on video images of the object and the audio signal received from a microphone array located at a fixed position is filtered based on the tracked object position. Beam forming techniques may be applied to emphasize portions of an audio signal associated with sources near the object.06-21-2012
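Delay-and-sum beam forming steered at the visually tracked position works roughly as below. This is an illustrative sketch, not the patented system; the array geometry, sample rate, and function names are invented:

```python
import math

SPEED_OF_SOUND = 343.0  # m/s

def steering_delays(mic_positions, source_xy, sample_rate):
    """Per-microphone sample delays that align a wavefront arriving
    from the (image-derived) source position."""
    dists = [math.dist(m, source_xy) for m in mic_positions]
    nearest = min(dists)
    return [round((d - nearest) / SPEED_OF_SOUND * sample_rate) for d in dists]

def delay_and_sum(signals, delays):
    """Average the delayed channels, emphasizing sound from the steered point."""
    n = min(len(s) - d for s, d in zip(signals, delays))
    return [sum(s[d + i] for s, d in zip(signals, delays)) / len(signals)
            for i in range(n)]

# Two mics symmetric about a source directly ahead: zero relative delay,
# so the (identical) channels sum coherently.
mics = [(-0.5, 0.0), (0.5, 0.0)]
d = steering_delays(mics, (0.0, 1.0), 16000)
print(d)                                         # -> [0, 0]
print(delay_and_sum([[1, 2, 3], [1, 2, 3]], d))  # -> [1.0, 2.0, 3.0]
```

Here the source position comes from the video tracker rather than from acoustic localization, which is the coupling the abstract describes.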
20100278387Passive Electro-Optical Tracker - A passive electro-optical tracker uses a two-band IR intensity ratio to discriminate high-speed projectiles and obtain a speed estimate from their temperature, as well as determining the trajectory back to the source of fire. In an omnidirectional system a hemispheric imager with an MWIR spectrum splitter forms two CCD images of the environment. Three methods are given to determine the azimuth and range of a projectile, one for clear atmospheric conditions and two for nonhomogeneous atmospheric conditions. The first approach uses the relative intensity of the image of the projectile on the pixels of a CCD camera to determine the azimuthal angle of trajectory with respect to the ground, and its range. The second calculates this angle using a different algorithm. The third uses a least squares optimization over multiple frames based on a triangle representation of the smeared image to yield a real-time trajectory estimate.11-04-2010
20120106793METHOD AND SYSTEM FOR IMPROVING THE QUALITY AND UTILITY OF EYE TRACKING DATA - A system and method for interpreting eye-tracking data are provided. The system and method comprise receiving raw data from an eye tracking study performed using an eye tracking mechanism and structural information pertaining to an electronic document that was the subject of the study. The electronic document and its structural information are used to compute a plurality of transition probability values. The eye-tracking data and the transition probability values are used to compute a plurality of gaze probability values. Using the transition probability values and the gaze probability values, a maximally probably transition sequence corresponding to the most likely direction of the user's gaze upon the document is identified.05-03-2012
20120106784APPARATUS AND METHOD FOR TRACKING OBJECT IN IMAGE PROCESSING SYSTEM - A method, apparatus, and system track an object in an image or a video. Pose information is extracted using a relation of at least one feature point extracted in a first Region of Interest (RoI). A pose is estimated using the pose information. A second RoI is set using the pose, and the second RoI is estimated using a filtering scheme.05-03-2012
20120106783OBJECT TRACKING METHOD - An object tracking method includes steps of obtaining multiple first classifications of pixels within a first focus frame in a first frame picture, wherein the first focus frame includes an object to be tracked and has a first rectangular frame in a second frame picture; performing a positioning process to obtain a second rectangular frame; and obtaining color features of pixels around the second rectangular frame sequentially and establishing multiple second classifications according to the color feature. The established second classifications are compared with the first classifications sequentially to obtain an approximation value, compared with a predetermined threshold. The second rectangular frame is progressively adjusted, so as to establish a second focus frame. By analyzing color features of the pixels of the object and with a classification manner, the efficacy of detecting a shape and size of the object so as to update information of the focus frame is achieved.05-03-2012
20120106798SYSTEM AND METHOD FOR EXTRACTING REPRESENTATIVE FEATURE - A representative feature extraction system which selects a representative feature from an input data group includes: occurrence distribution memory means for memorizing an occurrence distribution with respect to feature quantities assumed to be input; evaluation value calculation means for calculating, with respect to each of data items in the data group, the sum of distances to the other data items included in the data group based on the occurrence distribution, to determine an evaluation value for the data item; and data selecting means for selecting the data item having the smallest evaluation value as a representative feature of the data group.05-03-2012
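The evaluation-value step in the abstract above (for each data item, sum its distances to the other items and pick the minimum) is essentially a 1-medoid selection. A minimal sketch, assuming plain Euclidean distance and omitting the occurrence-distribution weighting the patent adds:

```python
import math

def select_representative(data):
    # Evaluation value of an item = sum of its distances to all other items;
    # the representative feature is the item with the smallest evaluation value.
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    best, best_score = None, float("inf")
    for i, item in enumerate(data):
        score = sum(dist(item, other) for j, other in enumerate(data) if j != i)
        if score < best_score:
            best, best_score = item, score
    return best

# The center of a tight cluster wins over an outlier:
group = [(0.0, 0.0), (1.0, 0.0), (0.5, 0.5), (10.0, 10.0)]
rep = select_representative(group)
```

The brute-force double loop is O(n²); the occurrence distribution in the patent exists precisely to avoid recomputing distances against raw data.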
20120207350APPARATUS FOR IDENTIFICATION OF AN OBJECT QUEUE, METHOD AND COMPUTER PROGRAM - In daily life, people are often forced to join a queue in order, for example, to pay at a checkout or to be dealt with at an airport, etc. Because of the varied forms a queue can take, queues are not usually recorded automatically, but are analyzed manually. For example, if a long queue forms at a supermarket, as a result of which the predicted waiting time for the customers rises above a threshold value, this situation can be identified by the checkout personnel, and a further checkout can be opened. A device 08-16-2012
20120207346Detecting and Localizing Multiple Objects in Images Using Probabilistic Inference - An object detection system is disclosed herein. The object detection system allows detection of one or more objects of interest using a probabilistic model. The probabilistic model may include voting elements usable to determine which hypotheses for locations of objects are probabilistically valid. The object detection system may apply an optimization algorithm such as a simple greedy algorithm to find hypotheses that optimize or maximize a posterior probability or log-posterior of the probabilistic model or a hypothesis receiving a maximal probabilistic vote from the voting elements in a respective iteration of the algorithm. Locations of detected objects may then be ascertained based on the found hypotheses.08-16-2012
20100172542BUNDLING OF DRIVER ASSISTANCE SYSTEMS - A traffic sign recognition system including a detection mechanism adapted for detecting a candidate traffic sign and a recognition mechanism adapted for recognizing the candidate traffic sign as being an electronic traffic sign. A partitioning mechanism may be adapted for partitioning the image frames into a first partition and a second partition. The detection mechanism may use the first partition of the image frames and the recognition mechanism may use the second partition of the image frames. When the candidate traffic sign is detected as an electronic traffic sign, the recognition mechanism may use both the first partition of the image frames and the second partition of the image frames.07-08-2010
20100172541TARGETING METHOD, TARGETING DEVICE, COMPUTER READABLE MEDIUM AND PROGRAM ELEMENT - According to an exemplary embodiment a targeting method for targeting a first object from an entry point to a target point in an object (07-08-2010
20110103645Motion Detecting Apparatus - A motion detecting apparatus includes a fetcher which repeatedly fetches an object scene image having a designated resolution. An assigner assigns a plurality of areas each of which has a representative point to the object scene image in a manner to have an overlapping amount different depending on a size of the designated resolution. A divider divides each of a plurality of images respectively corresponding to the plurality of areas assigned by the assigner, into a plurality of partial images, by using the representative points as a base point. A detector detects a difference in brightness between a pixel corresponding to the representative point and surrounding pixels, from each of the plurality of partial images divided by the divider. A creator creates motion information indicating a motion of the object scene image fetched by the fetcher, based on a detection result of the detector.05-05-2011
20110103649Complex Wavelet Tracker - The present invention relates to a video tracker which allows automatic tracking of a selected area over video frames. Motion of the selected area is defined by a parametric motion model. In addition to simple displacement of the area, it can also detect motions such as rotation, scaling and shear depending on the motion model. The invention realizes the tracking of the selected area by estimating the parameters of this motion model in the complex discrete wavelet domain. The invention can achieve the result in a non-iterative direct way. Estimation carried out in the complex discrete wavelet domain provides a robust tracking opportunity without being affected by noise and illumination changes in the video, as opposed to the intensity-based methods. The invention can easily be adapted to many fields in addition to video tracking.05-05-2011
20110103648Method and apparatus for automatic object identification - A method and system for processing image data to identify objects in an image. A gradient vector image is generated from the image, the gradient vector image identifying a gradient magnitude value and a gradient direction for each pixel of the image. Lines are identified in the gradient vector image. It is determined whether the identified lines are perpendicular, whether more than a predetermined number of pixels on each of the lines identified as perpendicular have a gradient magnitude greater than a predetermined threshold, and whether the individual lines which are identified as perpendicular are within a predetermined distance of each other. A portion of the image is identified as an object if the identified lines are perpendicular, more than the predetermined number of pixels on each of the lines have a gradient magnitude greater than the predetermined threshold, and are within a predetermined distance of each other.05-05-2011
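The gradient vector image this abstract begins with (a gradient magnitude and direction for every pixel) can be sketched with central differences; the derivative operator is an assumption, since the patent does not name one:

```python
import numpy as np

def gradient_vector_image(img):
    # Per-pixel gradient magnitude and direction from finite differences.
    img = img.astype(float)
    gy, gx = np.gradient(img)       # derivatives along rows, then columns
    magnitude = np.hypot(gx, gy)
    direction = np.arctan2(gy, gx)  # radians in (-pi, pi]
    return magnitude, direction

# A vertical step edge produces purely horizontal gradients at the edge:
img = np.zeros((5, 5))
img[:, 3:] = 10.0
mag, ang = gradient_vector_image(img)
```

The subsequent steps of the abstract (finding perpendicular lines and thresholding pixel counts along them) operate on this magnitude/direction pair and are beyond this sketch.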
20110103647Device and Method for Classifying Vehicles - A device is provided for classifying objects, in particular vehicles, on a roadway, with a sensor that operates according to the light-section procedure and is directed onto the roadway to detect the surface contour of an object, and an evaluation unit, connected to the sensor, that classifies the object on the basis of the detected surface contour.05-05-2011
20110103646METHOD FOR GENERATING A DENSITY IMAGE OF AN OBSERVATION ZONE - A method for generating a density image of an observation zone over a given time interval, in which method a plurality of images of the observation zone is acquired and, for each image acquired, the following steps are carried out: a) detection of zones of pixels standing out from the fixed background of the image, b) detection of individuals, c) for each individual detected, determination of the elementary surface areas occupied by this individual, and d) incrementation of a level of intensity of the elementary surface areas thus determined in the density image.05-05-2011
20110103644METHOD AND APPARATUS FOR IMAGE DETECTION WITH UNDESIRED OBJECT REMOVAL - A method and image detection device are provided for removal of undesired objects from image data. In one embodiment, a method includes detecting image data for a first frame, detecting image data for a second frame, and detecting motion of an undesired object based, at least in part, on image data for the first and second frames. Image data of the first frame may be replaced with image data of the second frame to generate corrected image data, wherein the undesired object is removed from the corrected image data. The corrected image data may be stored.05-05-2011
20120121130FLEXIBLE COMPUTER VISION - A method for flexible interest point computation, comprising: producing multiple octaves of a digital image, wherein each octave of said multiple octaves comprises multiple layers; initiating a process comprising detection and description of interest points, wherein said process is programmed to progress layer-by-layer over said multiple layers of each of said multiple octaves, and to continue to a next octave of said multiple octaves upon completion of all layers of a current octave of said multiple octaves; upon the detection and the description of each interest point of said interest points during said process, recording an indication associated with said interest point in a memory, such that said memory accumulates indications during said process; and upon interruption to said process, returning a result being based at least on said indications.05-17-2012
20120121124Method for optical pose detection - The tracking and compensation of patient motion during a magnetic resonance imaging (MRI) acquisition is an unsolved problem. A self-encoded marker where each feature on the pattern is augmented with a 2-D barcode is provided. Hence, the marker can be tracked even if it is not completely visible in the camera image. Furthermore, it offers considerable advantages over a simple checkerboard marker in terms of processing speed, since it makes the correspondence search of feature points and marker-model coordinates, which is required for the pose estimation, redundant. Significantly improved accuracy relative to a planar checkerboard pattern is obtained for both phantom experiments and in-vivo experiments with substantial patient motion. In an alternative aspect, a marker having non-coplanar features can be employed to provide improved motion tracking. Such a marker provides depth cues that can be exploited to improve motion tracking. The aspects of non-coplanar patterns and self-encoded patterns can be practiced independently or in combination.05-17-2012
20100290673IMAGE PROCESSING DEVICE, ELECTRONIC INSTRUMENT, AND INFORMATION STORAGE MEDIUM - An image processing device includes a weighted image generation section that generates a weighted image in which at least one of an object-of-interest area of an input image and an edge of a background area other than the object-of-interest area is weighted, a composition grid generation section that generates a composition grid that includes grid lines that are weighted, and a composition evaluation section that performs composition evaluation calculations on the input image based on the weighted image and the composition grid.11-18-2010
20100290672MOVING OBJECT DETECTING DEVICE, MOVING OBJECT DETECTING METHOD, AND COMPUTER PROGRAM - An apparatus for detecting movement of an object captured by an imaging device includes a moving object detection unit that is (1) operable to detect movement of an object based on a first moving object detecting process, and (2) operable to detect movement of the object based on a second moving object detecting process. The apparatus also includes an output unit operable to generate an output based on the moving object detection unit's detection using at least one of the first and second moving object detecting processes.11-18-2010
20100290671INFORMATION PROCESSING APPARATUS AND INFORMATION PROCESSING METHOD - An association degree evaluation unit acquires pieces of position information of an image sensing apparatus at respective times within an adjacent time range to an imaging time of a designated image of those sensed by the image sensing apparatus. Furthermore, the association degree evaluation unit acquires pieces of position information of a moving object at the respective times within the adjacent time range. Then, the association degree evaluation unit calculates a similarity between routes of the image sensing apparatus and moving object based on the acquired position information group, and decides a degree of association between the designated image and moving object based on the calculated similarity. An associating unit registers information indicating the degree of association in association with the designated image.11-18-2010
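The route-similarity idea in this abstract (compare the image sensing apparatus's positions with the moving object's positions over the same time range) can be illustrated with the simplest possible metric, the mean point-wise distance between time-synchronized tracks; the actual similarity measure is not specified in the abstract:

```python
import math

def route_similarity(route_a, route_b):
    # Mean point-wise distance between two time-synchronized position tracks;
    # a smaller value means the routes are more alike, i.e. a higher
    # degree of association.
    assert len(route_a) == len(route_b)
    return sum(math.dist(p, q) for p, q in zip(route_a, route_b)) / len(route_a)

camera   = [(0, 0), (1, 1), (2, 2)]
person   = [(0, 1), (1, 2), (2, 3)]   # parallel track, offset by one unit
stranger = [(0, 9), (1, 9), (2, 9)]

assoc = route_similarity(camera, person)
```

A threshold on the returned value would then decide whether the designated image and the moving object get registered as associated.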
20100290670IMAGE PROCESSING APPARATUS, DISPLAY DEVICE, AND IMAGE PROCESSING METHOD - According to one embodiment, an image processing apparatus includes an extracted coordinates setting module, an image generator, and an output module. The extracted coordinates setting module sets extracted coordinates in a captured image along a direction in which a viewpoint moves with respect to an object in the captured image. The image generator sequentially extracts partial areas from the captured image in which perspective deformation of the object has been corrected based on the extracted coordinates, and generates a plurality of partial area images from the partial areas. The partial areas are in a size corresponding to the viewing angle of the human eye calculated according to an angle of view of the captured image. The output module outputs a moving image including the partial area images as frames.11-18-2010
20120314907SYSTEM AND METHOD FOR PREDICTING OBJECT LOCATION - A system for predicting object location includes a video capture system for capturing a plurality of video frames, each of the video frames having a first area, an object isolation element for locating an object in each of the plurality of video frames, the object being located at a first actual position in a first video frame and being located at a second actual position in a second video frame, and a trajectory calculation element configured to analyze the first actual position and the second actual position to determine an object trajectory, the object trajectory comprising past trajectory and predicted future trajectory, wherein the predicted future trajectory is used to determine a second area in a subsequent video frame in which to search for the object, wherein the second area is different in size than the first area.12-13-2012
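The past-trajectory/predicted-trajectory mechanism above can be sketched with the simplest motion model, constant velocity over two frames; the patent does not commit to a particular model, so this is only illustrative:

```python
def predict_position(p1, p2):
    # Constant-velocity extrapolation: next position = last position + velocity.
    vx, vy = p2[0] - p1[0], p2[1] - p1[1]
    return (p2[0] + vx, p2[1] + vy)

def search_window(center, radius):
    # Axis-aligned search area (x0, y0, x1, y1) centered on the prediction;
    # this plays the role of the "second area" in which to look for the object.
    cx, cy = center
    return (cx - radius, cy - radius, cx + radius, cy + radius)

pred = predict_position((10, 20), (14, 23))  # observed velocity is (4, 3)
win = search_window(pred, 5)
```

Shrinking or growing the radius with prediction confidence is what makes the second search area "different in size than the first area".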
20100092039Digital Image Processing Using Face Detection Information - A method of processing a digital image using face detection within the image achieves one or more desired image processing parameters. A group of pixels is identified that correspond to an image of a face within the digital image. Default values are determined of one or more parameters of at least some portion of the digital image. Values are adjusted of the one or more parameters within the digitally-detected image based upon an analysis of the digital image including the image of the face and the default values.04-15-2010
20100092038SYSTEM AND METHOD OF DETECTING OBJECTS - The present invention is a system and method of segmenting and detecting objects which can be approximated by planar or nearly planar surfaces, in order to detect one or more objects posing threats or potential threats. The method includes capturing imagery of the scene proximate a platform, producing a depth map from the imagery and tessellating the depth map into a number of patches. The method also includes classifying the plurality of patches as threat patches and projecting the threat patches into a pre-generated vertical support histogram to facilitate selection of the projected threat patches having a score value within a sufficiency criterion. The method further includes grouping the selected patches having the score value using a plane fit to obtain a region of interest and processing the region of interest to detect the object.04-15-2010
20100092037METHOD AND SYSTEM FOR VIDEO INDEXING AND VIDEO SYNOPSIS - In a system and method for generating a synopsis video from a source video, at least three different source objects are selected according to one or more defined constraints, each source object being a connected subset of image points from at least three different frames of the source video. One or more synopsis objects are sampled from each selected source object by temporal sampling using image points derived from specified time periods. For each synopsis object a respective time for starting its display in the synopsis video is determined, and for each synopsis object and each frame a respective color transformation for displaying the synopsis object may be determined. The synopsis video is displayed by displaying selected synopsis objects at their respective time and color transformation, such that in the synopsis video at least three points that each derive from different respective times in the source video are displayed simultaneously.04-15-2010
20100092033METHOD FOR TARGET GEO-REFERENCING USING VIDEO ANALYTICS - A method to geo-reference a target between subsystems of a targeting system is provided. The method includes receiving a target image formed at a sender subsystem location, generating target descriptors for a first selected portion of the target image, sending target location information and the target descriptors from a sender subsystem of the targeting system to a receiver subsystem of the targeting system, pointing an optical axis of a camera of the receiver subsystem at the target based on the target location information received from the sending subsystem, forming a target image at a receiver subsystem location when the optical axis is pointed at the target, and identifying a second selected portion of the target image formed at the receiver subsystem location that is correlated to the first selected portion of the target image formed at the sender subsystem location.04-15-2010
20100092036METHOD AND APPARATUS FOR DETECTING TARGETS THROUGH TEMPORAL SCENE CHANGES - A system and method for detecting a target in imagery is disclosed. At least one image region exhibiting changes in at least intensity is detected from among at least a pair of aligned images. A distribution of changes in at least intensity inside the at least one image region is determined using an unsupervised learning method. The distribution of changes in at least intensity is used to identify pixels experiencing changes of interest. At least one target from the identified pixels is identified using a supervised learning method. The distribution of changes in at least intensity is a joint hue and intensity histogram when the pair of images pertain to color imagery. The distribution of changes in at least intensity is an intensity histogram when the pair of images pertain to grey-level imagery.04-15-2010
20100092035AUTOMATIC RECOGNITION APPARATUS - The invention concerns an apparatus for automatic recognition of objects, which includes a device for capturing images of one object, or of a plurality of objects, which are to be recognized. The objects to be evaluated are manually introduced into the field of view of the camera. The apparatus possesses an image recognition device whereby, from an image of an object within the field of view of the camera, an identification signal representing the object is generated. The data acquired therefrom can serve, for example, a weighing scale that has been equipped with the automatic recognition apparatus.04-15-2010
20100092034METHOD AND SYSTEM FOR POSITION DETERMINATION USING IMAGE DEFORMATION - A method and system of position determination using image deformation is provided. One implementation involves receiving an image of a visual tag, the image captured by an image capturing device, wherein the visual tag has a predefined position associated therewith; based on the image determining a distance of the image capturing device from the visual tag, and determining an angular position of the image capturing device relative to the visual tag; and determining position of the image capturing device based on said distance and said angular position.04-15-2010
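One common way image deformation yields range, and a plausible reading of the distance step above, is the pinhole relation between a tag's known physical size and its apparent size in pixels; the angular-position step is omitted here, and the relation itself is an illustrative assumption, not the patent's stated geometry:

```python
def distance_from_tag(focal_px, tag_width_m, apparent_width_px):
    # Pinhole model: apparent_width_px = focal_px * tag_width_m / distance,
    # solved for distance (in meters).
    return focal_px * tag_width_m / apparent_width_px

# A 20 cm tag imaged 80 px wide by a camera with an 800 px focal length:
d = distance_from_tag(focal_px=800, tag_width_m=0.2, apparent_width_px=80)
```

Combining this range with the tag's perspective skew (which encodes the viewing angle) would give the full position fix the abstract describes.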
20100092032METHODS AND APPARATUS TO FACILITATE OPERATIONS IN IMAGE BASED SYSTEMS - Vision based systems may select actions based on analysis of images to redistribute objects. Actions may include action type, action axis and/or action direction. Analysis may determine whether an object is accessible by a robot, whether an upper surface of a collection of objects meet a defined criteria and/or whether clusters of objects preclude access.04-15-2010
20100092031SELECTIVE AND ADAPTIVE ILLUMINATION OF A TARGET - There are provided a method and a system for illuminating one or more targets in a scene. An image of the scene is acquired using a sensing device that may use an infrared sensor, for example. From the image, an illumination controller determines an illumination figure, such that the illumination figure adaptively matches at least a position of the target in the image. The target is then selectively illuminated using an illumination device, according to the illumination figure.04-15-2010
20100092030SYSTEM AND METHOD FOR COUNTING PEOPLE NEAR EXTERNAL WINDOWED DOORS - A system for counting objects, such as people, is provided having a camera (04-15-2010
20120121123INTERACTIVE DEVICE AND METHOD THEREOF - An interactive device is provided. The interactive device has a display device; a camera, for continuously filming a plurality of images in front of the display device, wherein the plurality of images includes at least one first object; and a processor, connected to the display device and the camera, for receiving the plurality of images, displaying the plurality of images on the display device, determining occurrence of an interactive movement of the first object in the plurality of images, designating an interactive object in the plurality of images when the interactive movement is detected, analyzing at least one characteristic of the interactive object, and controlling displayed images on the display device according to a trace of the interactive object.05-17-2012
20120163656METHOD AND APPARATUS FOR IMAGE-BASED POSITIONING - A method and apparatus are provided for image-based positioning, comprising: capturing a first image with an image capturing device, wherein said first image includes at least one object; moving the platform and capturing a second image with the image capturing device, the second image including the at least one object; capturing in the first image an image of a surface and capturing in the second image a second image of the surface; processing the plurality of images of the object and the surface using a combined feature-based process and surface tracking process to track the location of the surface; and, finally, determining the location of the platform by processing the results of the combined feature-based and surface-based processes.06-28-2012
20120314900OBJECT TRACKING - The disclosure describes examples of systems, methods, program storage devices, and computer program products for tracking an object, where a reference image of the tracked object is outputted to an operator.12-13-2012
20120314903METHOD OF RE-SAMPLING ULTRASOUND DATA - The present invention relates to multi-dimensional filtering of ultrasound scan data for antialiasing or reconstruction for the purpose of re-sampling. In particular, the present invention provides a method of re-sampling ultrasound scan data, comprising the steps of: a) obtaining sampled ultrasound scan data acquired from a beamforming system, the sampled data being defined by an original n-dimensional sample coordinate system having n axes, that is defined by the ultrasound probe and scan geometry and in which the samples are spaced uniformly along each axis when measured in units appropriate to that axis; b) defining desired target sample positions in a target n-dimensional coordinate system, that are uniformly spaced along each axis when measured in units appropriate to that axis; c) mapping the target sample positions defined in step (b) into said original n-dimensional sample coordinate system of step (a); d) quantizing the positions of the mapped target samples of step (c) so that they fall on simple exact integer subspacings between the original sample positions; e) designing a set of n-dimensional linear filter kernels according to application of Nyquist-Shannon Sampling Theory, one for each different target sample position relative to the original sample positions of its nearest neighbors, and using the original sample coordinates of the sampled data of step (a) and the desired target sample positions of step (d) in their respective n-dimensional spaces, said n-dimensional filter being separable along each of the original scan dimensions; and f) applying to the sampled data of step (a) the set of n-dimensional linear filter kernels designed in step (e), each filter being applied to calculate the target sample thereby obtaining re-sampled data.12-13-2012
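Steps (c) through (f) above can be illustrated in one dimension: map target positions into the original sample grid, quantize them to exact integer sub-spacings, precompute one short kernel per distinct phase, and apply the kernel matching each target's phase. For brevity this sketch uses 2-tap linear kernels where the patent calls for kernels designed from Nyquist-Shannon sampling theory:

```python
def resample_1d(samples, positions, subdiv=4):
    # One 2-tap kernel per quantized phase k/subdiv between adjacent samples.
    kernels = [(1 - k / subdiv, k / subdiv) for k in range(subdiv)]
    out = []
    for pos in positions:
        q = round(pos * subdiv)        # step (d): quantize to 1/subdiv spacings
        i, phase = divmod(q, subdiv)
        w0, w1 = kernels[phase]
        right = samples[i + 1] if phase else samples[i]  # no overrun at phase 0
        out.append(w0 * samples[i] + w1 * right)         # step (f): apply kernel
    return out

vals = resample_1d([0.0, 4.0, 8.0], [0.25, 1.0, 1.5])
```

Because the kernels depend only on the phase, not on the target position itself, the n-dimensional separable case reduces to applying such a 1-D pass along each scan dimension.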
20120314901Fall Detection and Reporting Technology - Fall detection and reporting technology, in which output from at least one sensor configured to sense, in a room of a building, activity associated with a patient falling is monitored and a determination is made to capture one or more images of the room based on the monitoring. An image of the room is captured with a camera positioned to include the patient within a field of view of the camera and the captured image of the room is analyzed to detect a state of the patient at a time of capturing the image. A potential fall event for the patient is determined based on the detected state of the patient and a message indicating the potential fall event for the patient is sent based on the determination of the potential fall event for the patient. Techniques are also described for fall detection and reporting using an on-body sensing device.12-13-2012
20120314906Device for Updating a Photometric Model - A photometric model includes at least one Gaussian model of a measurable physical magnitude in an image supplied by the camera and it is defined by the mean and the variance of the physical magnitude. A device comprises: means for computing the mean based on the current value of the physical magnitude, these means including a first summer mounted in a closed loop; means for measuring the difference between the mean and the current value of the physical magnitude, these means including a second summer; means for reducing the difference, these means including an automatic regulator. The first summer, the second summer and the automatic regulator are assembled in a closed-loop control of the first summer so as to update the model slowly in a period of stability of the observed scene and rapidly in a period of transition of the observed scene. Application: video surveillance, background subtraction.12-13-2012
20120314904IMAGE COLLATION SYSTEM, IMAGE COLLATION METHOD AND COMPUTER PROGRAM - An image collation system includes: a first direction estimating unit for estimating a first imaging direction of a reference object that matches an imaging direction of a collation target object by comparing global characteristics between an image of the collation target object and the three-dimensional data of the reference object; a second direction estimating unit for generating an image corresponding to the first imaging direction of the reference object, and estimating a second imaging direction of the reference object that matches the imaging direction of the collation target object by comparing local characteristics between the image of the collation target object and the generated image corresponding to the first imaging direction; and an image conformity determining unit for generating an image corresponding to the second imaging direction of the reference object, and determining whether the image of the collation target object matches the generated image corresponding to the second imaging direction.12-13-2012
20120213404AUTOMATIC EVENT RECOGNITION AND CROSS-USER PHOTO CLUSTERING - Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for automatic event recognition and photo clustering. In one aspect, methods include receiving, from a first user, first image data corresponding to a first image, receiving, from a second user, second image data corresponding to a second image, comparing the first image data and the second image data, and determining that the first image and the second image correspond to a coincident event based on the comparing.08-23-2012
20120213407IMAGE CAPTURE AND POST-CAPTURE PROCESSING - Image data of a scene is captured. Spectral profile information is obtained for the scene. A database of plural spectral profiles is accessed, each of which maps a material to a corresponding spectral profile reflected therefrom. The spectral profile information for the scene is matched against the database, and materials for objects in the scene are identified by using matches between the spectral profile information for the scene against the database. Metadata which identifies materials for objects in the scene is constructed, and the metadata is embedded with the image data for the scene.08-23-2012
20120163670BEHAVIORAL RECOGNITION SYSTEM - Embodiments of the present invention provide a method and a system for analyzing and learning behavior based on an acquired stream of video frames. Objects depicted in the stream are determined based on an analysis of the video frames. Each object may have a corresponding search model used to track the object's motion frame-to-frame. Classes of the objects are determined and semantic representations of the objects are generated. The semantic representations are used to determine the objects' behaviors and to learn about behaviors occurring in an environment depicted by the acquired video streams. In this way, the system learns, rapidly and in real time, the normal and abnormal behaviors for any environment by analyzing movements or activities, or the absence of such, in the environment, and identifies and predicts abnormal and suspicious behavior based on what has been learned.06-28-2012
20120163666Object Processing Employing Movement - Directional albedo of a particular article, such as an identity card, is measured and stored. When the article is later presented, it can be confirmed to be the same particular article by re-measuring the albedo function, and checking for correspondence against the earlier-stored data. The re-measuring can be performed through use of a handheld optical device, such as a camera-equipped cell phone. The albedo function can serve as random key data in a variety of cryptographic applications. The function can be changed during the life of the article. A variety of other features are also detailed.06-28-2012
20120163660PROCESSING SYSTEM - A processing system for plate-like objects is provided, with an exposure device and an object carrier with an object carrier surface for receiving the object. The exposure device and the carrier are movable relative to one another, such that the exact position of the object relative to the carrier is determinable. An edge detection device is provided which comprises at least one edge illumination unit having an illumination area, within which an object edge located in the respective object edge area has light directed onto it from the side of the carrier. At least one edge image detection unit is provided on a side of the object located opposite the carrier, the edge image detection unit imaging an edge section of the object edges located in the illumination area as an edge image, such that the respective edge image is detectable in its exact position relative to the carrier.06-28-2012
20120128211DISTANCE CALCULATION DEVICE FOR VEHICLE - Provided is a distance calculation device for a vehicle, which can accurately calculate the distance to an object, for example, even when sunlight conditions in the image capture environment change. In the device, an image quality estimation means (05-24-2012
20120128208Human Tracking System - An image such as a depth image of a scene may be received, observed, or captured by a device. A grid of voxels may then be generated based on the depth image such that the depth image may be downsampled. A background included in the grid of voxels may also be removed to isolate one or more voxels associated with a foreground object such as a human target. A location or position of one or more extremities of the isolated human target may be determined and a model may be adjusted based on the location or position of the one or more extremities.05-24-2012
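The grid-of-voxels downsampling plus background removal described above can be sketched by averaging the non-background depths in each fixed-size block of the depth image and discarding all-background cells; the cell size and the averaging rule are illustrative assumptions, not the patent's specifics:

```python
def voxelize_depth(depth, cell=2, background=0):
    # Downsample a depth image to a coarse grid: each cell keeps the mean
    # of its non-background depths; cells with only background are dropped,
    # isolating the foreground (e.g. human-target) voxels.
    h, w = len(depth), len(depth[0])
    grid = {}
    for gy in range(0, h, cell):
        for gx in range(0, w, cell):
            vals = [depth[y][x]
                    for y in range(gy, min(gy + cell, h))
                    for x in range(gx, min(gx + cell, w))
                    if depth[y][x] != background]
            if vals:
                grid[(gy // cell, gx // cell)] = sum(vals) / len(vals)
    return grid

depth = [
    [0, 0, 5, 5],
    [0, 0, 5, 7],
    [0, 0, 0, 0],
    [0, 0, 0, 0],
]
voxels = voxelize_depth(depth)
```

Extremity localization and model fitting would then run on the small foreground voxel set rather than the full-resolution depth map.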
20120128203MOTION ANALYZING APPARATUS - A sensor unit is installed to a target object and detects a given physical amount. A data acquisition unit acquires output data of the sensor unit in a period including a first period for which a real value of a value of m time integrals of the physical amount is known and a second period that is a target for motion analysis. An error time function estimating unit performs m time integrals