
Local or regional features

Subclass of:

382 - Image analysis

382181000 - PATTERN RECOGNITION

382190000 - Feature extraction

Patent class list (only non-empty classes are listed)

Deeper subclasses:

Class / Patent application number | Description | Number of patent applications / Date published
382199000 Pattern boundary and edge measurements 231
382203000 Shape and form analysis 104
382201000 Point features (e.g., spatial coordinate descriptors) 84
382197000 Directional codes and vectors (e.g., Freeman chains, compasslike codes) 56
382202000 Linear stroke analysis (e.g., limited to straight lines) 15
382205000 Local neighborhood operations (e.g., 3x3 kernel, window, or matrix operator) 4
20130163885INTERPOLATING SUB-PIXEL INFORMATION TO MITIGATE STAIRCASING - A surface model may be created from aerial photographs, while mitigating the staircase effect, by interpolating sub-pixel values in the photographs and using those sub-pixel values to calculate the offset between overlapping photographs. Aerial photographs are taken, which include overlapping regions. For a pair of photographs, the photographs are rectified to create coordinate systems in which the photographs have x-axes that coincide and y-axes that are parallel to each other, so that overlapping regions in one photograph are a fixed offset along the x-axis from the other photograph. Analytic functions are used to interpolate values between pixels, and the offset distance is calculated by finding the offset that maximizes a similarity function over the analytically interpolated values. The calculated offset may then be used to calculate the height of a point on the photographed surface. A surface model may be built by calculating the heights of many points.06-27-2013
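The offset search in the abstract above lends itself to a short sketch. The version below assumes two already-rectified, same-length scanlines and uses a cubic spline as the analytic interpolant and normalized cross-correlation as the similarity function; both are illustrative choices, not the application's specified ones.

```python
import numpy as np
from scipy.interpolate import CubicSpline

def subpixel_offset(line_a, line_b, max_shift=20.0, step=0.05):
    """Return the (possibly fractional) x-offset that best aligns line_b to line_a."""
    x = np.arange(len(line_b), dtype=float)
    spline = CubicSpline(x, line_b)            # analytic interpolation between pixels
    best_shift, best_score = 0.0, -np.inf
    for shift in np.arange(-max_shift, max_shift + step, step):
        xs = x + shift
        valid = (xs >= 0) & (xs <= len(line_b) - 1)
        if valid.sum() < 16:                   # too little overlap to score reliably
            continue
        a, b = line_a[valid], spline(xs[valid])
        a = (a - a.mean()) / (a.std() + 1e-9)  # normalized cross-correlation
        b = (b - b.mean()) / (b.std() + 1e-9)  # as the similarity function
        score = float(np.mean(a * b))
        if score > best_score:
            best_shift, best_score = shift, score
    return best_shift                          # parallax input for the height computation
```

The returned offset would then feed the usual parallax-to-height relation for the camera geometry.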
20090123077COEFFICIENT DETERMINING METHOD, FEATURE EXTRACTING METHOD, SYSTEM, AND PROGRAM, AND PATTERN CHECKING METHOD, SYSTEM, AND PROGRAM - [PROBLEMS] To provide a feature extracting method for quickly extracting a feature while preventing lowering of the identification performance of the kernel judgment analysis, a feature extracting system, and a feature extracting program. [MEANS FOR SOLVING PROBLEMS] Judgment feature extracting device…05-14-2009
20090238468 2-D encoded symbol quality assessment - A 2-D symbol orientation guide with parallel and spaced right angle guidelines with chevron-like spaces provided therebetween is selectively displayed in plural selected dispositions on a monitor screen as an overlay for the display on the same monitor screen of a 2-D Data Matrix symbol. Manual rotation of the symbol is viewed on the monitor screen as the symbol's solid line border is moved into alignment with a guideline, at which time the symbol is imaged and its quality graded. Display of the orientation guide in at least five selected rotational dispositions, alignment of the symbol's solid line border therewith, and imaging and grading of the symbol quality in each such position provide multiple grade scores for averaging into an overall grade score.09-24-2009
20100158390PARALLEL PROCESSING FOR GENERATING A THINNED IMAGE - A thinned output image is generated from an input image. Values of pixels surrounding a pixel of interest in the input image are determined, and first and second neighboring pixel patterns surrounding the pixel of interest are established based on the values of the pixels surrounding the pixel of interest. The first neighboring pixel pattern may be compared to each of a set of purge patterns to determine whether to eliminate the pixel, and the second neighboring pixel pattern may be compared to each of a set of conservation patterns to determine whether to conserve the pixel. The comparisons to the purge and conservation patterns are performed for each pixel independently, and in parallel for all pixels of the input image.06-24-2010
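A sketch of the purge/conserve decision for one thinning pass, assuming a 0/1 image; the 3x3 purge and conservation pattern sets themselves are not given in the abstract, so they are passed in as parameters here. Because every pixel is decided from the unmodified input image, the per-pixel loop could be run in parallel exactly as the abstract describes.

```python
import numpy as np

def thin_once(img, purge_patterns, conserve_patterns):
    """img: 2-D array of 0/1. One thinning pass; every pixel is decided
    independently from the *input* image, so all decisions may run in parallel."""
    padded = np.pad(img, 1)                               # zero border for 3x3 windows
    out = img.copy()
    h, w = img.shape
    for y in range(h):
        for x in range(w):
            if img[y, x] == 0:
                continue
            neigh = padded[y:y + 3, x:x + 3]              # neighboring pixel pattern
            hits_purge = any(np.array_equal(neigh, p) for p in purge_patterns)
            hits_conserve = any(np.array_equal(neigh, c) for c in conserve_patterns)
            if hits_purge and not hits_conserve:          # conservation overrides purging
                out[y, x] = 0
    return out
```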
382196000 Slice codes 1
20120121188IMAGE PROCESSING DEVICE AND METHOD - The present invention relates to an image processing device and method whereby deterioration of effects of filter processing due to local control of filter processing when encoding or decoding can be suppressed.05-17-2012
Entries
Document | Title | Date
20130039583IMAGE PROCESSING APPARATUS AND IMAGE PROCESSING METHOD, AND COMPUTER PROGRAM FOR CAUSING COMPUTER TO EXECUTE CONTROL METHOD OF IMAGE PROCESSING APPARATUS - Object recognition is executed by using, of feature data classified into a plurality of groups, only feature data belonging to a selected group. Hence, it is unnecessary to compare and refer to all feature data so that object recognition processing can be speeded up.02-14-2013
20130028519FEATURE BASED IMAGE REGISTRATION - Example embodiments disclosed herein relate to feature based image registration. Feature based image registration determines correspondence between image features such as points, lines, and contours to align or register a reference or first image and a target or second image. The examples disclosed herein may be used in mobile devices such as cell phones, personal digital assistants, personal computers, cameras, and video recorders.01-31-2013
20120201466ROBUST INTEREST POINT DETECTOR AND DESCRIPTOR - A method for operating on images is described for interest point detection and/or description working under different scales and with different rotations, e.g. for scale-invariant and rotation-invariant interest point detection and/or description.08-09-2012
20130028521IMAGE EVALUATION DEVICE, IMAGE EVALUATION METHOD, PROGRAM, INTEGRATED CIRCUIT - A template includes a frame a and a frame b, in each of which an image is to be inserted. An object introduction degree (OI) is associated with a frame set ab, which is a combination of the frames a and b. For a pair of images arranged with respect to the frame set ab, the object introduction degree (OI), which is associated with the frame set ab, is calculated according to a characteristic value of each of the images in the pair of images. The calculated object introduction degree is used as an evaluation value of an arrangement pattern in which the pair of images is arranged with respect to the frame set ab.01-31-2013
20130028520IMAGE PROCESSING DEVICE IDENTIFYING ATTRIBUTE OF REGION INCLUDED IN IMAGE - An image processing device performs: preparing image data representing an image, the image including a target region consisting of a plurality of target pixels, each of the plurality of target pixels having a pixel value; classifying each of a plurality of target pixels as one of an object pixel and a background pixel other than the object pixel, the object pixel constituting an object represented in the target region; determining whether or not the target region satisfies a first condition related to a relationship between the object pixel and the background pixel to make a first determination result; and judging whether or not the target region is a letter region representing at least one letter based on the first determination result.01-31-2013
20120163720IMAGE PROCESSING APPARATUS AND METHOD THEREOF - A specifying unit specifies a pixel value at a pixel position of a reference image as the initial pixel value of a pixel of a sample enlarged image at a position relative to that pixel position of the reference image. A selecting unit selects a target pixel in the sample enlarged image from the pixels other than those whose pixel values, including the initial pixel values, have already been specified. When a searching unit finds in the reference image a similar region position whose pixel value pattern matches a set of already specified pixels in a spatial neighborhood of the target pixel, an allocating unit allocates the pixel value of the pixel at that similar region position of the reference image to the target pixel in the sample enlarged image.06-28-2012
20100086213IMAGE RECOGNITION APPARATUS AND IMAGE RECOGNITION METHOD - An image recognition apparatus that recognizes an object related to a certain object in an image sequentially recognizes an object from the image in accordance with recognition-order information that indicates an object order in an object sequence including the certain object, the related object, and an object connected between those objects. The apparatus determines whether or not an object recognized in a current turn of recognition has a connective relationship with an extracted object obtained in a previous turn of recognition, and obtains the object that has been determined as having a connective relationship as an extracted object. Based on an object extracted by a repetition of the above processing, that is, recognition, connective relationship determination, and obtaining, in the above-described recognition order, the related object is associated with the certain object.04-08-2010
20090196506SUBWINDOW SETTING METHOD FOR FACE DETECTOR - Disclosed herein is a subwindow setting method for a face detector for detecting whether one or more facial images exist in each of subwindows having a set size while sequentially setting the subwindows in the width direction of an input image. A scan interval between two neighboring subwindows under consideration in the width direction is determined based on the facial color density of a first subwindow of the two neighboring subwindows. Further, a scan interval between the first and second rows in a height direction of the input image is determined based on the facial color density of the subwindows included in the first row.08-06-2009
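A small sketch of the adaptive scan-interval idea, assuming an RGB subwindow; the skin-tone rule and the step range are illustrative, not the detector's actual facial color model.

```python
import numpy as np

def next_scan_interval(subwindow_rgb, min_step=1, max_step=8):
    """Choose the horizontal step to the next subwindow from the facial-color
    density of the current one (dense skin tones -> scan more finely)."""
    r, g, b = (subwindow_rgb[..., i].astype(float) for i in range(3))
    # crude illustrative skin-tone rule, not the detector's actual color model
    skin = (r > 95) & (g > 40) & (b > 20) & (r > g) & (r > b) & ((r - g) > 15)
    density = skin.mean()
    # high density -> small step, low density -> large step
    return int(round(max_step - density * (max_step - min_step)))
```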
20080260257SYSTEMS, METHODS, AND APPARATUS FOR AUTOMATIC IMAGE RECOGNITION - A method of improving the accuracy and computation time of automatic image recognition by the implementation of association graphs and a quantum processor.10-23-2008
20080260256METHOD AND APPARATUS FOR ESTIMATING VANISHING POINTS FROM AN IMAGE, COMPUTER PROGRAM AND STORAGE MEDIUM THEREOF - The present invention discloses a method and apparatus for estimating vanishing points from an image, and a computer program and storage medium thereof. One of the methods for detecting the vanishing points from an image according to the present invention comprises a dividing step for dividing the image into small patches; a first detecting step for detecting each patch's local orientations; a composing step for composing lines of pencils based on the local orientations detected in the first detecting step; and a first computing step for computing at least one vanishing point based on the lines of pencils composed in said composing step. On the basis of the vanishing points found by the present invention, the perspective rectification of a document image can be executed accurately and quickly.10-23-2008
20090324087SYSTEM AND METHOD FOR FINDING STABLE KEYPOINTS IN A PICTURE IMAGE USING LOCALIZED SCALE SPACE PROPERTIES - A method and system are provided for finding stable keypoints in a picture image using localized scale properties. An integral image of an input image is calculated. Then a scale space pyramid layer representation of the input image is constructed at multiple scales, wherein at each scale, a set of specific filters is applied to the input image to produce an approximation of at least a portion of the input image. Outputs from the filters are combined together to form a single function of scale and space. Stable keypoint locations are identified in each scale at pixel locations at which the single function attains a local peak value. The stable keypoint locations which have been identified are then stored in a memory storage.12-31-2009
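A compact sketch of the integral-image / box-filter pipeline described above; the center-minus-surround box response, the scale radii, and the peak threshold are illustrative stand-ins for the patent's specific filter set.

```python
import numpy as np

def box_mean(iip, y, x, r):
    """Mean over img[y-r:y+r+1, x-r:x+r+1], using the zero-padded integral image iip."""
    s = (iip[y + r + 1, x + r + 1] - iip[y - r, x + r + 1]
         - iip[y + r + 1, x - r] + iip[y - r, x - r])
    return s / float((2 * r + 1) ** 2)

def find_keypoints(img, radii=(2, 4, 8), thresh=5.0):
    """Stable keypoints: local peaks of a center-minus-surround box-filter response
    evaluated at several scales, each computed in O(1) per pixel via an integral image."""
    h, w = img.shape
    cum = np.cumsum(np.cumsum(img.astype(float), axis=0), axis=1)
    iip = np.pad(cum, ((1, 0), (1, 0)))                 # iip[i, j] = sum of img[:i, :j]
    points = []
    for r in radii:
        resp = np.zeros((h, w))
        for y in range(2 * r, h - 2 * r):
            for x in range(2 * r, w - 2 * r):
                resp[y, x] = box_mean(iip, y, x, r) - box_mean(iip, y, x, 2 * r)
        for y in range(2 * r + 1, h - 2 * r - 1):
            for x in range(2 * r + 1, w - 2 * r - 1):
                win = resp[y - 1:y + 2, x - 1:x + 2]
                if resp[y, x] == win.max() and abs(resp[y, x]) > thresh:
                    points.append((y, x, r))            # keypoint location and its scale
    return points
```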
20100074530IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD AND PROGRAM - An image processing method performs reduction processing on an input image to acquire a reduced image, extracts a feature point from a group of images including the input image and one or more of the reduced images, determines as a matched feature point the feature point extracted from a matching position in each of two or more images in the group of images, calculates a local feature quantity of the matched feature point determined by the determination unit, and registers the calculated local feature quantity as a local feature quantity of the input image.03-25-2010
20130077869IMAGE PROCESSING APPARATUS FOR CONVERTING IMAGE IN CHARACTERISTIC REGION OF ORIGINAL IMAGE INTO IMAGE OF BRUSHSTROKE PATTERNS - An object of the present invention is to obtain an image that is more similar to a real ink-wash painting. An ink-wash painting conversion unit…03-28-2013
20130077868DATA PROCESSING APPARATUS, DATA PROCESSING METHOD AND STORAGE MEDIUM - A data processing apparatus obtains an input pixel region contained in image data, inputs a pixel value contained in the input pixel region into an image processor, obtains the image-processed pixel value from the image processor, and outputs an output pixel region. Data of the input pixel region and data of the output pixel region are temporarily stored, and the size of an input area that stores the data of the input pixel region and the size of an output area that stores the data of the output pixel region are set based on the number of pixels in the input pixel region and the number of pixels in the output pixel region.03-28-2013
20130077867IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, AND METHOD OF CONTROLLING IMAGE PROCESSING APPARATUS - When the processed pixel value is the last pixel value to be output for a unit region of interest, the apparatus stands by to output this pixel value until all pixels in the unit region of interest are input, and enables output of the pixel value on standby when all the pixels in the unit region of interest have been input.03-28-2013
20130039582APPARATUS AND METHOD FOR DETECTING IMAGES WITHIN SPAM - A method is described that includes comparing a characteristic of an image to stored characteristics of spam images. The method also includes generating a signature of the present image. The method further includes comparing the signature of the present image to stored signatures of spam images. The method also includes determining the spam features corresponding to the stored signatures of spam images that match the signature of the present image.02-14-2013
20130039581Image Processing Device, Image Processing Method, and Image Processing Program - An image processing device that executes deformation of an image. A candidate area setting unit sets candidate areas, each of which includes a specific image, on a target image used as a target for a deformation process. An exclusion determination unit, when there is a candidate area that partially extends off a cropped image that is clipped from the target image through predetermined cropping, excludes the candidate area, which at least partially extends off the cropped image, from the target for the deformation process. A deformation processing unit performs deformation of an image on the candidate areas other than the excluded candidate areas.02-14-2013
20120183226DATA CAPTURE FROM MULTI-PAGE DOCUMENTS - A method for processing a batch of scanned images is provided. The method comprises processing the scanned images into documents. For documents comprising multiple pages, the method maintains a page-based coordinate system to specify a location of structures within a page and joins the pages to form a multi-page sheet having a sheet-based coordinate system to specify a location of structures within the multi-page sheet. Data may be extracted from each document, such operation comprising a page mode wherein structures are detected on individual pages using the page-based coordinate system and a document mode wherein structures are detected within the entire document using the sheet-based coordinate system.07-19-2012
20120183225ROUGH WAVELET GRANULAR SPACE AND CLASSIFICATION OF MULTISPECTRAL REMOTE SENSING IMAGE - A shift-invariant wavelet transform with a properly selected wavelet base and decomposition level(s) is used to characterize rough-wavelet granules, producing wavelet granulation of a feature space for a multispectral image such as a remote sensing image. Through the use of the granulated feature space, contextual information in time and/or frequency domains is analyzed individually or in combination. Neighborhood rough sets (NRS) are employed in the selection of a subset of granulated features that further explore the local and/or contextual information from neighbor granules.07-19-2012
20120183224INTEREST POINT DETECTION - Interest points are markers anchored to a specific position in a digital image of an object. They are mathematically extracted in such a way that, in another image of the object, they will appear in the same position on the object, even though the object may be presented at a different position in the image, a different orientation, a different distance or under different lighting conditions. Methods are disclosed that are susceptible to implementation in hardware and corresponding hardware circuits are described.07-19-2012
20100104197IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD AND PROGRAM - An image processing apparatus reduces two input images to be compared by the predetermined number of times to generate two image groups, extracts a plurality of feature points and a local feature amount of each feature point from these image groups, and determines a combination of feature points in which local feature amounts are similar to each other, between the image groups. Then, the image processing apparatus determines a relation of a reasonable combination, assigns high weights to the reasonable combination, and calculates a similarity degree between the two input images.04-29-2010
20100104193MILLIMETRIC WAVE IMAGING DEVICE AND CAPTURED IMAGE DISPLAY DEVICE - A millimetric wave imaging device includes: a lens antenna; a polygon mirror; a receiving portion; a scanning unit; and an image data generating unit. The receiving portion receives millimetric wave radiated from an object, transmitted through the lens antenna, and reflected on a mirror surface of the polygon mirror rotated by the scanning unit to detect a signal level of the millimetric wave. The image data generating unit generates image data representing an object image by receiving a detection signal from the receiving portion while driving the polygon mirror through the scanning unit.04-29-2010
20100104194IMAGE PROCESSING APPARATUS, ELECTRONIC MEDIUM, AND IMAGE PROCESSING METHOD - An image processing apparatus includes: a storage unit storing an image of a processing target; a tangent calculating unit extracting contours as bent lines represented by sets of contour points from an image read from the storage unit and computing tangents to the extracted contours; a projecting unit projecting computed tangents to axes in directions orthogonal to the corresponding tangents, and computing coordinates of intersections where the tangents intersect the axes; and a rectangle calculating unit selecting intersections with maximum values and minimum values of coordinates among intersections computed by the projecting unit for each direction of the axis, and computing a rectangle formed by a pair of parallel tangents passing through two intersections with maximum values and minimum values selected for a first axis and another pair of tangents passing through two intersections with maximum values and minimum values selected for a second axis orthogonal to the first axis.04-29-2010
20100104196Method and System for Extracting Information from an Analog Graph - Disclosed herein is a method for extracting information from an analog graph on a driver log sheet. The method includes providing an electronic image of an analog graph, identifying a graph height dimension and a graph width dimension, dividing the height dimension into a number of activity rows, and dividing the width dimension into a number of time columns. An array of cells defined by the intersections of the time columns and the activity rows is established, where each cell includes a plurality of pixels. For each cell, a probability is determined corresponding to the probability that a substantially horizontal line formed by black pixels extends substantially across at least a portion of that cell. For each time column, the respective probabilities of the cells in that time column are compared, the cell with the highest probability is flagged, and the activity row of the flagged cell is determined.04-29-2010
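The cell-grid probability test described above is easy to sketch. The version below assumes a binarized graph area (1 = black ink) and uses the illustrative figures of 4 activity rows and 96 quarter-hour time columns; the "probability" of a horizontal line in a cell is approximated by the best per-scanline fill ratio of black pixels.

```python
import numpy as np

def read_log_graph(binary_img, n_rows=4, n_cols=96):
    """binary_img: 2-D array, 1 = black ink. Splits the graph area into an
    activity-row x time-column grid and, per column, flags the cell most likely
    to contain a horizontal trace."""
    h, w = binary_img.shape
    row_edges = np.linspace(0, h, n_rows + 1).astype(int)
    col_edges = np.linspace(0, w, n_cols + 1).astype(int)
    activity = []
    for c in range(n_cols):
        probs = []
        for r in range(n_rows):
            cell = binary_img[row_edges[r]:row_edges[r + 1],
                              col_edges[c]:col_edges[c + 1]]
            # probability proxy: best per-scanline fill ratio of black pixels
            probs.append(cell.mean(axis=1).max() if cell.size else 0.0)
        activity.append(int(np.argmax(probs)))   # flagged activity row for this column
    return activity
```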
20100104195Method for Identifying Dimensions of Shot Subject - The present invention relates to a method for identifying dimensions of shot subject, implemented on an identification system including a photo shooting unit capable of adjusting focal lengths. The method includes steps of using the photo shooting unit to focus on plural positions respectively having different field depths on a shot subject and respectively capture a image thereof, determining whether resolutions of the captured images are same, and if so, the shot subject is a two dimensional object, otherwise, the shot subject is a three dimensional object.04-29-2010
20090154811IMAGE PROCESSING APPARATUS AND COMPUTER READABLE MEDIUM - An image processing apparatus includes: a region extracting unit extracts a character region on an image; a character recognizing unit that recognizes characters in the character region extracted by the region extracting unit; a translating unit that translates a recognition result obtained by the character recognizing unit; and a changing unit that changes a constitution of the image with respect to the character region extracted by the region extracting unit according to a direction of the characters in the character region extracted by the region extracting unit, and according to a direction of the characters of the language translated by the translating unit.06-18-2009
20100040288SYSTEM AND METHOD FOR VALIDATION OF FACE DETECTION IN ELECTRONIC IMAGES - The subject application is directed to a system and method for validation of face detection in electronic images. Image data is first received along with at least one image portion that includes a possible facial depiction. Eye position data, nose position data, and mouth position data are also received. A reference point at a central location of the at least one image portion is then isolated. A width of the image portion is then isolated, and a facial region is isolated in accordance with the eye, nose, and mouth position data. The eye distance is then determined from the received eye position data. The isolated facial region data is then tested against the reference point and eye distance is tested against a width of the image portion. An output is generated corresponding to the accuracy of isolated facial region in accordance with the tests.02-18-2010
20130044957METHODS AND APPARATUSES FOR ENCODING AND/OR DECODING MAPPED FEATURES IN AN ELECTRONIC MAP OF A STRUCTURE - Methods, apparatuses and articles of manufacture are provided that may be implemented or used in one or more electronic devices to generate an encoded map feature description corresponding to one or more mapped features of an electronic map of at least a portion of a structure. Methods, apparatuses and articles of manufacture are also provided that may be implemented or used in one or more electronic devices to decode at least a portion of an encoded map feature description.02-21-2013
20130084013SYSTEM AND METHOD FOR SALIENCY MAP GENERATION - A system and a method are disclosed for generating a saliency map of an image. The method includes receiving image data representative of image forming elements of an image and determining saliency values for image forming elements by an iterative method. The iterative method includes computing a norm of the image data, computing values of deviation from the norm of the image data of the image forming elements, identifying the image forming elements corresponding to the image data having magnitudes of deviation that meet a pre-determined condition, assigning saliency values to the identified image forming elements based on the values of deviation, and repeating the computing the norm and deviation, identifying image forming elements and assigning saliency values using the image data of image forming elements that have no assigned saliency value. A saliency map of the image is generated based on the assigned saliency values.04-04-2013
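A minimal sketch of the iterative norm/deviation scheme, assuming the "norm" is the mean color of the still-unassigned image forming elements and the "pre-determined condition" is a quantile cut on deviation magnitude; both are stand-ins for the method's actual definitions.

```python
import numpy as np

def saliency_map(img, keep=0.25, iters=4):
    """img: H x W x C float array. Iteratively assigns saliency from deviation
    to the norm (here the mean color) of the still-unassigned pixels."""
    h, w, c = img.shape
    flat = img.reshape(-1, c).astype(float)
    saliency = np.full(flat.shape[0], np.nan)
    for _ in range(iters):
        todo = np.isnan(saliency)
        if not todo.any():
            break
        norm = flat[todo].mean(axis=0)                      # norm of the remaining data
        dev = np.linalg.norm(flat - norm, axis=1)
        # condition: the most deviant fraction of the unassigned pixels is assigned now
        cut = np.quantile(dev[todo], 1.0 - keep)
        pick = todo & (dev >= cut)
        saliency[pick] = dev[pick]
    saliency[np.isnan(saliency)] = 0.0
    return saliency.reshape(h, w)
```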
20100067803ESTIMATING A LOCATION OF AN OBJECT IN AN IMAGE - An implementation provides a method for determining a trajectory of an object in a particular image in a sequence of digital images, the trajectory being based on one or more previous locations of the object in one or more previous images in the sequence. A weight is determined, for a particle in a particle-based framework for tracking the object, based on distance from the trajectory to the particle. A location estimate is determined for the object using the particle-based framework, the location estimate being based on the determined particle weight.03-18-2010
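A sketch of the trajectory-based particle weighting, assuming a constant-velocity extrapolation of the last two object locations and a Gaussian fall-off with distance; the rest of the particle-filter framework (prediction, resampling) is omitted.

```python
import numpy as np

def trajectory_weights(particles, prev_locations, sigma=8.0):
    """particles: N x 2 candidate positions. prev_locations: list of past (x, y)
    object positions. Weight each particle by its distance to the extrapolated
    trajectory (here a constant-velocity extrapolation of the last two points)."""
    prev = np.asarray(prev_locations, dtype=float)
    predicted = prev[-1] if len(prev) < 2 else prev[-1] + (prev[-1] - prev[-2])
    d = np.linalg.norm(np.asarray(particles, dtype=float) - predicted, axis=1)
    w = np.exp(-0.5 * (d / sigma) ** 2)       # closer to the trajectory -> larger weight
    return w / w.sum()

# usage: location estimate as the weighted mean of the particles
# estimate = (trajectory_weights(P, history)[:, None] * P).sum(axis=0)
```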
20100067802ESTIMATING A LOCATION OF AN OBJECT IN AN IMAGE - An implementation provides a method for estimating a location for an object in a particular image of a sequence of images. The location is estimated using a particle-based framework, such as a particle filter. It is determined that the estimated location for the object in the particular image is occluded. A trajectory is estimated for the object based on one or more previous locations of the object in one or more previous images in the sequence of images. The estimated location of the object is changed based on the estimated trajectory.03-18-2010
20130051680IMAGE PROCESSING DEVICE, IMAGE PROCESSING METHOD, AND COMPUTER READABLE RECORDING DEVICE - An image processing device includes: a non-target region detecting unit that detects a region that is not to be examined as a non-target region from an image; a pixel-of-interest region setting unit that sets a pixel-of-interest region in a predetermined area including a pixel-of-interest position in the image; a surrounding region determining unit that determines a surrounding region, which is an area for acquiring information for use in forming a reference plane with respect to the pixel-of-interest position, based on the non-target region; a reference plane forming unit that forms the reference plane based on the information in the surrounding region; and an outlier pixel detecting unit that detects an outlier pixel having a pixel value numerically distant from circumferential values based on a difference between corresponding quantities of the reference plane at each pixel position and of the original image.02-28-2013
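A sketch of the reference-plane test around one pixel-of-interest region, where a least-squares plane fitted over the surrounding window (minus the non-target region) stands in for the device's reference plane; the window size and outlier threshold are illustrative.

```python
import numpy as np

def outlier_pixels(img, non_target_mask, center, half=15, thresh=30.0):
    """Fit a reference plane z = a*x + b*y + c over the surrounding region
    (excluding the non-target mask) of a pixel-of-interest, then flag pixels
    whose value departs from that plane by more than `thresh`."""
    cy, cx = center
    y0, y1 = max(cy - half, 0), min(cy + half + 1, img.shape[0])
    x0, x1 = max(cx - half, 0), min(cx + half + 1, img.shape[1])
    ys, xs = np.mgrid[y0:y1, x0:x1]
    vals = img[y0:y1, x0:x1].astype(float)
    ok = ~non_target_mask[y0:y1, x0:x1]                      # drop non-examination pixels
    A = np.column_stack([xs[ok], ys[ok], np.ones(ok.sum())])
    coeff, *_ = np.linalg.lstsq(A, vals[ok], rcond=None)     # reference plane
    plane = coeff[0] * xs + coeff[1] * ys + coeff[2]
    return np.abs(vals - plane) > thresh                     # outlier map for the window
```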
20130051679IMAGE PROCESSING APPARATUS AND IMAGE PROCESSING METHOD - An image processing apparatus includes a correction process portion which corrects a correction target region in an input image using an image signal in a region for correction, a detecting portion which detects a specified object region in which a specified type of object exists in the input image, and a setting portion which sets the region for correction based on positions of the correction target region and the specified object region.02-28-2013
20130051678Systems and Methods for Performing Facial Detection - Various embodiments are disclosed for detecting facial regions in a plurality of images. In one embodiment, a method comprises assigning at least one of the possible facial regions an assurance factor, forming clusters of possible facial regions based on a connection factor between the facial regions, and determining facial regions from the possible facial regions based on the assurance factor and the clusters of possible facial regions.02-28-2013
20120219225REGION-OF-INTEREST EXTRACTION APPARATUS AND METHOD - According to one embodiment, a region-of-interest extraction apparatus includes following units. The acquisition unit acquires a structured document including elements. The first extraction unit extracts block regions including specific elements, from the structured document. The second extraction unit extracts, as a display region, parts of the elements being displayed on a display screen, from the structured document. The third extraction unit extracts a region of interest, in which a user is interested, from the structured document, based on relations between the block regions and the display region.08-30-2012
20120219224Local Difference Pattern Based Local Background Modeling For Object Detection - Systems and methods for object detection that consider background information are presented. Embodiments of the present invention utilize a feature called Local Difference Pattern (LDP), which is more discriminative for modeling local background image features. In embodiments, the LDP feature is used to train detection models. In embodiments, the LDP feature may be used in detection to differentiate different image background conditions and adaptively adjust classification to yield higher detection rates.08-30-2012
20090041358Information storage apparatus and travel environment information recognition apparatus - An information storage apparatus recognizes a traffic sign on a road expressed by a road surface image, and stores recognized traffic sign in association with its position information in a hard disk drive. By recognizing and storing information on the traffic sign in combination with the position information for the sake of traffic safety, information coverage in terms of traffic safety is updated to the most up-to-date condition for roads in the entire country.02-12-2009
20090041357FACE IMAGE DETECTING DEVICE, FACE IMAGE DETECTING METHOD, AND FACE IMAGE DETECTING PROGRAM - An extraction-pattern storing unit stores therein information related to a plurality of different extraction patterns for extracting a predetermined number of pixels from pixels surrounding a pixel that is a target for detecting a face part image. A face-part-image detecting unit extracts a pixel using the different extraction patterns stored in the extraction-pattern storing unit, and detects the face part image included in an image using a feature amount of an extracted pixel. A face-image detecting unit detects a face image from the image based on the face part image detected by the face-part-image detecting unit.02-12-2009
20130071032IMAGE PROCESSING APPARATUS,IMAGE PROCESSING METHOD, AND PROGRAM - Provided is an image processing apparatus including a generation unit configured to generate a background model of a multinomial distribution from an acquired image, and generate a background, and a determination unit configured to determine whether a background with high reliability can be generated from the background model generated by the generation unit.03-21-2013
20130058579IMAGE INFORMATION PROCESSING APPARATUS - An image information processing apparatus comprising: an extraction unit that extracts an object from a photographed image; a calculation unit that calculates an orientation of the object as exhibited in the image; and a provision unit that provides a tag to the image according to the orientation of the object.03-07-2013
20130058578SYSTEMS AND METHODS FOR ANALYZING FACIAL EXPRESSIONS, IDENTIFYING INTENT AND TRANSFORMING IMAGES THROUGH REVIEW OF FACIAL EXPRESSIONS - Methods of analyzing a plurality of facial expressions are disclosed that include: identifying a subject person, utilizing the subject person to create an image of a known target, removing at least one distracter expression from the target image to form a revised target image, and reviewing the revised target image with at least one third party participant to form a final target image. Additional methods of analyzing a plurality of facial expressions include: identifying a subject person, utilizing the subject person to create an image of a known target, digitizing the target image, removing at least one distracter expression from the target image to transform the target image to a revised target image, and reviewing the revised target image with at least one third party participant to transform the revised target image to a final target image. Software for implementing contemplated methods include: a set speed function, a pre-test phase function, an instruction phase function, a practice phase function, and a post-test phase function.03-07-2013
20130058577EVENT CLASSIFICATION METHOD FOR RELATED DIGITAL IMAGES - A method for determining an event classification for a set of related digital images, comprising: receiving a set of related digital images; detecting one or more man-made light emitting sources within at least one of the digital images; using a data processor to automatically determine an event classification responsive to analyzing a spatial arrangement of the detected man-made light emitting sources in the one or more digital images; and storing metadata in a processor-accessible memory associating the determined event classification with each of the digital images in the set of digital images.03-07-2013
20090268967EFFICIENT MODEL-BASED RECOGNITION OF OBJECTS USING A CALIBRATED IMAGE SYSTEM - A model-based object recognition system operates to recognize an object on a predetermined world surface within a world space. An image of the object is acquired. This image is a distorted projection of the world space. The acquired image is processed to locate one or more local features of the image, with respect to an image coordinate system of the image. These local features are mapped to a world coordinate system of the world surface, and matched to a model defined in the world coordinate system. Annotations can be arranged as desired relative to the object in the world coordinate system, and then inverse-mapped into the image coordinate system for display on a monitor in conjunction with the acquired image. Because models are defined in world coordinates, and pattern matching is also performed in world coordinates, one model definition can be used by multiple independent object recognition systems.10-29-2009
20120224773REDUNDANT DETECTION FILTERING - Systems and methods are described herein for identifying and filtering redundant database entries associated with a visual search system. An example of a method of managing a database associated with a mobile device described herein includes identifying a captured image; obtaining an external database record from an external database corresponding to an object identified from the captured image; comparing the external database record to a locally stored database record; and locally discarding one of the external database record or the locally stored database record if the comparing indicates overlap between the external database record and the locally stored database record.09-06-2012
20130064455INFORMATION PROCESSING APPARATUS, CONTROL METHOD FOR INFORMATION PROCESSING APPARATUS AND STORAGE MEDIUM - An information processing apparatus comprising a setting unit configured to set a plurality of local regions on an image; an extraction unit configured to extract feature amounts from the respective local regions; a calculation unit configured to calculate dissimilarities between the local regions based on probability densities for the respective feature amounts; and an integration unit configured to integrate the plurality of local regions as region groups based on the dissimilarities.03-14-2013
20120195506REGIONAL INFORMATION EXTRACTION METHOD, REGION INFORMATION OUTPUT METHOD AND APPARATUS FOR THE SAME - Provided are a regional information extraction method, a regional information output method, and an apparatus for the same. The regional information output method includes obtaining a regional image through the processing unit, transmitting the regional image to a server through the transmitting/receiving unit, receiving regional information on a geographical position that a regional image feature extracted from the regional image represents through the transmitting/receiving unit, and outputting the regional information through the output unit, wherein the geographical position represents one feature matching to the regional image feature, which is selected from a plurality of features representing a plurality of regional images.08-02-2012
20090238465APPARATUS AND METHOD FOR EXTRACTING FEATURES OF VIDEO, AND SYSTEM AND METHOD FOR IDENTIFYING VIDEOS USING SAME - An apparatus for extracting features from a video includes a frame rate converter for performing a frame rate conversion on the video to a preset frame rate, a gray scale converter for performing a grey scale conversion on the frame rate-converted video, a frame size normalizer for performing a frame size normalization on the gray scale-converted video to a preset image size, and a feature extractor for partitioning the normalized video into image blocks of a predetermined size, and extracting features from the image blocks on the basis of luminance values of the image blocks. A video identification system employs the feature extracting apparatus to identify an original video and an object video.09-24-2009
20090232400IMAGE EVALUATION APPARATUS, METHOD, AND PROGRAM - An image evaluation apparatus including a face detection unit for detecting, from an image including at least one face, each of the at least one face; a characteristic information obtaining unit for obtaining a plurality of characteristic information representing characteristics of each face; an expression level calculation unit for calculating an expression level representing the level of a specific expression of each face; and an evaluation value calculation unit for calculating an expression-based evaluation value for the image based on the characteristic information and the expression level of each face.09-17-2009
20080317353METHOD AND SYSTEM FOR SEARCHING IMAGES WITH FIGURES AND RECORDING MEDIUM STORING METADATA OF IMAGE - A method and a system for searching images with figures and a recording medium storing metadata of image are provided. The searching method is divided into an image analysis stage and an image search stage. In the image analysis stage, figures between images are compared with each other and assigned with an identity respectively. A representative image of each identity is then evaluated from the image collection. In the image search stage, the representative images are displayed for user to select some of them as a searching criterion, so as to search and display the images matching the searching criterion in the image collection. Accordingly, the images required by user can be found through intelligent analysis of figures, intuitive definition of searching criterion, and simple comparison of identities so that both time and effort of organization for searching images with figures can be substantially saved.12-25-2008
20120114251 3D Object Recognition - A method, device, system, and computer program for object recognition of a 3D object of a certain object class, using a statistical shape model for recovering 3D shapes from a 2D representation of the 3D object and comparing the recovered 3D shape with known 3D to 2D representations of at least one object of the object class.05-10-2012
20120114250METHOD AND SYSTEM FOR DETECTING MULTI-VIEW HUMAN FACE - Disclosed are a system and a method for detecting a multi-view human face. The system comprises an input device configured to input image data; a hybrid classifier including a non-human-face rejection classifier configured to roughly detect non-human-face image data and plural angle tag classifiers configured to add an angle tag into the image data having a human face; and plural cascade angle classifiers. Each of the plural cascade angle classifiers corresponds to a human face angle. One of the plural cascade angle classifiers receives the image data with the angle tag output from the corresponding angle tag classifier, and further detects whether the received image data with the angle tag includes the human face.05-10-2012
20100086212Method and System for Dispositioning Defects in a Photomask - A method and system for dispositioning defects in a photomask are provided. A method for dispositioning defects in a photomask includes analyzing photomask topography data including data representing a design topology of at least a first photomask, the first photomask corresponding to a first layer in a photolithographic process. Based at least on the analysis, one or more safe regions of the first photomask are identified, each safe region corresponding to a region of the first layer insensitive to potential defects located in the first photomask.04-08-2010
20110019921IMAGE MATCHING DEVICE, IMAGE MATCHING METHOD AND IMAGE MATCHING PROGRAM - Image matching device…01-27-2011
20130163878APPARATUS AND METHOD FOR RECOGNIZING OBJECTS USING FILTER INFORMATION - An object recognition method using filter information includes acquiring object image information including an object of interest, acquiring filter information for recognizing the object of interest from the object image information, and recognizing the object of interest using the filter information. An object recognition apparatus using filter information includes an object information acquiring unit to acquire object image information comprising an object of interest, a filter information input unit to acquire filter information, an output unit to output the image information and the filter information, and a controller to recognize the object of interest in the object image information using the filter information.06-27-2013
20130163879METHOD AND SYSTEM FOR EXTRACTING THREE-DIMENSIONAL INFORMATION - A method of extracting three-dimensional (3D) information from an image of a scene is disclosed. The method comprises: comparing the image with a reference image associated with a reference depth map, so as to identify an occluded region in the scene; analyzing an extent of the occluded region; and based on the extent of the occluded region, extracting 3D information pertaining to an object that occludes the occluded region. In some embodiments the 3D information is extracted, based, at least in part, on parameters of the imaging system that acquires the image.06-27-2013
20100266210Predictive Determination - Systems, methods and computer readable media are disclosed for a gesture recognizer system architecture. A recognizer engine is provided, which receives user motion data and provides that data to a plurality of filters. A filter corresponds to a gesture, that may then be tuned by an application receiving information from the gesture recognizer so that the specific parameters of the gesture—such as an arm acceleration for a throwing gesture—may be set on a per-application level, or multiple times within a single application. Each filter may output to an application using it a confidence level that the corresponding gesture occurred, as well as further details about the user motion data.10-21-2010
20100266207Focus enhancing method for portrait in digital image - A focus enhancing method for a portrait in a digital image is applied to an electronic device capable of receiving or reading a digital image and executing a focus enhancing process with respect to a portrait in the digital image, wherein the focus enhancing process includes a foreground definition procedure for defining a head region and a body region of the portrait as a foreground of the digital image, a foreground and background segmentation procedure for cutting away an image other than the foreground from the digital image and defining the result as a background of the digital image, and a foreground and background blending procedure for blurring the background, feathering a transition region coupled to the foreground and the background, and blending the foreground, the transition region and the background to form a new digital image with a prominent portrait.10-21-2010
20110129155VIDEO SIGNATURE GENERATION DEVICE AND METHOD, VIDEO SIGNATURE MATCHING DEVICE AND METHOD, AND PROGRAM - A problem of degradation in the accuracy of video matching, which is caused when videos contain video patterns commonly appearing in various videos or video patterns in which features cannot be acquired stably, is solved. In order to solve this problem, a visual feature extraction unit extracts a visual feature to be used for identification of a video based on features of a plurality of pairs of sub-regions in the video, and a confidence value calculation unit calculates a confidence value of the visual feature based on the features of the plurality of pairs of sub-regions. When matching is performed, visual features are compared with each other in consideration of the confidence value.06-02-2011
20120288204IMAGE PROCESSING APPARATUS AND IMAGE PROCESSING METHOD - An area where a specific object is captured is extracted as a specific area from the image of a frame of interest, and the evaluation value of the specific area is obtained using a predetermined evaluation formula. It is determined whether the evaluation value of the specific area in a frame preceding the frame of interest has exceeded a predetermined threshold. When it is determined that the evaluation value of the specific area has exceeded the predetermined threshold, the frame of interest is encoded to set the code amount of the specific area in the image of the frame of interest to be smaller than that of the specific area in the image of the frame preceding the frame of interest.11-15-2012
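A tiny sketch of the rate-allocation decision, under the assumption that "code amount" can be modeled as a bit budget; the shrink factor and ROI share are illustrative knobs, the only requirement taken from the abstract being that the specific area gets fewer bits once its evaluation value exceeded the threshold in the preceding frame.

```python
def roi_code_budget(prev_eval, prev_roi_bits, frame_bits, threshold, roi_share=0.5):
    """Decide the code amount for the specific (e.g. face) area of the frame of
    interest. If the area's evaluation value already exceeded the threshold in
    the preceding frame, spend fewer bits on it this time."""
    if prev_eval > threshold:
        roi_bits = min(prev_roi_bits * 0.8, frame_bits * roi_share)  # shrink ROI budget
    else:
        roi_bits = frame_bits * roi_share                            # favor the ROI
    return roi_bits, frame_bits - roi_bits                           # (ROI, rest of frame)
```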
20110299783Object Detection in an Image - The invention concerns a method of performing, by an image processing device, object detection in an image comprising: performing one or more tests of a test sequence for detection of a first object on pixel values of a plurality of at least partially overlapping sub-regions…12-08-2011
20120002881IMAGE MANAGEMENT DEVICE, IMAGE MANAGEMENT METHOD, PROGRAM, RECORDING MEDIUM, AND INTEGRATED CIRCUIT - An image management device acquires an image group with an image acquisition unit, extracts objects and feature amounts from each image in the image group with an object detection unit, and sorts the objects into relevant clusters with an object sorting unit. Next, a similarity calculation unit calculates a similarity between the feature amounts of each object and each relevant cluster, a co-occurrence information generation unit finds co-occurrence information for each cluster, and then an accuracy calculation unit and an evaluation value calculation unit find an evaluation value for each object with respect to each cluster from the similarity and co-occurrence information. An object priority evaluation unit evaluates the object priority of each object with the evaluation value, and an image priority evaluation unit evaluates the priority of each image from the object priority.01-05-2012
20110299782FAST SUBSPACE PROJECTION OF DESCRIPTOR PATCHES FOR IMAGE RECOGNITION - A method for generating a feature descriptor is provided. A set of pre-generated sparse projection vectors is obtained. A scale space for an image is also obtained, where the scale space having a plurality scale levels. A descriptor for a keypoint in the scale space is then generated based on a combination of the sparse projection vectors and sparsely sampled pixel information for a plurality of pixels across the plurality of scale levels.12-08-2011
20110299781SCENE CHANGE DETECTION AND HANDLING FOR PREPROCESSING VIDEO WITH OVERLAPPED 3D TRANSFORMS - In one method embodiment, receiving noise-filtered plural blocks of a first frame and noise-filtered plural blocks of a second frame; for each of the plural blocks to be matched, determining whether an indication of closeness in match between the each of the plural blocks exceeds a first threshold; incrementing a counter value each time the first threshold is exceeded for closeness of the block matching of a particular block; determining whether the counter value exceeds a second threshold, the exceeding of the second threshold indicating that a defined quantity of blocks has exceeded the first threshold; and responsive to determining that the counter value exceeds the second threshold, triggering a scene change detection.12-08-2011
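A sketch of the two-threshold counting logic, treating the "indication of closeness in match" as a difference measure such as SAD, so exceeding the first threshold means a poorly matched block; that reading is an assumption, since the abstract leaves the direction of the comparison open.

```python
def scene_change(blocks_a, blocks_b, match_thresh, count_thresh, sad):
    """blocks_a / blocks_b: corresponding noise-filtered blocks of two frames.
    sad(a, b): closeness-of-match measure (e.g. sum of absolute differences).
    A scene change is declared when enough blocks match poorly."""
    counter = 0
    for a, b in zip(blocks_a, blocks_b):
        if sad(a, b) > match_thresh:      # first threshold: this block is a poor match
            counter += 1
    return counter > count_thresh         # second threshold: too many poor matches
```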
20110044546Image Reconstruction From Limited or Incomplete Data - A system and method are provided for reconstructing images from limited or incomplete data, such as few view data or limited angle data or truncated data (including exterior and interior data) generated from divergent beams. In one aspect of the invention, the method and apparatus iteratively constrains the variation of an estimated image in order to reconstruct the image. As one example, a divergent beam may be used to generate data (“actual data”). As discussed above, the actual data may be less than sufficient to exactly reconstruct the image by conventional techniques, such as FBP. In order to reconstruct an image, a first estimated image may be generated. Estimated data may be generated from the first estimated image, and compared with the actual data. The comparison of the estimated data with the actual data may include determining a difference between the estimated and actual data. The comparison may then be used to generate a new estimated image. For example, the first estimated image may be combined with an image generated from the difference data to generate a new estimated image. In order to generate the image for the next iteration, the variation of the new estimated image may be constrained. For example, the variation of the new estimated image may be at least partly constrained in order to lessen or reduce the total variation of the image.02-24-2011
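A generic sketch of the estimate / compare / correct / constrain-variation loop, with hypothetical project and backproject callables standing in for the divergent-beam forward model and its adjoint; the step sizes and the smoothed total-variation gradient are illustrative, not the patent's specific schedule.

```python
import numpy as np

def tv_grad(img, eps=1e-6):
    """Gradient of a smoothed total-variation functional of a 2-D image."""
    dx = np.diff(img, axis=1, append=img[:, -1:])
    dy = np.diff(img, axis=0, append=img[-1:, :])
    mag = np.sqrt(dx ** 2 + dy ** 2 + eps)
    gx, gy = dx / mag, dy / mag
    # negative divergence of the normalized gradient field
    div = (gx - np.roll(gx, 1, axis=1)) + (gy - np.roll(gy, 1, axis=0))
    return -div

def reconstruct(actual_data, project, backproject, shape, iters=50, step=0.2, tv_step=0.05):
    """Alternate a data-consistency update with a variation-constraining step.
    project(img) -> estimated data; backproject(residual) -> image-space correction."""
    est = np.zeros(shape)
    for _ in range(iters):
        residual = actual_data - project(est)          # compare estimated with actual data
        est = est + step * backproject(residual)       # new estimate from the difference
        est = est - tv_step * tv_grad(est)             # constrain the image's variation
        est = np.clip(est, 0, None)                    # keep the image non-negative
    return est
```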
20110286670IMAGE PROCESSING APPARATUS, PROCESSING METHOD THEREFOR, AND NON-TRANSITORY COMPUTER-READABLE STORAGE MEDIUM - An image processing apparatus extends the edge portion of an image in a prescribed range, detects, from the image, a plurality of feature points that each indicate a setting position of a local region, sets a local region corresponding to each of the feature points in the image on which region extension has been performed, and calculates a local feature amount corresponding to each feature point based on image information in the local region.11-24-2011
20080310731Methods and Apparatus for Providing a Scalable Identification of Digital Video Sequences - Scalable video sequence processing with various filtering rules is applied to extract dominant features and generate a unique set of signatures based on video content. Video sequence structuring and subsequent video sequence characterization are performed by tracking statistical changes in the content of a succession of video frames and selecting suitable frames for further treatment by region based intra-frame segmentation and contour tracing and description. Compact representative signatures are generated on the video sequence structural level as well as on the selected video frame level, resulting in efficient video database formation and search.12-18-2008
20080310730IMAGE PROCESSING APPARATUS, IMAGE FORMING APPARATUS, IMAGE PROCESSING SYSTEM, AND IMAGE PROCESSING METHOD - There are provided: a pattern detection process section for extracting a partial image made of pixels including a target pixel from input image data; a displaced image generation section for generating a self-displaced image by displacing at least a part of the partial image through a predetermined method; and a matching test determination section for determining whether an image pattern included in the partial image matches an image pattern included in the self-displaced image or not. When the matching test determination section determines that the matching exists, a target pixel in the partial image or a block made of pixels including the target pixel is regarded as a feature point. Consequently, even when image data is subjected to a process such as enlarging and reducing, it is possible to extract a feature point that properly specifies the image data regardless of the enlarging/reducing process.12-18-2008
20100040289Road Sign Recognition Apparatus and Road Sign Recognition Method - A road sign recognition apparatus generates a mosaic image formed by connecting accumulated images from a camera in time-series order, determines multiple road signs contained in the mosaic image by template matching, and generates positional information for determining a relative position of a vehicle with respect to the road sign.02-18-2010
20100034466Object Identification in Images - A first indication of a portion of an image presented on a display device associated with a first user is received in response to a prompt to identify an object. A second indication of a portion of the image presented on a display device associated with a second user is received in response to a prompt to identify the object. A region-of-interest in the image is identified based on the first indication and the second indication. The region-of-interest is associated with an identifier of the object. A designator is associated with the region-of-interest in the image, the designator being configured to present information related to the object. Presentation of the designator associated with the region-of-interest in the image is enabled in subsequent presentations of the image.02-11-2010
20100008587IMAGE PROCESSING IMPROVING POSTPROCESSING RATE OF CHARACTER RECTANGLE EXTRACTION AND IMPROVING CHARACTER RECOGNITION ACCURACY - An image processing device is provided by the present invention, which includes a unit configured to acquire a pixel block contacting an enclosing border of a character rectangle extracted from an image, a determination unit configured to determine whether or not the acquired pixel block has a likelihood of noise, a unit configured to generate a noise candidate removed character rectangle by removing from the character rectangle the pixel block as to which it is determined to have the likelihood of noise, and an outputting unit configured to assess validities by performing character recognition for both of the noise candidate removed character rectangle and the character rectangle, and configured to output a recognition result for one of them having greater validity assessed.01-14-2010
20080285856Method for Automatic Detection and Classification of Objects and Patterns in Low Resolution Environments - The invention is a method of using Wavelet Transformation and Artificial Neural Network (ANN) systems for automatically detecting and classifying objects. To train the system in object recognition, different images, which usually contain desired objects alongside other objects, are used. These objects may appear at different angles. Different characteristics regarding the objects are extracted from the images and stored in a data bank. The system then determines the extent to which each inserted characteristic will be useful in future recognition and determines its relative weight. After the initial insertion of data, the operator tests the system with a set of new images, some of which contain the class objects and some of which contain similar and/or dissimilar objects of different classification. The system learns from the images containing similar objects of different classes as well as from the images containing the class objects, since each specific class characteristic needs to be set apart from other class characteristics. The system may be tested and trained again and again until the operator is satisfied with the system's success rate of object recognition and classification.11-20-2008
20120141033IMAGE PROCESSING DEVICE AND RECORDING MEDIUM STORING IMAGE PROCESSING PROGRAM - An image processing device includes a storage unit that stores dictionary information; a generating unit that extracts, from an input image, a plurality of characteristic point candidates, and generates a plurality of combinations that each include a plurality of characteristic point candidates; a removing unit that removes, for each of the combinations, at least one characteristic point candidate based on at least one of the dictionary information and information obtained by analyzing the input image; and a determining unit that acquires, for each of the combinations, results of matching the dictionary information with the combination of which the at least one characteristic point candidate has been removed, selects a combination of characteristic point candidates based on the acquired matching results, and determines, as the characteristic points, the plurality of characteristic point candidates included in the selected combination.06-07-2012
20100266208Automated Image Cropping to Include Particular Subjects - A digital image is automatically cropped to fit within a desired frame. The cropping is based on one or more of two identified portions of the image. One of the portions is an all-subjects portion that includes all the identified subjects of a particular type in the image. The other portion is an attention portion that identifies an intended focus of the image. An attempt to crop the image to include both of these portions is made, and if unsuccessful then an attempt to crop the image to include at least the all-subjects portion is made. If neither of these attempts is successful, then the image is cropped to include one or more, but less than all, of the identified subjects of the particular type in the image.10-21-2010
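A sketch of the two-stage cropping fallback, assuming axis-aligned boxes for the all-subjects portion and the attention portion; the final fallback of dropping some subjects is left out.

```python
def crop_to_subjects(frame_w, frame_h, all_subjects_box, attention_box, img_w, img_h):
    """Try a frame_w x frame_h crop covering both boxes; fall back to covering
    only the all-subjects box. Boxes are (x0, y0, x1, y1) in pixels. Returns a
    crop box or None if even the all-subjects box does not fit the frame."""
    def covering_crop(box):
        x0, y0, x1, y1 = box
        if x1 - x0 > frame_w or y1 - y0 > frame_h:
            return None                                   # box cannot fit in the frame
        cx = (x0 + x1 - frame_w) // 2                     # center the frame on the box,
        cy = (y0 + y1 - frame_h) // 2                     # then clamp to the image
        cx = min(max(cx, 0), img_w - frame_w)
        cy = min(max(cy, 0), img_h - frame_h)
        return (cx, cy, cx + frame_w, cy + frame_h)

    union = (min(all_subjects_box[0], attention_box[0]),
             min(all_subjects_box[1], attention_box[1]),
             max(all_subjects_box[2], attention_box[2]),
             max(all_subjects_box[3], attention_box[3]))
    return covering_crop(union) or covering_crop(all_subjects_box)
```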
20110135205METHOD AND APPARATUS FOR PROVIDING FACE ANALYSIS SERVICE - The present invention provides a method and apparatus for a face analysis service. The method includes transmitting a point-designation interface to the user client for designating multiple points on an image of the user's face, receiving coordinate information on the multiple points designated on the face image, and determining measured values for the distance ratios or the angles between the predetermined points using the coordinate information. The method is convenient and allows for the objective analysis of a face.06-09-2011
20100329569IMAGE PROCESSING PROGRAM, IMAGE PROCESSING APPARATUS, AND IMAGE PROCESSING METHOD - An image processing program, which is executable by a computer and is stored in a recording medium readable by the computer, includes: storing one or more features of a plurality of images, each image having a shape defined by a central angle whose origin is a center of a certain image; setting at least one representative shape defined by a central angle whose origin is a center of a search window for scanning a target image; extracting a first value of the one or more features of the representative shape; extracting a second value of the one or more features of one of the plurality of images; and comparing the extracted first value and the extracted second value and determining an image in the search window as an image candidate for the certain image based on a comparison result.12-30-2010
20100080467VANISHING POINT DETECTING SYSTEM, VANISHING POINT DETECTING METHOD, AND VANISHING POINT DETECTING PROGRAM - Disclosed is a vanishing point detecting system that includes a straight line detecting unit, a vanishing point detecting unit, and a vanishing point outputting unit. In the vanishing point detecting unit, a vanishing point is detected with one evaluation index of vanishing point plausibility being whether or not angles of plural straight lines passing through a point in question or a vicinity thereof are sparsely distributed over a relatively wide range.04-01-2010
20120033891Methods, Systems, And Computer Program Products For Associating An Image With A Communication Characteristic - Methods, systems, and computer program products for associating an image with a communication characteristic are disclosed. According to one method, a content characteristic of a first image is identified. A communication characteristic of a communication associated with the first image is identified. The content characteristic is associated with the communication characteristic. The content characteristic of the first image is identified in a second image. The communication characteristic is associated with the second image based on the association between the content characteristic and the communication characteristic.02-09-2012
20100086210DIGITIZING DOCUMENTS - Techniques for performing page verification of a document are provided. The techniques include performing a recognition technique on a document to recognize one or more objects in the document, excluding the one or more recognized objects from the document, and performing page verification of the document, wherein page verification comprises visual inspection of the document excluding the one or more recognized objects.04-08-2010
20130064456OBJECT CONTROL DEVICE, COMPUTER READABLE STORAGE MEDIUM STORING OBJECT CONTROL PROGRAM, AND OBJECT CONTROL METHOD - An object control device includes an object of interest specifying unit configured to specify an object of interest to obtain position information on the object of interest, an obstacle determining unit configured to determine whether there is an obstacle between the object of interest and the object, and a time measuring unit configured to measure a period after determining that there is the obstacle, and a holding unit configured to hold position information of the object of interest when the period reaches a predetermined period, and an object action control unit configured to control a direction of a part of the object, based on the position information obtained by the object of interest specifying unit before the period reaches the predetermined period, and based on the position information on the object of interest held in the holding unit after the period reaches the predetermined period.03-14-2013
20100080468METHOD AND INSTALLATION FOR IMAGING - (a) a measurement F04-01-2010
20110194776COLLATING DEVICE, COLLATING METHOD, AND PROGRAM - Provided are a collating device, a processing method and a collation program, in which a reference line is extracted from an image and each partial image is moved so that the reference line becomes a predetermined one, thereby correcting the image, and in which the corrected image is collated so that an authentication result can be obtained in a short time period without any rotating operation. At first, a reference line extracting unit extracts the center line or the contour line of the image as the reference line. Next, an image correcting unit moves each partial image in parallel to correct the image so that the reference line obtained by the reference line extracting unit becomes the predetermined one. Moreover, an image collating unit collates the image corrected by the image correcting unit with a predetermined image to acquire an authentication result.08-11-2011
20120106850COMPUTATION OF INTRINSIC PERCEPTUAL SALIENCY IN VISUAL ENVIRONMENTS, AND APPLICATIONS - Detection of image salience in a visual display of an image.05-03-2012
20120106849INFORMATION PROCESSING APPARATUS, PROCESSING METHOD THEREFOR, AND NON-TRANSITORY COMPUTER-READABLE STORAGE MEDIUM - An image processing apparatus acquires an image; sets a plurality of partial regions for the acquired image, and acquires an image feature amount including a plurality of frequency components from each of the partial regions; compares the acquired image feature amount with an image feature amount of a background model which holds, for each of the partial regions, an image feature amount of an image as a background; updates, based on the comparison result, each of a plurality of frequency components included in the image feature amount held in the background model using the acquired image feature amount by a degree according to each of the frequency components; and detects, using the updated background model, for each of the partial regions, a region where a target object to be detected exists.05-03-2012
20120106848System And Method For Assessing Photographer Competence - A method for automatically assessing the competence of a photographer includes assigning a competency level to the photographer based on a statistical comparison of image features between a collection of the photographer's images and a collection of high competency images. Service and product offerings can be tailored to the photographer based on the competency level assigned by the statistical comparison.05-03-2012
20120106847SYSTEMS AND METHODS TO IMPROVE FEATURE GENERATION IN OBJECT RECOGNITION - Present embodiments contemplate systems, apparatus, and methods to improve feature generation for object recognition. Particularly, present embodiments contemplate excluding and/or modifying portions of images corresponding to dispersed pixel distributions. By excluding and/or modifying these regions within the feature generation process, fewer unfavorable features are generated and computation resources may be more efficiently employed.05-03-2012
20090274371EFFICIENT MODEL-BASED RECOGNITION OF OBJECTS USING A CALIBRATED IMAGE SYSTEM - A model-based object recognition system operates to recognize an object on a predetermined world surface within a world space. An image of the object is acquired. This image is a distorted projection of the world space. The acquired image is processed to locate one or more local features of the image, with respect to an image coordinate system of the image. These local features are mapped to a world coordinate system of the world surface, and matched to a model defined in the world coordinate system. Annotations can be arranged as desired relative to the object in the world coordinate system, and then inverse-mapped into the image coordinate system for display on a monitor in conjunction with the acquired image. Because models are defined in world coordinates, and pattern matching is also performed in world coordinates, one model definition can be used by multiple independent object recognition systems.11-05-2009
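One concrete way to realize the image-to-world mapping described above is a planar homography; the sketch below is a hedged illustration assuming a calibrated 3x3 matrix H relating image pixels to the world surface is already available (NumPy only; not necessarily how the patented system performs the mapping):

import numpy as np

def image_to_world(points_xy, H):
    # Map Nx2 image pixel coordinates onto the world plane using homography H (3x3).
    pts = np.asarray(points_xy, dtype=float)
    homog = np.hstack([pts, np.ones((len(pts), 1))]) @ H.T
    return homog[:, :2] / homog[:, 2:3]

def world_to_image(points_xy, H):
    # Inverse mapping, e.g. for drawing world-defined annotations over the acquired image.
    return image_to_world(points_xy, np.linalg.inv(H))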
20090141983IMAGE ENHANCEMENT SYSTEM AND METHOD USING AUTOMATIC EMOTION DETECTION - An image enhancement system and method using automatic emotion detection, the image enhancement system including: an emotional scale detection unit to analyze a pixel value of one or more frames of an input image in order to automatically detect an emotional scale of the input image; and an image enhancement unit to enhance a quality of the input image based on an image mode selected according to the emotional scale.06-04-2009
20090279786FACE CENTER POSITION DETECTING DEVICE, FACE CENTER POSITION DETECTING METHOD, AND COMPUTER-READABLE MEDIUM - It is possible to accurately detect a face center position based on an image of a person when the person is wearing glasses even when the person is facing sideways. Face images obtained as a result of photographing a face of a driver using a camera (11-12-2009
20090285490Dictionary creating apparatus, recognizing apparatus, and recognizing method - A dictionary creating apparatus registers probability distributions each including an average vector and a covariance matrix, in a dictionary. The dictionary creating apparatus organizes plural distribution profiles of character categories having similar feature vectors into one typical distribution profile, and registers the typical distribution profile and the character categories to be organized, associated with each other, in the dictionary, without registering eigenvalues and eigenvectors of all character categories, associated with each other, in the dictionary.11-19-2009
20090279787MICROBEAD AUTOMATIC RECOGNITION METHOD AND MICROBEAD - A microbead automatic recognition method includes the steps of: acquiring an image of a circular surface of a cylindrical microbead having a recognition pattern created on the circular surface and a plurality of reference points also created on the circular surface; and acquiring information on the rear/front and/or orientation of the cylindrical microbead from the acquired image on the basis of the positions of the reference points.11-12-2009
20090245650Image Processing Device, Image Processing Method and Image Processing Program - An image processing device includes a face region extracting unit that extracts a face region of a person included in an image to be corrected. A correction region specifying unit specifies a region including the extracted face region as a reduction region and specifies a region excluding the reduction region as an enlargement region. A correction execution unit generates a correction image in which an image in the reduction region is reduced based on a predetermined reduction ratio and an image in the enlargement region is enlarged according to a ratio of the reduction region to the enlargement region.10-01-2009
20090290798IMAGE SEARCH METHOD AND DEVICE - An image search method that is robust and fast (with computational complexity of logarithmic order relative to the number of models) is provided. The image search method includes: extracting a plurality of specific regions possessing such a property that a shape can be normalized regardless of an affine transformation thereof, as affine-invariant regions from one or more learning images; calculating, with respect to a reference affine-invariant region, other neighboring affine-invariant regions as a set; deforming the neighboring affine-invariant regions by a transformation to normalize the shape of the reference affine-invariant region; and outputting the deformed shapes of the neighboring affine-invariant regions, together with the combination of the reference affine-invariant region and the neighboring affine-invariant regions.11-26-2009
20090290799Detection of Organ Area Corresponding to Facial Organ Image in Image - An image processing apparatus. A face area detecting unit detects a face area corresponding to a face image in a target image. An organ area detecting unit detects an organ area corresponding to a facial organ image in the face area. An organ detection omission ratio, which is a probability that the organ area detecting unit does not detect the facial organ image as the organ area, is smaller than a face detection omission ratio, which is a probability that the face area detecting unit does not detect the face image as the face area.11-26-2009
20110200257CHARACTER REGION EXTRACTING APPARATUS AND METHOD USING CHARACTER STROKE WIDTH CALCULATION - A character region extracting apparatus and method which extract a character region through the calculation of character stroke widths are provided. The method includes producing a binary image including a candidate character region from an original image; extracting a character outline from the candidate character region; acquiring character outline information for the extracted outline; setting a representative character stroke width and a representative character angle in each of the pixels forming the outline, based on the character outline information; and determining a character existing region in the candidate character region by confirming the ratio of effective representative stroke widths and effective angles as compared to the entire length of the outline. Accordingly, it is possible to efficiently determine whether one or more characters exist in the candidate character region.08-18-2011
20110200256OPTICAL MARK CLASSIFICATION SYSTEM AND METHOD - A system, method, and apparatus for mark recognition in an image of an original document are provided. The method/system takes as input an image of an original document in which at least one designated field is provided for accepting a mark applied by a user (which may or may not have been marked). A region of interest (RoI) is extracted from the image, roughly corresponding to the designated field. A center of gravity (CoG) of the RoI is determined, based on a distribution of black pixels in the RoI. Thereafter, for one or more iterations, the RoI is partitioned into sub-RoIs, based on the determined CoG, where, at a subsequent iteration, the sub-RoIs generated at the prior iteration serve as the RoIs to be partitioned. Data is extracted from the RoI and sub-RoIs at one or more of the iterations, which allows a representation of the entire RoI to be generated which is useful in classifying the designated field, e.g., as positive (marked) or negative (not marked).08-18-2011
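A minimal sketch of the center-of-gravity partitioning idea from the entry above, assuming the RoI is a binary NumPy array with ink pixels equal to 1; the recursion depth and the ink-ratio statistic are assumptions made for illustration:

import numpy as np

def cog(roi):
    # Center of gravity of the ink (value 1) pixels; fall back to the geometric center.
    ys, xs = np.nonzero(roi)
    if len(xs) == 0:
        return roi.shape[0] // 2, roi.shape[1] // 2
    return int(ys.mean()), int(xs.mean())

def partition_features(roi, depth=2):
    # Recursively split the RoI at its CoG and collect the ink ratio of every sub-RoI.
    feats = [float(roi.mean())]
    if depth == 0 or min(roi.shape) < 2:
        return feats
    cy, cx = cog(roi)
    cy = min(max(cy, 1), roi.shape[0] - 1)
    cx = min(max(cx, 1), roi.shape[1] - 1)
    for sub in (roi[:cy, :cx], roi[:cy, cx:], roi[cy:, :cx], roi[cy:, cx:]):
        feats.extend(partition_features(sub, depth - 1))
    return feats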
20090297031Selecting a section of interest within an image - Some embodiments provide a method of selecting a section of interest in an image that includes numerous pixels. The method draws a curvilinear boundary about the section of interest. From the curvilinear boundary, the method generates a two-dimensional transition tunnel region about the section of interest. The method analyzes image data based on the tunnel region to identify a subset of pixels in the region that should be associated with the section of interest. In some embodiments, the tunnel region includes a pair of curves bounding the tunnel region. In some embodiments, the curvilinear boundary has a particular shape, and generating the tunnel region includes determining whether the tunnel can be generated at a specified width with both curves of the tunnel having the same particular shape as the defined border. In some embodiments, analyzing image data includes comparing pixels inside the transition tunnel region to pixels outside the transition tunnel region.12-03-2009
20090297032SEMANTIC EVENT DETECTION FOR DIGITAL CONTENT RECORDS - A system and method for semantic event detection in digital image content records is provided in which an event-level “Bag-of-Features” (BOF) representation is used to model events, and generic semantic events are detected in a concept space instead of an original low-level visual feature space based on the BOF representation.12-03-2009
20120294532COLLABORATIVE FEATURE EXTRACTION SYSTEM FOR THREE DIMENSIONAL DATASETS - A collaborative feature extraction system uses crowdsourced feedback to improve its ability to recognize objects in three-dimensional datasets. The system accesses a three-dimensional dataset and presents images showing possible objects to a group of users, along with potential identifiers for the objects. The users provide feedback as to the accuracy of the identifiers, and the system uses the feedback to adjust parameters for candidate identifiers to improve its recognition of three-dimensional assets in future iterations.11-22-2012
20120294533IMAGE PROCESSING DEVICE AND IMAGE PROCESSING METHOD - A face detection processing unit performs a face detection process by rotating an image in increments of a predetermined angle and acquires a rotation angle at which a face is detected. An angle correction unit acquires an angle between the face and a shoulder by pattern matching and corrects the rotation angle of the image. A human-image orientation identification unit identifies a correct orientation of a human image based on the rotation angle. An image-characteristic analysis unit analyzes a frequency distribution and a brightness distribution of a non-human image. A non-human image orientation identification unit identifies the correct orientation of a non-human image based on distribution characteristics of a frequency of brightness with respect to an axis in a predetermined direction. An image-data updating unit incorporates information regarding the correct orientation in the image data.11-22-2012
20100278433INTERMEDIATE IMAGE GENERATING APPARATUS AND METHOD OF CONTROLLING OPERATION OF SAME - An intermediate image is generated between a reference image and a corresponding image. To achieve this, moving subject images are detected in respective ones of a first image and a second image captured at a fixed interval. The moving subject image of the intermediate image is positioned at a position intermediate between the detected moving subject images. The intermediate image is generated utilizing the reference image in the portion of the image other than that occupied by the moving subject image. A correction is applied in such a manner that the second image will coincide with the first image with the exception of the portion of the second image occupied by the moving subject image.11-04-2010
20090003710Feature Extraction Apparatus, Feature Extraction Method, and Feature Extraction Program - To extract a feature advantageous for classification and correlation by using information that is difficult to acquire, even when that information cannot be acquired from all individuals. A sub-information input device inputs the difficult-to-acquire information and accumulates it as sub-information. A main information input device inputs easily acquired information as main information and accumulates it. A sub-information selection device evaluates a category attribution degree of each item of accumulated sub-information and selects the sub-information having a high category attribution degree. A correlation feature extraction device uses the sub-information selected by the sub-information selection device as a feature extraction filter, and extracts a feature corresponding to the main information from a correlation between the main information and the sub-information.01-01-2009
20100135580METHOD FOR ADJUSTING VIDEO FRAME - A method for adjusting a face area of a user in a video frame captured by a camera lens is disclosed. In the method, an edge of an image in the video frame is detected, wherein the image contains a face area representing the face of a user. Then, a plurality of facial features of the face are extracted from the face area according to the image edge. A facial feature database is then consulted for the facial features to estimate a tilt angle between the plane of the user's face and the focusing direction of the camera lens at the time the video frame was captured. Finally, the estimated tilt angle is used to adjust the relative proportions between image parts in the face area. As a result, a video frame is obtained that displays the face as it would be seen from a position directly in front of and level with it, with no tilt in the image of the face.06-03-2010
20100135582SYSTEM AND METHOD FOR SEARCHING PORTIONS OF OBJECTS IN IMAGES AND FEATURES THEREOF - Embodiments enable searching of portions of objects in images, including programmatically analyzing each image in a collection in order to determine image data that, for individual images in the collection, represents one or more visual characteristics of a portion of an object shown in that image. A user is enabled to specify one or more search criteria that includes image data, and a search result may be determined based on one or more images in the collection that show a corresponding object that has a portion that satisfies a threshold. The threshold is defined at least in part by the one or more search criteria.06-03-2010
20090169109DEVICE AND PROCESS FOR RECOGNIZING AN OBJECT IN AN IMAGE - The present invention refers to a process and a device for recognizing one or more objects in an image composed of a plurality of pixels. The recognition of an object in an image comprises at least three phases. In a first phase, a plurality of mutually different possible configurations of pixels is pre-defined, and to every one of those at least one symbolic coding identifying and describing the features of the corresponding configuration of pixels is associated. In a second phase, the image is processed so as to associate with it, in a univocal way, a corresponding sequence of specific configurations detected among said plurality of configurations of pixels. Finally, in a third phase, the symbolic codings associated with the corresponding specific configurations of said sequence are interpreted automatically and correlated with each other to allow said object to be recognized.07-02-2009
20080304751IMAGE PROCESSING APPARATUS, IMAGE FORMING APPARATUS, IMAGE PROCESSING SYSTEM, AND IMAGE PROCESSING METHOD - There are provided: a pattern detection process section for extracting a partial image made of pixels including a target pixel from input image data; a rotated image generating section for generating a self-rotated image by rotating the partial image; and a matching test determination section for determining whether an image pattern included in the partial image matches an image pattern included in the self-rotated image. When it is determined that matching exists, a target pixel in the partial image or a block made of pixels including the target pixel is regarded as a feature point. Consequently, even when image data has been read while skewed with respect to a predetermined positioning angle of the reading position of an image reading apparatus, or has been subjected to enlargement, reduction, etc., a feature point properly specifying the image data can be extracted regardless of the skew, enlargement, or reduction.12-11-2008
20120294534GEOMETRIC FEATURE EXTRACTING DEVICE, GEOMETRIC FEATURE EXTRACTING METHOD, STORAGE MEDIUM, THREE-DIMENSIONAL MEASUREMENT APPARATUS, AND OBJECT RECOGNITION APPARATUS - A geometric feature extracting device comprising: first input means for inputting a three-dimensional shape model of a measurement object; generation means for generating two-dimensional parameter planes corresponding to curved surface patches that configure the three-dimensional shape model; first calculation means for calculating normal directions to points on the curved surface patches; holding means for holding the parameter planes and the normal directions in association with each other; second input means for inputting an observation direction used to observe the measurement object from an observation position; selection means for selecting regions in each of which the normal direction and the observation direction satisfy a predetermined angle condition from the parameter planes; and second calculation means for calculating geometric features on three-dimensional shape model corresponding to regions selected by the selection means as geometric features that configure geometric feature regions on the three-dimensional shape model, which are observable from the observation position.11-22-2012
20110007972IMAGE PROCESSING DEVICE, IMAGE PROCESSING METHOD AND COMPUTER-READABLE MEDIUM - An image-processing device comprises an acquisition section that acquires a binary image; a figure part identifying section that identifies a figure part in the binary image; a line segment identifying section that identifies line segments included in the figure part; a specific line segment extracting section that determines whether each line segment has an end portion having a specific shape, and extracts a line segment with an end portion having the specific shape as a specific line segment; and a table region determining section that determines whether the figure part is a table region based on the line segments identified by the line segment identifying section excluding the specific line segment.01-13-2011
20120294535FACE DETECTION METHOD AND APPARATUS - Image fragments are formed in regions corresponding to circles searched from an input image. In a cascade of homogeneous classifiers, each classifier classifies input vectors corresponding to the image fragments into a face type and a non-face type. This procedure is performed on all images included in an image pyramid, and the coordinates of a face are detected based on the results of the procedure on all the images.11-22-2012
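A minimal sketch of a rejection cascade over image fragments, assuming each stage is simply a scoring function paired with a threshold and a fragment is labeled a face only if it passes every stage; the stage functions, the pyramid layout, and the fragment generator are placeholders, not the application's classifiers:

def cascade_classify(fragment_vector, stages):
    # stages: list of (score_fn, threshold); reject as soon as one stage fails.
    for score_fn, threshold in stages:
        if score_fn(fragment_vector) < threshold:
            return "non-face"
    return "face"

def detect_in_pyramid(pyramid, fragments_of, stages):
    # pyramid: list of (scale, image); fragments_of yields ((x, y), vector) per image.
    detections = []
    for scale, image in pyramid:
        for (x, y), vec in fragments_of(image):
            if cascade_classify(vec, stages) == "face":
                detections.append((x * scale, y * scale))
    return detections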
20120269443METHOD, APPARATUS, AND PROGRAM FOR DETECTING FACIAL CHARACTERISTIC POINTS - First, a face within an image, which is a target of detection, is detected. Detection data of the face is employed to detect eyes which are included in the face. Detection data of the eyes are employed to detect the inner and outer corners of the eyes. Detection data of the inner and outer corners of the eyes is employed to detect characteristic points of the upper and lower eyelids that represent the outline of the eyes.10-25-2012
20120269442Method and System for Detecting the Open or Closed State of the Eyes of a Face - A method of detecting the open or closed state of at least one eye of a face, comprising detection of an eye of the face, and an associated system. The method comprises a detection of vertical contours of the eye, and a determination of the open or closed state of the eye on the basis of the vertical contours detected.10-25-2012
20100135581Depth estimation apparatus and method - A depth estimation apparatus is provided. The depth estimation apparatus may estimate a depth value of at least one pixel composing an input video based on feature information about at least one feature of the input video, a position of the at least one pixel, and a depth relationship among the at least one pixel and neighboring pixels.06-03-2010
20120070086INFORMATION READING APPARATUS AND STORAGE MEDIUM - An information reading apparatus that reads information from an image. The apparatus includes: an acquiring module, a first processing module, a second processing module, and an adding module. The acquiring module acquires a whole image containing plural reading subjects. The first processing module performs processing of extracting particular patterns from the respective reading subjects by performing a pattern analysis on the whole image to identify the reading subjects contained in the whole image. The second processing module performs processing of reading pieces of information from the respective reading subjects and recognizing the read-out pieces of information by analyzing the respective particular patterns extracted by the first processing module. The adding module adds current processing statuses for the respective reading subjects contained in the whole image based on at least one of sets of processing results of the first processing module and the second processing module.03-22-2012
20100142824METHOD AND APPARATUS FOR REAL-TIME/ON-LINE PERFORMING OF MULTI VIEW MULTIMEDIA APPLICATIONS - A method and apparatus for real-time/on-line performing of multi-view multimedia applications are disclosed. In one aspect, a method of computing a disparity value of a pixel includes computing from two input images a plurality of first costs for a pixel, each cost associated with a region selected from a plurality of regions of a first type, the regions covering the pixel and being substantially equal in size and shape. The method also includes computing from the first costs a plurality of second costs each associated with a region selected from a plurality of regions of a second type, the regions of the second type covering the pixel, at least some of the regions of the second type having a substantially different size and/or shape. The method further includes selecting from the second costs the minimal cost and selecting the corresponding disparity value as the disparity value.06-10-2010
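A minimal sketch of the two-stage cost idea for one pixel, assuming rectified grayscale images, absolute-difference matching costs, a small square window as the first-type region, and horizontal/vertical strips of such windows as the second-type regions; the actual region shapes in the application are not reproduced:

import numpy as np

def disparity_at(left, right, y, x, max_disp, r=2):
    # First cost: mean absolute difference over a (2r+1)^2 window at a candidate disparity.
    # Second costs: the first costs re-aggregated over a horizontal and a vertical strip of
    # windows; the cheaper strip is kept, then winner-takes-all over the disparities.
    h, w = left.shape
    L, R = left.astype(float), right.astype(float)

    def window_cost(cy, cx, d):
        y0, y1 = max(cy - r, 0), min(cy + r + 1, h)
        x0, x1 = max(cx - r, d), min(cx + r + 1, w)
        if x0 >= x1:
            return np.inf
        return np.abs(L[y0:y1, x0:x1] - R[y0:y1, x0 - d:x1 - d]).mean()

    best_d, best_cost = 0, np.inf
    for d in range(min(max_disp, x) + 1):
        horiz = np.mean([window_cost(y, min(max(x + k, 0), w - 1), d) for k in (-r, 0, r)])
        vert = np.mean([window_cost(min(max(y + k, 0), h - 1), x, d) for k in (-r, 0, r)])
        cost = min(horiz, vert)
        if cost < best_cost:
            best_d, best_cost = d, cost
    return best_d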
20090080777Methods and Apparatus for Filtering Video Packets for Large-Scale Video Stream Monitoring - A method of filtering video packets for video stream monitoring is provided. A video packet of a video stream is received. One or more features are extracted from a specified frame of the video packet via one or more histograms and frequency domain coefficients of the specified frame. One or more concept detectors are implemented on the one or more features creating one or more confidence values. The confidence values are transmitted to a display module for filtering of video packets.03-26-2009
20110206282Device, Method, and Program for Image Processing - An image processing device includes a subject region detector that detects a subject region from an input image; a cutting unit that cuts an image of the subject region from the input image; a priority calculator that calculates a priority of each of predetermined regions on a boundary with respect to the subject region, for the input image; a retrieval unit that retrieves a region similar to an image of a predetermined region with a top priority from among the priorities, from the input image after the image of the subject region is cut; a recovery unit that recovers the subject region by copying an image of an adjacent region that is adjacent to the region similar to the predetermined region retrieved by the retrieval unit and includes a region corresponding to a region cut as the subject region, and by pasting the image obtained by copying onto the region that is adjacent to the predetermined region with the top priority and cut as the subject region; and a composition unit that combines the image of the subject region cut by the cutting unit with the image with the subject region recovered by the recovery unit.08-25-2011
20090161961Apparatus and method for trimming - Apparatuses and methods for processing a digital image are provided. More particularly, apparatuses and methods for trimming an image, whereby a composition of the image is located and trimmed from a displayed image, are provided. The apparatuses may include a digital signal processing unit which retrieves composition information similar to the composition information of a displayed image, and trims the image to obtain a trimmed image corresponding to an area of the image matching the retrieved composition information.06-25-2009
20090129680IMAGE PROCESSING APPARATUS AND METHOD THEREFOR - Replacement target image data and image data for replacement are stored, character images of the replacement target image data and character images of the image data for replacement are extracted, and character recognition is performed for each page on character strings contained in the extracted character images. Then, a comparison is performed for each page of the character strings of pages of the replacement target image data and the character strings of pages of the image data for replacement, which have undergone character recognition, and a degree of similarity therebetween is determined. Then, based on a determination result, at least a portion of pages of the replacement target image data is replaced with at least a portion of pages of the image data for replacement.05-21-2009
20090136137IMAGE PROCESSING APPARATUS AND METHOD THEREOF - The invention includes a reference point setting unit configured to extract a plurality of reference points from an input image; a pattern extractor configured to extract a local pattern of the reference points; a characteristic set holder configured to hold a group of characteristic sets having both local patterns of the reference points extracted from a learned image and vectors from the reference points to characteristic points to be detected; a matching unit configured to compare the local patterns extracted from the reference points and the group of characteristic sets and select the nearest characteristic set as a characteristic set having the most similar pattern; and a characteristic point detector configured to detect a final position of the characteristic point based on a vector from the reference point to the characteristic point included in the selected nearest characteristic set.05-28-2009
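A minimal sketch of the matching and detection steps from the entry above, assuming each stored characteristic set pairs a local-pattern vector with an offset vector learned from training images; the data layout and the Euclidean nearest-neighbor rule are assumptions:

import numpy as np

def detect_characteristic_point(reference_point, local_pattern, characteristic_sets):
    # characteristic_sets: list of (pattern_vector, offset_vector) pairs.
    patterns = np.array([p for p, _ in characteristic_sets], dtype=float)
    offsets = np.array([v for _, v in characteristic_sets], dtype=float)
    distances = np.linalg.norm(patterns - np.asarray(local_pattern, dtype=float), axis=1)
    nearest = int(np.argmin(distances))          # characteristic set with the most similar pattern
    return np.asarray(reference_point, dtype=float) + offsets[nearest]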
20120070087Real-Time Face Tracking in a Digital Image Acquisition Device - An image processing apparatus for tracking faces in an image stream iteratively receives an acquired image from the image stream including one or more face regions. The acquired image is sub-sampled at a specified resolution to provide a sub-sampled image. An integral image is then calculated for a least a portion of the sub-sampled image. Fixed size face detection is applied to at least a portion of the integral image to provide a set of candidate face regions. Responsive to the set of candidate face regions produced and any previously detected candidate face regions, the resolution is adjusted for sub-sampling a subsequent acquired image.03-22-2012
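The integral image mentioned above is what makes fixed-size detection windows cheap to evaluate on the sub-sampled frame: any rectangular pixel sum costs four lookups. A minimal NumPy sketch (not the device's implementation):

import numpy as np

def integral_image(gray):
    # ii[y, x] = sum of gray[:y, :x]; padded with a leading row and column of zeros.
    ii = np.zeros((gray.shape[0] + 1, gray.shape[1] + 1), dtype=np.float64)
    ii[1:, 1:] = np.cumsum(np.cumsum(gray, axis=0), axis=1)
    return ii

def box_sum(ii, y, x, h, w):
    # Sum of the h-by-w window with top-left corner (y, x), in constant time.
    return ii[y + h, x + w] - ii[y, x + w] - ii[y + h, x] + ii[y, x]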
20080317354IMAGE SELECTION SUPPORT SYSTEM FOR SUPPORTING SELECTION OF WELL-PHOTOGRAPHED IMAGE FROM PLURAL IMAGES - A feature area extracting section extracts an area having a unique feature in an image input to an image selection support apparatus. A specific area feature collating and determining section determines whether or not the area having a feature and extracted by the feature area extracting section is a specific area. A specific area image reading section decides a rectangular area including the specific area, and reads image information of the rectangular area. The specific area image reading section has at least one of an enlargement displaying section which enlarges and displays the image information read by the specific area image reading section, a thumbnail display section which reduces and displays the input image, and an original image displaying section which enlarges and displays the input image.12-25-2008
20090175540Controlled human pose estimation from depth image streams - A system, method, and computer program product for estimating upper body human pose are described. According to one aspect, a plurality of anatomical features are detected in a depth image of the human actor. The method detects a head, neck, and torso (H-N-T) template in the depth image, and detects the features in the depth image based on the H-N-T template. An estimated pose of a human model is estimated based on the detected features and kinematic constraints of the human model.07-09-2009
20090022403IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, AND COMPUTER READABLE MEDIUM - There is provided an image processing apparatus for processing a moving image including a plurality of moving-image-component images. The image processing apparatus includes a characteristic region detecting section that detects a characteristic region in one or more of the plurality of moving-image-component images, and a characteristic region identifying section that identifies a position of the characteristic region in a non-selected image with reference to a position of the characteristic region in a selected image. Here, the selected image is selected from the plurality of moving-image-component images of the moving image, and the non-selected image is a different one of the plurality of moving-image-component images than the selected image. The characteristic region identifying section identifies the position of the characteristic region in the non-selected image with reference to a position of the characteristic region in a selected image preceding the non-selected image and a position of the characteristic region in a selected image following the non-selected image.01-22-2009
20110142348Signature Derivation for Images - Deriving a fingerprint of an image corresponding to media content involves selecting at least two different regions of the same image, determining a relationship between the two regions, and deriving a fingerprint of the image based on the relationship between the two regions of the image.06-16-2011
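A minimal sketch of deriving a signature from the relationship between two regions of one image, here taking the ordering and the ratio of their mean intensities as that relationship; the particular relationship and the coarse quantization are assumptions, not the claimed derivation:

import numpy as np

def region_fingerprint(image, region_a, region_b, bins=16):
    # region_a, region_b: (y, x, h, w) rectangles inside the image.
    def mean_of(r):
        y, x, h, w = r
        return float(np.mean(image[y:y + h, x:x + w]))
    ma, mb = mean_of(region_a), mean_of(region_b)
    order_bit = int(ma >= mb)                      # which region is brighter
    ratio = ma / (ma + mb + 1e-9)                  # unchanged under a global intensity scale
    return order_bit, min(int(ratio * bins), bins - 1)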
20130216136IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, STORAGE MEDIUM AND IMAGE PROCESSING SYSTEM - An image processing method includes shooting a subject and an object different from the subject, extracting a characteristic of the object, and converting an image of the subject into a character image according to the extracted characteristic of the object.08-22-2013
20090080778PATTERN RECOGNITION METHOD AND APPARATUS FOR DATA PROTECTION - Provided is a secure pattern recognition method. The method includes: receiving data and generating a probe by converting the received data into a template for pattern recognition; accessing a gallery that is a template registered and stored in advance; determining a region to which the probe belongs and obtaining the center point of the region; obtaining a hash value of the center point and the coordinate of the probe; and determining whether or not the hash value of the center point and a hash value stored in the gallery are equal and determining whether or not the probe and the gallery are classified into the same class by calculating whether or not the coordinate of the probe is inside a decision boundary configured with thresholds on the basis of the coordinates of the center point.03-26-2009
20110229042IMAGE SIGNATURE MATCHING DEVICE - An image signature to be used for matching is generated by the following generation method. First, region features are extracted from respective sub-regions of a plurality of pairs of sub-regions in an image, and for each of the pairs of sub-regions, a difference value between the region features of two sub-regions forming a pair is quantized. Then, a collection of elements which are quantization values calculated for the respective pairs of sub-regions is used as an image signature to be used for discriminating the image. The image signature matching device specifies, from an image signature of a first image and an image signature of a second image generated by the above generating method, a margin region of each of the images. The image signature matching device matches the image signature of the first image and the image signature of the second image in such a manner that a weight of an element, in which at least one of two sub-regions forming a pair is included in the specified margin region, is reduced.09-22-2011
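A minimal sketch of the signature and of the margin-weighted comparison described above, assuming each pair of sub-regions is summarized by the mean intensity of its two rectangles, quantized into {-1, 0, +1}; the threshold, the weight value, and the margin test are assumptions:

import numpy as np

def image_signature(image, subregion_pairs, tau=2.0):
    # subregion_pairs: list of ((y, x, h, w), (y, x, h, w)); one ternary element per pair.
    def mean_of(r):
        y, x, h, w = r
        return float(np.mean(image[y:y + h, x:x + w]))
    sig = []
    for a, b in subregion_pairs:
        d = mean_of(a) - mean_of(b)
        sig.append(0 if abs(d) < tau else (1 if d > 0 else -1))
    return np.array(sig)

def match_score(sig1, sig2, subregion_pairs, in_margin, low_weight=0.25):
    # in_margin(rect) -> bool; elements touching the margin region get a reduced weight.
    weights = np.array([low_weight if in_margin(a) or in_margin(b) else 1.0
                        for a, b in subregion_pairs])
    agreement = (sig1 == sig2).astype(float)
    return float((weights * agreement).sum() / weights.sum())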
20090208112PATTERN RECOGNITION METHOD, AND STORAGE MEDIUM WHICH STORES PATTERN RECOGNITION PROGRAM - A pattern recognition method is applied to processing of causing an information processing apparatus to recognize a pattern in a plurality of steps. The information processing apparatus detects recognition candidates which can be recognition candidates of each step. The information processing apparatus expands the recognition candidates of the next step belonging to each detected recognition candidate of each step. The information processing apparatus calculates the evaluation value of each expanded recognition candidate based on an a posteriori probability on condition of all recognition processing results for a recognition candidate which has undergone recognition processing. The information processing apparatus selects recognition candidates based on the calculated evaluation value of each recognition candidate. The information processing apparatus determines a recognition result based on the selected recognition candidates.08-20-2009
20130121587SYSTEMS AND METHODS FOR LARGE SCALE, HIGH-DIMENSIONAL SEARCHES - Methods and systems for fast, large scale, high-dimensional searches are described. In some embodiments, a method comprises transforming components of a high-dimensional image descriptor into transformed components in a transform domain, allocating one or more bits available within a bit budget to a given transformed component within a first subset of transformed components as a function of a variance of the given transformed component, independently quantizing each transformed component within the first subset of transformed components, generating a compact representation of the high-dimensional image descriptor based, at least in part, on the independently quantized components, and evaluating a nearest neighbor search operation based, at least in part, on the compact representation of the high-dimensional image descriptor.05-16-2013
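A minimal sketch of spending a bit budget on transformed components in order of variance and quantizing each component independently; the transform is assumed to have been applied already, and the greedy allocation rule (one extra bit roughly quarters a component's quantization-error variance) is an illustrative choice, not the claimed allocation:

import numpy as np

def allocate_bits(variances, bit_budget, max_bits=8):
    # Greedily give the next bit to the component with the largest remaining variance.
    var = np.asarray(variances, dtype=float).copy()
    bits = np.zeros(len(var), dtype=int)
    remaining = bit_budget
    while remaining > 0:
        i = int(np.argmax(var))
        if var[i] == -np.inf:
            break                                  # every component already has max_bits
        bits[i] += 1
        remaining -= 1
        var[i] = var[i] / 4.0 if bits[i] < max_bits else -np.inf
    return bits

def quantize(component, n_bits, lo, hi):
    # Uniform scalar quantization of one transformed component over the range [lo, hi].
    if n_bits == 0:
        return 0
    levels = 2 ** n_bits
    step = (hi - lo) / levels
    return int(np.clip((component - lo) / step, 0, levels - 1))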
20130121589SYSTEM AND METHOD FOR ENABLING THE USE OF CAPTURED IMAGES THROUGH RECOGNITION - An embodiment provides for enabling retrieval of a collection of captured images that form at least a portion of a library of images. For each image in the collection, a captured image may be analyzed to recognize information from image data contained in the captured image, and an index may be generated, where the index data is based on the recognized information. Using the index, functionality such as search and retrieval is enabled. Various recognition techniques, including those that use the face, clothing, apparel, and combinations of characteristics may be utilized. Recognition may be performed on, among other things, persons and text carried on objects.05-16-2013
20130121590EVENT DETECTION APPARATUS AND EVENT DETECTION METHOD - An event detection apparatus includes an input unit configured to input a plurality of time-sequential images, a first extraction unit configured to extract sets of first image samples according to respective different sample scales from a first time range of the plurality of time-sequential images based on a first scale parameter, a second extraction unit configured to extract sets of second image samples according to respective different sample scales from a second time range of the plurality of time-sequential images based on a second scale parameter, a dissimilarity calculation unit configured to calculate a dissimilarity between the first and second image samples based on the sets of the first and second image samples, and a detection unit configured to detect an event from the plurality of time-sequential images based on the dissimilarity.05-16-2013
20130121591SYSTEMS AND METHODS USING OBSERVED EMOTIONAL DATA - Systems and techniques using observed emotional data are described herein. A sequence of visual observations of a subject can be received during execution of an application. An emotional state of the subject can be determined based on the sequence of visual observations. Execution of the application can be modified from a baseline execution using the emotional state.05-16-2013
20130121592POSITION AND ORIENTATION MEASUREMENT APPARATUS,POSITION AND ORIENTATION MEASUREMENT METHOD, AND STORAGE MEDIUM - An apparatus comprises: extraction means for extracting an occluded region in which illumination irradiated onto the target object is occluded in an obtained two-dimensional image; projection means for projecting a line segment that constitutes a three-dimensional model onto the two-dimensional image based on approximate values of position/orientation of the target object; association means for associating a point that constitutes the projected line segment with a point that constitutes an edge in the two-dimensional image; determination means for determining whether the associated point that constitutes an edge in the two-dimensional image is present within the occluded region; and measurement means for measuring the position/orientation of the target object based on a distance on the two-dimensional image between the point that constitutes the projected line segment and the point that constitutes the edge, the points being associated as the pair, and a determination result.05-16-2013
20090010545SYSTEM AND METHOD FOR IDENTIFYING FEATURE OF INTEREST IN HYPERSPECTRAL DATA - A system and method for identifying objects of interest in image data is provided. The present invention utilizes principles of Iterative Transformational Divergence, in which objects in images, when subjected to special transformations, will exhibit radically different responses based on the physical, chemical, or numerical properties of the object or its representation (such as images), combined with machine learning capabilities. Using the system and methods of the present invention, certain objects that appear indistinguishable from other objects to the eye or computer recognition systems, or are otherwise almost identical, generate radically different and statistically significant differences in the image describers (metrics) that can be easily measured.01-08-2009
20090110291IMAGE PROCESSING APPARATUS AND IMAGE PROCESSING METHOD - A key region extraction unit (04-30-2009
20110142349INFORMATION PROCESSING APPARATUS AND INFORMATION PROCESSING METHOD - Methods and apparatuses for detecting a plurality of pixels of interest within an image and identifying luminance values corresponding to a predetermined object. The apparatus for detecting includes a memory configured to store first and second images captured using light of first and second wavelengths, respectively. The apparatus for detecting further includes at least one processor configured to detect a plurality of pixels of interest within the first captured image based on luminance values of the stored first and second captured images. The apparatus for identifying includes a memory configured to store a processed image, and at least one processor configured to determine frequencies of luminance values of the plurality of pixels of interest in the processed image and to determine a range of luminance values corresponding to a predetermined object within the processed image based on the determined frequencies of the luminance values.06-16-2011
20090245648RIDGE DIRECTION EXTRACTING DEVICE, RIDGE DIRECTION EXTRACTING PROGRAM, AND RIDGE DIRECTION EXTRACTING METHOD - To extract ridge directivities properly and precisely by suppressing the adverse influence of periodic noise. A ridge directivity is assumed along each of several direction codes set in advance. For each direction code, a reliability indicating the probability that the assumed direction code is consistent with a valley is obtained pixel by pixel, based on the difference between the density of a target pixel and the densities of pixels neighboring the target pixel in the direction orthogonal to the assumed direction code. The direction having the maximum reliability is determined as the ridge directivity of the target pixel.10-01-2009
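A minimal per-pixel sketch of the reliability computation described above: for each of a small set of direction codes, density differences between the target pixel and its neighbors along the orthogonal direction are accumulated, and the code with the maximum reliability wins. The eight-direction set, the reach, and the sign convention (ridges dark, valleys bright) are assumptions:

import numpy as np

# Eight direction codes given as (dy, dx) steps; the orthogonal step is (dx, -dy).
DIRECTIONS = [(0, 1), (1, 2), (1, 1), (2, 1), (1, 0), (2, -1), (1, -1), (1, -2)]

def ridge_direction(gray, y, x, reach=3):
    h, w = gray.shape
    best_code, best_reliability = 0, -np.inf
    for code, (dy, dx) in enumerate(DIRECTIONS):
        oy, ox = dx, -dy                            # step orthogonal to the assumed ridge
        reliability = 0.0
        for k in range(1, reach + 1):
            for sign in (+1, -1):
                ny, nx = y + sign * k * oy, x + sign * k * ox
                if 0 <= ny < h and 0 <= nx < w:
                    # Neighbors across a dark ridge are brighter than the target pixel.
                    reliability += float(gray[ny, nx]) - float(gray[y, x])
        if reliability > best_reliability:
            best_code, best_reliability = code, reliability
    return best_code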
20090257657METHOD AND DEVICE FOR PROCESSING AND PRESENTING MEDICAL IMAGES - The present invention relates to a method for processing and presenting at least a first image (10-15-2009
20100150450IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, AND IMAGE CAPTURING APPARATUS - Face regions are detected from a captured image, and a weight of each detected face region is computed based on a size and/or a position of the detected face region. Then a previous priority ranking weight is computed based on the priority ranking determined in previous processing. A priority of the face region is computed from the weight and the previous priority ranking weight. For example, if the continuous processing count exceeds a threshold, the priority ranking weight is reduced. After the processing is completed for all face regions, a priority ranking of each face region is determined based on the priority computed for each face region.06-17-2010
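A minimal sketch of combining a size/position weight with a previous-ranking weight, as in the entry above; the weighting constants, the halving rule, and the count threshold are assumptions:

def face_priority(face, frame_w, frame_h, previous_rank=None,
                  continuous_count=0, count_threshold=30):
    # face: (x, y, w, h); larger and more central faces receive larger weights.
    x, y, w, h = face
    size_weight = (w * h) / float(frame_w * frame_h)
    cx, cy = x + w / 2.0, y + h / 2.0
    half_diag = ((frame_w ** 2 + frame_h ** 2) ** 0.5) / 2.0
    center_dist = ((cx - frame_w / 2.0) ** 2 + (cy - frame_h / 2.0) ** 2) ** 0.5
    position_weight = 1.0 - center_dist / half_diag

    # Previous-ranking weight: higher for faces ranked near the top last time,
    # reduced once the same face has been processed continuously for too long.
    rank_weight = 0.0 if previous_rank is None else 1.0 / (1 + previous_rank)
    if continuous_count > count_threshold:
        rank_weight *= 0.5
    return size_weight + position_weight + rank_weight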
20090263022Real-Time Face Tracking in a Digital Image Acquisition Device - An image processing apparatus for tracking faces in an image stream iteratively receives an acquired image from the image stream including one or more face regions. The acquired image is sub-sampled at a specified resolution to provide a sub-sampled image. An integral image is then calculated for a least a portion of the sub-sampled image. Fixed size face detection is applied to at least a portion of the integral image to provide a set of candidate face regions. Responsive to the set of candidate face regions produced and any previously detected candidate face regions, the resolution is adjusted for sub-sampling a subsequent acquired image.10-22-2009
20090316994METHOD AND FILTER FOR RECOVERY OF DISPARITIES IN A VIDEO STREAM - The invention concerns a method for recovery, through a digital filtering processing, of the disparities (di,k) in the digital images (12-24-2009
20120195507MICROBEAD AUTOMATIC RECOGNITION METHOD AND MICROBEAD - A microbead automatic recognition method includes the steps of: acquiring an image of a circular surface of a cylindrical microbead having a recognition pattern created on the circular surface and a plurality of reference points also created on the circular surface; and acquiring information on the rear/front and/or orientation of the cylindrical microbead from the acquired image on the basis of the positions of the reference points.08-02-2012
20100183227Person detecting apparatus and method and privacy protection system employing the same - A person detection apparatus and method, and a privacy protection system using the method and apparatus, are provided. The person detection apparatus includes: a motion region detection unit, which detects a motion region from a current frame image using motion information between frames; and a person detecting/tracking unit, which detects a person in the detected motion region using shape information of persons, and performs a tracking process on a motion region detected as the person in a previous frame image within a predetermined tracking region.07-22-2010
20090116748INFERENTIAL SELF-REGISTRATION OF IMPERFECT OMR FORMS - Image data of a zone in a response form that has a plurality of response bubbles in the zone is processed. The image data of the zone has at least one response bubble that is well-formed and at least one response bubble that is not well-formed. Well-formed response bubbles are located in the zone from image data of the zone. The locations of the well-formed response bubbles in the zone are compared to a form template that defines the zone and contains data regarding locations of all expected response bubbles in the zone. It is determined from the comparison whether sufficient information exists to determine that the well-formed response bubbles constitute a specific part of the form template zone. If so, then the well-formed response bubbles are processed from the image data of the zone.05-07-2009
20090116747Artificial intelligence systems for identifying objects - A process for object identification comprising extracting object shape features and object color features from digital images of an initial object, storing the extracted object shape features and object color features in a database where they are associated with a unique identifier for said object, and repeating the first step for a plurality of different objects. Then extracting object shape features and object color features from a digital image of an object whose identity is being sought, and correlating the extracted object shape features and object color features of the object whose identity is being sought with the extracted object shape features and object color features previously stored in the database. If a first correlation of the extracted object shape features is better than a first threshold value for a given object associated with an identifier in the database, and if a second correlation of the extracted object color features is better than a second threshold value for the given object, then a determination is made that the object whose identity is being sought is said given object.05-07-2009
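A minimal sketch of the two-threshold decision, assuming the stored shape and color features are fixed-length vectors and that correlation is computed as a normalized dot product; the threshold values and database layout are placeholders:

import numpy as np

def correlation(a, b):
    a = np.asarray(a, dtype=float).ravel()
    b = np.asarray(b, dtype=float).ravel()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom else 0.0

def identify(query_shape, query_color, database, shape_thresh=0.9, color_thresh=0.85):
    # database: {identifier: (shape_features, color_features)}.
    # An object is identified only if BOTH correlations beat their thresholds.
    best_id, best_score = None, -1.0
    for identifier, (shape_feat, color_feat) in database.items():
        cs = correlation(query_shape, shape_feat)
        cc = correlation(query_color, color_feat)
        if cs > shape_thresh and cc > color_thresh and cs + cc > best_score:
            best_id, best_score = identifier, cs + cc
    return best_id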
20100189357METHOD AND DEVICE FOR THE VIRTUAL SIMULATION OF A SEQUENCE OF VIDEO IMAGES - The invention relates to a method for the virtual simulation of a sequence of video images from a sequence of video images of a moving face/head, comprising: an acquisition and initialization phase of a face/head image of the real video sequence; an evolution phase for determining specific parametric models from characteristic points extracted from said image and used as initial priming points, and for deforming said specific models for adaptation to the outlines of the features of the analyzed face, and also for detecting and analyzing the cutaneous structure of one or more regions of the face/head; and a tracking and transformation phase for modifying the characteristic features of other images in the video sequence and the colors of the cutaneous structure, said modifications being carried out according to predetermined criteria stored in at least one database and/or according to decision criteria of at least one expert system of a 0+ or 1 order.07-29-2010
20110235919EYE OPEN/CLOSE RECOGNIZING APPARATUS AND RECORDING MEDIUM - A computer (09-29-2011
20100014758METHOD FOR DETECTING PARTICULAR OBJECT FROM IMAGE AND APPARATUS THEREOF - When discriminating a plurality of types of objects, a plurality of local feature quantities are extracted from local regions in an image, and positions of the local regions, and attributes according to image characteristics of the local feature quantities are stored in correspondence with each other. Then, object likelihoods with respect to a plurality of objects are determined from attributes of feature quantities in a region-of-interest, an object whose object likelihood is not less than a threshold value is determined as an object candidate, and whether an object candidate is a predetermined object is determined.01-21-2010
20100226577IMAGE PROCESSING APPARATUS AND METHOD - An apparatus includes: a unit generating a sample-texture image; a unit searching a preset search range for similar pixels and generating a texture image by assigning a pixel value of each of the similar pixels to a pixel value of a processing-target pixel in the texture image, the preset search range being included in the sample-texture image and including a position corresponding to the position of the processing-target pixel to which no pixel value has yet been assigned, the similar pixels having, around them, variation patterns similar to the pattern of pixels which are located in the texture image near the processing-target pixel and to which pixel values are already assigned; and a unit combining the texture image and a base image of the same size as the texture image to obtain a synthetic image, the base image holding shades similar to the shades of a transform-target image.09-09-2010
20100226578OBJECT DETECTING APPARATUS, AND OBJECT DETECTING METHOD - An object detecting apparatus includes a plurality of feature value calculating units that are provided for respective different features of an image and perform a process of extracting the features from an attention region in parallel; a plurality of combining units detecting combinations of the features in parallel, the plurality of combining units being provided for the respective combinations of the features included in the attention region and detecting the combinations from the features outputted from the plurality of feature value calculating units; and a plurality of identifying units that are provided corresponding to the plurality of combining units and perform in parallel a process of identifying an object based on the combinations detected by the combining units.09-09-2010
20110058747IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD AND COMPUTER READABLE-MEDIUM - There is provided an image processing apparatus. The image processing apparatus includes: an obtaining unit configured to obtain an image; a generating unit configured to generate a plurality of feature maps for a plurality of features of the image, wherein each of the feature maps corresponds to one of the features of the image; an imaging situation determining unit configured to determine an imaging situation of the image; a weighting unit configured to weight the feature maps in accordance with the imaging situation; and a detector configured to detect a region of interest from the image based on feature distributions of the weighted feature maps.03-10-2011
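A minimal sketch of weighting per-feature maps by imaging situation and combining them into a region-of-interest map; the situation-to-weight table, the feature-map names, and the detection threshold are invented for illustration:

import numpy as np

# Hypothetical weights per imaging situation, keyed by feature-map name.
SITUATION_WEIGHTS = {
    "backlit": {"luminance": 0.2, "color": 0.5, "edge": 0.3},
    "macro":   {"luminance": 0.3, "color": 0.3, "edge": 0.4},
    "default": {"luminance": 1 / 3, "color": 1 / 3, "edge": 1 / 3},
}

def region_of_interest_map(feature_maps, situation):
    # feature_maps: dict of equally sized 2-D arrays normalized to [0, 1].
    weights = SITUATION_WEIGHTS.get(situation, SITUATION_WEIGHTS["default"])
    combined = np.zeros_like(next(iter(feature_maps.values())), dtype=float)
    for name, fmap in feature_maps.items():
        combined += weights.get(name, 0.0) * fmap
    return combined

def detect_roi(feature_maps, situation, threshold=0.6):
    return region_of_interest_map(feature_maps, situation) >= threshold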
20110058746IMAGE RETRIEVAL APPARATUS, IMAGE RETRIEVAL METHOD, AND STORAGE MEDIUM - An image retrieval apparatus includes a designation unit configured to designate a query area of an image based on a user's designation operation, a display unit configured to display an area where a local feature amount is difficult to be extracted in the query area designated by the designation unit as a feature non-extractable area, and a retrieval unit configured to retrieve, based on a local feature amount extracted from an area which is not displayed as the feature non-extractable area in the query area by the display unit, image feature data with which a local feature amount and the image are associated and which is stored in a storage device.03-10-2011
20090310867BUILDING SEGMENTATION FOR DENSELY BUILT URBAN REGIONS USING AERIAL LIDAR DATA - A method for extracting a 3D terrain model for identifying at least buildings and terrain from LIDAR data is disclosed, comprising the steps of generating a point cloud representing terrain and buildings mapped by LIDAR; classifying points in the point cloud, the point cloud having ground and non-ground points, the non-ground points representing buildings and clutter; segmenting the non-ground points into buildings and clutter; and calculating a fit between at least one building segment and at least one rectilinear structure, wherein the fit yields the rectilinear structure with the fewest number of vertices. The step of calculating further comprises the steps of (a) calculating a fit of a rectilinear structure to the at least one building segment, wherein each of the vertices has an angle that is a multiple of 90 degrees; (b) counting the number of vertices; (c) rotating the at least one building segment about an axis by a predetermined increment; and (d) repeating steps (a)-(c) until a rectilinear structure with the least number of vertices is found.12-17-2009
20100209000IMAGE PROCESSING APPARATUS FOR DETECTING COORDINATE POSITION OF CHARACTERISTIC PORTION OF FACE - There is provided an image processing apparatus that is used for detecting a coordinate position of a characteristic portion of a face included in a target image.08-19-2010
20090245649Method, Program and Apparatus for Detecting Object, Computer Readable Recording Medium Storing Object Detection Program, and Printing Apparatus - An object detection method for detecting a predetermined object image from a target image. A target image is contracted to generate a contracted image, and partitioned to generate a partitioned image. The predetermined object image is detected from the image using the image and a detection frame. The detection frame and the contracted image are used to detect the predetermined object image if the detection frame is equal to or larger than a predetermined size, and the detection frame and the partitioned image are used to detect the predetermined object image if the detection frame is smaller than the predetermined size.10-01-2009
20090324088Method for detecting layout areas in a video image and method for generating an image of reduced size using the detection method - The invention relates to an automatic detection method, in a source image, of at least one area called a layout area comprising at least one layout, such as a logo and/or a score. According to the invention, the layout areas of a source image are detected using the salience of source image pixels. The detection is carried out in specific areas of the source image saliency map, usually in the areas corresponding to the corners of the image or to the bands in the upper part and lower part of the image. In these areas, two points are sought that have maximum salience values and are distant by at least p points from each other. These two points correspond to the beginning and end of a layout area. The window bounding these two points then corresponds to a layout area.12-31-2009
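The two-point search described in this abstract can be illustrated with a short Python/numpy sketch. It is only an illustration under stated assumptions: the saliency map is already cropped to one corner or band region, Euclidean distance is used for the separation constraint p, and the function name detect_layout_area is hypothetical rather than anything specified by the patent.

    import numpy as np

    def detect_layout_area(saliency, p):
        """Find two high-salience points at least p apart in a saliency-map
        region and return the window (y0, x0, y1, x1) bounding them."""
        # First point: global maximum of the region's saliency.
        y1, x1 = np.unravel_index(np.argmax(saliency), saliency.shape)
        # Mask out everything closer than p to the first point.
        yy, xx = np.mgrid[0:saliency.shape[0], 0:saliency.shape[1]]
        masked = np.where((yy - y1) ** 2 + (xx - x1) ** 2 >= p ** 2,
                          saliency, -np.inf)
        # Second point: maximum of what remains.
        y2, x2 = np.unravel_index(np.argmax(masked), masked.shape)
        # Layout area: the axis-aligned window bounding the two points.
        return (min(y1, y2), min(x1, x2), max(y1, y2), max(x1, x2))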
20090324089IMAGING SYSTEM, IMAGE PROCESSING METHOD, AND IMAGE PROCESSING PROGRAM - A favorable noise reduction process that is optimized for capturing conditions and that prevents the occurrence of residual image components is enabled. Provided is an imaging system including: a first extraction section that extracts a local region that includes a pixel of interest from an image signal; a second extraction section that extracts, from another image signal captured at a different time, a local region located at almost the same position as said local region; a first noise reduction section that performs a noise reduction process by using the local regions; a noise estimation section that estimates an amount of noise included in the pixel of interest; a residual image detection section that detects a residual image component included in the local region based on the estimated amount of noise; and a second noise reduction section that performs a noise reduction process based on the detected residual image component.12-31-2009
20100296736IMAGE SEARCH APPARATUS AND METHOD THEREOF - An image search apparatus, which determines a similarity between an input query image and a registered comparison destination image and searches an image similar to the query image, extracts a plurality of corresponding pairs of feature points in two images based on feature points selected from the both images. A coordinate transformation coefficient to execute a coordinate transformation process is decided so that coordinates of the feature point of one of the two images match coordinates of the feature point of the other image, in relation to each pair. Only if an amount of transformation of coordinates satisfies the constraint conditions designated in advance, the coordinate transformation process using the coordinate transformation coefficient is executed in relation to the plurality of pairs of feature points, and coordinates of the feature points after the transformation of one image are compared with coordinates of the corresponding feature points of the other image.11-25-2010
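The constrained transformation step in the preceding abstract can be sketched as follows; this is not the patented method, and the choice of a similarity transform, the least-squares fit, and the particular constraint values (max_scale_dev, max_rot_deg, tol) are all assumptions made for illustration.

    import numpy as np

    def similarity_transform(src, dst):
        """Least-squares fit of u = a*x - b*y + tx, v = b*x + a*y + ty."""
        A, rhs = [], []
        for (x, y), (u, v) in zip(src, dst):
            A.append([x, -y, 1, 0]); rhs.append(u)
            A.append([y,  x, 0, 1]); rhs.append(v)
        (a, b, tx, ty), *_ = np.linalg.lstsq(np.array(A), np.array(rhs), rcond=None)
        return a, b, tx, ty

    def constrained_match_score(src, dst, max_scale_dev=0.2, max_rot_deg=15, tol=3.0):
        """Apply the fitted transform only if its amount of transformation
        satisfies the constraints, then count consistent point pairs."""
        a, b, tx, ty = similarity_transform(src, dst)
        scale = np.hypot(a, b)
        rot = np.degrees(np.arctan2(b, a))
        if abs(scale - 1.0) > max_scale_dev or abs(rot) > max_rot_deg:
            return 0
        x, y = np.array(src, dtype=float).T
        u = a * x - b * y + tx
        v = b * x + a * y + ty
        err = np.hypot(u - np.array(dst)[:, 0], v - np.array(dst)[:, 1])
        return int(np.sum(err < tol))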
20100310176Apparatus and Method for Measuring Depth and Method for Computing Image Defocus and Blur Status - An apparatus and a method for measuring the depth of an object in a scene and a method for computing image defocus and blur status are provided. An image analysis unit receives a plurality of reference blurred images, analyzes the reference blurred images, and produces reference grey-scale distribution data, where the reference blurred images correspond to a plurality of reference depths, respectively. A blur comparison module produces a blur model according to the reference grey-scale distribution data and the corresponding reference depths. The image analysis unit receives a target blurred image, analyzes the target blurred image, and produces and transmits target grey-scale distribution data to the blur comparison module for comparing the target grey-scale distribution data according to the blur model, and producing depth information. Moreover, the present invention further produces the corresponding blur status data, used in defocus and blur computations, according to the defocused and blurred image.12-09-2010
20130142435CAMERA SYSTEM AND METHOD FOR TAKING PHOTOGRAPHS THAT CORRESPOND TO USER PREFERENCES - A database of user preferences for a high quality picture is maintained. Preferences may be generated over time by tracking attributes of pictures that the user has deleted or failed to select for storage. When the camera is in preview mode, the camera may automatically capture image data for one or more pictures as a background operation.06-06-2013
20110170785Image processing apparatus and image processing method - An image processing apparatus includes a face detection processing unit that reduces an image of a frame to be processed to a first level among several reduction levels to generate a reduced image of the frame to be processed, with one of frames included in a moving image as the frame to be processed, and compares the reduced image generated by the reducing unit and the learning data to extract a face image from the reduced image. When the extraction of the face image from the frame to be processed is ended, the face detection processing unit updates the frame to be processed to a next frame subsequent to the frame to be processed and generates a reduced image that is reduced to another level other than reduction levels adjoining the first level.07-14-2011
20110170784IMAGE REGISTRATION PROCESSING APPARATUS, REGION EXPANSION PROCESSING APPARATUS, AND IMAGE QUALITY IMPROVEMENT PROCESSING APPARATUS - [Problem] An object of the present invention is to provide an image registration processing apparatus that is capable of performing robust and high-accuracy registration processing with respect to an entire image between images including multiple motions.07-14-2011
20110002547IMAGE PROCESSING DEVICE AND IMAGE PROCESSING METHOD - There are provided an image processing device, an image processing method and a program that generate an electronic document in a format specification that is optimal for many purposes of electronic documents. A table region is discriminated from an input image, and a table structure in the table region is analyzed. A table line determination is made on the analyzed table structure as to whether or not each ruled line is representable in the format, and ruled line information and a vector line object are created according to the determination result. The created ruled line information and vector line object are used to generate the electronic document.01-06-2011
20090034848IMAGE PROCESSING APPARATUS, IMAGE FORMING APPARATUS, IMAGE PROCESSING SYSTEM, AND IMAGE PROCESSING METHOD - Line images in horizontal and vertical directions are detected from input image data, and an intersection of the line images is calculated. The calculated intersection is regarded as a feature point of the input image data. Thus, it is possible to easily and promptly extract, from image data, a feature point that allows the image data to be specified appropriately.02-05-2009
20110019920METHOD, APPARATUS, AND PROGRAM FOR DETECTING OBJECT - A region where a detecting target object exists is extracted by comparing an evaluated value, which indicates the probability that the detecting target object exists, with a threshold. The process produces a differential image between different frames among plural frames constituting a continuous image, sets the average value of an averaging region extended around each pixel of the differential image as a new value of that pixel, extracts search pixels by comparing the new values with the threshold, and obtains the evaluated value by applying a filter, which acts on a search region of an image, to a search region extended around each search pixel on the differential image.01-27-2011
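The frame-differencing and averaging stage of this scheme can be sketched in a few lines of Python/numpy. The filter that produces the final evaluated value is not specified in the abstract, so this hedged sketch stops at extracting candidate search pixels; the window half-size and threshold are illustrative assumptions.

    import numpy as np

    def candidate_search_pixels(frame_prev, frame_curr, avg_half=2, thresh=10.0):
        """Difference two frames, replace each pixel by the mean of its
        surrounding averaging region, and keep pixels whose averaged
        difference exceeds a threshold as search pixels."""
        diff = np.abs(frame_curr.astype(float) - frame_prev.astype(float))
        h, w = diff.shape
        averaged = np.zeros_like(diff)
        for y in range(h):
            for x in range(w):
                y0, y1 = max(0, y - avg_half), min(h, y + avg_half + 1)
                x0, x1 = max(0, x - avg_half), min(w, x + avg_half + 1)
                averaged[y, x] = diff[y0:y1, x0:x1].mean()
        return np.argwhere(averaged > thresh)   # (row, col) of search pixels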
20120134593METHOD AND SYSTEM FOR IMAGE-BASED IDENTIFICATION - The present invention may provide a method for image-based identification. The method may include providing a digital photo of an unidentified item; transmitting, over a network, the digital photo to an identification service; in response to transmitting the digital photo, receiving, over the network, item information from the identification service, wherein the item information includes textual identification information about the item; and displaying the textual identification information.05-31-2012
20110243453Information processing apparatus, information processing method, and program - There is provided an information processing apparatus including an analysis section which analyzes, based on image information extracted from image data, a theme per image data group including a plurality of pieces of the image data, and a selection section which selects a combination of predetermined processing which is stored in association with the theme and the image data group based on the theme.10-06-2011
20110243454VEHICLE POSITION RECOGNITION SYSTEM - A vehicle position recognition system calculates an estimated position of a vehicle based on satellite positioning and dead reckoning navigation, and calculates a basic error range in which there is a possibility that the vehicle exists. The system calculates an estimated error amount regarding a directional error factor. The directional error factor is an error factor that tends to cause an error in a specific direction with respect to a vehicle traveling direction. The estimated error amount is an estimated amount of the error that is caused by the directional error factor. The system adjusts the basic error range based on (1) a direction in which the error tends to be caused by the directional error factor and (2) the estimated error amount.10-06-2011
20120243789FAST IMAGE CLASSIFICATION BY VOCABULARY TREE BASED IMAGE RETRIEVAL - Systems and methods are disclosed to categorize images by detecting local features for each image; applying a tree structure to index local features in the images; and extracting a rank list of candidate images with category tags based on a tree indexing structure to estimate a label of a query image.09-27-2012
20120243788DYNAMIC RADIAL CONTOUR EXTRACTION BY SPLITTING HOMOGENEOUS AREAS - Systems and methods for extracting a radial contour around a given point in an image include providing an image including a point about which a radial contour is to be extracted. A plurality of directions around the point and a plurality of radius lengths for each direction are provided. Local costs are determined for all radius lengths for each direction by comparing texture variances at each radius length with the texture variance at a further radius length. A radius length is determined, using a processor, for each direction based on the accumulated value of the local costs to provide a radial contour.09-27-2012
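A simplified Python/numpy sketch of the per-direction cost idea follows. It is an illustration only: the accumulated-cost optimization described in the abstract is simplified here to picking the radius with the largest local variance change, and the direction count, radius set, and patch size are assumptions.

    import numpy as np

    def _patch_var(image, y, x, win=3):
        """Texture variance of a small patch centred at (y, x)."""
        y, x = int(round(y)), int(round(x))
        if y < 0 or x < 0 or y >= image.shape[0] or x >= image.shape[1]:
            return 0.0
        patch = image[max(0, y - win):y + win + 1, max(0, x - win):x + win + 1]
        return float(patch.var()) if patch.size else 0.0

    def radial_contour(image, center, n_dirs=16, radii=tuple(range(5, 60, 5))):
        """For each direction around `center`, choose the radius where the
        local texture variance changes most versus the next radius out."""
        cy, cx = center
        radii = list(radii)
        contour = []
        for k in range(n_dirs):
            theta = 2 * np.pi * k / n_dirs
            dy, dx = np.sin(theta), np.cos(theta)
            costs = [abs(_patch_var(image, cy + r * dy, cx + r * dx)
                         - _patch_var(image, cy + radii[i + 1] * dy, cx + radii[i + 1] * dx))
                     for i, r in enumerate(radii[:-1])]
            best = radii[int(np.argmax(costs))]
            contour.append((cy + best * dy, cx + best * dx))
        return contour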
20110142350Image Analysis Method And System - A method and system for analyzing an image made up of a plurality of pixels includes identifying a subset of the pixels, and for each identified pixel a local patch of pixels surrounding the identified pixel is defined and the pixels in the local patch are grouped into bins. The local patch is divided into a plurality of sub-patches, and for each sub-patch the method includes determining how many pixels in each sub-patch fall into each of the bins to create an intermediate vector, and the intermediate vectors are concatenated to form a feature vector and describe the local patch.06-16-2011
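The patch-binning descriptor described above maps naturally to a short Python/numpy sketch. The use of intensity-value bins, the 16-pixel patch, and the 4x4 sub-patch grid are assumptions for illustration; the abstract does not fix what quantity is binned.

    import numpy as np

    def patch_descriptor(image, y, x, patch=16, grid=4, bins=8):
        """Take a patch x patch window around (y, x), split it into a
        grid x grid array of sub-patches, histogram each sub-patch into
        `bins` bins, and concatenate the histograms into a feature vector."""
        half = patch // 2
        window = image[y - half:y + half, x - half:x + half].astype(float)
        if window.shape != (patch, patch):
            raise ValueError("patch extends outside the image")
        step = patch // grid
        parts = []
        for gy in range(grid):
            for gx in range(grid):
                sub = window[gy * step:(gy + 1) * step, gx * step:(gx + 1) * step]
                # Assumes 8-bit intensities; each histogram is an intermediate vector.
                hist, _ = np.histogram(sub, bins=bins, range=(0, 256))
                parts.append(hist)
        return np.concatenate(parts)   # concatenated feature vector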
20100183226BETA-SHAPE: COMPACT STRUCTURE FOR TOPOLOGY AMONG SPHERES DEFINING BLENDING SURFACE OF SPHERE SET AND METHOD OF CONSTRUCTING SAME - There is provided a beta-shape, which is a compact structure for topology among spheres defining a blending surface of a sphere set. There is also provided a method of constructing the beta-shape, comprising: acquiring a Voronoi diagram of spheres; searching for partially accessible Voronoi edges; and obtaining faces of the beta-shape from the partially accessible Voronoi edges. Further, there is provided a method of utilizing the beta-shape to recognize pockets, which comprises the steps of acquiring the beta-shapes, and recognizing the pockets from the beta-shapes.07-22-2010
20100014759EYE DETECTING DEVICE, EYE DETECTING METHOD, AND PROGRAM - An eye part detecting device has an image input section.01-21-2010
20100246969COMPUTATIONALLY EFFICIENT LOCAL IMAGE DESCRIPTORS - Described is a technology in which an image (or image patch) is processed into a highly discriminative and computationally efficient image descriptor that has a low storage footprint. Feature vectors are generated from an image (or image patch), and further processed via a polar Gaussian pooling approach (a DAISY configuration) into a descriptor. The descriptor is normalized, and processed with a dimension reduction component and a quantization component (based upon dynamic range reduction) into a finalized descriptor, which may be further compressed. The resulting descriptors have significantly reduced error rates and significantly smaller sizes than other image descriptors (such as SIFT-based descriptors).09-30-2010
20110085734ROBUST VIDEO RETRIEVAL UTILIZING VIDEO DATA - Techniques for determining whether two video signals match by extracting features from a first and a second video signal, cross-correlating the features to provide a cross-correlation score at each of a number of time lags, and finally determining a similarity score based on the cross-correlation scores.04-14-2011
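A minimal Python/numpy sketch of lagged cross-correlation scoring is given below. It assumes each video has already been reduced to a one-dimensional per-frame feature sequence and uses normalized correlation with the maximum over lags as the similarity score; none of these choices are taken from the patent.

    import numpy as np

    def similarity_score(feat_a, feat_b, max_lag=25):
        """Normalized cross-correlation of two 1-D feature sequences at every
        lag in [-max_lag, max_lag]; returns the best score and all lag scores."""
        a = (feat_a - feat_a.mean()) / (feat_a.std() + 1e-9)
        b = (feat_b - feat_b.mean()) / (feat_b.std() + 1e-9)
        scores = {}
        for lag in range(-max_lag, max_lag + 1):
            if lag >= 0:
                x, y = a[lag:], b[:len(b) - lag]
            else:
                x, y = a[:len(a) + lag], b[-lag:]
            n = min(len(x), len(y))
            if n > 1:
                scores[lag] = float(np.dot(x[:n], y[:n]) / n)
        return max(scores.values()), scores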
20110249900METHODS AND DEVICES THAT USE AN IMAGE-CAPTURED POINTER FOR SELECTING A PORTION OF A CAPTURED IMAGE - An electronic device includes a camera that is configured to generate a visual data signal that corresponds to dynamically captured graphic content that includes an image and a pointer that is operable to communicate a selection characteristic of the image. A signal processor receives the visual data signal and is operable to identify a portion of the image in the dynamically captured graphic content responsive to the selection characteristic communicated by the pointer.10-13-2011
20110075935METHOD TO MEASURE LOCAL IMAGE SIMILARITY BASED ON THE L1 DISTANCE MEASURE - A method of adaptive local image similarity measurement based on the L1 distance measure.03-31-2011
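The basic L1-based local similarity can be sketched in a few lines of Python/numpy. The adaptive aspect mentioned in the abstract is omitted; a fixed patch size and the mapping of distance to similarity are assumptions.

    import numpy as np

    def l1_patch_similarity(img_a, img_b, y, x, half=4):
        """Local similarity of two registered images around (y, x), using the
        mean absolute (L1) difference between the two patches."""
        pa = img_a[y - half:y + half + 1, x - half:x + half + 1].astype(float)
        pb = img_b[y - half:y + half + 1, x - half:x + half + 1].astype(float)
        l1 = np.abs(pa - pb).mean()      # mean absolute difference
        return 1.0 / (1.0 + l1)          # map distance to a similarity in (0, 1]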
20110069889METHOD AND DEVICE FOR THE INVARIANT-AFFINE RECOGNITION OF SHAPES - A method for the recognition of objects in at least one digital image includes: a) simulating from the digital image a plurality of digital rotations and at least two digital tilts different from 1 in order to develop a simulated image for each rotation-tilt pair; and b) applying an algorithm generating values that are invariant in translation, rotation and zoom onto the simulated images in order to determine so-called SIF (scale invariant features) local characteristics used for recognising objects. The SIFT method can be used in step b.03-24-2011
20100303360IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD AND RECORDING MEDIUM - A foreground pixel block extraction process section divides input image data into a plurality of pixel blocks, and classifies each pixel block as a uniform density pixel block or a foreground pixel block. By performing the above process, the foreground pixel block extraction process section extracts foreground pixel blocks. A foreground color calculation process section calculates the foreground colors from the extracted foreground pixel blocks as color information. A labeling process section extracts connected foreground pixel block areas as foreground pixel areas by giving the same label to a plurality of adjacent foreground pixel blocks. From these processing results, a foreground pixel extraction process section calculates a representative color for each foreground pixel area, and extracts pixels having pixel values close to the representative color as foreground pixels.12-02-2010
20100266209IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, AND PROGRAM - The present invention provides an image processing apparatus which is able to shorten processing time and includes an image inputting unit that inputs an image, a region division unit that generates region information by dividing an input image input by the image inputting unit into regions having a plurality of different types of attributes, including a frame region attribute, a non-frame region processing unit that obtains data of respective regions other than frame regions by applying attribute-specific processing for respective regions, excluding frame regions, included in the input image on the basis of the input image and the region information, and executes processing of filling of the processed regions, and a frame region processing unit that executes reduction processing and vectorization processing for the frame region according to the region size of the frame region.10-21-2010
20120201465IMAGE PROCESSING APPARATUS - An image processing apparatus may include a condition setting unit that sets a specified image capturing time, a specified image capturing location, and specified image capturing composition, an image capturing time determination unit that extracts image data from among a plurality of image data based on additional information included in the image data, an image capturing location determination unit that extracts the image data from among the plurality of image data based on the additional information, a composition determination unit that extracts the image data from among the plurality of image data based on the additional information, and an order setting unit that generates information indicating order of the image data consistent with given conditions based on the additional information for the image data extracted by all of the image capturing time determination unit, the image capturing location determination unit, and the composition determination unit.08-09-2012
20090022404Efficient detection of constant regions of a region - A technique that improves image analysis efficiency by reducing the number of computations needed to detect constant regions. Constant region detection according to the present techniques includes determining whether an image analysis window at a current position contains a constant region by analyzing a new line of pixels in the image analysis window if a pixel at a predetermined location in the image analysis window in the current position has a value equal to a pixel at the predetermined location from a previous position of the image analysis window. Analyzing only the new line of pixels saves the computational time that would otherwise go into analyzing all of the pixels in the image analysis window.01-22-2009
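The incremental window check described in this abstract can be sketched in Python/numpy. The sketch assumes the predetermined reference location is the window's top-left pixel and additionally requires that the previous window was found constant before the shortcut is taken; both details are assumptions made so the shortcut remains correct.

    import numpy as np

    def constant_windows(image, win=8):
        """Slide a win x win window along each row; when the reference pixel
        matches the previous window's reference pixel and that window was
        constant, test only the newly entered column against that value."""
        h, w = image.shape
        hits = []
        for y in range(0, h - win + 1):
            prev_constant, prev_ref = False, None
            for x in range(0, w - win + 1):
                ref = image[y, x]
                if prev_constant and ref == prev_ref:
                    # Only the new (rightmost) column can break constancy.
                    constant = bool(np.all(image[y:y + win, x + win - 1] == ref))
                else:
                    constant = bool(np.all(image[y:y + win, x:x + win] == ref))
                if constant:
                    hits.append((y, x))
                prev_constant, prev_ref = constant, ref
        return hits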
20090022402Image-resolution-improvement apparatus and method - Provided is an image-resolution-improvement apparatus and method which can increase the resolution of an input image at a high magnification to thereby obtain a high-quality final image. The apparatus includes a textured-region-detection unit to detect a texture region in an input image; and a final-image-generation unit to synthesize a first intermediate image and a second intermediate image, which are obtained by applying different interpolation techniques to the texture region and a non-texture region excluding the texture region and generating a final image.01-22-2009
20120201464COMPUTER READABLE MEDIUM, IMAGE PROCESSING APPARATUS, AND IMAGE PROCESSING METHOD - A computer readable medium stores a program causing a computer to execute a process for image processing. The process includes: calculating, on the basis of image feature information of a plurality of image areas each set with a classification information item, a probability distribution of the image feature information for each classification information item; acquiring a target image; calculating an evaluation value of each of pixels included in the target image relating to a specified classification information item, on the basis of the image feature information of an image area including the pixel and the probability distribution of the image feature information calculated for the specified classification information item; and extracting, from the target image, an image area relating to the specified classification information item, on the basis of the evaluation value calculated for each of the pixels included in the target image.08-09-2012
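A small Python/numpy sketch of the per-class evaluation idea follows. A per-dimension Gaussian stands in for the "probability distribution of the image feature information"; the actual distribution model, feature layout, and threshold are not taken from the patent and are assumptions for illustration.

    import numpy as np

    def fit_class_models(features_by_class):
        """features_by_class: {label: (n_samples, n_features) array} gathered
        from image areas labelled with each classification information item."""
        return {label: (f.mean(axis=0), f.var(axis=0) + 1e-6)
                for label, f in features_by_class.items()}

    def pixel_scores(pixel_features, model, label):
        """Log-likelihood of each pixel's feature vector under one class model.
        `pixel_features` has shape (H, W, n_features)."""
        mean, var = model[label]
        z = (pixel_features - mean) ** 2 / var
        return -0.5 * (z + np.log(2 * np.pi * var)).sum(axis=-1)

    def extract_class_region(pixel_features, model, label, threshold):
        """Binary mask of pixels whose evaluation value passes the threshold."""
        return pixel_scores(pixel_features, model, label) > threshold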
20090148049RECORDING MEDIUM FOR RECORDING LOGICAL STRUCTURE MODEL CREATION ASSISTANCE PROGRAM, LOGICAL STRUCTURE MODEL CREATION ASSISTANCE DEVICE AND LOGICAL STRUCTURE MODEL CREATION ASSISTANCE METHOD - A method for assisting in the creation of a logical structure model is provided. The logical structure model stores, from an image in which character strings associated respectively with a plurality of logical elements constituting a logical structure are described, the logical elements, the character strings associated with the logical elements, and the logical structure. Character strings in an input image, and the logical structure among the character strings in the input image, are extracted. A logical element is selected from among the plurality of logical elements according to the degrees of similarity between the extracted character strings and the character strings associated respectively with the plurality of logical elements stored in the logical structure model. A character string associated with the selected logical element, and a character string in the input image associated with that logical element based on the logical structure among the extracted character strings in the input image, are then extracted.06-11-2009
20090110290IMAGE FORMING APPARATUS - An image forming apparatus has an image data reading section that reads image data formed on a document, an image processing section that separates regions where effective information is present from the read image data so as to determine, for every separated region, a shape of a see-through preventing pattern and a position at which the see-through preventing pattern is formed, and an image printing section that prints the determined see-through preventing pattern at a part on a back surface of a recording sheet on which the document is printed, on the basis of the information of the determined position at which the see-through preventing pattern is formed, the part corresponding to any of the regions where the effective information of the document is present.04-30-2009
20110158541IMAGE PROCESSING DEVICE, IMAGE PROCESSING METHOD AND PROGRAM - An image processing device includes a texture extraction unit to extract a texture component of a G corrected image in which blur or defocus of a structure component of a G image including a G component of an input image is corrected, a mask generation unit to generate a mask image in which the synthesis amount of the texture component of the G corrected image to an R corrected image and a B corrected image is decreased for a region in which at least one of correlation between a variation of the G component and a variation of the R component or correlation between the variation of the G component and a variation of the B component is weak, and a synthesis unit to synthesize the texture component of the G corrected image to the R corrected image and the B corrected image using the mask image.06-30-2011
20110158540PATTERN RECOGNITION METHOD AND PATTERN RECOGNITION APPARATUS - A pattern recognition apparatus that recognizes a data attribute of input data calculates correlation values of feature quantities of corresponding local patterns between the input data and dictionary data for each of a plurality of dictionary data prepared for each data attribute, combines, for each data attribute, the calculated correlation values of local patterns of each dictionary datum to acquire a set of correlation values of each data attribute, integrates correlation values included in each set of correlation values of each data attribute to calculate a similarity of the input data for each data attribute, and identifies the data attribute of the input data based on the calculated similarity.06-30-2011
20100322520IMAGE-READING DEVICE - An image-reading device includes a reading unit, a displaying unit, a recognition target region setting unit, a recognition target region adding unit, and a character recognizing unit. The recognition target region setting unit sets a recognition target region within the displayed image data to recognize characters. The recognition target region adding unit adds a new recognition target region based on the recognition target region set by the recognition target region setting unit. The character recognizing unit performs character recognition in the recognition target regions.12-23-2010
20110164821DESIGNATING CORRIDORS TO PROVIDE ESTIMATES OF STRUCTURES - In particular embodiments, analyzing data includes receiving sensor data generated in response to sensing one or more structures. The structural features of the sensor data are identified. Each structural feature is represented by one or more vectors. A score matrix describing relationships among the vectors is generated. Candidate corridors are identified from at least some of the vectors according to the score matrix. One or more candidate corridors are designated as designated corridors. Each designated corridor comprises an opening defined by at least two structural features. A layout of the structures is generated from the structural features and the designated corridors.07-07-2011
20120148162JOINT SEMANTIC SEGMENTATION OF IMAGES AND SCAN DATA - Systems, methods, and apparatus are described that improve computer vision analysis in the field of semantic segmentation. With images accompanied by scan data, both two-dimensional and three-dimensional image information is employed for joint segmentation. Through the established correspondence between image data and scan data, the two-dimensional and three-dimensional information respectively associated therewith is integrated. Using trained random forest classifiers, the probability of each pixel in images belonging to different object classes is predicted. With the predicted probability, optimization of the labeling of images and scan data is performed by integrating multiple cues in a Markov random field.06-14-2012
20120148161APPARATUS FOR CONTROLLING FACIAL EXPRESSION OF VIRTUAL HUMAN USING HETEROGENEOUS DATA AND METHOD THEREOF - Disclosed are an apparatus for controlling facial expression of a virtual human using heterogeneous information and a method using the same. The apparatus for controlling expression of a virtual human using heterogeneous information includes: an extraction module extracting feature data from input image data and sentence or voice data; a DB construction module classifying the extracted feature data into a set of emotional expressions and an emotional expression category by using a set of pre-constructed index data on heterogeneous data; a recognition module transferring the classified emotional expression category; and a viewing module viewing the images and the sentence or voice of the virtual human according to the emotional expression category. By this configuration, the exemplary embodiment of the present invention can delicately express the emotion of a virtual human and accordingly increase recognition for emotional classification.06-14-2012
20120148160LANDMARK LOCALIZATION FOR FACIAL IMAGERY - A process and system for facial landmark detection of a face in a scene of an image include determining face dimensions from the image, identifying regions of search for one or more facial landmarks using the face dimensions, and running a cascaded classifier and a strong classifier tailored to detect different types of facial landmarks to determine one or more respective locations of the facial landmarks. According to another example embodiment, the facial landmarks are used for face mining or face recognition, and the cascaded classifier is performed using a multi-staged AdaBoost classifier, where detections from multiple stages are utilized to determine the best location of the landmark. According to another example embodiment, the strong classifier is a support vector machine (SVM) classifier with input features processed by a principal component analysis (PCA) of the landmark subimage.06-14-2012
20100183228SPECIFYING POSITION OF CHARACTERISTIC PORTION OF FACE IMAGE - Image processing apparatus and methods are provided for specifying the positions of predetermined characteristic portions of a face image. A method includes determining an initial disposition of characteristic points in a target face image, applying a transformation to at least one of the target face image or the reference face image, and updating the disposition of the characteristic points in response to a comparison between at least one of the transformed target face image and the reference face image or the target face image and the transformed reference face image.07-22-2010
20080219560IMAGE PROCESSING METHOD, IMAGE PROCESSING APPARATUS, AND IMAGE FORMING APPARATUS - Based on an area detection signal, a layer separation section outputs a text component of a document, to a feature point calculating section, and generates four layers from a pictorial component of the document to output the generated layers to the feature point calculating section. The feature point calculating section sums feature points extracted for each component. A features calculating section calculates a hash value based on the feature points. A vote processing section searches a hash table based on the hash value, and votes for a reference image associated with the hash value. Based on the voting result, a similarity determination processing section determines whether the document image is similar to any reference image, and then outputs the determination result. Thus, even if the document contains a photograph, accurate matching can be performed.09-11-2008
20110019919AUTOMATIC MODIFICATION OF WEB PAGES - Systems and methods for quickly and easily getting information about, or included in, a paper document into a public or private digital page. One embodiment of an example system includes a scanner that generates scan information from at least a portion of a paper document and a processing system that receives the generated scan information from the scanner, accesses a database of digital documents, searches the database based on the received scan information, locates a digital document corresponding to the paper document, and sends either the digital content or a hyperlink to the digital content to a predetermined web page.01-27-2011
20100119157IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD AND COMPUTER READABLE MEDIUM - An image processing apparatus includes a characteristic region detecting section that detects a plurality of characteristic regions in an image, a condition storing section that stores thereon assignment conditions differing in accordance with characters of characteristic regions, so that different compression strengths are assigned in accordance with the characters of the characteristic regions, a compressing section that respectively compresses a plurality of characteristic region images which are images of the plurality of characteristic regions, and a compression control section that controls compression strengths at which the compressing section respectively compresses the plurality of characteristic region images in accordance with characters of the plurality of characteristic regions, with reference to the conditions stored on the condition storing section. Also provided is an image processing apparatus that includes an encoding manner storing section that stores encoding manners in association with quantities of characteristics of objects, a characteristic region detecting section that detects a plurality of characteristic regions from an image, and a compressing section that compresses the images of the plurality of characteristic regions by encoding manners stored in the encoding manner storing section in association with the quantities of characteristics of objects included in the plurality of characteristic regions respectively.05-13-2010
20110081086NOISE SUPPRESSION METHOD USING MULTIPLE DIGITAL IMAGES - A noise suppression method using multiple digital images performs a de-noising process with the multiple digital images. First, a feature weighting procedure and an image feature compensation of a target pixel are performed on each digital image, and then a cross reference is performed on the multiple continuous or similar digital images to suppress noise for the target pixel.04-07-2011
20110052077RESOLUTION INCREASING APPARATUS AND RESOLUTION INCREASING METHOD - A resolution increasing apparatus includes: a candidate specifying section configured to sequentially set pixels of notable points one by one from multiple pixels in one image; a matching error calculating section and the like configured to extract a corresponding point B corresponding to a pixel of a notable point A from a corresponding block Q having a pixel value change pattern identical or similar to a pixel value change pattern included in a notable block P that includes the pixel of the notable point A; a corresponding point increasing section configured to generate a new corresponding point C within an interval of a line segment obtained by rectilinearly connecting the notable point A and the corresponding point B; and a high-resolution pixel value calculating section configured to calculate a pixel value of a high-resolution image from the notable point A, the corresponding point B and the new corresponding point C.03-03-2011
20100215275Test procedure for measuring the geometric features of a test specimen - The test procedure for measuring a geometric feature of a test specimen employs a replicating compound to obtain a casting with a negative image of the geometric feature followed by forming a protective covering over the casting from a replicating compound having a contrasting color. The casting and protective covering unit is cut to obtain a test piece and a flat bed scanner is used to scan the profile of the test piece and obtain an electronic two-dimensional image of the profile for analysis.08-26-2010
20110176733IMAGE RECOGNITION METHOD - The present invention provides an image recognition method for recognizing a plurality of objects in an image, wherein each object is composed of a plurality of image segments. The image recognition method includes the steps of: sequentially acquiring every pixel of the image; identifying a start point of a newly detected image segment; recording information of the newly detected image segment pixel-by-pixel from the start point; identifying an end point of the newly detected image segment; recognizing an object to which the newly detected image segment belongs according to the start point and the end point of the newly detected image segment; and identifying an invalid object or a merged object thereby releasing the data space thereof.07-21-2011
20100260423APPARATUS AND METHOD FOR PROCESSING IMAGE - A template representative of an image of a human face is provided. At least one of the template and image data is rotated to adjust a relative angle between an original orientation of the template and an original orientation of the image data, so as to exclude an angle range including 180 degrees. A matching between a part of the image data and the template is examined to identify a region in the image data containing an image of a human face. The image data is corrected in accordance with a condition of the image of the human face.10-14-2010
20100166318ADAPTIVE PARTIAL CHARACTER RECOGNITION - A method and system for recognizing a character affected by noise or an obstruction are disclosed. After receiving an image with characters, a character affected by noise or an obstruction is determined. Then, the areas in the character affected by the noise or obstruction are precisely located. Templates representing every possible character in the image are updated by removing the areas equivalent to the affected areas of the character. Then, the character is classified into a template among the updated templates by finding the template having the highest number of pixels matching the character.07-01-2010
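The masked template matching described above can be sketched briefly in Python/numpy. The sketch assumes binary character and template arrays of identical shape and a boolean noise mask; these representations, like the function name, are illustrative assumptions rather than the patented implementation.

    import numpy as np

    def classify_partial_character(char_img, noise_mask, templates):
        """Ignore the noisy/obstructed pixels in both the character and every
        template, then pick the template with the most matching pixels.
        `templates` maps a label to a binary array shaped like `char_img`;
        `noise_mask` is True where the character is affected by noise."""
        valid = ~noise_mask                  # pixels usable for matching
        best_label, best_score = None, -1
        for label, tmpl in templates.items():
            matches = int(np.sum((char_img == tmpl) & valid))
            if matches > best_score:
                best_label, best_score = label, matches
        return best_label, best_score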
20100166317METHOD, APPARATUS AND COMPUTER PROGRAM PRODUCT FOR PROVIDING FACE POSE ESTIMATION - A method for providing face pose estimation for face detection may include utilizing a selected portion of classifiers in detectors to determine coarse pose information for a candidate face in an image, determining fine pose information for the candidate face based at least in part on the determined coarse pose information, and employing another portion of the classifiers in the detectors to perform face detection based at least in part on the fine pose information to determine whether the candidate face corresponds to a face. An apparatus and computer program product corresponding to the method are also provided.07-01-2010
20100158387SYSTEM AND METHOD FOR REAL-TIME FACE DETECTION USING STEREO VISION - A system and a method for detecting a face are provided. The system includes a vision processing unit and a face detection unit. The vision processing unit calculates distance information using a plurality of images including a face pattern, and discriminates between a foreground image including the face pattern and a background image not including the face pattern, using the distance information. The face detection unit scales the foreground image according to the distance information, and detects the face pattern from the scaled foreground image.06-24-2010
20110188759Method and System of Pre-Analysis and Automated Classification of Documents - Automatic classification of different types of documents is disclosed. An image of a form or document is captured. The document is assigned to one or more type definitions by identifying one or more objects within the image of the document. A matching model is selected via identification of the document image. In the case of multiple identifications, a profound analysis of the document type is performed—either automatically or manually. An automatic classifier may be trained with document samples of each of a plurality of document classes or document types where the types are known in advance or a system of classes may be formed automatically without a priori information about types of samples. An automatic classifier determines possible features and calculates a range of feature values and possible other feature parameters for each type or class of document. A decision tree, based on rules specified by a user, may be used for classifying documents. Processing, such as optical character recognition (OCR), may be used in the classification process.08-04-2011
20110188758IMAGE PROCESSING DEVICE AND METHOD, AND PROGRAM THEREFOR - There is provided an image processing device that specifies a region including a specific subject on each input image of a plurality of continuous frames. The image processing device includes: subject map generation means that, from feature maps corresponding to features of respective pixels of the input image and representing feature amounts in respective regions of the input image, selects one feature amount of any of the feature maps for each pixel so as to thereby generate a subject map representing similarities of the respective regions of the input image to the subject; and subject region specification means that, on the basis of the subject map, specifies a subject region, which is a region most similar to the subject, in the subject map so as to thereby specify a region which includes the subject on the input image.08-04-2011
20100027890IMAGE INFORMATION PROCESSING METHOD AND APPARATUS - An eye-gaze direction calculation unit calculates the eye-gaze direction in an input facial image of a person by carrying out prescribed operation processing based on iris shape data output from an iris detection unit and face-direction measurement data output from a face-direction measurement unit. The eye-gaze direction of the facial image of the person can be measured on the basis of accurate iris shape information obtained by an iris shape detection unit. The iris and sclera regions can be estimated on the basis of the detected eyelid contour information, thereby making it possible to accurately estimate the shape of the iris.02-04-2010
20100021067ABNORMAL AREA DETECTION APPARATUS AND ABNORMAL AREA DETECTION METHOD - An abnormal area detecting apparatus is provided for detecting the presence or absence and the position of abnormality with high accuracy using higher-order local auto-correlation features. The abnormal area detecting apparatus comprises means for extracting feature data from image data on a pixel-by-pixel basis through higher-order local auto-correlation; means for adding the feature data extracted by the feature data extracting means for pixels within a predetermined range including each of pixels spaced apart by a predetermined distance; means for calculating an index indicative of abnormality of feature data with respect to a subspace indicative of a normal area; means for determining an abnormality based on the index; and means for outputting a pixel position at which an abnormality is determined. The apparatus may extract a plurality of higher-order local auto-correlation feature data which differ in displacement width. Further, the apparatus may comprise means for finding a subspace indicative of a normal area based on a principal component vector from feature data in accordance with a principal component analysis approach. The apparatus is capable of determining an abnormality on a pixel-by-pixel basis, and capable of correctly detecting the position of an abnormal area.01-28-2010
20110216978METHOD OF AND APPARATUS FOR CLASSIFYING IMAGE - A method of and an apparatus for classifying an image are disclosed. The method includes: extracting a feature vector from the image, wherein the feature vector comprises a plurality of first features, each of the first features corresponds to a combination of a plurality of first areas arranged in the direction of a first axis, a plurality of second areas arranged in the direction of a second axis intersecting with the direction of the first axis, and one of a plurality of predetermined orientations, and the extracting of each of the first features comprises: acquiring a difference between sums or mean values of pixels of the plurality of first areas in the corresponding combination to obtain a first difference vector in the direction of the first axis, and acquiring a difference between sums or mean values of pixels of the plurality of second areas in the corresponding combination to obtain a second difference vector in the direction of the second axis; acquiring a first projection difference vector and a second projection difference vector projected by the first difference vector and the second difference vector on the line of the predetermined orientation in the corresponding combination; and acquiring the sum of magnitudes of the first projection difference vector and the second projection difference vector as the first feature; and classifying the image according to the extracted feature vector.09-08-2011
20120308142METHOD FOR EYE DETECTION FOR A GIVEN FACE - A method for eye detection comprises computing an average inter-ocular distance for a given face. The method further comprises detecting a skin region of the given face. Furthermore, the method comprises identifying a search region for the given face. The method may also comprise computing an actual inter-ocular distance and computing the eye centers of the given face.12-06-2012
20100310175Method and Apparatus to Facilitate Using Fused Images to Identify Materials - First image data (which comprises a penetrating image of an object formed using a first spectrum) and second image data (which also comprises a penetrating image of this same object formed using a second, different spectrum) is retrieved from memory and fused to facilitate identifying at least one material that comprises at least a part of this object. The aforementioned first spectrum can comprise, for example, a spectrum of x-ray energies having a high typical energy while the second spectrum can comprise a spectrum of x-ray energies with a relatively lower typical energy. By one approach, this process can associate materials as comprise the object with corresponding atomic numbers and hence corresponding elements (such as, for example, uranium, plutonium, and so forth).12-09-2010
20090116749METHOD OF LOCATING FEATURES OF AN OBJECT - A method of locating features of an object, of a class of objects, within a target image. The method comprises initialising a set of feature points within the target image, each feature point corresponding to a predetermined feature for objects of the class of objects; deriving a set of template detectors, from the set of feature points, using a statistical model of the class of objects, each template detector comprising an area of image located about the location of a feature point for an object of the class of objects; comparing the set of template detectors with the target image; and updating the set of feature points within the target image in response to the result of the comparison.05-07-2009
20110064312IMAGE-BASED GEOREFERENCING - An image-based georeferencing system comprises an image receiver, an image identification processor, a reference feature determiner, and a feature locator. The image receiver is configured for receiving a first image for use in georeferencing. The image comprises digital image information. The system includes a communicative coupling to a georeferenced images database of images. The image identification processor is configured for identifying a second image from the georeferenced images database that correlates to the first image. The system includes a communicative coupling to a geographic location information system. The reference feature determiner is configured for determining a reference feature common to both the second image and the first image. The feature locator is configured for accessing the geographic information system to identify and obtain geographic location information related to the common reference feature.03-17-2011
20100080466Smart Navigation for 3D Maps - An interest center-point and a start point are created in an image. A potential function is created where the potential function creates a potential field and guides traversal from the starting point to the interest center-point. The potential field is adjusted to include a sum of potential fields directed toward the center-point where each potential field corresponds to an image. Images are displayed in the potential field at intervals in the traversal from the start point toward the interest center point.04-01-2010
20110135204METHOD AND APPARATUS FOR ANALYZING NUDITY OF IMAGE USING BODY PART DETECTION MODEL, AND METHOD AND APPARATUS FOR MANAGING IMAGE DATABASE BASED ON NUDITY AND BODY PARTS - A method for analyzing nudity of an image using a body part detection model includes: extracting a skin blob from an image; calculating a first probability value, which indicates a probability of determination on harmfulness of at least one of the image and the skin blob, using a harmfulness detection model; classifying the skin blob as a specific body part using a body part detection model, and calculating a second probability value which indicates a probability of certainty of said classifying; and rating nudity of the image based on the first probability value and the second probability value.06-09-2011
20120039539METHOD AND SYSTEM FOR CLASSIFYING ONE OR MORE IMAGES - A method for determining a predictability of a media entity portion, the method includes: receiving or generating (a) reference media descriptors, and (b) probability estimations of descriptor space representatives given the reference media descriptors; wherein the descriptor space representatives are representative of a set of media entities; and calculating a predictability score of the media entity portion based on at least (a) the probability estimations of the descriptor space representatives given the reference media descriptors, and (b) relationships between the media entity portion descriptors and the descriptor space representatives. A method for processing media streams, the method may include: applying probabilistic non-parametric process on the media stream to locate media portions of interest; and generating metadata indicative of the media portions of interest.02-16-2012
201001428232D PARTIALLY PARALLEL IMAGING WITH K-SPACE SURROUNDING NEIGHBORS BASED DATA RECONSTRUCTION - Embodiments of the present invention relate to a Surrounding Neighbors based Autocalibrating Partial Parallel Imaging (SNAPPI) approach to MRI reconstruction. Several 2D PPI reconstruction methods may be provided by applying SNAPPI to recover the partially skipped k-space data along two PE directions separately or non-separately, in k-space or in the hybrid k-space and image-space.06-10-2010
20110091111Multilevel bit-mapped image analysis method - The present invention discloses a multilevel method of bit-mapped image analysis that comprises representing the whole image data via its components, objects of different levels of complexity, which are hierarchically connected to one another by spatially-parametrical links.04-21-2011
20110064314LINE SEGMENT EXTRACTION DEVICE - Provided is a line segment extraction device capable of accurately and rapidly extracting a line segment. A pattern generating means relates an original image to cells of a cellular automaton to generate time-series configuration patterns. A primary detection means detects the movement of a cell in an active state and sets a primary flag according to the configuration patterns generated in time series. This generates time-series primary flag patterns. A secondary detection means detects a cell in which the direction of the primary flag and the direction of the movement thereof are matched to set a secondary flag according to the primary flag patterns generated in time series. A line segment detection means records a line segment marker on a cell in which the secondary flag in the opposite direction is generated, together with the direction. In this way, the line segment is extracted.03-17-2011
20110064313METHOD AND APPARATUS FOR FACE DETERMINATION - Provided are a method and an apparatus for processing digital images, and more particularly, a method and an apparatus for face determination, wherein it is determined if a subject is a true subject based on distance information regarding a distance to the subject and face detection information. In an embodiment, the face detecting apparatus is a digital image processing apparatus and includes a digital signal processor for determining if a subject is a true subject based on distance information regarding a distance to the subject and face length information.03-17-2011
20120251008Classification Algorithm Optimization - Classification algorithm optimization is described. In an example, a classification algorithm is optimized by calculating an evaluation sequence for a set of weighted feature functions that orders the feature functions in accordance with a measure of influence on the classification algorithm. Classification thresholds are determined for each step of the evaluation sequence, which indicate whether a classification decision can be made early and the classification algorithm terminated without evaluating further feature functions. In another example, a classifier applies the weighted feature functions to previously unseen data in the order of the evaluation sequence and determines a cumulative value at each step. The cumulative value is compared to the classification thresholds at each step to determine whether a classification decision can be made early without evaluating further feature functions.10-04-2012
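The early-termination evaluation described in this abstract lends itself to a short Python sketch. The ordering by absolute weight, the per-step accept/reject thresholds supplied as inputs (in practice they would come from training), and the function names are all assumptions for illustration.

    def order_by_influence(weighted_features):
        """Evaluation sequence: most influential (largest |weight|) first.
        `weighted_features` is a list of (weight, feature_fn) pairs."""
        return sorted(weighted_features, key=lambda wf: abs(wf[0]), reverse=True)

    def classify_with_early_exit(x, sequence, accept_thresholds, reject_thresholds):
        """Accumulate weight * feature(x) step by step; stop as soon as the
        running total crosses the accept or reject threshold for that step."""
        total = 0.0
        for step, (weight, feature_fn) in enumerate(sequence):
            total += weight * feature_fn(x)
            if total >= accept_thresholds[step]:
                return True, step + 1        # classified positive early
            if total <= reject_thresholds[step]:
                return False, step + 1       # classified negative early
        return total >= 0.0, len(sequence)   # full evaluation as fallback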
20120045133IMAGE PROCESSING METHOD AND IMAGE PROCESSING APPARATUS - According to the present invention, there is provided an image processing method that performs a tone correction to obtain a combined image with suitable brightness and contrast when a plurality of image data pieces is combined, and an image processing apparatus that can execute the method. The image processing method includes detecting brightness distribution for each of the plurality of image data pieces, calculating a characteristic amount of each brightness distribution from the brightness distribution, and acquiring a correction amount for a tone correction executed to the combined image data based on the obtained characteristic amount of the brightness distribution.02-23-2012
20120045132METHOD AND APPARATUS FOR LOCALIZING AN OBJECT WITHIN AN IMAGE - An improved method and apparatus for localizing objects within an image is disclosed. In one embodiment, the method comprises accessing at least one object model representing visual word distributions of at least one training object within training images, detecting whether an image comprises at least one object based on the at least one object model, identifying at least one region of the image that corresponds with the at least one detected object and is associated with a minimal dissimilarity between the visual word distribution of the at least one detected object and a visual word distribution of the at least one region and coupling the at least one region with indicia of location of the at least one detected object.02-23-2012
20120002879IMAGE PROCESSING APPARATUS, METHOD OF PROCESSING IMAGE, AND COMPUTER-READABLE RECORDING MEDIUM - An image processing apparatus includes: an area extracting unit that extracts a candidate area of a classification target area in which a pixel value does not correspond to a three-dimensional shape of an imaging target based on pixel values of an intraluminal image acquired by imaging the inside of a lumen or information of a change in pixel values of peripheral pixels; and an area classifying unit that classifies the classification target area out of the candidate area based on the pixel values of the inside of the candidate area, a boundary portion of the candidate area, or a periphery portion of the candidate area.01-05-2012
20120002880IMAGE COMPARISON USING REGIONS - Methods and systems for comparing two digital images are provided. A region of a selected image is extracted and processed. A target image from a plurality of target images is provided, where the region is smaller than the target image. The region is compared with a plurality of regions in the target image, and the region is matched to a target region from the plurality of regions in the target image.01-05-2012
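Editor's note: as a rough illustration of matching a small region against many candidate regions of a larger target image, the sketch below does an exhaustive sum-of-squared-differences search. The patent does not specify this particular similarity measure, and all names and data are hypothetical.

    import numpy as np

    def match_region(region, target):
        """Slide `region` over `target` (2-D grayscale arrays) and return the
        top-left offset of the best-matching target region (minimum SSD)."""
        rh, rw = region.shape
        th, tw = target.shape
        best_ssd, best_pos = np.inf, (0, 0)
        for y in range(th - rh + 1):
            for x in range(tw - rw + 1):
                candidate = target[y:y + rh, x:x + rw].astype(float)
                ssd = float(np.sum((candidate - region.astype(float)) ** 2))
                if ssd < best_ssd:
                    best_ssd, best_pos = ssd, (y, x)
        return best_pos, best_ssd

    # Toy usage: embed a 3x3 patch of a 10x10 target and recover its position.
    rng = np.random.default_rng(0)
    target = rng.integers(0, 255, (10, 10))
    region = target[4:7, 2:5].copy()
    print(match_region(region, target))   # expected offset (4, 2) with SSD 0.0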
20120002878IMAGE PROCESSING APPARATUS, METHOD, AND PROGRAM THAT CLASSIFIES DATA OF IMAGES - In an image processing apparatus, a face detection unit … 01-05-2012
20110038547METHODS OF FACIAL CODING SCORING FOR OPTIMALLY IDENTIFYING CONSUMERS' RESPONSES TO ARRIVE AT EFFECTIVE, INCISIVE, ACTIONABLE CONCLUSIONS - The present disclosure relates to a method of assessing consumer reaction to a stimulus, comprising receiving a visual recording stored on a computer-readable medium of facial expressions of at least one human subject as the subject is exposed to a business stimulus so as to generate a chronological sequence of recorded facial images; accessing the computer-readable medium for automatically detecting and recording expressional repositioning of each of a plurality of selected facial features by conducting a computerized comparison of the facial position of each selected facial feature through sequential facial images; automatically coding contemporaneously detected and recorded expressional repositionings to at least a first action unit, wherein the action unit maps to a first set of one or more possible emotions expressed by the human subject; assigning a numerical weight to each of the one or more possible emotions of the first set based upon both the number of emotions in the set and the common emotions in at least a second set of one or more possible emotions related to at least one other second action unit observed within a predetermined time period.02-17-2011
20120020568IMAGE PROCESSOR AND IMAGE PROCESSING METHOD - An image processor for implementing an image turning operation of turning a photographic image showing a person into an image having a painting effect comprises a face detection part for capturing an image and detecting an image of the face of a person shown in the image so captured, a determination part for determining whether or not the image of the face of the person detected meets a predetermined criterion, and an image turning operation implementing part for implementing an image turning operation of turning the image into an image having a painting effect based on the result of the determination. When the photographic image showing the person is turned into an image having a painting effect, whether or not the photographic image is turned into an image having a painting effect is determined in consideration of the position and size of the person in the photographic image, as well as the orientations of the face and a line of sight of the person.01-26-2012
20130011071METHODS AND APPARATUS TO SPECIFY REGIONS OF INTEREST IN VIDEO FRAMES - Methods and apparatus to specify regions of interest in video frames are disclosed. Example disclosed methods to mark a region in a graphical presentation include selecting a first point located at a substantially central position within the region, selecting a plurality of second points to define a boundary of the region, and comparing a plurality of stored templates with the selected first and second points to identify a first one of the stored templates to represent the region.01-10-2013
20120014608APPARATUS AND METHOD FOR IMAGE PROCESSING - An image processing apparatus includes: a reducing section reducing an image, for which a feature analysis is to be performed, at a predetermined reduction ratio; an ROI mask generating section analyzing a feature of the reduced image and generating an ROI mask as mask information indicating a region of interest in the reduced image; an ROI mask enlarging section enlarging the ROI mask to the size of the image before being reduced by the reducing section; and an ROI mask updating section analyzing a feature of a region of the image before reduction that was set as a blank region (a region not of interest) in the ROI mask, and updating the ROI mask by using the analysis result.01-19-2012
20120014610FACE FEATURE POINT DETECTION DEVICE AND PROGRAM - Detecting with good precision an eye inside corner position and an eye outside corner position as face feature points even when the eye inside corner and/or the eye outside corner portions are obscured by noise. First eyelid profile modeling is performed with a Bezier curve expressed by a fixed control point P … 01-19-2012
20120014609IMAGE SIGNATURE EXTRACTION DEVICE - The image signature extraction device includes an extraction unit and a generation unit. The extraction unit extracts region features from respective sub-regions in an image in accordance with a plurality of pairs of sub-regions in the image, the pairs of sub-regions including at least one pair of sub-regions in which both a combination of shapes of two sub-regions of the pair and relative position between the two sub-regions of the pair differ from those of at least one of other pairs of sub-regions. The generation unit generates an image signature to be used for identifying the image based on the extracted region features of the respective sub-regions, using, for at least one pair of sub-regions, a method different from that used for another pair of sub-regions.01-19-2012
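Editor's note: a compact way to picture signature generation of this kind is the ternary scheme below, where mean intensity stands in for the region feature and differences between paired sub-regions are quantized, with near-zero differences mapped to a dedicated value (as in the related matching entry 20110235920 further down this list). This is a hedged sketch under those assumptions, not the claimed method.

    import numpy as np

    def image_signature(img, pairs, threshold=2.0):
        """Quantize mean-intensity differences over pairs of sub-regions into
        {-1, 0, +1}; each pair is two (y0, y1, x0, x1) boxes whose shapes and
        relative positions may differ from pair to pair."""
        def region_feature(box):
            y0, y1, x0, x1 = box
            return float(np.mean(img[y0:y1, x0:x1]))
        signature = []
        for a, b in pairs:
            diff = region_feature(a) - region_feature(b)
            if abs(diff) < threshold:
                signature.append(0)            # difference too small to trust
            else:
                signature.append(1 if diff > 0 else -1)
        return signature

    # Toy usage on a synthetic 8x8 horizontal gradient with two pairs.
    img = np.tile(np.arange(8, dtype=float), (8, 1))
    pairs = [((0, 4, 0, 4), (0, 4, 4, 8)),   # left half vs right half
             ((0, 8, 0, 8), (2, 6, 2, 6))]   # whole image vs centre block
    print(image_signature(img, pairs))       # prints [-1, 0]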
20120027305APPARATUS TO PROVIDE GUIDE FOR AUGMENTED REALITY OBJECT RECOGNITION AND METHOD THEREOF - A method for providing a guide for augmented reality object recognition includes acquiring image information, analyzing an object corresponding to the image information, and outputting object recognition guide information according to a result of analyzing the object. An apparatus to provide a guide for augmented reality object recognition includes an image acquisition unit to acquire and output image information, and a control unit to analyze an object corresponding to the image information and to output object recognition guide information according to a result of analyzing the object.02-02-2012
20120027306IMAGE SIGNATURE EXTRACTION DEVICE - The image signature extraction device includes an image signature generation unit and an encoding unit. The image signature generation unit extracts region features from respective sub-regions in an image in accordance with a plurality of pairs of sub-regions in the image, the pairs of sub-regions including at least one pair of sub-regions in which both a combination of shapes of two sub-regions of the pair and a relative position between the two sub-regions of the pair differ from those of at least one of other pairs of sub-regions, and based on the extracted region features of the respective sub-regions, generates an image signature to be used for identifying the image. The encoding unit encodes the image signature.02-02-2012
20120057794IMAGE PROCESSING DEVICE, PROGRAM, AND IMAGE PROCESSING METHOD - There is provided an image processing device including a recognition unit configured to recognize a plurality of users present in an input image captured by an imaging device, an information acquisition unit configured to acquire display information to be displayed in association with each user recognized by the recognition unit, and an output image generation unit configured to generate an output image by overlaying the display information acquired by the information acquisition unit on the input image. The output image generation unit may determine which of first display information associated with a first user and second display information associated with a second user is to be overlaid on a front side on the basis of a parameter corresponding to a distance of each user from the imaging device.03-08-2012
20120057795IMAGE PROCESSING APPARATUS, IMAGE FORMING APPARATUS, IMAGE READING APPARATUS, AND IMAGE PROCESSING METHOD - An image processing apparatus includes an image area extracting section for identifying and extracting, on the basis of inputted image data, an image area within the document where an image is present. The image area extracting section includes an image area detecting section for comparing a pixel value of each part of an image of the inputted image data with a threshold value so as to detect, as the image area, an area where a pixel value is larger than the threshold value. The image area extracting section further includes a judging section for judging a type of the inputted image data, and a threshold value changing section for changing the threshold value used in the image area detecting section to one suitable for the type of the inputted image data in accordance with the type judged by the judging section.03-08-2012
20120301034IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, AND COMPUTER READABLE MEDIUM - An image processing apparatus includes the following elements. A receiving device receives an image. An estimating device estimates, for each of pixels within the image received by the receiving device, on the basis of the received image, an amount of fog, which is a difference between a luminance value of the pixel and an original luminance value of the pixel. A measuring device measures, for each of the pixels within the image received by the receiving device, a luminance value of the pixel. A determining device determines a correction target value for luminance values of pixels of a background portion within the image received by the receiving device. A correcting device corrects the luminance value of each of the pixels measured by the measuring device on the basis of the amount of fog estimated by the estimating device and the correction target value determined by the determining device.11-29-2012
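Editor's note: the abstract above describes estimating a per-pixel fog amount and correcting luminance toward a background target. The fragment below is only a crude stand-in for that idea; a local-maximum filter plays the role of the fog (background) estimate, pixels are rescaled so the estimated background approaches the target value, and the window size and target are arbitrary.

    import numpy as np

    def correct_fog(lum, target=0.95, win=15):
        """Rough fog removal for a grayscale page image in [0, 1]: estimate the
        per-pixel background luminance with a local maximum, then rescale each
        pixel so that the estimated background moves toward `target`."""
        h, w = lum.shape
        pad = win // 2
        padded = np.pad(lum, pad, mode="edge")
        fog = np.empty_like(lum)
        for y in range(h):
            for x in range(w):
                fog[y, x] = padded[y:y + win, x:x + win].max()
        corrected = lum * (target / np.maximum(fog, 1e-6))
        return np.clip(corrected, 0.0, 1.0), fog

    # Toy usage: a page whose background darkens to the right, plus a dark stripe.
    page = np.linspace(0.9, 0.5, 64)[None, :].repeat(64, axis=0)
    page[30:34, :] = 0.1
    cleaned, fog = correct_fog(page)
    print(round(float(cleaned[0, -1]), 2))   # darkened background pulled back up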
20120301033IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, AND COMPUTER READABLE MEDIUM - An image processing apparatus includes the following elements. A receiving device receives an image. An extracting device extracts regions from the image received by the receiving device. A selecting device selects a region from among the regions extracted by the extracting device in accordance with a predetermined rule. A measuring device measures luminance values of pixels contained in the region selected by the selecting device. An estimating device estimates a function representing a degree of fog in the image received by the receiving device from the luminance values of the pixels measured by the measuring device. An eliminating device eliminates fog from the image received by the receiving device on the basis of the function estimated by the estimating device.11-29-2012
20120207395MEASURING DEVICE SET AND METHOD FOR DOCUMENTING A MEASUREMENT - A measuring device set … 08-16-2012
20120155775WALKING ROBOT AND SIMULTANEOUS LOCALIZATION AND MAPPING METHOD THEREOF - A walking robot and a simultaneous localization and mapping method thereof in which odometry data acquired during movement of the walking robot are applied to image-based SLAM technology so as to improve accuracy and convergence of localization of the walking robot. The simultaneous localization and mapping method includes acquiring image data of a space about which the walking robot walks and rotational angle data of rotary joints relating to walking of the walking robot, calculating odometry data using kinematic data of respective links constituting the walking robot and the rotational angle data, and localizing the walking robot and mapping the space about which the walking robot walks using the image data and the odometry data.06-21-2012
20110103697INFORMATION PROCESSING APPARATUS, CONTROL METHOD THEREFOR, AND PROGRAM - An information processing apparatus includes: a calculation unit adapted to analyze an image and calculate an intermediate value; a setting unit adapted to set a feature extraction region in the image, using the intermediate value; and an extraction unit adapted to extract a local feature of the feature extraction region, reusing the intermediate value used by the setting unit.05-05-2011
20100290708IMAGE RETRIEVAL APPARATUS, CONTROL METHOD FOR THE SAME, AND STORAGE MEDIUM - An image retrieval apparatus configured so as to enable a global feature method and a local feature method to complement each other is provided. After obtaining a retrieval result candidate using the local feature method, the image retrieval apparatus further verifies global features already registered in a database, with regard to the retrieval result candidate image. A verification position of the global features is estimated using the local features.11-18-2010
20120121187MOBILE TERMINAL AND METADATA SETTING METHOD THEREOF - A mobile terminal and metadata setting method thereof are disclosed, by which metadata of various types can be set. The present invention includes displaying an image including at least one object, selecting a specific object from the at least one object, extracting the specific object from the image, setting the metadata for the extracted specific object, and storing the metadata set for the specific object and an image including the specific object.05-17-2012
20120121186EXTRACTING STEP AND REPEAT DATA - A method for extracting step and repeat data from a halftone printing job … 05-17-2012
20120121185Calibrating Vision Systems - Methods, systems, and computer programs calibrate a vision system. An image of a human gesture that frames a display device is received. A boundary defined by the human gesture is computed, and the gesture area defined by the boundary is also computed. The gesture area is then mapped to pixels in the display device.05-17-2012
20100246971HOUSE CHANGE DETERMINING METHOD, HOUSE CHANGE DETERMINING PROGRAM, HOUSE CHANGE DETERMINING IMAGE GENERATING METHOD, AND HOUSE CHANGE DETERMINING IMAGE - A method of judging a house change based on a comparison between high-resolution images or DSMs acquired from an aircraft is incapable of determining the house change in the region in which the DSM or the like is acquired for the first time. A house region in which a house exists is extracted from a judgment target region based on house polygon data acquired in advance at a time point T … 09-30-2010
20100246970DEVICE AND A METHOD FOR PROVIDING INFORMATION ABOUT ANIMALS WHEN WALKING THROUGH AN ANIMAL PASSAGE - The invention relates to a device and method for providing information about animals walking through an animal passage (I), the information comprising at least the number of animals walking through the animal passage, using a detection device having a sensor device connected to a processor for capturing animal data about animals walking through the animal passage, and an analysis device for recognizing animals in the data/signals captured by the sensor device for the purpose of outputting counter impulses when animals are detected in said signals, the sensor device being designed for producing 3D images, and the analysis device being designed for detecting animals in the 3D data of the 3D images and for counting the animals using said detection.09-30-2010
20120128255PART DETECTION APPARATUS, PART DETECTION METHOD, AND PROGRAM - The present disclosure provides a part detection apparatus including, a part detection block configured to detect a location of a plurality of parts making up a subject from an input image, and a part-in-attention estimation block configured, if a location of a part in attention has not been detected by the part detection block, to estimate the location of a part in attention on the basis of the location of a part detected by the part detection block and information about a locational relation with the detected location of a part being used as reference.05-24-2012
20120163719Multilevel Image Analysis - Disclosed is a method of bit-mapped image analysis that comprises a whole image data representation via its component objects. The objects are assigned to different levels of complexity. The objects may be hierarchically connected by spatially-parametrical links. The method comprises preliminarily generating a classifier of image objects consisting of one or more levels differing in complexity; parsing the image into objects; attaching each object to one or more predetermined levels; establishing hierarchical links between objects of different levels; establishing links between objects within the same level; and performing an object feature analysis. Object feature analysis comprises generating and examining a hypothesis about object features and correcting the concerned object's features of the same and other levels in response to results of hypothesis examination. Object feature analysis may also comprise execution of a recursive X-Y cut within the same level.06-28-2012
20100086211METHOD AND SYSTEM FOR REFLECTION DETECTION IN ROAD VIDEO IMAGES - A method and system for reflection detection in road video images is provided. One implementation involves detecting road surface reflections, by receiving an image of a road in front of a vehicle from an image capturing device, determining a region of interest in an identified road in the image, and detecting road surface reflections in the region of interest.04-08-2010
20100208997Image-Based Advertisement Platform - Described is an image centric advertisement platform and technology in which input images are matched to advertisements based on actual visible content (e.g., corresponding to features) within the image. Advertisers upload and bid on advertiser-provided images that correspond to advertisements. When an input image is received, such as a result of user interaction with web content or transmission of the image by a user (e.g., via MMS), the image is matched to advertiser images, such as via feature-based image matching. An index based upon the features may be used for efficiently locating the advertisement or advertisements. Also described is a tool for advertisers to use in creating a new scene based upon an uploaded image, or for adding the uploaded image to an existing scene.08-19-2010
20100208999METHOD OF COMPENSATING FOR DISTORTION IN TEXT RECOGNITION - A method of compensating for distortion in text recognition is provided, which includes extracting a text region from an image; estimating the form of an upper end of the extracted text region; estimating the form of a lower end of the extracted text region; estimating the form of left and right sides of the extracted text region; estimating a diagram constituted in the form of the estimated upper end, lower end, left and right sides, and including a minimum area of the text region; and transforming the text region constituting the estimated diagram into a rectangular diagram using an affine transform.08-19-2010
20110182521METHOD OF DECODING CODING PATTERN WITH VARIABLE NUMBER OF MISSING DATA SYMBOLS POSITIONED OUTSIDE IMAGING FIELD-OF-VIEW - A method of decoding a coding pattern disposed on or in a substrate. The method comprises the steps of: (a) operatively positioning an optical reader relative to a surface of the substrate; (b) capturing an image of a portion of the coding pattern, the coding pattern comprising a plurality of tags, each tag comprising a plurality n … 07-28-2011
20110182520Light Source Detection from Synthesized Objects - Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for determining a location relative to an object and a type of a light source that illuminated the object when the image was captured, are described. A method performed by a process executing on a computer system includes identifying an object of interest in a digital image. The method further includes projecting at least a portion of the digital image corresponding to the object of interest onto a three dimensional (3D) model that includes a polygon-mesh corresponding to the object's shape. The method further includes determining one or more properties of a light source that illuminated the object in the digital image at an instant that the image was captured based at least in part on a characteristic of one or more polygons in the 3D model onto which the digital image portion was projected.07-28-2011
20120314956IMAGE PROCESSING APPARATUS AND IMAGE PROCESSING METHOD - Out of regions extracted from a frame image, regions assigned the same identification information as that of a region unselected in a past frame immediately before the frame are defined as nonselection regions, and nonselection regions in number equal to or smaller than a predetermined number are selected out of the nonselection regions.12-13-2012
20120134595METHOD AND APPARATUS FOR PROVIDING AN IMAGE FOR DISPLAY - At least one region of interest within an image is determined, step … 05-31-2012
20120134594DETERMINING A VISUAL BALANCE OF AN IMAGE - A method for determining with a physical image processing device a visual balance of an image includes assigning a visual weight point to each of a plurality of visual elements within the image with the image processing device, each visual weight point having a weight value based on visual properties associated with the visual element, and determining the visual balance of the image with the image processing device by measuring a vector value at a center of the composition, the vector value being based on a distance of each visual weight point from the center and the weight value associated with each visual weight point.05-31-2012
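Editor's note: one simple reading of the weighted-vector idea above is to sum, at the composition centre, the pull of every visual weight point (weight times offset from the centre); the length of the resulting vector then serves as an imbalance score. The sketch below is only that reading, with made-up data.

    import numpy as np

    def visual_balance(elements, center):
        """Sum weight * offset-from-centre over (x, y, weight) elements and
        return the resulting vector and its length (0 = perfectly balanced)."""
        cx, cy = center
        vec = np.zeros(2)
        for x, y, weight in elements:
            vec += weight * np.array([x - cx, y - cy], dtype=float)
        return vec, float(np.linalg.norm(vec))

    # Toy usage: two symmetric unit weights cancel; a heavier element on the
    # right tips the balance toward that side.
    elements = [(10, 50, 1.0), (90, 50, 1.0), (80, 20, 2.5)]
    print(visual_balance(elements, center=(50, 50)))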
20120170850SYSTEM AND METHOD FOR IMAGE REGISTRATION BASED ON VARIABLE REGION OF INTEREST - An image registration system for aligning first and second images. The novel system includes a first system for extracting a region of interest (ROI) from each image and a second system for coarsely aligning the regions of interest. The first system determines the size and location of the ROI based on the number of features contained within the region. The size of the ROI is enlarged until a number of features contained in the ROI is larger than a predetermined lower bound or until the size is greater than a predetermined upper bound. The second system computes a cross-correlation on the regions of interest using a plurality of transforms to find a coarse alignment transform having a highest correlation. The image registration system may also include a third system for performing sub-pixel alignment on the regions of interest.07-05-2012
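Editor's note: the variable-ROI step above (grow the region until it holds enough features or hits a size cap) is easy to sketch. The fragment below assumes features are already detected as (x, y) points and grows a square window around a seed; the feature count, size cap, and step are placeholders rather than the patent's values.

    import random

    def grow_roi(features, seed, min_features=20, max_size=256, step=16):
        """Grow a square ROI centred on `seed` until it contains at least
        `min_features` points or reaches `max_size` pixels on a side."""
        size = step
        inside = []
        while size <= max_size:
            half = size // 2
            x0, y0 = seed[0] - half, seed[1] - half
            inside = [(x, y) for x, y in features
                      if x0 <= x < x0 + size and y0 <= y < y0 + size]
            if len(inside) >= min_features:
                return (x0, y0, size), inside
            size += step
        return (seed[0] - max_size // 2, seed[1] - max_size // 2, max_size), inside

    # Toy usage: sparse random features force the ROI to grow well past its
    # initial size before it contains 20 of them.
    random.seed(1)
    features = [(random.randint(0, 512), random.randint(0, 512)) for _ in range(400)]
    roi, pts = grow_roi(features, seed=(256, 256))
    print(roi, len(pts))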
20120170849IMAGE PROCESSING APPARATUS AND COMPUTER-READABLE RECORDING MEDIUM - An image processing apparatus includes: an isolated point detecting unit that detects isolated points in image data; a line-shaped region extracting unit that extracts line-shaped regions in the image data, as character line candidate regions; an isolated point type determining unit that determines a representative pixel of each isolated point in each line-shaped region to be a pixel of interest, determines discontinuity of each line-shaped region around the pixel of interest for each isolated point, determines an isolated point determined to have discontinuity to be a true isolated point, and determines an isolated point determined to have no discontinuity to be a pseudo isolated point; and a halftone-dot region determining unit that determines a halftone-dot region, based on isolated point type determination results for the respective isolated points detected by the isolated point detecting unit.07-05-2012
20120170848ARTIFACT MANAGEMENT IN ROTATIONAL IMAGING - A method for artifact management in a rotational imaging system is presented. The method includes the steps of acquiring data employing a helical scanning pattern over N revolutions, where N is greater than 1, and detecting at least one artifact in the acquired data of each revolution. The method further includes segmenting the data acquired over N revolutions into N-1 data frames each bounded by at least one of the at least one artifacts.07-05-2012
20120076419IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, AND COMPUTER-READABLE RECORDING MEDIUM - An image processing apparatus includes a gradient magnitude calculating unit that calculates a gradient magnitude of each pixel on the basis of pixel values of a target image in which a predetermined target object is imaged; a candidate-edge detecting unit that detects contour-candidate edge positions on the basis of the gradient magnitude of each pixel; a reference-range setting unit that sets a reference range, which is to be referred to when a contour edge position is selected from among the contour-candidate edge positions, on the basis of the gradient magnitude of each pixel; and a contour-edge selecting unit that selects, as the contour edge position, one of the contour-candidate edge positions in the reference range.03-29-2012
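Editor's note: for the gradient-magnitude and contour-candidate steps above, a bare-bones version follows: central differences give the gradient magnitude, and local maxima above a threshold along a scan row stand in for contour-candidate edge positions. The reference-range setting and final contour-edge selection of the abstract are not reproduced.

    import numpy as np

    def gradient_magnitude(img):
        """Central-difference gradient magnitude of a 2-D grayscale array."""
        gy, gx = np.gradient(img.astype(float))
        return np.hypot(gx, gy)

    def candidate_edges_along_row(grad, row, threshold):
        """Columns in `row` whose gradient magnitude is a local maximum above
        `threshold` -- a stand-in for contour-candidate edge positions."""
        g = grad[row]
        return [x for x in range(1, len(g) - 1)
                if g[x] > threshold and g[x] >= g[x - 1] and g[x] >= g[x + 1]]

    # Toy usage: a bright disc on a dark background; candidates appear at the
    # columns flanking the two disc boundaries on the scanned row.
    yy, xx = np.mgrid[0:64, 0:64]
    img = ((xx - 32) ** 2 + (yy - 32) ** 2 < 15 ** 2).astype(float)
    grad = gradient_magnitude(img)
    print(candidate_edges_along_row(grad, row=32, threshold=0.2))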
20120076418FACE ATTRIBUTE ESTIMATING APPARATUS AND METHOD - To provide a face attribute estimating apparatus capable of determining a face attribute with high precision. A scan region extracting part extracts, as a scan region, a region in which a specific face part can exist from a face region detected by a face detecting part. A region scanning part sets a small region in the scan region extracted by the scan region extracting part and, while scanning the scan region with the small region, sequentially outputs a pixel value in the small region. A pattern similarity calculating part sequentially calculates similarity between the pixel value output from the region scanning part and a specific pattern on the specific face part. A face attribute determining part determines a face attribute by comprehensively determining the similarities sequentially calculated by the pattern similarity calculating part. Therefore, a face attribute can be determined with high precision.03-29-2012
20090110289IMAGE PROCESSING OF APPARATUS CONDITION - In one embodiment, a method of continually monitoring and detecting in real-time a condition of an apparatus is detailed. In one step, an apparatus is continually monitored in real-time using continual real-time images of the apparatus taken by at least one camera. In another step, the continual real-time images of the apparatus from the at least one camera are communicated to at least one computer processing unit. In still another step, the continual real-time images of the apparatus are processed using at least one software program embedded in the at least one computer processing unit in order to monitor and detect in real-time a condition of the apparatus.04-30-2009
20120314957INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, AND PROGRAM - There is provided an information processing apparatus including a multi-stage determining unit that includes determinators which each function as a node of an N-level tree structure (N is an integer value of 2 or more) in order to perform determination for classifying a determination target into at least one of a plurality of ranges. Each determinator performs determination of classifying the determination target into any one of two ranges, and the two ranges determined in each determinator include an overlapping portion. The present technology can be applied to an information processing apparatus that classifies data.12-13-2012
20100272369Image processing apparatus - Multi-resolution images of a reference image and a target image are generated. Whole-range matching is then performed on an image of a lower resolution to detect a two-dimensional displacement between the images, and block matching is performed on an image of a higher resolution to detect a displacement at each feature point. The accuracy of the motion data is increased by correcting it at progressively higher resolutions, using the motion data previously calculated at the lower resolutions as an initial value.10-28-2010
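Editor's note: coarse-to-fine displacement estimation of this kind can be sketched with an image pyramid: search exhaustively for a global shift at the coarsest level, then double the estimate and refine it within a small radius at each finer level. The code below assumes a pure integer translation and uses SSD as the matching cost; it is an illustration, not the patented block-matching pipeline.

    import numpy as np

    def downsample(img):
        """Halve resolution by 2x2 block averaging (image sides assumed even)."""
        h, w = img.shape
        return img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

    def best_shift(ref, tgt, center=(0, 0), radius=2):
        """Exhaustive integer-shift search around `center` minimising SSD."""
        best_val, best_off = np.inf, center
        for dy in range(center[0] - radius, center[0] + radius + 1):
            for dx in range(center[1] - radius, center[1] + radius + 1):
                shifted = np.roll(np.roll(tgt, -dy, axis=0), -dx, axis=1)
                ssd = float(np.sum((ref - shifted) ** 2))
                if ssd < best_val:
                    best_val, best_off = ssd, (dy, dx)
        return best_off

    def coarse_to_fine_shift(ref, tgt, levels=3):
        """Estimate a global 2-D shift at the coarsest pyramid level, then
        double and refine the estimate at each finer level."""
        pyramid = [(ref, tgt)]
        for _ in range(levels - 1):
            ref, tgt = downsample(ref), downsample(tgt)
            pyramid.append((ref, tgt))
        shift = (0, 0)
        for r, t in reversed(pyramid):       # coarsest level first
            shift = best_shift(r, t, center=(shift[0] * 2, shift[1] * 2))
        return shift

    # Toy usage: recover a (6, -4) cyclic translation between two frames.
    rng = np.random.default_rng(3)
    frame = rng.random((64, 64))
    moved = np.roll(np.roll(frame, 6, axis=0), -4, axis=1)
    print(coarse_to_fine_shift(frame, moved))   # prints (6, -4)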
20100272368IMAGE PREVIEWING SYSTEM CAPABLE OF AUTOMATICALLY MAGNIFYING FACE PORTION IN IMAGE AND MAGNIFYING METHOD THEREOF - An image previewing system includes a display unit, a face portion recognition unit, a selecting unit, a comparing unit and a magnifying unit. The display unit comprises a screen configured to show an image. The face portion recognition unit is configured to recognize any human face contained in the image and determine face portions in the image if human face(s) exists in the image. The selecting unit is configured to select one of the face portions in the image. The comparing unit is configured to compare the number of image pixels of the selected face portion with the resolution of the screen of the display unit and generate a result. According to the result, the magnifying unit is configured to magnify the selected face portion and display the magnified face portion on the screen.10-28-2010
20100272367IMAGE PROCESSING USING GEODESIC FORESTS - Image processing using geodesic forests is described. In an example, a geodesic forest engine determines geodesic shortest-path distances between each image element and a seed region specified in the image in order to form a geodesic forest data structure. The geodesic distances take into account gradients in the image of a given image modality such as intensity, color, or other modality. In some embodiments, a 1D processing engine carries out 1D processing along the branches of trees in the geodesic forest data structure to form a processed image. For example, effects such as ink painting, edge-aware texture flattening, contrast-aware image editing, forming animations using geodesic forests and other effects are achieved using the geodesic forest data structure. In some embodiments the geodesic forest engine uses a four-part raster scan process to achieve real-time processing speeds and parallelization is possible in many of the embodiments.10-28-2010
20120251009IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, AND COMPUTER-READABLE RECORDING DEVICE - An image processing apparatus includes an approximate-surface calculator that calculates multiple approximate surfaces that each approximate the pixel value of a pixel included in an examination-target region of an image; an approximate-surface selector that selects at least one approximate surface from the approximate surfaces on the basis of the relation between the pixel value of the pixel in the examination-target region and the approximate surfaces; an approximate-region setting unit that sets an approximate region that is approximated by at least the selected one approximate surface; and an abnormal-region detector that detects an abnormal region on the basis of the pixel value of a pixel in the approximate region and the value corresponding to the coordinates of that pixel on at least one approximate surface.10-04-2012
20090087102METHOD AND APPARATUS FOR REGISTERING IMAGE IN TELEPHONE DIRECTORY OF PORTABLE TERMINAL - A method and an apparatus for registering an image in a telephone directory of a portable terminal are provided. The method includes identifying at least one face area in an image displayed on a screen, selecting a face area to be registered in the telephone directory from the at least one identified face area, generating a face area capture image from the selected image, and registering the face area capture image in the telephone directory. A user may thereby easily retrieve a telephone number of the registered person.04-02-2009
20090060345RAPID, SPATIAL-DATA VIEWING AND MANIPULATING INCLUDING DATA PARTITION AND INDEXING - A high-density, distance-measuring laser system and an associated computer that processes the data collected by the laser system. The computer determines a data partition structure and stores that structure as a header file for the scan before data is collected. As the scan progresses, the computer collects data points until a predetermined threshold is met, at which point a block of data consisting of the data points up to the threshold is written to disk. The computer indexes each data block using all three coordinates of its constituent data points using, preferably, a flexible index, such as an R-tree. When a data block is completely filled, it is written to disk preferably with its index and, as a result, each data block is ready for access and manipulation virtually immediately after having been collected. Also, each data block can be independently manipulated and read from disk.03-05-2009
20090060344Image Processing Device, Image Processing Method, and Image Processing Program - An image processing device that executes deformation of an image includes a candidate area setting unit, an exclusion determination unit and a deformation processing unit. The candidate area setting unit sets candidate areas, each of which includes a specific image, on a target image used as a target for a deformation process. The exclusion determination unit, when there is an overlap between the candidate areas, excludes one or more candidate areas from the target for the deformation process so as to eliminate the overlap. The deformation processing unit performs deformation of the image on the candidate areas other than the excluded candidate areas.03-05-2009
20120189208IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, IMAGE PROCESSING PROGRAM, AND STORAGE MEDIUM - An image processing apparatus … 07-26-2012
20120082384IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, AND PROGRAM - A feature point detection unit … 04-05-2012
20120082383PROJECTING PATTERNS FOR HIGH RESOLUTION TEXTURE EXTRACTION - Camera-based texture extraction in Augmented Reality (AR) systems is enhanced by manipulating projected patterns. One or more fine line patterns are projected onto a textured surface, a Moiré interference pattern is measured, and different properties of the projected pattern(s) are adjusted until the Moiré interference pattern measurements indicate that a texture pattern similar to that of the three-dimensional target is being projected. Thereby, the target texture may be more closely matched, even at sub-pixel resolutions, under variable lighting conditions, and/or with complicated geometries.04-05-2012
20090016610Methods of Using Motion-Texture Analysis to Perform Activity Recognition and Detect Abnormal Patterns of Activities - Methods of using motion-texture analysis to perform video analytics are disclosed. One method includes selecting a plurality of frames from a video sequence, analyzing motion textures in the plurality of frames to identify a flow, extracting features from the flow, and characterizing the extracted features to perform activity recognition. Another method includes selecting a plurality of frames from a video sequence, analyzing motion textures in the plurality of frames to identify a flow, extracting first features from the flow, comparing the first features with second features extracted during a previous training phase, and based on the comparison, determining whether the first features indicate abnormal activity. Another method includes partitioning a given frame in a video sequence into a plurality of patches, forming a vector model for each patch by analyzing motion textures associated with that patch, and clustering patches having vector models that show a consistent pattern.01-15-2009
20120230591IMAGE RESTORATION SYSTEM, IMAGE RESTORATION METHOD, AND IMAGE RESTORATION PROGRAM - A defect pixel value estimation means estimates, for each pixel in a defect region (the region of the image to be restored), a pixel value that the pixel may take, based on the pixel values of pixels in the non-defect region (the part of the image outside the defect region). A patch selection means selects, from pairs of patches each consisting of a defect patch (the image of a region including the defect region) and a reference patch (the image of a region not including the defect region), the pair in which the defect patch and the reference patch are most similar to each other. The patch selection means judges this similarity based on the relationship between the pixel values estimated for the defect region within the defect patch and the pixel values of the corresponding reference patch. An image restoration means restores the defect patch based on the reference patch of the selected pair of patches.09-13-2012
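Editor's note: the patch-selection step above can be pictured as follows: the defect patch is compared against every clean candidate patch, with its missing pixels replaced by their estimated values, and the most similar candidate becomes the reference patch. The estimator, patch size, and data below are invented for illustration.

    import numpy as np

    def select_reference_patch(img, mask, defect_tl, estimate, patch=8):
        """Pick the clean patch most similar to the defect patch, comparing
        known pixels directly and defect pixels via their estimated values.
        `mask` is True on defect pixels; `estimate` holds values to use there."""
        y0, x0 = defect_tl
        d_img = img[y0:y0 + patch, x0:x0 + patch]
        d_mask = mask[y0:y0 + patch, x0:x0 + patch]
        d_filled = np.where(d_mask, estimate[y0:y0 + patch, x0:x0 + patch], d_img)
        best_ssd, best_tl = np.inf, None
        h, w = img.shape
        for y in range(h - patch + 1):
            for x in range(w - patch + 1):
                if mask[y:y + patch, x:x + patch].any():
                    continue                 # reference patches must be defect-free
                ssd = float(np.sum((img[y:y + patch, x:x + patch] - d_filled) ** 2))
                if ssd < best_ssd:
                    best_ssd, best_tl = ssd, (y, x)
        return best_tl

    # Toy usage: a striped image with a small defect; the best reference comes
    # from an undamaged part of the same stripe pattern.
    img = np.tile(np.arange(32, dtype=float) % 8, (32, 1))
    mask = np.zeros(img.shape, dtype=bool)
    mask[10:14, 10:14] = True
    estimate = np.tile(img[0], (32, 1))      # column-wise estimate from clean rows
    print(select_reference_patch(img, mask, defect_tl=(8, 8), estimate=estimate))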
20120230590IMAGE PROCESSING APPARATUS, NON-TRANSITORY COMPUTER-READABLE MEDIUM, AND IMAGE PROCESSING METHOD - An image processing apparatus includes a registering unit that registers a first language and a second language different from the first language, a character string extracting unit that extracts one or more character strings from reading information acquired by reading an original, plural feature character string creating sections that create a feature character string of the original on the basis of the one or more character strings extracted by the character string extracting unit, and a switching unit that switches the feature character string creating section used to create the feature character string on the basis of a combination of the registered first language and the registered second language.09-13-2012
20080298685IMAGE PROCESSING APPARATUS AND IMAGE PROCESSING METHOD - In order to perform appropriate image correction considering the moving image region and static image region of image data forming a moving image, the image data forming the moving image is input in a unit of frame, the image data is divided into a moving image region and static image region, and a feature amount of the moving image region and that of the static image region are calculated. At least one table for image correction is generated based on the feature amount of the moving image region and that of the static image region, and the image data is corrected using the table.12-04-2008
20120263383ROAD PROFILE DEFINING APPARATUS, ROAD PROFILE DEFINING METHOD, AND ROAD PROFILE DEFINING PROGRAM - A road profile defining apparatus includes an image acquisition unit configured to acquire an image, a lane marking recognition unit configured to extract from the image a left lane marking located at the left end of a lane painted on a road and a right lane marking located at the right end of the lane, and a road profile determination unit configured to output gradient information indicating a gradient change of the road based on the directions of the left and right lane markings.10-18-2012
20120093418METHOD, TERMINAL, AND COMPUTER-READABLE RECORDING MEDIUM FOR TRIMMING A PIECE OF IMAGE CONTENT - The present invention relates to a method for applying a trimming operation to an image. The method includes the steps of: detecting a person and an object; calculating the area of the face region and the area of the detected object, and calculating the distance between the center of the face region and that of the image and the distance between the center of the detected object and that of the image; and applying the trimming operation to the object if the area of the detected object is larger than that of the face by more than a first prefixed percentage, and applying the trimming operation to the object if the distance between the center of the detected object and that of the image is a second prefixed value or less and the distance between the center of the face and that of the image is larger than the distance between the center of the detected object and that of the image by more than a third prefixed value.04-19-2012
20110038546METHOD AND APPARATUS FOR DETECTING AN INSERTED SEGMENT INTO A VIDEO DATA STREAM - An inserted segment of a video data stream is detected if no graphical object is detected. The presence of at least one active graphical object in the video data stream is detected concurrently with detecting the appearance of a new graphical object in the video data stream. The most reliable graphical object among the at least one active graphical object and the new graphical object is determined, and the presence of the most reliable graphical object is detected from the point in the video data stream at which the new graphical object was detected to appear. An inserted segment of the video data stream is detected if no graphical object is detected.02-17-2011
20120269441IMAGE QUALITY ASSESSMENT - A computer-implemented system and method for predicting an image quality of an image are disclosed. For an input image, the method includes generating a first descriptor based on semantic content information for the image and generating a second descriptor based on aesthetic features extracted from the image. With a categorizer which has been trained to assign a quality value to an image based on first and second descriptors, a quality value is assigned to the image based on the first and second descriptors and output.10-25-2012
20100254609DIGITAL CAMERA AND IMAGE CAPTURING METHOD - A digital camera and an image capturing method for photographing at least one object in the digital camera. An image is sensed, and an eye-gazing detection process is accordingly performed on the image to detect an eye-gazing direction of at least one pair of eyes of the at least one object. It is determined whether the eye-gazing direction meets a gazing criterion. If the eye-gazing direction meets the gazing criterion, an application of the digital camera is triggered.10-07-2010
20100189358FACIAL EXPRESSION RECOGNITION APPARATUS AND METHOD, AND IMAGE CAPTURING APPARATUS - A facial expression recognition apparatus includes an image input unit configured to sequentially input images, a face detection unit configured to detect faces in images obtained by the image input unit, and a start determination unit configured to determine whether to start facial expression determination based on facial image information detected by the face detection unit. When the start determination unit determines that facial expression determination should be started, an acquisition unit acquires reference feature information based on the facial image information detected by the face detection unit and a facial expression determination unit extracts feature information from the facial image information detected by the face detection unit and determines facial expressions of the detected faces based on the extracted feature information and the reference feature information.07-29-2010
20110235920IMAGE SIGNATURE MATCHING DEVICE - An image signature to be used for matching is generated by the following generation method. First, region features are extracted from respective sub-regions of a plurality of pairs of sub-regions in an image, and for each of the pairs of sub-regions, a difference value between the region features of two sub-regions forming a pair is quantized. When performing the quantization, the difference value is quantized to a particular quantization value if an absolute value of the difference value is smaller than a predetermined value. Then, a collection of elements which are quantization values calculated for the respective pairs of sub-regions is used as an image signature to be used for discriminating the image. An image signature matching device matches an image signature of a first image and an image signature of a second image, generated by the above-described generation method, in such a manner that a weight of an element having the particular quantization value is reduced.09-29-2011
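Editor's note: down-weighting matches on the "uncertain" quantization value can be expressed as a weighted agreement score over two ternary signatures (such as those sketched after entry 20120014609 above). The weight and scoring rule below are illustrative choices, not the patented matching function.

    def match_signatures(sig_a, sig_b, zero_weight=0.25):
        """Weighted similarity between two ternary signatures with values in
        {-1, 0, +1}; positions involving the value 0 count for less."""
        score, total = 0.0, 0.0
        for a, b in zip(sig_a, sig_b):
            w = zero_weight if (a == 0 or b == 0) else 1.0
            total += w
            if a == b:
                score += w
        return score / total if total else 0.0

    # Toy usage with two 6-element signatures; the agreement on the 0 element
    # contributes only a quarter of the weight of the other agreements.
    print(match_signatures([1, -1, 0, 1, 0, -1], [1, -1, 0, -1, 1, -1]))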
20100232704DEVICE, METHOD AND COMPUTER PROGRAM PRODUCT - A device is provided, in which a display screen displays an image; an image analyzer determines at least one potential area of interest in the image; a visual indicator highlights at least a boundary of the at least one potential area of interest, and an optical zoom and/or a digital zoom changes the magnification level of an area of interest selected from the at least one potential area of interest. The device permits a user to zoom in and/or zoom out of the selected area of interest by displacing the boundary of the selected area of interest over at least a portion of the display screen.09-16-2010
20100232705Device and method for detecting shadow in image - A device for detecting a shadow region in an image includes an imaging module generating a multi-channel image including brightness, red, green, and blue channels, a brightness correcting module correcting values of the brightness channel based on imaging parameters and outputting a corrected multi-channel image, a scene classifying module determining to carry out a shadow detection on the corrected multi-channel image, a shadow detecting module classifying pixels of the corrected multi-channel image into a shadow or non-shadow pixel, and generating a shadow classification mark matrix having pixels having a shadow classification mark value corresponding to the classification, a region segmentation module segmenting the multi-channel image into regions having pixels having similar color values, and generating a region mark matrix having pixels having a region mark value, and a post-processing module updating the shadow classification mark matrix based on the shadow classification mark matrix and region mark matrix.09-16-2010
20120281921IMAGE ALIGNMENT - Disclosed is a method and device for aligning at least two digital images. An embodiment may use frequency-domain transforms of small tiles created from each image to identify substantially similar, “distinguishing” features within each of the images, and then align the images together based on the location of the distinguishing features. To accomplish this, an embodiment may create equal sized tile sub-images for each image. A “key” for each tile may be created by performing a frequency-domain transform calculation on each tile. An information-distance difference between each possible pair of tiles on each image may be calculated to identify distinguishing features. From analysis of the information-distance differences of the pairs of tiles, a subset of tiles with high discrimination metrics in relation to other tiles may be located for each image. The subset of distinguishing tiles for each image may then be compared to locate tiles with substantially similar keys and/or information-distance metrics to other tiles of other images. Once similar tiles are located for each image, the images may be aligned in relation to the identified similar tiles.11-08-2012
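Editor's note: a toy version of the tile-key idea is shown below. Each image is cut into tiles, each tile gets an FFT-magnitude key, the tiles whose keys are farthest from their nearest in-image neighbour are treated as distinguishing, and the shift between the images is taken as the most common offset between paired distinguishing tiles. The key, distance measure, and parameters are stand-ins, not the patent's information-distance metric.

    import numpy as np

    def tile_keys(img, tile=16):
        """FFT-magnitude key for every non-overlapping tile, keyed by its
        top-left (row, col) position."""
        keys = {}
        for y in range(0, img.shape[0] - tile + 1, tile):
            for x in range(0, img.shape[1] - tile + 1, tile):
                keys[(y, x)] = np.abs(np.fft.fft2(img[y:y + tile, x:x + tile])).ravel()
        return keys

    def distinctive_tiles(keys, n=3):
        """Tiles whose key is farthest from its nearest neighbour within the
        same image -- a simple stand-in for a high discrimination metric."""
        items = list(keys.items())
        scored = []
        for i, (pos, k) in enumerate(items):
            nearest = min(np.linalg.norm(k - k2)
                          for j, (_, k2) in enumerate(items) if j != i)
            scored.append((nearest, pos))
        return [pos for _, pos in sorted(scored, reverse=True)[:n]]

    def align_by_tiles(img_a, img_b, tile=16):
        """Estimate the shift of img_b relative to img_a by pairing each
        distinguishing tile of img_a with its most similar tile in img_b."""
        ka, kb = tile_keys(img_a, tile), tile_keys(img_b, tile)
        offsets = []
        for pos in distinctive_tiles(ka):
            match = min(kb, key=lambda p: np.linalg.norm(kb[p] - ka[pos]))
            # offsets are taken modulo the image size because the toy target
            # below is produced with np.roll (a wrap-around shift)
            offsets.append(((match[0] - pos[0]) % img_a.shape[0],
                            (match[1] - pos[1]) % img_a.shape[1]))
        return max(set(offsets), key=offsets.count)   # most common offset

    # Toy usage: img_b is img_a cyclically shifted by one tile in each direction.
    rng = np.random.default_rng(7)
    img_a = rng.random((96, 96))
    img_b = np.roll(np.roll(img_a, 16, axis=0), 16, axis=1)
    print(align_by_tiles(img_a, img_b))   # prints (16, 16)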
20120321195METHOD FOR AUTOMATIC MISMATCH CORRECTION OF IMAGE VOLUMES - A method for automatic mismatch correction is presented. The method includes identifying a feature of interest in a reference image volume and a target image volume. Furthermore, the method includes computing a cost matrix based on one or more pairs of image slices in the reference image volume and the target image volume. The method also includes identifying one or more longest common matching regions in the reference image volume and the target image volume based on the computed cost matrix. In addition, the method includes aligning the reference image volume and the target image volume based on the identified one or more longest common matching regions. A non-transitory computer readable medium including one or more tangible media, where the one or more tangible media include code adapted to perform the method for automatic mismatch correction is also presented. Systems and non-transitory computer readable medium configured to perform the method for automatic mismatch correction of image volumes are also presented.12-20-2012
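Editor's note: the cost-matrix and longest-common-matching-region idea above maps naturally onto the classic dynamic-programming search for the longest diagonal run of matching slice pairs. The sketch below uses mean absolute difference as the slice-to-slice cost and a fixed tolerance; both are placeholders for whatever cost and matching criterion the method actually uses.

    import numpy as np

    def longest_common_matching_region(ref_slices, tgt_slices, tol=1e-3):
        """Build a slice-to-slice cost matrix (mean absolute difference), mark
        pairs below `tol` as matching, and return (ref_start, tgt_start, length)
        of the longest diagonal run of matching pairs."""
        n, m = len(ref_slices), len(tgt_slices)
        cost = np.array([[float(np.mean(np.abs(r - t))) for t in tgt_slices]
                         for r in ref_slices])
        match = cost < tol
        best = (0, 0, 0)
        run = np.zeros((n + 1, m + 1), dtype=int)
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                if match[i - 1, j - 1]:
                    run[i, j] = run[i - 1, j - 1] + 1
                    if run[i, j] > best[2]:
                        best = (i - run[i, j], j - run[i, j], run[i, j])
        return best

    # Toy usage: the target volume repeats slices 3..8 of the reference volume
    # after two unrelated slices.
    rng = np.random.default_rng(11)
    ref = [rng.random((8, 8)) for _ in range(10)]
    tgt = [rng.random((8, 8)) for _ in range(2)] + [s.copy() for s in ref[3:9]]
    print(longest_common_matching_region(ref, tgt))   # prints (3, 2, 6)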
20120321196INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, INFORMATION PROCESSING PROGRAM, AND MOBILE TERMINAL APPARATUS - Automatically preparing communications for transmission by a mobile terminal device. In one example, this includes storing contact information including personal contacts and associated address and face image information. A face image is extracted from within a still image content to obtain an extracted face image. The extracted face image is linked to a given personal contact among the personal contacts by finding a match corresponding to the extracted face image among previously extracted face images found in the stored contact information. The given personal contact is correlated to address information using the contact information, and a message is automatically prepared for transmission to the given personal contact using the address information.12-20-2012
20120321197IMAGE PROCESSING APPARATUS, CONTENT DELIVERY SYSTEM, IMAGE PROCESSING METHOD, AND PROGRAM - Disclosed herein is an image processing apparatus including an upper body feature data storage unit … 12-20-2012
20110280486ELECTRONIC DEVICE AND METHOD FOR SORTING PICTURES - A system and method for sorting pictures stored in an electronic device receives sorting features of the pictures and a sorting priority sequence of the sorting features set by a user. The pictures are sorted by each of the sorting features according to the sorting priority sequence. If pictures have none of the sorting features, the pictures are stored in a file of a storage system of the electronic device. Pictures having the same sorting sub-feature of a sorting feature are stored in a picture file.11-17-2011
20110293189Facial Analysis Techniques - Described herein are techniques for obtaining compact face descriptors and using pose-specific comparisons to deal with different pose combinations for image comparison.12-01-2011
20120328198DICTIONARY DATA REGISTRATION APPARATUS FOR IMAGE RECOGNITION, METHOD THEREFOR, AND PROGRAM - A dictionary data registration apparatus includes a dictionary in which a local feature amount for each region of an image is registered with respect to each of a plurality of categories, an extraction unit configured to extract local feature amounts from a plurality of regions of an input image, a selection unit configured to select, for each region and with respect to each of the plurality of categories, a plurality of the local feature amounts according to a distribution of the local feature amounts extracted by the extraction unit from a plurality of regions of a plurality of input images belonging to the category, and a registration unit configured to register the selected plurality of local feature amounts in the dictionary as the local feature amounts for each region with respect to the category.12-27-2012
20100208998VISUAL BACKGROUND EXTRACTOR - The present invention relates to a Visual Background Extractor (VIBE) consisting in a method for detecting a background in an image selected from a plurality of related images. Each one of said images is formed by a set of pixels and captured by an imaging device. The background detection method comprises the steps of: establishing, for a determined pixel position in said plurality of images, a background history comprising a plurality of addresses, in such a manner as to have a sample pixel value stored in each address; comparing the pixel value corresponding to said determined pixel position in the selected image with said background history; and, if said pixel value from the selected image substantially matches at least a predetermined number of said sample pixel values: classifying said determined pixel position as belonging to the image background, and updating said background history by replacing the sample pixel value in one randomly chosen address of said background history with said pixel value from the selected image. The method of the invention is applicable, among other uses, to video surveillance, videogame interaction, and imaging devices with embedded data processors.08-19-2010
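Editor's note: the classify-then-randomly-update core of this sample-based background test fits in a few lines. The sketch below handles a single pixel only and omits the spatial propagation to neighbouring pixels and the temporal subsampling that the full method also uses; the radius and match-count values are illustrative.

    import random

    def vibe_update(history, pixel, radius=20, min_matches=2):
        """Classify `pixel` against its sample `history`; if it is background,
        overwrite one randomly chosen sample with the new value."""
        matches = sum(1 for s in history if abs(int(s) - int(pixel)) <= radius)
        is_background = matches >= min_matches
        if is_background:
            history[random.randrange(len(history))] = pixel
        return is_background

    # Toy usage: a history hovering around 100 accepts 105 as background (and
    # absorbs it into the model) but rejects 200 as foreground.
    random.seed(0)
    history = [98, 101, 103, 99, 100, 102]
    print(vibe_update(history, 105), vibe_update(history, 200))   # True False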
20100202699IMAGE PROCESSING FOR CHANGING PREDETERMINED TEXTURE CHARACTERISTIC AMOUNT OF FACE IMAGE - Image processing apparatus and methods are provided for changing a texture amount of a face image. A method includes specifying positions of predetermined characteristic portions of the face image, determining a size of the face image, selecting a reference face shape based on the determined face image size, selecting a texture model corresponding to the selected reference face shape, performing a first transformation of the face image such that the resulting transformed face image shape matches the selected reference shape, changing the texture characteristic amount by using the selected texture model, and transforming the changed face image via an inverse transformation of the first transformation.08-12-2010
20100202698SYSTEMS, METHODS, AND COMPUTER PROGRAM PRODUCTS FOR DETERMINING DOCUMENT VALIDITY - A method according to one embodiment includes extracting an identifier from an electronic first document, and identifying a complementary document associated with the first document using the identifier. A validity of the first document is determined by simultaneously considering: textual information from the first document; textual information from the complementary document; and predefined business rules. An indication of the determined validity is output. Systems and computer program products for providing, performing, and/or enabling the methodology presented above are also presented.08-12-2010
20130011070STUDYING AESTHETICS IN PHOTOGRAPHIC IMAGES USING A COMPUTATIONAL APPROACH - The aesthetic quality of a picture is automatically inferred from its visual content, treated as a machine learning problem using, for example, a peer-rated, on-line photo sharing Website as a data source. Certain visual features of images are extracted based on the intuition that they can discriminate between aesthetically pleasing and displeasing images. A one-dimensional support vector machine is used to identify features that have noticeable correlation with the community-based aesthetics ratings. Automated classifiers are constructed using the support vector machines and classification trees, with a simple feature selection heuristic being applied to eliminate irrelevant features. Linear regression on polynomial terms of the features is also applied to infer numerical aesthetics ratings.01-10-2013
20130016910INFORMATION PROCESSING APPARATUS, METADATA SETTING METHOD, AND PROGRAM - Provided is an information processing apparatus including a specified region detection unit for detecting a specified region specified by a user within a screen during reproduction of a video, a region metadata setting unit for setting region metadata indicating a position and a range of the specified region for each video frame, and a section metadata setting unit for setting section metadata indicating a section corresponding to a video frame for which the region metadata has been set, for each video.01-17-2013
20130016911METHOD FOR CLASSIFYING PROJECTION RECAPTURES - A method for classifying projection recaptures is disclosed. A copy of a video content is classified as recorded from the digital projection of the video content, or as recorded from the projection of the celluloid film print of the video content, by an automated method comprising classifying as a function of at least one feature extracted from a digital representation of the copy, among (i) spatial illumination uniformity; (ii) on-screen vertical stability; and (iii) temporal illumination pulse.01-17-2013
20110158542DATA CORRECTION APPARATUS AND METHOD - A data correction apparatus which corrects data associated with an image of an object projects vector data obtained by connecting data to be corrected to each other onto a subspace to generate a dimensionally reduced projection vector, and executes dimension restoration processing in which the dimensionality of the projection vector is restored to generate dimensionally restored vector data, thereby generating a plurality of dimensionally restored vector data for each type of fluctuation. The data correction apparatus determines the fluctuation of the object based on the projection vector, integrates the plurality of dimensionally restored vector data with each other based on the determination result, and outputs the integration result as corrected data.06-30-2011
20110158539SYSTEM AND METHOD FOR EXTRACTING FEATURE DATA OF DYNAMIC OBJECTS - A system and method for extracting feature data of dynamic objects first selects N sequential frames of a video file, where N is a positive integer, and divides each of the N frames into N*N squares. The system and method further selects any n frames from the N frames, and selects any n rows and n columns of the n frames to obtain n*n*n squares, where n is a positive integer. The system and method further extracts feature data from the video file by computing averages and differences of the pixel values of the n*n*n squares.06-30-2011
20130022274SPECIFYING VALUES BY OCCLUDING A PATTERN ON A TARGET - A mobile platform captures a scene that includes a real world object, wherein the real world object has a non-uniform pattern in a predetermined region. The mobile platform determines an area in an image of the real world object in the scene corresponding to the predetermined region. The mobile platform compares intensity differences between pairs of pixels in the area with known intensity differences between pairs of pixels in the non-uniform pattern, to identify any portion of the area that differs from a corresponding portion of the predetermined region. The mobile platform then stores in its memory a value indicative of the location of that portion relative to the area. The stored value may be used in any application running in the mobile platform.01-24-2013
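A hedged sketch of the pixel-pair comparison: the observed ordering of intensities for each pair is checked against the ordering expected from the known pattern, and disagreeing pairs mark possibly occluded portions. The pair list and the sign-based agreement test are our simplifications.

```python
import numpy as np

def occluded_portions(area, pattern, pairs):
    """Return indices of pixel pairs whose intensity ordering differs from the pattern.

    area, pattern: 2-D arrays of the same shape; pairs: list of ((r1, c1), (r2, c2)).
    """
    mismatches = []
    for k, ((r1, c1), (r2, c2)) in enumerate(pairs):
        observed = np.sign(int(area[r1, c1]) - int(area[r2, c2]))
        expected = np.sign(int(pattern[r1, c1]) - int(pattern[r2, c2]))
        if observed != expected:
            mismatches.append(k)        # candidate occluded portion
    return mismatches
```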
20130022275SEARCH SUPPORTING SYSTEM, SEARCH SUPPORTING METHOD AND SEARCH SUPPORTING PROGRAM - Product image data is accumulated in a database. For given input image data, a search portion acquires, from the database, product image data whose image characteristics information is the same as or similar to the image characteristics information indicating the characteristics of the image of the input image data. A search server outputs information on another product that is different from the product corresponding to the acquired product image data, together with the product image data acquired by the search portion.01-24-2013
20130177249Semantic Parsing of Objects in Video - Techniques, systems, and computer program products for parsing objects in a video are provided herein. A method includes producing and storing a plurality of versions of an image of an object derived from a video input, wherein each version of said image has a different resolution of said image; computing an appearance score at each of a plurality of regions on the lowest resolution version of said image for a plurality of semantic attributes with associated parts for said object, said appearance score denoting a probability of each semantic attribute appearing in the region; analyzing increasingly higher resolution versions than the lowest resolution version to compute a resolution context score for each region in the lowest resolution version; and ascertaining an optimized configuration of body parts and associated semantic attributes in the lowest resolution version, said ascertaining utilizing the appearance scores and the resolution context scores.07-11-2013
20130177250IMAGE PROCESSING APPARATUS AND IMAGE PROCESSING METHOD - An image processing apparatus in accordance with an embodiment receives a setting related to a common image processing to be executed on each of a plurality of process target areas; receives a setting of a reference area for defining the plurality of process target areas on an input image; and receives a setting for regularly defining the plurality of process target areas using the reference area as a reference. In accordance with the setting related to the common image processing, image processing is executed on each of the plurality of process target areas, and a result of overall process reflecting the results of image processing of respective ones of the plurality of process target areas is output.07-11-2013
20130177248METHOD AND APPARATUS FOR PHOTOGRAPH FINDING - Digital image data including discrete photographic images of a variety of different subjects, times, and so forth, are collected and analyzed to identify specific features in the photographs. In an embodiment of the invention, distinctive markers are distributed to aid in the identification of particular subject matter. Facial recognition may also be employed. The digital image data is maintained in a database and queried in response to search requests. The search requests include criteria specifying any feature category or other identifying information, such as the date, time, and location that each photograph was taken, associated with each photograph. Candidate images are provided for review by requesters, who may select desired images for purchase or downloading.07-11-2013
20130170755SMILE DETECTION SYSTEMS AND METHODS - Systems and methods of smile detection are disclosed. An exemplary method comprises generating a search map (07-04-2013
20110274358METHOD, APPARATUS AND COMPUTER PROGRAM PRODUCT FOR RECOGNIZING A GESTURE - A method, apparatus and computer program product are provided for recognizing a gesture in which one or more relationships are determined between a plurality of body parts and the gesture is then determined based upon these relationships. Each relationship may be determined by determining an angle associated with at least one joint, determining one or more states of a body part based upon the angle associated with at least one joint, and determining a probability of a body part being in each respective state. The gesture may thereafter be determined based upon the one or more states and the probability associated with each state of the body part. Directions may be provided, such as to an unmanned vehicle, based upon the gesture to, for example, control its taxiing and parking operations.11-10-2011
20110274357IMAGE SIGNATURE EXTRACTION DEVICE - The image signature extraction device includes an extraction unit and a generation unit. The extraction unit extracts region features from respective sub-regions in an image in accordance with a plurality of pairs of sub-regions in the image, the pairs of sub-regions including at least one pair of sub-regions in which both a combination of shapes of two sub-regions of the pair and a relative position between the two sub-regions of the pair differ from those of at least one of other pairs of sub-regions. The generation unit generates, based on the extracted region features of the respective sub-regions, an image signature to be used for identifying the image.11-10-2011
20110274356IMAGE PATTERN RECOGNITION - Image pattern recognition is described. In accordance with one embodiment, a method for image recognition includes dividing an image into blocks in preparation for separating a region of interest of the image from the remainder of the image. The blocks can be analyzed to determine whether a two dimensional projection of data from one or more blocks has a circular shape. The region of interest can be identified by identifying the blocks with circular shaped projections.11-10-2011
20100040290MOTION ESTIMATION AND SCENE CHANGE DETECTION USING TWO MATCHING CRITERIA - A motion estimation and scene change detection method using two matching criteria, whereby the first matching criterion applies goodness-of-fit or another known matching criterion for motion estimation and the second matching criterion is the reciprocal value of the estimation error variance lower bound, which can be given by the Cramér-Rao inequality. The method allows taking full advantage of multiresolution signal processing.02-18-2010
20130170754PUPIL DETECTION DEVICE AND PUPIL DETECTION METHOD - Disclosed is a pupil detection device capable of improving the pupil detection accuracy even if a detection target image is a low-resolution image. In a pupil detection device (07-04-2013
20080212879METHOD AND APPARATUS FOR DETECTING AND PROCESSING SPECIFIC PATTERN FROM IMAGE - In an image within which a face pattern is detected, when a ratio of a skin color pixel is equal to or smaller than a first threshold value in a first region and a ratio of a skin color pixel is equal to or greater than a second threshold value in a second region, the vicinity of the first region is determined to be a face candidate position at which the face pattern can exist. Face detection is carried out on the face candidate position. The second region is arranged in a predetermined position relative to the first region.09-04-2008
20130094762SYSTEM AND METHOD FOR IDENTIFYING COMPLEX TOKENS IN AN IMAGE - In a first exemplary embodiment of the present invention, an automated, computerized method is provided for processing an image. According to a feature of the present invention, the method comprises the steps of providing an image file depicting an image, in a computer memory, determining log chromaticity representations for the image, clustering the log chromaticity representations as a function of an index, to provide clusters of similar log chromaticity representations and identifying regions of uniform reflectance in the image as a function of the clusters of similar log chromaticity representations.04-18-2013
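A minimal sketch of the log-chromaticity clustering idea, assuming an RGB image stored as a floating-point NumPy array; k-means from scikit-learn stands in for whatever clustering and indexing the filing actually uses.

```python
import numpy as np
from sklearn.cluster import KMeans

def log_chromaticity(image, eps=1e-6):
    """Per-pixel log chromaticity: channel log values minus their per-pixel mean."""
    logs = np.log(image + eps)
    return logs - logs.mean(axis=2, keepdims=True)

def uniform_reflectance_labels(image, n_clusters=8):
    """Cluster log-chromaticity values; pixels sharing a label approximate uniform reflectance."""
    chroma = log_chromaticity(image).reshape(-1, 3)
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(chroma)
    return labels.reshape(image.shape[:2])
```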
20130101223IMAGE PROCESSING DEVICE - Provided is an image processing device for associating images with objects appearing in the images, while reducing burden on the user. The image processing device: stores, for each of events, a photographic attribute indicating a photographic condition predicted to be met with respect to an image photographed in the event; stores an object predicted to appear in an image photographed in the event; extracts from a collection of photographed images a photographic attribute that is common among a predetermined number of photographed images in the collection, based on pieces of photography-related information of the respective photographed images; specifies an object stored for an event corresponding to the extracted photographic attribute; and conducts a process on the collection of photographed images to associate each photographed image containing the specified object with the object.04-25-2013
20130101222IMAGE PROCESSING DEVICE, IMAGE PROCESSING METHOD, AND INTEGRATED CIRCUIT - An image processing device preventing the effect of noise from restricting the processing range of the super resolution process. The image processing device classifies each of a plurality of pieces of input pixel data that represent the input image into high-frequency region data or non-high-frequency region data, and generates, as at least part of output pixel data that represents the output image, one or more pieces of output pixel data in correspondence with one or more pieces of input pixel data classified as the high-frequency region data, by using the super resolution process data in accordance with amounts of noise of the one or more pieces of input pixel data.04-25-2013
20130101221ANOMALY DETECTION IN IMAGES AND VIDEOS - A system, method, and computer program product for detecting anomalies in an image. In an example embodiment the method includes partitioning each image of a set of images into a plurality of image local units. The method further includes clustering all local units in the image set into clusters, and consequently assigning a class label to each local unit based on the clustering results; local units with identical class labels share at least one substantially related image feature. Further, the method includes assigning a weight to each of the local units based on a variation of the class labels across all images in the set. The method further includes performing a clustering over all images in the set by using a distance metric that takes the learned weight of each local unit into account, and then determining the images that belong to minority clusters to be anomalies.04-25-2013
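An illustrative sketch only, assuming each image has already been cut into the same number of local units with one feature vector per unit; k-means, the weighted Hamming distance, and average-linkage clustering are our stand-ins for the unspecified clustering steps and distance metric.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform
from sklearn.cluster import KMeans

def anomaly_candidates(unit_features, n_unit_clusters=16, n_image_clusters=4):
    """unit_features: array of shape (num_images, num_units, feature_dim)."""
    n_img, n_units, d = unit_features.shape
    # class label per local unit, from clustering all units of all images together
    labels = KMeans(n_clusters=n_unit_clusters, n_init=10).fit_predict(
        unit_features.reshape(-1, d)).reshape(n_img, n_units)
    # weight each unit position by how much its class label varies across the image set
    w = np.array([len(np.unique(labels[:, u])) for u in range(n_units)], dtype=float)
    w /= w.sum()
    # weighted Hamming distance between the images' label vectors
    dist = np.zeros((n_img, n_img))
    for i in range(n_img):
        for j in range(i + 1, n_img):
            dist[i, j] = dist[j, i] = np.sum(w * (labels[i] != labels[j]))
    clusters = fcluster(linkage(squareform(dist), method="average"),
                        n_image_clusters, criterion="maxclust")
    sizes = np.bincount(clusters)[clusters]
    return clusters, sizes   # images in the smallest clusters are the anomaly candidates
```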
20130101220PREFERRED IMAGES FROM CAPTURED VIDEO SEQUENCE - In one embodiment, a computer system identifies a user in one or more frames of a video file, accesses a data store for image attitudinal data associated with the user, ranks the one or more frames based on the image attitudinal data associated with the user, and presents one or more top ranked frames to the user.04-25-2013
20130101219IMAGE SELECTION FROM CAPTURED VIDEO SEQUENCE BASED ON SOCIAL COMPONENTS - In one embodiment, a mobile device analyzes frames before and after a particular frame of a real-time video to identify one or more social network objects, and selects one or more frames before and after the particular frame based on social network information for further storage in the mobile device.04-25-2013
20130101224ATTRIBUTE DETERMINING METHOD, ATTRIBUTE DETERMINING APPARATUS, PROGRAM, RECORDING MEDIUM, AND ATTRIBUTE DETERMINING SYSTEM - The present invention provides an attribute determining method, an attribute determining apparatus, a program, a recording medium, and an attribute determining system of high detection accuracy, with which an attribute of a person can be determined even in a case where the person is not facing nearly front-on.04-25-2013
20110268362PROBE AND IMAGE RECONSTRUCTION METHOD USING PROBE - Provided is a probe capable of effectively performing NIR imaging by optimally arranging input channels and detection channels, and an image reconstruction method using the probe. The probe (11-03-2011
20130129222METHODS AND APPARATUSES FOR FACILITATING DETECTION OF TEXT WITHIN AN IMAGE - Methods and apparatuses are provided for facilitating detection of text within an image. A method may include calculating an alpha value associated with an image region containing a hypothesized text fragment. The alpha value may be defined as a function of a curved character length distribution, a character width distribution, and an inter-character spacing distribution for the hypothesized text fragment. The method may additionally include calculating a gamma value based at least in part on an interval length distribution determined for the hypothesized text fragment. The method may also include classifying whether the image region is a text-containing region based at least in part on the calculated alpha and gamma values. Corresponding apparatuses are also provided.05-23-2013
20130129223METHOD FOR IMAGE PROCESSING AND AN APPARATUS - The disclosure relates to a method in which one or more local descriptors relating to an interest point of an image are received. A global descriptor is determined for the image on the basis of the one or more local descriptors; and the global descriptor is compressed. The disclosure also relates to an apparatus comprising a processor and a memory including computer program code, and storage medium having stored thereon a computer executable program code for use by an apparatus.05-23-2013
20130136364IMAGE COMBINING DEVICE AND METHOD AND STORAGE MEDIUM STORING IMAGE COMBINING PROGRAM - A device combines a first image photographed with a first amount of exposure and a second image photographed with a second, lower amount of exposure, thereby generating a combined image having a wider dynamic range than either exposure alone provides. The device includes a motion region extraction unit configured to extract at least one motion region in which an object moving between the first image and the second image is shown, and a combining ratio determining unit configured to increase the combining ratio of the second image to the first image for a pixel in a background region outside of the at least one motion region, such that the higher the luminance value of a pixel of the first image, the higher the combining ratio.05-30-2013
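A rough sketch of the combining-ratio rule, assuming registered floating-point images in [0, 1], a known exposure ratio, and a boolean motion mask; the luminance ramp and its thresholds are our own illustrative choices rather than anything specified above.

```python
import numpy as np

def combine_hdr(long_exp, short_exp, exposure_ratio, motion_mask, lo=0.6, hi=0.95):
    """Blend toward the short exposure where the long exposure is bright,
    except inside motion regions (motion_mask == True), where blending is avoided."""
    luminance = long_exp.mean(axis=2) if long_exp.ndim == 3 else long_exp
    ratio = np.clip((luminance - lo) / (hi - lo), 0.0, 1.0)   # brighter pixel -> higher ratio
    ratio[motion_mask] = 0.0                                  # keep the long exposure in motion regions
    if long_exp.ndim == 3:
        ratio = ratio[..., None]
    return (1.0 - ratio) * long_exp + ratio * short_exp * exposure_ratio
```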
20130142432METHOD OF TRACKING TARGETS IN VIDEO DATA - A method of tracking targets in video data. At each of a sequence of time steps, a set of weighted probability distribution components is derived. At each time step the following steps are performed. First, a new set of components from the components of the previous time step are derived in accordance with a predefined motion model for the targets. The video at the current time step is then analysed to obtain a set of measurements, and the new set of components is updated using the measurements in accordance with a predefined measurement model. Finally, the set of components derived at each time step are analysed to derive a set of tracks for the targets.06-06-2013
20130142434INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD AND STORAGE MEDIUM - An information processing apparatus sets a plurality of reference locations of data in information as one reference location pattern and acquires a feature amount obtained from a value of data of one of the plurality of pieces of reference information in one reference location pattern for each of a plurality of reference location patterns and the plurality of pieces of reference information. The apparatus extracts data included in the input information according to each of the plurality of reference location patterns, selects the reference location pattern for classification of the input information from the plurality of reference location patterns based on a value of data included in the extracted input information, and executes classification of the input information by using the feature amount in the selected reference location pattern and data included in the input information at a reference location indicated by the reference location pattern.06-06-2013
20130142433SYSTEM AND METHOD FOR FINGERPRINTING FOR COMICS - The present disclosure relates to a system and a method for fingerprinting for comics. A system for searching comics according to the present disclosure includes: a fingerprint database storing fingerprints extracted from comics, a comics fingerprint extraction unit extracting fingerprints configured of at least one of box frames, cuts, and speech bubbles included in input comic images, a fingerprint based candidate group search unit searching candidate groups among comics stored in the fingerprint database using the extracted fingerprints, and a similarity measuring unit measuring similarity between the searched candidate groups and the comic images corresponding to the extracted fingerprints.06-06-2013
20130148895METHODS, APPARATUS, AND ARTICLES OF MANUFACTURE TO MEASURE GEOGRAPHICAL FEATURES USING AN IMAGE OF A GEOGRAPHICAL LOCATION - Methods, apparatus, and articles of manufacture to measure geographical features using an image of a geographical location are disclosed. An example method includes dividing, with a processor, an image of a geographic area of interest into a plurality of geographical zones, the geographical zones being representative of different geographical areas having approximately equal physical areas, measuring, with the processor, a geographical feature represented in the image for corresponding ones of the plurality of geographical zones, storing descriptions for the geographical zones in a computer memory, and storing values representative of the geographical feature of the geographical zones.06-13-2013
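A minimal sketch of the zone measurement, assuming the geographical feature of interest has already been segmented into a boolean pixel mask over the area of interest; the 4x4 zoning and the zone descriptions are illustrative.

```python
import numpy as np

def measure_by_zone(feature_mask, rows=4, cols=4):
    """Split the area of interest into roughly equal zones and measure the
    fraction of feature pixels in each; returns {zone_description: value}."""
    h, w = feature_mask.shape
    zh, zw = h // rows, w // cols
    results = {}
    for r in range(rows):
        for c in range(cols):
            zone = feature_mask[r * zh:(r + 1) * zh, c * zw:(c + 1) * zw]
            results[f"zone_r{r}_c{c}"] = float(zone.mean())
    return results
```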
20130148896IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, AND NON-TRANSITORY COMPUTER READABLE MEDIUM STORING PROGRAM - An image processing apparatus includes an image acquisition unit that acquires a first image and a second image each of which includes a character string, an extraction unit that extracts feature points included in each of the first image and the second image, a setting unit that sets partial regions including characters which are continuously arranged in an arrangement direction of the character string in each of the first image and the second image, and a specification unit that compares positions of the feature points included in the partial regions set in the first image with positions of the feature points included in the partial regions set in the second image so as to specify the partial regions set in the second image corresponding to the partial regions set in the first image, and specifies corresponding points in each of the specified partial regions.06-13-2013
20130148897METHOD FOR IMAGE PROCESSING AND AN APPARATUS - The disclosure relates to a method comprising receiving an image;06-13-2013
20130148898CLUSTERING OBJECTS DETECTED IN VIDEO - Identification of facial images representing both animate and inanimate objects appearing in media, such as videos, may be performed using clustering. Clusters contain facial images representing the same or similar objects, providing a database for future automated facial image identification to be performed more quickly and easily. Clustering also allows videos or other media to be indexed so that segments that contain a certain object may be found without having to search through the entire length of the media. Clustering involves separating media data into individual frames and filtering for frames with facial images. A digital media processor may then process each facial image, compare it to other facial images, and form clusterizer tracks with the objective of forming a cluster. These newly formed clusters may be compared with previously formed clusters via key faces in order to determine the identity of facial images contained in the clusters.06-13-2013
20130148899METHOD AND APPARATUS FOR RECOGNIZING A CHARACTER BASED ON A PHOTOGRAPHED IMAGE - An apparatus and method for recognizing a character based on a photographed image. The apparatus includes an image determining unit, an image effect unit, a binarizing unit and a character recognizing unit. The image determining unit is configured to select, from an input image, a Region Of Interest (ROI) to be used for image analysis when the input image is input, and to analyse the selected ROI to determine a type of the input image. The image effect unit is configured to apply to the input image, an image effect for distinguishing a character region and a background region in a display screen if the type of the input image indicates that the input image is obtained by photographing a display screen. The binarizing unit is configured to binarize the input image or the output of the image effect unit according to the determined type of the input image. The character recognizing unit is configured to recognize a character from the binarized input image.06-13-2013
20120275706Method for Regenerating the Background of Digital Images of a Video Stream - The invention relates to a method for regenerating the background of digital images of a video stream, comprising the steps of: setting an initial background image; and cutting the unit images of the video stream into blocks b(i, j, t) and the background image into corresponding blocks Bo(i, j, t). The method is essentially characterized in that it further includes the steps of: selecting one block Bo of the background image and/or one block b of the frame image; calculating its spatial correlation with at least one block Bo of the background image at a time (t) and/or at another time (t−a), and/or with at least one block b of the frame image at a time (t) and/or at another time (t−a); and updating the background image according to the calculated spatial correlation.11-01-2012
20120275705Method and System for Sample Image Index Creation and Image Filtering and Search - The present disclosure discloses a method and apparatus for creating a sample image index table, filtering images, and searching images, to improve the accuracy of monitoring images. A method for image filtering comprises: establishing a sample image index table; extracting regional characteristics from an image to be searched; clustering the regional characteristics of the image to be searched into corresponding nodes; obtaining a corresponding sample image identification by indexing the sample image index table using node identifications of the nodes of the image to be searched; determining a number of duplicate nodes between the image to be searched and the sample image; obtaining a degree of similarity of the image to be searched based on a number of the nodes of the image to be searched and a number of the nodes of the sample image; and filtering out the image to be searched when a degree of similarity between the image to be searched and the sample image exceeds a similarity threshold.11-01-2012
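A hedged sketch of the index-and-filter flow: regional characteristics are assumed to be already quantized into node identifiers, an inverted table maps each node to the sample images containing it, and similarity is derived from the count of duplicate nodes; the exact similarity formula here is an illustrative choice.

```python
from collections import defaultdict

def build_index(sample_nodes):
    """sample_nodes: {sample_id: set(node_ids)} -> inverted index {node_id: set(sample_ids)}."""
    index = defaultdict(set)
    for sample_id, nodes in sample_nodes.items():
        for node in nodes:
            index[node].add(sample_id)
    return index

def filter_image(query_nodes, index, sample_nodes, threshold=0.5):
    """Return sample ids whose node overlap with the queried image exceeds the threshold."""
    candidates = set().union(*(index.get(n, set()) for n in query_nodes)) if query_nodes else set()
    flagged = {}
    for sid in candidates:
        duplicates = len(query_nodes & sample_nodes[sid])
        similarity = duplicates / max(len(query_nodes), len(sample_nodes[sid]))
        if similarity > threshold:
            flagged[sid] = similarity
    return flagged       # non-empty result -> filter out the queried image
```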
20130148900Pose Estimation - In a pose estimation for estimating the pose of an object of pose estimation with respect to a reference surface that serves as a reference for estimating a pose, a data processing device: extracts pose parameters from a binarized image; identifies a combination of pose parameters for which the number of cross surfaces of parameter surfaces that accord with surface parameter formulas, which are numerical formulas for expressing a reference surface, is a maximum; finds a slope weighting for each of cross pixels, which are pixels on each candidate surface and which are pixels within a prescribed range, that is identified based on the angles of the tangent plane at the cross pixel and based on planes formed by each of the axes of parameter space; and identifies the significant candidate surface for which a number, which is the sum of slope weightings, is a maximum, as the actual surface that is the reference surface that actually exists in the image.06-13-2013
20130156322IDENTIFYING TRUNCATED CHARACTER STRINGS - A method includes automatically identifying a screen control in a user interface. Optical character recognition is applied to read a text that is displayed on an image of the screen control. The displayed text is automatically compared to a character string that is designated for the screen control. If part of the character string is not included in the displayed text, the displayed text is identified as truncated text.06-20-2013
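The comparison step reduces to a substring check once the OCR output is available; a minimal sketch, with the whitespace normalization being our own simplification:

```python
def is_truncated(displayed_text: str, designated: str) -> bool:
    """True if part of the designated character string is missing from the
    text actually read off the screen control."""
    shown = " ".join(displayed_text.split())       # collapse whitespace from OCR output
    expected = " ".join(designated.split())
    return expected not in shown

# e.g. is_truncated("Save as tem...", "Save as template") -> True
```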
20130156323SYSTEMS AND METHODS FOR EFFICIENT FEATURE EXTRACTION ACCURACY USING IMPERFECT EXTRACTORS06-20-2013
20130156324GEOGRAPHICAL LOCATION RENDERING SYSTEM AND METHOD AND COMPUTER READABLE RECORDING MEDIUM - A geographical location rendering method executed in a geographical location rendering system for identifying at least one semantic region is provided. A density clustering is performed on a plurality of user generated contents of respective geographical location name information to generate a plurality of region candidates. A name extraction is performed on the region candidates to extract and confirm a common region name of the region candidates as a name of the semantic region. A region scope of the region candidates is detected as a location scope of the semantic region according to a spatial density analysis.06-20-2013
20130182958APPARATUS AND METHOD FOR ANALYZING BODY PART ASSOCIATION - An apparatus and method for analyzing body part association. The apparatus and method may recognize at least one body part from a user image extracted from an observed image, select at least one candidate body part based on association of the at least one body part, and output a user pose skeleton related to the user image based on the selected at least one candidate body part.07-18-2013
20130182959SYSTEMS AND METHODS FOR MOBILE IMAGE CAPTURE AND PROCESSING - In various embodiments, methods, systems, and computer program products for processing digital images captured by a mobile device are disclosed. Myriad features enable and/or facilitate processing of such digital images using a mobile device that would otherwise be technically impossible or impractical, and furthermore address unique challenges presented by images captured using a camera rather than a traditional flat-bed scanner, paper-feed scanner or multifunction peripheral.07-18-2013
20130182960METHOD AND APPARATUS FOR ENCODING GEOMETRY PATTERNS, AND METHOD AND APPARATUS FOR DECODING GEOMETRY PATTERNS - 3D models often have a large number of small to medium sized connected components, with small numbers of large triangles, often with arbitrary connectivity. The efficiency of compact representation of large multi-component 3D models can be improved by detecting and representing similarities between components thereof, even if the components are not exactly equal. The invention uses displacement maps for encoding two or more different but similar geometry patterns differentially, based on clustering and a cluster representative surface. A method for encoding a plurality of geometry patterns comprises detecting and encoding identical copies of geometrical patterns, detecting and clustering similar geometry patterns, and detecting partial similarity. The detecting of partial similarity comprises generating a cluster representative surface, generating a displacement map for at least one clustered geometry pattern, and encoding the common surface and the displacement maps.07-18-2013
20130121588METHOD, APPARATUS, AND PROGRAM FOR COMPRESSING IMAGES, AND METHOD, APPARATUS, AND PROGRAM FOR DECOMPRESSING IMAGES - Costs are reduced, by decreasing the number of encoders used to compress images when compressing two or more images at different compression rates. A region of interest is detected within a processing target image, and a region of interest image is generated. A reduced image is generated by reducing the size of the processing target image. The reduced image and the region of interest image are multiplexed in an image space to generate a multiplex image. The multiplex image is compressed to generate compressed image data.05-16-2013
20110311146ADAPTED PIECEWISE LINEAR PROCESSING DRIVE - A piecewise linear processing device applies different amplification rates according to a general environment and a low luminance environment where much noise exists. The piecewise linear processing device includes a knee point storing unit configured to store a user's default setting value and low luminance setting value; a luminance detecting unit configured to detect a noisy environment to output a current luminance information signal and a maximum luminance information signal; an adaptive knee point supply unit configured to receive the default setting value, the low luminance setting value, the current luminance information signal, and the maximum luminance information signal to supply an adjusted adaptive knee point according to a degree of noise; and a piecewise linear processing unit configured to apply a section amplification rate to input data on the basis of a region corresponding to the adaptive knee point.12-22-2011
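A sketch of the adaptive knee-point idea under our own assumptions: the knee is interpolated between the default and low-luminance settings according to a noise degree in [0, 1], and a two-segment gain curve is applied around it.

```python
import numpy as np

def adaptive_knee(default_knee, low_lum_knee, noise_degree):
    """Blend the user's default knee point toward the low-luminance setting as noise grows."""
    t = np.clip(noise_degree, 0.0, 1.0)
    return (1.0 - t) * np.asarray(default_knee) + t * np.asarray(low_lum_knee)

def piecewise_linear(x, knee, gain_low, gain_high):
    """Apply gain_low below the knee and gain_high above it, keeping the curve continuous."""
    x = np.asarray(x, dtype=float)
    return np.where(x <= knee, gain_low * x,
                    gain_low * knee + gain_high * (x - knee))
```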
20110311145SYSTEM AND METHOD FOR CLEAN DOCUMENT RECONSTRUCTION FROM ANNOTATED DOCUMENT IMAGES - A computer-implemented method and system for reconstructing a clean document from annotated document images and/or extracting annotations therefrom are provided. The method includes receiving a set of at least two annotated document images into computer memory, selecting a representative image from the set of annotated document images, performing a global alignment on each of the set of annotated document images with respect to the selected representative image, and forming a consensus document image based at least on the aligned annotated document images. A clean document based at least on the consensus document image is then formed which can be used for extracting the annotations.12-22-2011
20110311144RGB/DEPTH CAMERA FOR IMPROVING SPEECH RECOGNITION - A system and method are disclosed for facilitating speech recognition through the processing of visual speech cues. These speech cues may include the position of the lips, tongue and/or teeth during speech. In one embodiment, upon capture of a frame of data by an image capture device, the system identifies a speaker and a location of the speaker. The system then focuses in on the speaker to get a clear image of the speaker's mouth. The system includes a visual speech cues engine which operates to recognize and distinguish sounds based on the captured position of the speaker's lips, tongue and/or teeth. The visual speech cues data may be synchronized with the audio data to ensure the visual speech cues engine is processing image data which corresponds to the correct audio data.12-22-2011
20130188873IMAGE PICKUP DEVICE, FLASH IMAGE GENERATING METHOD AND COMPUTER-READABLE MEMORY MEDIUM - When a displacement between a reference frame of a plurality of images acquired by continuous image pickup and a target frame is less than a first threshold indicating that such a frame is not likely to be affected by occlusion, smoothing is performed on an object area through a morphological operation with a normal process amount. Conversely, when a displacement between the reference frame of the plurality of images acquired by continuous image pickup and a target frame is larger than or equal to the first threshold, smoothing is performed with the process amount of morphological operation being increased with respect to the normal process amount.07-25-2013
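A rough sketch of the displacement-dependent smoothing, assuming OpenCV and a binary object-area mask; mapping the "process amount" to the size of a morphological closing kernel is our illustrative reading of the abstract.

```python
import numpy as np
import cv2

def smooth_object_area(mask, displacement, threshold, normal_ksize=3, large_ksize=7):
    """Morphological closing whose kernel grows when the target frame's
    displacement from the reference frame reaches the occlusion threshold."""
    ksize = normal_ksize if displacement < threshold else large_ksize
    kernel = np.ones((ksize, ksize), np.uint8)
    return cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)
```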
20120002877Non-transitory computer readable storage medium, marker creating apparatus, restoration apparatus, and marker creating method - When creating a marker, an encryption apparatus extracts each pixel value in a region and allows a storing unit to save, as restoration information, the high-order bits of each extracted pixel value. Then, the encryption apparatus creates a marker by changing the high-order bits of the pixel value in a region in which the marker is created and embeds encrypted information in an encrypted region specified by the marker. When decoding the encrypted information, a decoding apparatus detects the marker from a digital image, decodes the encrypted information in the encrypted region specified by the marker, and overwrites bits contained in the restoration information with the high-order bits of the pixel value of the marker.01-05-2012
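A hedged sketch of the bit-level save-and-restore for 8-bit pixels; splitting at the upper four bits is our assumption, not something the abstract specifies.

```python
import numpy as np

def create_marker(region, marker_bits):
    """Return (marked_region, restoration_info); both uint8 arrays of the region's shape."""
    restoration = region & 0xF0                       # saved high-order bits of the original pixels
    marked = (region & 0x0F) | (marker_bits & 0xF0)   # marker written into the high-order bits
    return marked.astype(np.uint8), restoration.astype(np.uint8)

def restore_region(marked, restoration):
    """Overwrite the marker's high-order bits with the saved originals."""
    return ((marked & 0x0F) | restoration).astype(np.uint8)
```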
20120020567DATA PROCESSING APPARATUS AND CONTROL METHOD THEREOF - A data processing apparatus that executes determining processing, using a plurality of stages, for determining whether or not a partial image sequentially extracted from an image of each frame of a moving image corresponds to a specific pattern, assigns a plurality of discriminators to each stage such that a plurality of partial images are processed in parallel. The data processing apparatus divides an image into a plurality of regions, and, for the image of each region, calculates a passage rate or accumulated passage rate from a ratio between the number of partial images input to a stage and the number of partial images determined to correspond to the specific pattern. The assignment of the discriminators to each stage is changed based on the passage rate or accumulated passage rate, in the immediately previously processed image, of the region to which the partial image extracted from the image being processed belongs.01-26-2012
20130195363IMAGE-BASED GEOREFERENCING - A computer-implemented method of providing georeferenced information regarding a location of capture of an image is provided. The method includes receiving a first image at an image-based georeferencing system, the first image comprising digital image information and identifying a cataloged second image that correlates to the first image. The method further includes automatically determining reference features common to both the second image and the first image, accessing geographic location information related to the common reference features, utilizing the geographic location information related to the common features to determine a georeferenced location of capture of the first image and providing the georeferenced location of capture for access by a user of the image-based georeferencing system.08-01-2013
20130195361IMAGE INDEX GENERATION BASED ON SIMILARITIES OF IMAGE FEATURES - Embodiments of the present application relate to an image index generation method, system, device, and computer program product. An image index generation method is provided. The method includes selecting an image included in an image library for which an image index is to be generated, determining at least one target region included in the image, extracting visual features from the determined at least one target region, determining a similarity value between the selected image and another image included in the image library based on the extracted visual features, determining the image categories to which the images belong based on the determined similarity values among the images, and assigning category identifiers to the images in accordance with an identifier assignment method that assigns the same category identifiers to images belonging to the same image category, and different category identifiers to images belonging to different image categories.08-01-2013
20130195362IMAGE-BASED GEOREFERENCING - An image-based georeferencing system comprises an image receiver, an image identification processor, a reference feature determiner, and a feature locator. The image receiver is configured for receiving a first image for use in georeferencing. The image comprises digital image information. The system includes a communicative coupling to a georeferenced images database of images. The image identification processor is configured for identifying a second image from the georeferenced images database that correlates to the first image. The system includes a communicative coupling to a geographic location information system. The reference feature determiner is configured for determining a reference feature common to both the second image and the first image. The feature locator is configured for accessing the geographic information system to identify and obtain geographic location information related to the common reference feature.08-01-2013
20130195364SITUATION DETERMINING APPARATUS, SITUATION DETERMINING METHOD, SITUATION DETERMINING PROGRAM, ABNORMALITY DETERMINING APPARATUS, ABNORMALITY DETERMINING METHOD, ABNORMALITY DETERMINING PROGRAM, AND CONGESTION ESTIMATING APPARATUS - A congestion estimating apparatus includes an area dividing unit that divides a moving image into partial areas. A movement information determining unit determines whether there is movement, and a person information determining unit determines whether there is a person, in each partial area. A staying determining unit determines a state for each partial area: it determines the state as a movement area, in which there is a moving person, when there is movement and there is a person; as a noise area when there is movement and there is no person; as a staying area, in which there is a person who is staying, when there is no movement and there is a person; and as a background area, in which there is no person, when there is no movement and there is no person.08-01-2013
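The four-way determination above is a simple decision table; a direct sketch follows (the state names are ours):

```python
def classify_area(has_movement: bool, has_person: bool) -> str:
    """Classify one partial area from its movement and person determinations."""
    if has_movement and has_person:
        return "movement_area"     # a person is moving through the area
    if has_movement and not has_person:
        return "noise_area"
    if not has_movement and has_person:
        return "staying_area"      # a person is present but staying still
    return "background_area"       # no movement and no person
```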
20130202210METHOD FOR HUMAN ACTIVITY PREDICTION FROM STREAMING VIDEOS - A method for human activity prediction from streaming videos includes extracting space-time local features from video streams containing video information related to human activities; and clustering the extracted space-time local features into multiple visual words based on the appearance of the features. Further, the method for the human activity prediction includes computing an activity likelihood value by modeling each activity as an integral histogram of the visual words; and predicting the human activity based on the computed activity likelihood value.08-08-2013
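An illustrative sketch only: space-time features are quantized into visual words with k-means, each activity is modeled as a normalized word histogram, and a partially observed video is scored against each model by histogram intersection, which is our simplification of the integral-histogram likelihood.

```python
import numpy as np
from sklearn.cluster import KMeans

def build_vocabulary(all_features, n_words=100):
    """all_features: list of (num_points, feat_dim) arrays from training videos."""
    return KMeans(n_clusters=n_words, n_init=10).fit(np.vstack(all_features))

def word_histogram(features, vocab):
    """Normalized histogram of visual words for one (possibly partial) video."""
    words = vocab.predict(features)
    hist = np.bincount(words, minlength=vocab.n_clusters).astype(float)
    return hist / max(hist.sum(), 1.0)

def predict_activity(partial_features, vocab, activity_histograms):
    """activity_histograms: {activity_name: normalized histogram}; returns best match and scores."""
    h = word_histogram(partial_features, vocab)
    scores = {name: np.minimum(h, model).sum()            # histogram intersection
              for name, model in activity_histograms.items()}
    return max(scores, key=scores.get), scores
```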
