Patent application number | Description | Published |
--- | --- | --- |
20080231709 | SYSTEM AND METHOD FOR MANAGING THE INTERACTION OF OBJECT DETECTION AND TRACKING SYSTEMS IN VIDEO SURVEILLANCE - A system, method and program product for providing a video surveillance system that enhances object detection by utilizing feedback from a tracking system to an object detection system. A system is provided that includes: a moving object detection system for detecting moving objects in a video input; an object tracking system for tracking a detected moving object in successive time instants; and a tracker feedback system for feeding tracking information from the object tracking system to the moving object detection system to enhance object detection. | 09-25-2008 |
20080232685 | CATEGORIZING MOVING OBJECTS INTO FAMILIAR COLORS IN VIDEO - An improved solution for categorizing moving objects into familiar colors in video is provided. In an embodiment of the invention, a method for categorizing moving objects into familiar colors in video comprises: receiving a video input; determining at least one object track of the video input; creating a normalized cumulative histogram of the at least one object track; and one of: performing a parameterization quantization of the histogram including separating the histogram into regions based on at least one surface curve derived from one of saturation and intensity; or identifying a significant color of the quantized histogram. | 09-25-2008 |
20080252448 | SYSTEM AND METHOD FOR EVENT DETECTION UTILIZING SENSOR BASED SURVEILLANCE - The present invention includes a method, system, and program product for detecting an event that includes receiving at least one data input stream from one or more sensors, selecting a data input stream from one of the one or more sensors, recording the data input stream on a recordable medium, specifying a rule comprising an event in the data input stream, and detecting at least one event in the data input stream based upon the rule. | 10-16-2008 |
20090033746 | AUTOMATIC ADJUSTMENT OF AREA MONITORING BASED ON CAMERA MOTION - A solution for monitoring an area while accounting for camera motion and/or monitoring tasks is provided. For example, a physical area corresponding to a new field of view can be estimated for a camera for which motion is detected. The physical area can be estimated using a set of reference images previously captured by the camera, each of which comprises a unique field of view previously captured by the camera. Based on the physical area, a status for a monitoring task of the camera (e.g., an alert) can be updated and/or a location of an area for the monitoring task within an image captured by the camera can be updated. Further, based on the update(s), a field of view for a second camera can be automatically adjusted and/or a status for the monitoring task on the second camera can be automatically updated. | 02-05-2009 |
20100013656 | AREA MONITORING USING PROTOTYPICAL TRACKS - A solution for monitoring an area includes using a region schema for the area. The region schema can include a set of prototypical tracks, each of which includes a start location, an end location, and a trajectory. The trajectory comprises an expected path an object will travel between the start location and the end location and can include variation information that defines an amount that an object can vary from the trajectory. The region schema can be generated by obtaining training object tracking data for the area for an initialization time period and evaluating the object tracking data to identify the set of prototypical tracks. While monitoring the area, monitored object tracking data is obtained for a monitored object in the area, and abnormal behavior of the monitored object is identified when the monitored object tracking data for the monitored object does not follow at least one of the set of prototypical tracks in the region schema. | 01-21-2010 |
20100106707 | INDEXING AND SEARCHING ACCORDING TO ATTRIBUTES OF A PERSON - An approach that indexes and searches according to a set of attributes of a person is provided. In one embodiment, there is an extensible indexing and search tool, including an extraction component configured to extract a set of attributes of a person monitored by a set of sensors in a zone of interest. An index component is configured to index each of the set of attributes of the person within an index of an extensible indexing and search tool. A search component is configured to enable a search of the index of the extensible indexing and search tool according to at least one of the set of attributes of the person. | 04-29-2010 |
20120026335 | Attribute-Based Person Tracking Across Multiple Cameras - Techniques for tracking an individual across two or more cameras are provided. The techniques include detecting an image of one or more individuals in each of two or more cameras, tracking each of the one or more individuals in a field of view in each of the two or more cameras, applying a set of one or more attribute detectors to each of the one or more individuals being tracked by the two or more cameras, and using the set of one or more attribute detectors to match an individual tracked in one of the two or more cameras with an individual tracked in one or more other cameras of the two or more cameras. | 02-02-2012 |
20120027249 | Multispectral Detection of Personal Attributes for Video Surveillance - Techniques for detecting an attribute in video surveillance include generating training sets of multispectral images, generating a group of multispectral box features comprising receiving input of a detector size of a width and height, a number of spectral bands in the multispectral images, and integer values representing a minimum and maximum width and height of multispectral box features, fixing a feature width and height, generating feature building blocks with the fixed width and height, placing a feature building block at a same location for each spectral band level, and enumerating combinations of the feature building blocks through each spectral level until all sizes within the integer values have been covered, and wherein each combination determines a multispectral box feature, using the training sets to select multispectral box features to generate a multispectral attribute detector, and using the multispectral attribute detector to identify a location of an attribute in video surveillance. | 02-02-2012 |
20120030208 | Facilitating People Search in Video Surveillance - Techniques for facilitating a video surveillance search of a person are provided. The techniques include maintaining a database of one or more attributes of one or more people captured on one or more video cameras, indexing the one or more attributes in the database extracted from the one or more video cameras, and pruning one or more images captured from the one or more video cameras using the one or more attributes and one or more items of qualifying information to facilitate a video surveillance search of a person. | 02-02-2012 |
20120170805 | OBJECT DETECTION IN CROWDED SCENES - Methods and systems are provided for object detection. A method includes automatically collecting a set of training data images from a plurality of images. The method further includes generating occluded images. The method also includes storing in a memory the generated occluded images as part of the set of training data images, and training an object detector using the set of training data images stored in the memory. The method additionally includes detecting an object using the object detector, the object detector detecting the object based on the set of training data images stored in the memory. | 07-05-2012 |
20120274805 | Color Correction for Static Cameras - Methods and apparatus are provided for color correction of images. One or more colors in an image obtained from a static video camera are corrected by obtaining one or more historical background models from one or more prior images obtained from the static video camera; obtaining a live background model and a live foreground model from one or more current images obtained from the static video camera; generating a reference image from the one or more historical background models; and processing the reference image, the live background model, and the live foreground model to generate a set of color corrected foreground objects in the image. The set of color corrected foreground objects is optionally processed to classify a color of at least one of the foreground objects. | 11-01-2012 |
20120281873 | INCORPORATING VIDEO META-DATA IN 3D MODELS - A moving object detected and tracked within a field of view environment of a 2D data feed of a calibrated video camera is represented by a 3D model through localizing a centroid of the object and determining an intersection with a ground-plane within the field of view environment. An appropriate 3D mesh-based volumetric model for the object is initialized by using a back-projection of a corresponding 2D image as a function of the centroid and the determined ground-plane intersection. Nonlinear dynamics of a tracked motion path of the object are represented as a collection of different local linear models. A texture of the object is projected onto the 3D model, and 2D tracks of the object are upgraded to 3D motion to drive the 3D model by learning a weighted combination of the different local linear models that minimizes an image re-projection error of model movement. | 11-08-2012 |
20130028468 | Example-Based Object Retrieval for Video Surveillance - Methods and apparatus are provided for example-based object retrieval that can retrieve objects from video images in real-time. An object of interest is identified in a sequence of images by obtaining an identification from a user of an example object having at least one attribute of interest; generating a query object based on the identified example object, wherein the query object has a substantially similar viewpoint as objects in the sequence of images and wherein the query object comprises a plurality of attributes that are substantially similar as the example object; and processing the sequence of images to identify the object of interest based on a similarity metric to the query object. | 01-31-2013 |
20130108102 | Abandoned Object Recognition Using Pedestrian Detection | 05-02-2013 |
20130241928 | INCORPORATING VIDEO META-DATA IN 3D MODELS - A moving object detected and tracked within a field of view environment of a two-dimensional data feed of a calibrated video camera is represented by a three-dimensional model through localizing a centroid of the object and determining an intersection with a ground-plane within the field of view environment. An appropriate three-dimensional mesh-based volumetric model for the object is initialized by using a back-projection of a corresponding two-dimensional image as a function of the centroid and the determined ground-plane intersection. A texture of the object is projected onto the three-dimensional model, and two-dimensional tracks of the object are upgraded to three-dimensional motion to drive a three-dimensional model. | 09-19-2013 |
20130336534 | MULTI-MODE VIDEO EVENT INDEXING - Multi-mode video event indexing includes determining a quality of object distinctiveness with respect to images from a video stream input. A high-quality analytic mode is selected from multiple modes and applied to video input images via a hardware device to determine object activity within the video input images if the determined level of detected quality of object distinctiveness meets a threshold level of quality, else a low-quality analytic mode is selected and applied to the video input images via a hardware device to determine object activity within the video input images, wherein the low-quality analytic mode is different from the high-quality analytic mode. | 12-19-2013 |
20130336535 | MULTI-MODE VIDEO EVENT INDEXING - Multi-mode video event indexing includes determining a quality of object distinctiveness with respect to images from a video stream input. A high-quality analytic mode is selected from multiple modes and applied to video input images via a hardware device to determine object activity within the video input images if the determined level of detected quality of object distinctiveness meets a threshold level of quality, else a low-quality analytic mode is selected and applied to the video input images via a hardware device to determine object activity within the video input images, wherein the low-quality analytic mode is different from the high-quality analytic mode. | 12-19-2013 |
20140050356 | MULTI-MODE VIDEO EVENT INDEXING - Multi-mode video event indexing includes determining a quality of object distinctiveness with respect to images from a video stream input. A high-quality analytic mode is selected from multiple modes and applied to video input images via a hardware device to determine object activity within the video input images if the determined level of detected quality of object distinctiveness meets a threshold level of quality, else a low-quality analytic mode is selected and applied to the video input images via a hardware device to determine object activity within the video input images, wherein the low-quality analytic mode is different from the high-quality analytic mode. | 02-20-2014 |
20140056476 | INCORPORATING VIDEO META-DATA IN 3D MODELS - A moving object tracked within a field of view environment of a two-dimensional data feed of a calibrated video camera is represented by a three-dimensional model. An appropriate three-dimensional mesh-based volumetric model for the object is initialized by using a back-projection of a corresponding two-dimensional image. A texture of the object is projected onto the three-dimensional model, and two-dimensional tracks of the object are upgraded to three-dimensional motion to drive a three-dimensional model. | 02-27-2014 |
20140098221 | APPEARANCE MODELING FOR OBJECT RE-IDENTIFICATION USING WEIGHTED BRIGHTNESS TRANSFER FUNCTIONS - An approach for re-identifying an object in a first test image is presented. Brightness transfer functions (BTFs) between respective pairs of training images are determined. Respective similarity measures are determined between the first test image and each of the training images captured by the first camera (first training images). A weighted brightness transfer function (WBTF) is determined by combining the BTFs weighted by weights of the first training images. The weights are based on the similarity measures. The first test image is transformed by the WBTF to better match one of the training images captured by the second camera. Another test image, captured by the second camera, is identified because it is closer in appearance to the transformed test image than other test images captured by the second camera. An object in the identified test image is a re-identification of the object in the first test image. | 04-10-2014 |
20140147041 | IMAGE COLOR CORRECTION - Color-correcting a digital image comprising P pixels (P ≥ 4) is presented. Each of the P pixels has a respective color. Color strengths of the P pixels are determined based at least on respective intensities, respective saturations, or both respective intensities and respective saturations of the P pixels. A subset of the P pixels, less than all of the P pixels, is determined. The pixels in the subset have respective color strengths in a range of respective color strengths. All other pixels of the P pixels have respective color strengths outside of the range of respective color strengths. Color correction is determined for the P pixels based in part on the colors of the respective pixels in the subset, which are the only pixels of the P pixels used for determining the color correction. The colors of the P pixels are corrected based on the color correction. | 05-29-2014 |
20140253732 | TOPOLOGY DETERMINATION FOR NON-OVERLAPPING CAMERA NETWORK - Image-matching tracks the movements of objects from initial camera scenes to ending camera scenes in non-overlapping cameras. Paths are defined through scenes for pairings of initial and ending cameras by different respective scene entry and exit points. For each of said camera pairings, a combination having the highest total number of tracked movements relative to all other combinations of one path through the initial and ending camera scenes is chosen, and the scene exit point of the selected path through the initial camera and the scene entry point of the selected path into the ending camera define a path connection of the initial camera scene to the ending camera scene. | 09-11-2014 |
20140314277 | INCORPORATING VIDEO META-DATA IN 3D MODELS - A moving object tracked within a field of view environment of a two-dimensional data feed of a calibrated video camera is represented by a three-dimensional model. An appropriate three-dimensional mesh-based volumetric model for the object is initialized by using a back-projection of a corresponding two-dimensional image. A texture of the object is projected onto the three-dimensional model, and two-dimensional tracks of the object are upgraded to three-dimensional motion to drive a three-dimensional model. | 10-23-2014 |
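
Several of the filings above (e.g. 20080232685 and 20120030208) rely on color histograms accumulated over an object track. The sketch below is a minimal, hedged illustration of that general idea only: a normalized cumulative hue histogram for a track, plus a coarse dominant-color lookup. The bin count, the six color names, and the plain hue-bucket quantization are illustrative assumptions, not the parameterization quantization claimed in any application.

```python
# Illustrative sketch only: normalized cumulative color histogram over an
# object track, in the spirit of application 20080232685. Bin/bucket choices
# here are assumptions for demonstration, not the patented method.

def normalized_cumulative_histogram(track_pixels, bins=16):
    """Accumulate per-frame hue histograms across a track, then normalize.

    track_pixels: list of frames; each frame is a list of hue values in [0, 1).
    Returns a list of `bins` floats summing to 1.0 (all zeros for an empty track).
    """
    hist = [0] * bins
    for frame in track_pixels:
        for hue in frame:
            # Clamp hue == 1.0 (or rounding spillover) into the last bin.
            hist[min(int(hue * bins), bins - 1)] += 1
    total = sum(hist)
    if total == 0:
        return [0.0] * bins
    return [count / total for count in hist]


def significant_color(hist, names=("red", "yellow", "green", "cyan", "blue", "magenta")):
    """Pick the dominant coarse color by folding fine hue bins into named buckets.

    Assumption: bins are folded evenly; trailing bins that do not divide
    evenly into len(names) buckets are ignored.
    """
    per_name = len(hist) // len(names)
    sums = [sum(hist[i * per_name:(i + 1) * per_name]) for i in range(len(names))]
    return names[max(range(len(names)), key=sums.__getitem__)]
```

For example, a track whose pixels are mostly around hue 0.6 (blue in a standard hue wheel) yields a histogram concentrated in the blue buckets, so `significant_color` returns `"blue"`.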