Patent application number | Description | Published |
--- | --- | --- |
20090087024 | CONTEXT PROCESSOR FOR VIDEO ANALYSIS SYSTEM - Embodiments of the present invention provide a method and a system for mapping a scene depicted in an acquired stream of video frames that may be used by a machine-learning behavior-recognition system. A background image of the scene is segmented into a plurality of regions representing various objects in the background image. Statistically similar regions may be merged and associated. The regions are analyzed to determine their z-depth order relative to the video capture device providing the stream of video frames and to the other regions, using occlusions between the regions and data about foreground objects in the scene. An annotated map describing the identified regions and their properties is created and updated. | 04-02-2009 |
20090087085 | TRACKER COMPONENT FOR BEHAVIORAL RECOGNITION SYSTEM - A tracker component for a computer vision engine of a machine-learning based behavior-recognition system is disclosed. The behavior-recognition system may be configured to learn, identify, and recognize patterns of behavior by observing a video stream (i.e., a sequence of individual video frames). The tracker component may be configured to track objects depicted in the sequence of video frames and to generate, search, match, and update computational models of such objects. | 04-02-2009 |
20090087086 | IDENTIFYING STALE BACKGROUND PIXELS IN A VIDEO ANALYSIS SYSTEM - Embodiments of the present invention provide a method and a module for identifying the background of a scene depicted in an acquired stream of video frames that may be used by a video-analysis system. For each pixel or block of pixels in an acquired video frame, a comparison measure is determined. The comparison measure depends on the difference between the color values exhibited in the acquired video frame by the pixel or block of pixels and those exhibited by the corresponding pixel or block of pixels in the background image. To determine the comparison measure, the resulting difference is considered in relation to the range of possible color values. If the comparison measure is above a dynamically adjusted threshold, the pixel or block of pixels is classified as part of the background of the scene. | 04-02-2009 |
20090087093 | DARK SCENE COMPENSATION IN A BACKGROUND-FOREGROUND MODULE OF A VIDEO ANALYSIS SYSTEM - Embodiments of the present invention provide a method and a module for identifying the background of a scene depicted in an acquired stream of video frames that may be used by a video-analysis system. For each pixel or block of pixels in an acquired video frame, a comparison measure is determined. The comparison measure depends on the difference between the color values exhibited in the acquired video frame by the pixel or block of pixels and those exhibited by the corresponding pixel or block of pixels in the background image. To determine the comparison measure, the resulting difference is considered in relation to the range of possible color values. If the comparison measure is above a dynamically adjusted threshold, the pixel or block of pixels is classified as part of the background of the scene. | 04-02-2009 |
20090087096 | BACKGROUND-FOREGROUND MODULE FOR VIDEO ANALYSIS SYSTEM - Embodiments of the present invention provide a method and a module for identifying the background of a scene depicted in an acquired stream of video frames that may be used by a video-analysis system. For each pixel or block of pixels in an acquired video frame, a comparison measure is determined. The comparison measure depends on the difference between the color values exhibited in the acquired video frame by the pixel or block of pixels and those exhibited by the corresponding pixel or block of pixels in the background image. To determine the comparison measure, the resulting difference is considered in relation to the range of possible color values. If the comparison measure is above a dynamically adjusted threshold, the pixel or block of pixels is classified as part of the background of the scene. | 04-02-2009 |
20100208986 | ADAPTIVE UPDATE OF BACKGROUND PIXEL THRESHOLDS USING SUDDEN ILLUMINATION CHANGE DETECTION - Techniques are disclosed for a computer vision engine to update both a background model and the thresholds used to classify pixels as depicting scene foreground or background in response to detecting that a sudden illumination change has occurred in a sequence of video frames. The threshold values may specify how much a given pixel may differ from the corresponding values in the background model before being classified as depicting foreground. When a sudden illumination change is detected, the values of the affected pixels may be used both to update the background image to reflect each pixel's value following the sudden illumination change and to update the threshold for classifying that pixel as depicting foreground or background in subsequent frames of video. | 08-19-2010 |
20110043536 | VISUALIZING AND UPDATING SEQUENCES AND SEGMENTS IN A VIDEO SURVEILLANCE SYSTEM - Techniques are disclosed for visually conveying a sequence storing an ordered string of symbols generated from kinematic data derived by analyzing an input stream of video frames depicting one or more foreground objects. The sequence may represent information learned by a video surveillance system. A request may be received to view the sequence or a segment partitioned from the sequence. A visual representation of the segment may be generated and superimposed over a background image associated with the scene. A user interface may be configured to display the visual representation of the sequence or segment and to allow a user to view and/or modify properties associated with the sequence or segment. | 02-24-2011 |
20110043625 | SCENE PRESET IDENTIFICATION USING QUADTREE DECOMPOSITION ANALYSIS - Techniques are disclosed for matching the current background scene of an image received by a surveillance system against a gallery of scene presets that each represent a previously captured background scene. A quadtree decomposition analysis is used to improve the robustness of the matching operation when the scene lighting changes (including portions containing over-saturation/under-saturation) or a portion of the content changes. The current background scene is processed to generate a quadtree decomposition comprising a plurality of window portions. Each of the window portions is processed to generate a plurality of phase spectra. The phase spectra are then projected onto a corresponding plurality of scene preset image matrices of one or more scene presets. When a match between the current background scene and one of the scene presets is identified, the matched scene preset is updated; otherwise, a new scene preset is created based on the current background scene. | 02-24-2011 |
20110043689 | FIELD-OF-VIEW CHANGE DETECTION - Techniques are disclosed for detecting a field-of-view change for a video feed. These techniques differentiate between a new or changed scene and a temporary variation in the scene to accurately detect field-of-view changes for the video feed. A field-of-view change is detected when the position of a camera providing the video feed changes, the video feed is switched to a different camera, the video feed is disconnected, or the camera providing the video feed is obscured. A false-positive field-of-view change is not detected when the scene changes due to a sudden variation in illumination, obstruction of a portion of the camera providing the video feed, blurred images due to an out-of-focus camera, or a transition between bright and dark light when the video feed transitions between color and near infrared capture modes. | 02-24-2011 |
20110044492 | ADAPTIVE VOTING EXPERTS FOR INCREMENTAL SEGMENTATION OF SEQUENCES WITH PREDICTION IN A VIDEO SURVEILLANCE SYSTEM - A sequence layer in a machine-learning engine configured to learn from the observations of a computer vision engine is disclosed. In one embodiment, the machine-learning engine uses voting experts to segment adaptive resonance theory (ART) network label sequences for different objects observed in a scene. The sequence layer may be configured to observe the ART label sequences and incrementally build, update, trim, and reorganize an n-gram trie for those label sequences. The sequence layer computes entropies for the nodes in the n-gram trie and determines a sliding-window length and vote-count parameters. Once these are determined, the sequence layer may segment newly observed sequences to estimate the primitive events observed in the scene and issue alerts for inter-sequence and intra-sequence anomalies. | 02-24-2011 |
20110044498 | VISUALIZING AND UPDATING LEARNED TRAJECTORIES IN VIDEO SURVEILLANCE SYSTEMS - Techniques are disclosed for visually conveying a trajectory map. The trajectory map provides users with a visualization of data observed by a machine-learning engine of a behavior recognition system. Further, the visualization may provide an interface used to guide system behavior. For example, the interface may be used to specify that the behavior recognition system should alert (or not alert) when a particular trajectory is observed to occur. | 02-24-2011 |
20110044533 | VISUALIZING AND UPDATING LEARNED EVENT MAPS IN SURVEILLANCE SYSTEMS - Techniques are disclosed for visually conveying an event map. The event map may represent information learned by a surveillance system. A request may be received to view the event map for a specified scene. The event map may be generated, including a background model of the specified scene and at least one cluster providing a statistical distribution of an event in the specified scene. Each statistical distribution may be derived from data streams generated from a sequence of video frames depicting the specified scene captured by a video camera. Each event may be observed to occur at a location in the specified scene corresponding to a location of the respective cluster in the event map. The event map may be configured to allow a user to view and/or modify properties associated with each cluster. For example, the user may label a cluster and set events matching the cluster to always (or never) generate an alert. | 02-24-2011 |
20110050896 | VISUALIZING AND UPDATING LONG-TERM MEMORY PERCEPTS IN A VIDEO SURVEILLANCE SYSTEM - Techniques are disclosed for visually conveying a percept. The percept may represent information learned by a video surveillance system. A request may be received to view a percept for a specified scene. The percept may have been derived from data streams generated from a sequence of video frames depicting the specified scene captured by a video camera. A visual representation of the percept may be generated. A user interface may be configured to display the visual representation of the percept and to allow a user to view and/or modify metadata attributes associated with the percept. For example, the user may label a percept and set events matching the percept to always (or never) result in an alert being generated for users of the video surveillance system. | 03-03-2011 |
20110050897 | VISUALIZING AND UPDATING CLASSIFICATIONS IN A VIDEO SURVEILLANCE SYSTEM - Techniques are disclosed for visually conveying classifications derived from pixel-level micro-features extracted from image data. The image data may include an input stream of video frames depicting one or more foreground objects. The classifications represent information learned by a video surveillance system. A request may be received to view a classification. A visual representation of the classification may be generated. A user interface may be configured to display the visual representation of the classification and to allow a user to view and/or modify properties associated with the classification. | 03-03-2011 |
20120163670 | BEHAVIORAL RECOGNITION SYSTEM - Embodiments of the present invention provide a method and a system for analyzing and learning behavior based on an acquired stream of video frames. Objects depicted in the stream are identified based on an analysis of the video frames. Each object may have a corresponding search model used to track the object's motion frame to frame. Classes of the objects are determined, and semantic representations of the objects are generated. The semantic representations are used to determine the objects' behaviors and to learn about behaviors occurring in the environment depicted by the acquired video streams. In this way, the system rapidly learns, in real time, the normal and abnormal behaviors of any environment by analyzing movements, activities, or the absence of such, and identifies and predicts abnormal or suspicious behavior based on what has been learned. | 06-28-2012 |
20120257831 | CONTEXT PROCESSOR FOR VIDEO ANALYSIS SYSTEM - Embodiments of the present invention provide a method and a system for mapping a scene depicted in an acquired stream of video frames that may be used by a machine-learning behavior-recognition system. A background image of the scene is segmented into a plurality of regions representing various objects in the background image. Statistically similar regions may be merged and associated. The regions are analyzed to determine their z-depth order relative to the video capture device providing the stream of video frames and to the other regions, using occlusions between the regions and data about foreground objects in the scene. An annotated map describing the identified regions and their properties is created and updated. | 10-11-2012 |
20150078656 | VISUALIZING AND UPDATING LONG-TERM MEMORY PERCEPTS IN A VIDEO SURVEILLANCE SYSTEM - Techniques are disclosed for visually conveying a percept. The percept may represent information learned by a video surveillance system. A request may be received to view a percept for a specified scene. The percept may have been derived from data streams generated from a sequence of video frames depicting the specified scene captured by a video camera. A visual representation of the percept may be generated. A user interface may be configured to display the visual representation of the percept and to allow a user to view and/or modify metadata attributes associated with the percept. For example, the user may label a percept and set events matching the percept to always (or never) result in an alert being generated for users of the video surveillance system. | 03-19-2015 |
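Several of the abstracts above (20090087086, 20090087093, and 20090087096) describe the same background test: a per-pixel comparison measure derived from the color difference between an acquired frame and a background image, considered relative to the range of possible color values and compared against a dynamically adjusted threshold. A minimal NumPy sketch of that idea follows; the function name, the similarity form of the measure, and the moving-average threshold update are illustrative assumptions, not the patented method.

```python
import numpy as np

def classify_background(frame, background, thresholds, adapt_rate=0.05):
    """Classify each pixel of `frame` as background (True) or foreground (False).

    The comparison measure here is a per-pixel similarity: 1 minus the mean
    absolute color difference between the frame and the background image,
    normalized by the range of possible color values (0-255 for 8-bit color).
    A measure ABOVE the per-pixel, dynamically adjusted threshold classifies
    the pixel as background, matching the phrasing of the abstracts above.
    """
    color_range = 255.0
    diff = np.abs(frame.astype(np.float32) - background.astype(np.float32))
    measure = 1.0 - diff.mean(axis=-1) / color_range  # 1.0 == identical colors
    is_background = measure > thresholds
    # Illustrative dynamic adjustment: nudge each per-pixel threshold toward
    # the measure it just observed (an exponential moving average).
    thresholds = thresholds + adapt_rate * (measure - thresholds)
    return is_background, thresholds
```

In practice the threshold array would persist across frames, so pixels in noisy or slowly changing areas settle on their own tolerance rather than sharing one global cutoff.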
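Application 20110043625 matches scenes using a quadtree decomposition, which recursively subdivides an image into window portions so that uniform areas (such as over- or under-saturated regions) are represented by large windows and detailed areas by small ones. The phase-spectrum projection step is beyond a short sketch, but the decomposition itself might look like the following; the variance-based split criterion and the parameter values are assumptions for illustration.

```python
import numpy as np

def quadtree_windows(image, min_size=16, var_threshold=100.0):
    """Recursively split a grayscale image into rectangular windows.

    A window is subdivided into four quadrants while its pixel variance
    exceeds var_threshold and its sides are larger than min_size, so
    uniform regions end up as few large windows and detailed regions as
    many small ones.  Returns a list of (x, y, w, h) tuples.
    """
    windows = []

    def split(x, y, w, h):
        block = image[y:y + h, x:x + w]
        if w <= min_size or h <= min_size or block.var() <= var_threshold:
            windows.append((x, y, w, h))  # leaf window: uniform or minimal
            return
        hw, hh = w // 2, h // 2
        split(x, y, hw, hh)                      # top-left quadrant
        split(x + hw, y, w - hw, hh)             # top-right quadrant
        split(x, y + hh, hw, h - hh)             # bottom-left quadrant
        split(x + hw, y + hh, w - hw, h - hh)    # bottom-right quadrant

    split(0, 0, image.shape[1], image.shape[0])
    return windows
```

Each returned window could then be matched independently (in the patent's case, via its phase spectrum), so a lighting change confined to one window need not derail matching for the whole scene.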