Patent application number | Description | Published |
20080219504 | AUTOMATIC MEASUREMENT OF ADVERTISING EFFECTIVENESS - An automated system for measuring information about a target image in a video is described. One embodiment includes receiving a set of one or more video images for the video, automatically finding the target image in at least a subset of the video images, determining one or more statistics regarding the presence of the target image in the video, and reporting the one or more statistics. | 09-11-2008 |
20090027494 | PROVIDING GRAPHICS IN IMAGES DEPICTING AERODYNAMIC FLOWS AND FORCES - A video broadcast of a live event is enhanced by providing graphics in the video in real time to depict the fluid flow around a moving object in the event and to provide other informative graphics regarding aerodynamic forces on the object. A detailed flow field around the object is calculated offline, before the event, for different speeds of the object and different locations of other nearby objects. The fluid flow data is represented by baseline data and modification factors or adjustments which are based on the speed of the object and the locations of the other objects. During the event, the modification factors are applied to the baseline data to determine fluid flow in real time, as the event is captured on video. In an example implementation, the objects are race cars which transmit their location and/or speed to a processing facility which provides the video. | 01-29-2009 |
20090027500 | DETECTING AN OBJECT IN AN IMAGE USING TEMPLATES INDEXED TO LOCATION OR CAMERA SENSORS - An object is detected in images of a live event by storing and indexing templates based on representations of the object from previous images. For example, the object may be a vehicle which repeatedly traverses a course. A first set of images of the live event is captured when the object is at different locations in the live event. A representation of the object in each image is obtained, such as by image recognition techniques, and a corresponding template is stored. When the object again traverses the course, for each location, the stored template which is indexed to the location can be retrieved for use in detecting the object in a current image. The object's current location may be obtained from GPS data from the object, for instance, or from camera sensor data, e.g., pan, tilt and zoom, which indicates a direction in which the camera is pointed. | 01-29-2009 |
20090028385 | DETECTING AN OBJECT IN AN IMAGE USING EDGE DETECTION AND MORPHOLOGICAL PROCESSING - A representation of an object in a live event is detected in an image of the event. A location of the object in the live event is translated to an estimated location in the image based on camera sensor and/or registration data. A search area is determined around the estimated location in the image. A direction of motion of the object in the image is also determined. A representation of the object is identified in the search area by detecting edges of the object, e.g., edges perpendicular and parallel to the direction of motion, performing morphological processing, and matching against a model or other template of the object. Based on the position of the representation of the object, the camera sensor and/or registration data can be updated, and a graphic can be located in the image substantially in real time. | 01-29-2009 |
20090028425 | IDENTIFYING AN OBJECT IN AN IMAGE USING COLOR PROFILES - A representation of an object in an image of a live event is obtained by determining a color profile of the object. The color profile may be determined from the image in real time and compared to stored color profiles to determine a best match. For example, the color profile of the representation of the object can be obtained by classifying color data of the representation of the object into different bins of a color space, forming a histogram of color data. The stored color profiles may be indexed to object identifiers, object viewpoints, or object orientations. Color data which is common to different objects or to a background color may be excluded. Further, a template can be used as an additional aid in identifying the representation of the object. The template can include, e.g., a model of the object or pixel data of the object from a prior image. | 01-29-2009 |
20090028439 | PROVIDING VIRTUAL INSERTS USING IMAGE TRACKING WITH CAMERA AND POSITION SENSORS - Camera registration and/or sensor data is updated during a live event by determining a difference between an estimated position of an object in an image and an actual position of the object in the image. The estimated position of the object in the image can be based on an estimated position of the object in the live event, e.g., based on GPS or other location data. This position is transformed to the image space using current camera registration and/or sensor data. The actual position of the object in the image can be determined by template matching which accounts for an orientation of the object, a shape of the object, an estimated size of the representation of the object in the image, and the estimated position of the object in the image. The updated camera registration/sensor data can be used in detecting an object in a subsequent image. | 01-29-2009 |
20090028440 | DETECTING AN OBJECT IN AN IMAGE USING MULTIPLE TEMPLATES - A representation of an object in an image of a live event is detected by matching potential representations of the object against multiple types of templates. For example, the templates can include monochrome data, chrominance and/or luminance data, pixel data of the object from an earlier image, e.g., as a video template, an edge and morphology based template, a model of the object, or a predetermined static texture which is based on an appearance of the object. A weighting function may also be used. In one possible approach, a first type of template is used in an initial search area, and a second type of template is used in a smaller region of the initial search area. Based on a position of the optimum representation of the object in the image, a graphic can be provided in the image, or sensor and/or registration data of a camera can be updated. | 01-29-2009 |
20090128580 | Telestrator System - A telestrator system is disclosed that allows a broadcaster to annotate video during or after an event. For example, while televising a sporting event, an announcer (or other user) can use the present invention to draw over the video of the event to highlight one or more actions, features, etc. In one embodiment, when the announcer draws over the video, it appears that the announcer is drawing on the field or location of the event. Such an appearance can be achieved by mapping the pixel locations of the user's drawing to three-dimensional locations at the event. Other embodiments include drawing on the video without obscuring persons and/or other specified objects, and/or smoothing the drawings in real time. | 05-21-2009 |
20100238163 | Telestrator System - A telestrator system is disclosed that allows a broadcaster to annotate video during or after an event. For example, while televising a sporting event, an announcer (or other user) can use the present invention to draw over the video of the event to highlight one or more actions, features, etc. In one embodiment, when the announcer draws over the video, it appears that the announcer is drawing on the field or location of the event. Such an appearance can be achieved by mapping the pixel locations of the user's drawing to three-dimensional locations at the event. Other embodiments include drawing on the video without obscuring persons and/or other specified objects, and/or smoothing the drawings in real time. | 09-23-2010 |
20110205022 | TRACKING SYSTEM - A system simultaneously tracks multiple objects. All or a subset of the objects includes a wireless receiver and a transmitter for providing an output. The system includes one or more wireless transmitters that send commands to the wireless receivers of the multiple objects instructing different subsets of the multiple objects to output (via their respective transmitters) at different times. The system also includes object sensors that receive output from the transmitters of the multiple objects and a computer system in communication with the object sensors. The computer system calculates locations of the multiple objects based on the sensed output from the multiple objects. In some embodiments, the system can also track an item based on the proximity of that item to one or more of the objects. In such embodiments, each of the multiple objects includes one or more local sensors. The local sensors detect the presence of the item, and the objects' transmitters communicate the presence of the item based on their respective local sensors. The computer system identifies a location of the item based on communications from one or more of the objects indicating the presence of the item and the calculated locations of the objects detecting the item. | 08-25-2011 |
20110205077 | TRACKING SYSTEM USING PROXIMITY AND/OR PRESENCE - A system simultaneously tracks multiple objects. All or a subset of the objects includes a wireless receiver and a transmitter for providing an output. The system includes one or more wireless transmitters that send commands to the wireless receivers of the multiple objects instructing different subsets of the multiple objects to output (via their respective transmitters) at different times. The system also includes object sensors that receive output from the transmitters of the multiple objects and a computer system in communication with the object sensors. The computer system calculates locations of the multiple objects based on the sensed output from the multiple objects. In some embodiments, the system can also track an item based on the proximity of that item to one or more of the objects. In such embodiments, each of the multiple objects includes one or more local sensors. The local sensors detect the presence of the item, and the objects' transmitters communicate the presence of the item based on their respective local sensors. The computer system identifies a location of the item based on communications from one or more of the objects indicating the presence of the item and the calculated locations of the objects detecting the item. | 08-25-2011 |
20110218065 | BALL - A ball includes a bladder and a cover disposed over the bladder. The bladder is typically filled with air. The cover may be made of leather, rubber, or other material. One or more electrical components are positioned inside the ball. | 09-08-2011 |
20120233105 | SIMULATION SYSTEM - A simulation system is proposed that makes use of historical and live data sensed for one or more objects (e.g., people, cars, balls, rackets, etc.). An event will include one or more decision points. A choice of an action to take at a decision point is made. That chosen action will be simulated based on the historical and live data. The simulation can be compared to the actual action taken in the event as a way to judge the choice. Although the choice of action to take at the decision point is simulated, the real event is not altered by the choice. | 09-13-2012 |
20130033598 | SYSTEM FOR ENHANCING VIDEO FROM A MOBILE CAMERA - A system for enhancing video can insert graphics, in perspective, into the video where the positions of the graphics are dependent on sensor measurements of the locations and/or attitudes of objects and the movable camera capturing the video. | 02-07-2013 |
20150021481 | TRACKING SYSTEM - A system simultaneously tracks multiple objects. All or a subset of the objects includes a wireless receiver and a transmitter for providing an output. The system includes one or more wireless transmitters that send commands to the wireless receivers of the multiple objects instructing different subsets of the multiple objects to output (via their respective transmitter) at different times. The system also includes object sensors that receive output from the transmitters of the multiple objects and a computer system in communication with the object sensors. The computer system calculates locations of the multiple objects based on the sensed output from the multiple objects. | 01-22-2015 |
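The color-profile identification described in application 20090028425 (classifying an object's color data into bins of a color space and comparing the resulting histogram against stored profiles) can be illustrated with a short sketch. This is not the patented implementation; all function names, the RGB binning scheme, and the L1 distance metric are assumptions chosen for clarity.

```python
# Illustrative sketch of color-profile matching: build a coarse RGB
# histogram ("color profile") for an object and find the stored profile
# with the smallest L1 distance. Names and parameters are hypothetical.

def color_profile(pixels, bins_per_channel=4):
    """Classify RGB pixels into coarse color-space bins and return a
    normalized histogram (the object's color profile)."""
    step = 256 // bins_per_channel
    hist = [0] * (bins_per_channel ** 3)
    for r, g, b in pixels:
        idx = ((r // step) * bins_per_channel + (g // step)) * bins_per_channel + (b // step)
        hist[idx] += 1
    total = float(len(pixels)) or 1.0
    return [h / total for h in hist]

def best_match(profile, stored_profiles):
    """Return the object identifier whose stored profile is closest to the
    observed profile under L1 (sum of absolute differences) distance."""
    def l1(a, b):
        return sum(abs(x - y) for x, y in zip(a, b))
    return min(stored_profiles, key=lambda oid: l1(profile, stored_profiles[oid]))
```

In practice the abstract notes that colors shared with other objects or with the background may be excluded before binning, which this sketch omits.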
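Application 20090027500's idea of indexing templates to course locations, so that a template captured on one traversal can be retrieved on the next, amounts to a location-keyed lookup table. The sketch below shows one minimal way to realize that; the class name, the grid quantization of locations, and the cell size are all invented for illustration.

```python
# Hypothetical sketch of a location-indexed template store: locations are
# quantized to grid cells, and the template stored for a cell on an earlier
# lap is retrieved when the object (per GPS or camera pan/tilt/zoom data)
# is estimated to be near that location again.

class TemplateIndex:
    def __init__(self, cell_size=10.0):
        self.cell_size = cell_size   # grid resolution in course units
        self.templates = {}          # (cell_x, cell_y) -> template

    def _key(self, x, y):
        # Quantize a course location to a grid cell.
        return (round(x / self.cell_size), round(y / self.cell_size))

    def store(self, x, y, template):
        # Save the object's appearance (e.g., pixel data) at this location.
        self.templates[self._key(x, y)] = template

    def lookup(self, x, y):
        # On a later traversal, fetch the template indexed to this location,
        # or None if the object has not been seen near here before.
        return self.templates.get(self._key(x, y))
```

Nearby locations quantize to the same cell, so a lookup at (98, 51) retrieves a template stored at (103, 48) with a 10-unit cell size.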
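Application 20090027494 precomputes flow fields offline for a few object speeds and then adjusts baseline data in real time. A simple stand-in for its "modification factors" is linear interpolation between the two precomputed fields that bracket the current speed, as sketched below; the function name, data layout, and the choice of plain linear blending are assumptions, not the patent's method.

```python
# Sketch of deriving a real-time flow field from offline baselines: fields
# are precomputed at reference speeds, and the field for the current speed
# is a linear blend of the two bracketing baselines (a simplified stand-in
# for the patent's modification factors).

def interpolate_field(baselines, speed):
    """baselines: list of (speed, field) pairs sorted by speed, where field
    is a list of scalar flow values sampled on a fixed grid. Returns the
    field estimated for the given speed, clamped at the extremes."""
    if speed <= baselines[0][0]:
        return list(baselines[0][1])
    if speed >= baselines[-1][0]:
        return list(baselines[-1][1])
    for (s0, f0), (s1, f1) in zip(baselines, baselines[1:]):
        if s0 <= speed <= s1:
            t = (speed - s0) / (s1 - s0)   # blend weight in [0, 1]
            return [a + t * (b - a) for a, b in zip(f0, f1)]
```

At event time, per the abstract, the race car's transmitted speed would select and blend baselines each frame; halfway between the 100 and 200 speed baselines, each grid value is the midpoint of the two precomputed values.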