Patent application number | Description | Published |
20100020068 | View Point Representation for 3-D Scenes - Techniques are described for deriving information, including graphical representations, based on perspectives of a 3D scene by utilizing sensor model representations of location points in the 3D scene. A 2D view point representation of a location point is derived based on the sensor model representation. From this information, a data representation can be determined. The 2D view point representation can be used to determine a second 2D view point representation. Other techniques include using sensor model representations of location points associated with dynamic objects in a 3D scene. These sensor model representations are generated using sensor systems having perspectives external to the location points and are used to determine a 3D model associated with a dynamic object. Data or graphical representations may be determined based on the 3D model. A system for obtaining information based on perspectives of a 3D scene includes a data manager and a renderer. | 01-28-2010 |
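The 2D view point derivation described in this abstract can be illustrated under the common assumption that the sensor model is a pinhole camera with intrinsics K and pose (R, t); this is a hedged sketch, not the patented method, and all parameter values below are hypothetical:

```python
import numpy as np

def project_point(K, R, t, X):
    """Derive a 2D view point representation of a 3D location point X
    using a pinhole sensor model (intrinsics K, rotation R, translation t)."""
    x_cam = R @ X + t              # world -> camera coordinates
    x_img = K @ x_cam              # camera -> image plane (homogeneous)
    return x_img[:2] / x_img[2]    # normalize to 2D pixel coordinates

# Hypothetical sensor model: focal length 1000 px, principal point (640, 360)
K = np.array([[1000.0, 0.0, 640.0],
              [0.0, 1000.0, 360.0],
              [0.0, 0.0, 1.0]])
R = np.eye(3)                      # camera aligned with world axes
t = np.array([0.0, 0.0, 5.0])      # the location point sits 5 m ahead

uv = project_point(K, R, t, np.array([0.0, 0.0, 0.0]))
```

A second 2D view point representation, as the abstract notes, could be derived by projecting the same location point through a second sensor model.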
20100030350 | System and Method for Analyzing Data From Athletic Events - Embodiments of this invention relate to generating information from an athletic event. In an embodiment, a method includes receiving an aspect of a first object and an aspect of a second object in an athletic event. In some cases, objects may be athletes, balls, pucks, game officials, goals, defined areas, time periods or other sports related objects. Aspects may include but are not limited to, a location, motion, pose, shape or size. The method further includes determining a data representation based on the aspect of the first object relative to the aspect of the second object. In some cases, data representations may be stored in a data server. In other cases, data representations may be displayed. In another embodiment, a system includes an object tracker and a data manager. Aspects may be recorded using a sensor system. | 02-04-2010 |
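A data representation "based on the aspect of the first object relative to the aspect of the second object" can be sketched minimally; here the aspect is assumed to be a 2D location and the representation is the Euclidean separation, with hypothetical coordinates:

```python
def relative_distance(aspect_a, aspect_b):
    """A simple data representation derived from one object's aspect
    (here, a 2D location) relative to another's: Euclidean separation."""
    dx = aspect_a[0] - aspect_b[0]
    dy = aspect_a[1] - aspect_b[1]
    return (dx * dx + dy * dy) ** 0.5

# Hypothetical locations in meters: an attacker and the nearest defender
separation = relative_distance((12.0, 30.0), (15.0, 34.0))
```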
20100050082 | Interactive Video Insertions, And Applications Thereof - Embodiments of this invention relate to controlling insertion of visual elements integrated into video. In an embodiment, a method enables control of insertions in a video. In the embodiment, control data is received from a user input device. Movement of at least one point of interest in a video is analyzed to determine video metadata. Finally, a visual element is inserted into a video according to the control data, and the visual element changes or moves with the video as specified by the video metadata to appear integrated with the video. | 02-25-2010 |
20100238351 | Scene recognition methods for virtual insertions - A method of adding a virtual insertion to an image, according to an embodiment, includes extracting dynamic features from an input image, associating the extracted dynamic features with dynamic reference features in a reference feature database, generating a camera model based on the associations, mixing a virtual insertion into the input image based on the camera model, and outputting an image containing both the input image and the virtual insertion. According to another embodiment, a method of adding a virtual insertion to an image includes generating a biased camera model using a statistically selected subset of a plurality of non-fixed regions of the image, locating fixed reference objects in the image, using the biased camera model as an entry point for a search, generating a corrected camera model using the fixed reference objects in the image, and adding a virtual insertion to the image using the corrected camera model. | 09-23-2010 |
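Generating a camera model from associations between extracted features and reference features can be sketched, assuming the simplest case where the camera model is a planar homography fit by the direct linear transform (DLT); the correspondences below are synthetic:

```python
import numpy as np

def fit_homography(src, dst):
    """Estimate a 3x3 homography (a simple planar camera model) from
    matched feature points via the direct linear transform (DLT)."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)       # null-space vector of the DLT system
    return H / H[2, 2]             # fix the projective scale

# Synthetic correspondences generated by a known transform
H_true = np.array([[1.2, 0.1, 30.0],
                   [0.0, 1.1, -20.0],
                   [0.0, 0.0, 1.0]])
src = [(0, 0), (100, 0), (100, 100), (0, 100), (50, 25)]
dst = []
for x, y in src:
    p = H_true @ np.array([x, y, 1.0])
    dst.append((p[0] / p[2], p[1] / p[2]))

H_est = fit_homography(src, dst)
```

In practice the associations would come from a reference feature database and outliers would be rejected (e.g. with RANSAC) before fitting.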
20100251287 | Backpropagating a Virtual Camera to Prevent Delayed Virtual Insertion - A method for video insertion using backpropagation may include determining a first camera model from a first frame of the sequence. The method may also include determining a transition location. The method may further include generating a transform model based on an analysis of the first frame and a second frame that occurs earlier in the video sequence and applying the transform model to the first camera model to generate a second camera model for the second frame. The method then includes inserting an insertion into one or more frames earlier in the sequence between the second frame and the transition location based on the second camera model, wherein the inserting is performed before displaying the frames. A system for video insertion using backpropagation includes search, transition, track and insertion subsystems. | 09-30-2010 |
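The core step of applying a transform model to the first camera model to obtain models for earlier frames can be sketched, under the assumption that both the camera model and the frame-to-frame transforms are 3x3 homographies; the pan values are hypothetical:

```python
import numpy as np

def backpropagate_models(first_model, backward_transforms):
    """Starting from the camera model of the frame where the search
    succeeded, chain frame-to-frame transforms to recover models for
    earlier frames. Each transform maps the later frame's pixel
    coordinates to the previous frame's."""
    models = [first_model]
    H = first_model
    for T in backward_transforms:
        H = T @ H                  # earlier model = (later -> earlier) ∘ model
        models.append(H / H[2, 2])
    return models

H_first = np.array([[1.0, 0.0, 10.0],
                    [0.0, 1.0, 5.0],
                    [0.0, 0.0, 1.0]])
# Hypothetical pan: each earlier frame sees content shifted 3 px left
T_back = np.array([[1.0, 0.0, -3.0],
                   [0.0, 1.0, 0.0],
                   [0.0, 0.0, 1.0]])
models = backpropagate_models(H_first, [T_back, T_back])
```

Backpropagation would stop at the transition location (e.g. a scene cut), and insertions for the intervening frames would use the recovered models before those frames are displayed.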
20110013087 | Play Sequence Visualization and Analysis - A method for visualizing plays in a sporting event may include receiving a video stream of the sporting event and a measurement stream, asynchronous to the video stream, associated with objects in the sporting event. The method may further include displaying a synchronized presentation of the video stream and the measurement stream. The synchronization may be performed near the time of the displaying. Another method for visualizing plays in a sporting event may include receiving measurement information related to actions from one or more sporting events. The method may also include identifying plays from the actions using the measurement information and displaying a representation of the identified plays. A system for visualizing plays in a sporting event may include an integrated server and a synchronization mechanism. Another method for visualizing plays in a sporting event may include displaying a video of a play selected from a representation. | 01-20-2011 |
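Synchronizing an asynchronous measurement stream to a video frame "near the time of the displaying" can be sketched as nearest-timestamp matching; the stream contents and frame time below are hypothetical:

```python
import bisect

def nearest_measurement(measurements, frame_time):
    """Pick the index of the measurement whose timestamp is closest to a
    video frame's display time. `measurements` is a list of
    (timestamp, value) pairs sorted by timestamp."""
    times = [t for t, _ in measurements]
    i = bisect.bisect_left(times, frame_time)
    candidates = [j for j in (i - 1, i) if 0 <= j < len(measurements)]
    return min(candidates, key=lambda j: abs(times[j] - frame_time))

stream = [(0.00, "a"), (0.04, "b"), (0.09, "c")]  # measurement timestamps (s)
idx = nearest_measurement(stream, frame_time=0.05)
```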
20110013836 | MULTIPLE-OBJECT TRACKING AND TEAM IDENTIFICATION FOR GAME STRATEGY ANALYSIS - A method for automatically tracking multiple objects from a sequence of video images that may extract raw data about participating elements in a sporting, or other event, in a way that does not interfere with the actual participating elements in the event. The raw data may include the position and velocity of the players, the referees, and the puck, as well as the team affiliation of the players. These data may be collected in real time and may include accounting for players moving fast and unpredictably, colliding with and occluding each other, and getting in and out of the playing field. The video sequence, captured by a suitable sensor, may be processed by a suitably programmed general purpose computing device. | 01-20-2011 |
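One building block of multiple-object tracking, associating existing tracks with new detections, can be sketched as greedy nearest-neighbour matching; real systems handling collisions and occlusions would use far more robust assignment, and all names and positions below are hypothetical:

```python
def associate(tracks, detections, max_dist=50.0):
    """Greedily match each track's last known position to the nearest
    unused detection within max_dist pixels; returns {track_id: index}."""
    assignments = {}
    used = set()
    for tid, (tx, ty) in tracks.items():
        best, best_d = None, max_dist
        for i, (dx, dy) in enumerate(detections):
            if i in used:
                continue
            d = ((tx - dx) ** 2 + (ty - dy) ** 2) ** 0.5
            if d < best_d:
                best, best_d = i, d
        if best is not None:
            assignments[tid] = best
            used.add(best)
    return assignments

tracks = {"player_7": (100.0, 200.0), "puck": (300.0, 50.0)}
detections = [(305.0, 52.0), (98.0, 204.0)]
matches = associate(tracks, detections)
```

Velocity estimates and team affiliation, as the abstract describes, would be derived from the resulting per-frame associations.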
20110063415 | Hyperlinked 3D Video Inserts for Interactive Television - A viewer may directly interact with a 3D object that is virtually placed in a physical location in a video scene. Initially, the object appears as an integral part of the original video scene and does not interfere with the general viewer's experience of the program. A viewer may initiate interaction with the object using an input device. An interested viewer may navigate through the object's architecture based on the viewer's interest. For example, the viewer may drag the object to a new physical insertion point in the scene. The user may rotate the 3D object into different orientations and zoom in. Each orientation of the object, if selected by the viewer, may invoke a new linked object in the predefined architecture. For example, the viewer may walk through the linked objects in the predefined architecture or observe an object at an increasing level of detail. | 03-17-2011 |
20110090344 | Object Trail-Based Analysis and Control of Video - Systems and methods for analyzing scenes from cameras imaging an event, such as a sporting event broadcast, are provided. Systems and methods include detecting and tracking patterns and trails. This may be performed with intra-frame processing and without knowledge of camera parameters. A system for analyzing a scene may include an object characterizer, a foreground detector, an object tracker, a trail updater, and a video annotator. Systems and methods may provide information regarding centers and spans of activity based on object locations and trails, which may be used to control camera field of views such as a camera pose and zoom level. A magnification may be determined for images in a video sequence based on the size of an object in the images. Measurements may be determined from object trails in a video sequence based on an effective magnification of images in the video sequence. | 04-21-2011 |
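Determining a measurement from an object trail via an effective magnification can be sketched as follows, assuming the magnification is estimated from an object of known real size (the 1.8 m figure and trail coordinates are hypothetical):

```python
def effective_magnification(object_pixels, object_meters):
    """Pixels per meter, estimated from an object of known real size
    (e.g. a player of known height) observed in the image."""
    return object_pixels / object_meters

def trail_length_meters(trail, magnification):
    """Convert a pixel-space object trail into a real-world distance
    using the effective magnification of the images."""
    total_px = 0.0
    for (x0, y0), (x1, y1) in zip(trail, trail[1:]):
        total_px += ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5
    return total_px / magnification

mag = effective_magnification(object_pixels=90.0, object_meters=1.8)  # 50 px/m
dist = trail_length_meters([(0, 0), (30, 40), (30, 140)], mag)
```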
20110102678 | Key Generation Through Spatial Detection of Dynamic Objects - A method, apparatus, and computer program product are described that utilize spatial modeling to represent foreground objects of an event, allowing virtual graphics to be integrated into a background of the event in the presence of dynamic objects. The present invention detects a presence of dynamic objects within a region of interest from a video depicting the event. The present invention produces a suppression key corresponding to the dynamic object when present in the video, or a suppression key with a default value when and where no dynamic object is present in the video. | 05-05-2011 |
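The suppression-key idea can be sketched with simple background differencing over a region of interest; the threshold, default value, and background model here are hypothetical simplifications, not the patented detection method:

```python
import numpy as np

def suppression_key(frame, background, threshold=30, default=255):
    """Produce a per-pixel suppression key: where a dynamic (foreground)
    object differs from the background model, the key suppresses the
    virtual graphic; elsewhere it takes the default value so the
    graphic shows through."""
    diff = np.abs(frame.astype(int) - background.astype(int))
    key = np.full(frame.shape, default, dtype=np.uint8)
    key[diff > threshold] = 0      # dynamic object present: suppress insertion
    return key

background = np.full((4, 4), 100, dtype=np.uint8)  # static region of interest
frame = background.copy()
frame[1:3, 1:3] = 200              # a dynamic object enters the region
key = suppression_key(frame, background)
```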
20110128377 | Lens Distortion Method for Broadcast Video - A method, apparatus, and computer program product are described for improving a lens distortion curve that roughly approximates the distortion caused by a camera lens while capturing an event onto video. The present invention selects a generic lens distortion curve that roughly approximates this distortion. The video, together with information from the generic lens distortion curve, is used to generate a camera model. This camera model is used to integrate virtual insertions into the video. If the camera model is sufficiently accurate to present a realistic appearance of the virtual insertions to the remote viewer, it is then used to integrate further virtual insertions into the video. However, if the camera model is not sufficiently accurate, an iterative process is employed to refine it. | 06-02-2011 |
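A generic lens distortion curve is commonly a radial polynomial, and refining its inverse is naturally iterative; this sketch assumes a two-parameter radial model in normalized image coordinates, with hypothetical coefficients:

```python
def distort(point, k1, k2, center=(0.0, 0.0)):
    """Apply a generic two-parameter radial lens distortion curve:
    r' = r * (1 + k1*r^2 + k2*r^4), measured from the distortion center."""
    x, y = point[0] - center[0], point[1] - center[1]
    r2 = x * x + y * y
    scale = 1.0 + k1 * r2 + k2 * r2 * r2
    return (center[0] + x * scale, center[1] + y * scale)

def undistort(point, k1, k2, center=(0.0, 0.0), iterations=10):
    """Invert the distortion by fixed-point iteration: repeatedly divide
    the distorted offset by the scale implied by the current estimate."""
    x, y = point[0] - center[0], point[1] - center[1]
    ux, uy = x, y
    for _ in range(iterations):
        r2 = ux * ux + uy * uy
        scale = 1.0 + k1 * r2 + k2 * r2 * r2
        ux, uy = x / scale, y / scale
    return (center[0] + ux, center[1] + uy)

p = undistort(distort((0.3, 0.4), k1=-0.1, k2=0.01), k1=-0.1, k2=0.01)
```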
20110141359 | In-Program Trigger of Video Content - Methods and systems for triggering an in-program display event are provided. In an embodiment, a method for triggering an in-program display event may include performing a video content analysis of a program display. The method may also include determining a display event trigger in real time based on the video content analysis. The method may further include displaying a display event in the program display based on the display event trigger. In some cases, the display event may be an interactive session. In another embodiment, a system for triggering an in-program display event may include a trigger mechanism and an insertion module. | 06-16-2011 |
20110216167 | VIRTUAL INSERTIONS IN 3D VIDEO - Embodiments relate to insertions in 3D video. Virtual camera models enable insertions to be reconciled relative to left and right channels of the 3D video to maximize 3D accuracy and realism of the insertions. Camera models may be formed as composites and can be derived from other models. The camera models can be based on a visual analysis of the 3D video, and can be based on 3D camera data including toe-in and ocular spacing. The camera data may be derived from information collected using instrumentation connected to a 3D camera system, derived based on visual analysis of the 3D video, or derived using a combination of information collected using instrumentation and visual analysis of the 3D video. Insertions can be made on-site or at a remote site, and camera data can be embedded in the 3D video and/or separately transmitted to a remote site. Insertions can be adjusted in 3D space based on a type of insertion, the 3D video scene composition, and/or user feedback, including interactive adjustment of 3D insertions and adjustments in view of user sensitivity to eye strain. | 09-08-2011 |
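Reconciling an insertion between the left and right channels comes down to horizontal disparity; this is a hedged sketch using a simplified parallel-camera disparity model with a convergence offset (the 65 mm ocular spacing and other values are hypothetical, and sign conventions vary by rig):

```python
def stereo_insertion_points(x, y, depth, ocular_spacing, focal, convergence_depth):
    """Place an insertion in the left and right channels of 3D video.
    Disparity grows with ocular spacing and shrinks with scene depth;
    an insertion at the convergence depth gets zero disparity and so
    appears in the screen plane."""
    disparity = focal * ocular_spacing * (1.0 / depth - 1.0 / convergence_depth)
    left = (x + disparity / 2.0, y)
    right = (x - disparity / 2.0, y)
    return left, right

left, right = stereo_insertion_points(
    x=640.0, y=360.0, depth=10.0,
    ocular_spacing=0.065, focal=1000.0, convergence_depth=10.0)
```

Adjusting an insertion's depth relative to the convergence plane, e.g. to reduce viewer eye strain, amounts to changing this disparity.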
20140327676 | View Point Representation for 3-D Scenes - Techniques are described for deriving information, including graphical representations, based on perspectives of a 3D scene by utilizing sensor model representations of location points in the 3D scene. A 2D view point representation of a location point is derived based on the sensor model representation. From this information, a data representation can be determined. The 2D view point representation can be used to determine a second 2D view point representation. Other techniques include using sensor model representations of location points associated with dynamic objects in a 3D scene. These sensor model representations are generated using sensor systems having perspectives external to the location points and are used to determine a 3D model associated with a dynamic object. Data or graphical representations may be determined based on the 3D model. A system for obtaining information based on perspectives of a 3D scene includes a data manager and a renderer. | 11-06-2014 |