Patent application number | Description | Published |
20090060321 | SYSTEM FOR COMMUNICATING AND METHOD - A system communicates a representation of a scene, which includes a plurality of objects disposed on a plane, to one or more client devices. The representation is generated from one or more video images of the scene captured by a video camera. The system comprises an image processing apparatus operable to receive the video images of the scene, which include a view of the objects on the plane, to process the captured video images so as to extract one or more image features from each object, to compare the one or more image features with sample image features from a predetermined set of possible example objects which the video images may contain, to identify the objects from the comparison of the image features with the predetermined image features of the possible example objects, and to generate object path data which identifies each object and provides the position of the identified object on a three dimensional model of the plane in the video images with respect to time. The image processing apparatus is further operable to calculate a projection matrix for projecting the position of each of the objects, according to the object path data, from the plane in the video image into the three dimensional model of the plane. A distribution server is operable to receive the object path data and the projection matrix generated by the image processing apparatus for distribution to one or more client devices. The system is arranged to generate a representation of an event, such as a sporting event, which provides a substantial reduction in the amount of information which must be communicated to represent the event. As such, the system can be used to communicate the representation of the event, via a bandwidth-limited communications network, such as the internet, from the server to one or more client devices in real time. 
Furthermore, the system can be used to view one or more of the objects within the video images by extracting the objects from the video images. | 03-05-2009 |
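The core of the entry above is projecting an object's image-plane position into the plane of the three dimensional model via a projection matrix. A minimal sketch of that projection step, assuming a 3x3 homography acting on homogeneous coordinates (the function name and matrix convention are illustrative, not from the patent):

```python
import numpy as np

def project_to_plane(H, image_point):
    """Project an image-plane point into the model plane using a
    3x3 projection matrix H, treating the point as a homogeneous
    coordinate and de-homogenising the result."""
    x, y = image_point
    p = H @ np.array([x, y, 1.0])
    return p[:2] / p[2]
```

With `H` set to the identity the point maps to itself; in practice `H` would be calibrated from known reference points on the plane (for example, pitch markings).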
20090063847 | CONTENT PROTECTION METHOD AND APPARATUS - There is disclosed a content protection method and apparatus which improves on related schemes by facilitating spatial as well as temporal management of content. This is achieved by storing encrypted content and a corresponding decryption key, and destroying the decryption key when appropriate. To further facilitate content protection, the decryption key may be received periodically, which allows a large number of people to connect to the network at different times. | 03-05-2009 |
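The store-encrypted/destroy-key idea above can be sketched as follows. The class name and the XOR "cipher" are purely illustrative stand-ins (the abstract names no cipher, and XOR with a repeating key is not secure); the point is only that once the key is destroyed the stored ciphertext becomes unreadable:

```python
class ProtectedContent:
    """Sketch: content is stored encrypted alongside its decryption key,
    and destroying the key makes the content permanently unreadable.
    XOR with a repeating key stands in for a real cipher; do not use
    this for actual security."""

    def __init__(self, plaintext: bytes, key: bytes):
        self._key = key
        self.ciphertext = self._xor(plaintext, key)

    @staticmethod
    def _xor(data: bytes, key: bytes) -> bytes:
        return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

    def decrypt(self) -> bytes:
        if self._key is None:
            raise PermissionError("decryption key has been destroyed")
        return self._xor(self.ciphertext, self._key)

    def destroy_key(self) -> None:
        # The temporal/spatial expiry point: content survives, access does not.
        self._key = None
```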
20090064249 | DISTRIBUTION NETWORK AND METHOD - There is disclosed a distribution network and method. This is particularly advantageous because the network allows highlight clips to be generated and distributed quickly. Additionally, the highlights are more consistent because the user selects the event (from which the highlight is formed) from a list of possible event selections. This also has the advantage that there is the reduced likelihood of the individual generating the events missing any other incidents requiring highlighting. | 03-05-2009 |
20090066784 | IMAGE PROCESSING APPARATUS AND METHOD - An image processing apparatus and method generate a three dimensional representation of a scene which includes a plurality of objects disposed on a plane. The three dimensional representation is generated from one or more video images of the scene, which include the objects on the plane, produced from a view of the scene by a video camera. The method comprises processing the captured video images so as to extract one or more image features from each object, comparing the one or more image features with sample image features from a predetermined set of possible example objects which the video images may contain, and identifying the objects from the comparison of the image features with the stored image features of the possible example objects. The method also includes generating object path data, which includes object identification data identifying each object and provides the position of the object on the plane in the video images with respect to time. The method further includes calculating a projection matrix for projecting the position of each of the objects, according to the object path data, from the plane into a three dimensional model of the plane. As such, a three dimensional representation of the scene, which includes a synthesised representation of each of the plurality of objects on the plane, can be produced by projecting the position of the objects according to the object path data into the plane of the three dimensional model of the scene, using the projection matrix and a predetermined assumption of the height of each of the objects. Accordingly, a three dimensional representation of a live video image of, for example, a football match can be generated, or tracking information can be included on the live video images. 
As such, the relative view of the generated three dimensional representation can be changed, so that the scene can be viewed in the three dimensional representation from a viewpoint at which no camera is actually present to capture video images of the live scene. | 03-12-2009 |
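The synthesis step in the entry above combines the projection matrix with a predetermined assumed height per object. A minimal sketch, assuming a 3x3 homography and a ground plane at y = 0 (the function name, coordinate convention, and return format are illustrative assumptions):

```python
import numpy as np

def synthesise_object(H, image_pos, assumed_height):
    """Project a tracked object's image position onto the ground plane
    (y = 0) of the three dimensional model via the projection matrix H,
    then give the synthesised object a vertical extent taken from the
    predetermined assumed height."""
    x, y = image_pos
    p = H @ np.array([x, y, 1.0])
    px, pz = p[:2] / p[2]
    # Foot position on the plane, and top of the object above it.
    return (px, 0.0, pz), (px, assumed_height, pz)
```

A renderer would then draw a billboard or simple model between the two returned points, which is what allows the scene to be re-viewed from an arbitrary virtual viewpoint.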
20100208942 | IMAGE PROCESSING DEVICE AND METHOD - An image processing device comprises receiving means operable to receive, from a camera, a captured image corresponding to an image of a scene captured by the camera. The scene contains at least one object. The device comprises determining means operable to determine a distance between the object within the scene and a reference position defined with respect to the camera, and generating means operable to detect a position of the object within the captured image, and to generate a modified image from the captured image based on image features within the captured image which correspond to the object in the scene. The generating means is operable to generate the modified image by displacing the position of the captured object within the modified image with respect to the determined position of the object within the captured image by an object offset amount which is dependent on the distance between the reference position and the object in the scene so that, when the modified image and the captured image are viewed together as a pair of images on a display, the captured object appears to be positioned at a predetermined distance from the display. | 08-19-2010 |
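The entry above displaces an object by an offset that depends on its distance from the reference position, so the stereo pair places it at a chosen apparent depth. A sketch using the standard parallel-viewing stereo relation (this textbook formula and its parameter values are assumptions; the abstract does not give the patented mapping):

```python
def object_offset(distance_mm, eye_separation_mm=65.0, screen_distance_mm=600.0):
    """On-screen disparity (mm) for an object at distance_mm from the
    viewer: zero at the screen plane, positive (uncrossed) behind it,
    negative (crossed) in front of it."""
    return eye_separation_mm * (distance_mm - screen_distance_mm) / distance_mm
```

Shifting the object by this amount in the modified image, relative to its detected position in the captured image, makes the pair fuse at the intended depth when viewed together.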
20110129143 | METHOD AND APPARATUS AND COMPUTER PROGRAM FOR GENERATING A 3 DIMENSIONAL IMAGE FROM A 2 DIMENSIONAL IMAGE - A method of generating a three dimensional image from a two dimensional image is described. The two dimensional image has a background and a first foreground object and a second foreground object located thereon, and the method comprises the steps of: applying a transformation to a copy of the background; generating stereoscopically for display the background and the transformed background; generating stereoscopically for display the first and second foreground objects located on the stereoscopically displayable background and the transformed background; and determining whether the first and second foreground objects occlude one another. In the event of occlusion, the occluded combination of the first and second objects forms a third foreground object, and the method further comprises the steps of: applying a transformation to the third foreground object, wherein the transformation applied to the third foreground object is less than or equal to the transformation applied to the background; generating a copy of the third foreground object with the transformation applied thereto; and generating stereoscopically for display the third foreground object with the transformation applied thereto and the copy of the third foreground object displaced relative to one another by an amount determined in accordance with the position of one of the first or second foreground objects in the image. | 06-02-2011 |
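The occlusion test that triggers formation of the third foreground object can be sketched with axis-aligned bounding boxes (an assumed representation; the abstract does not say how objects are delimited):

```python
def boxes_overlap(a, b):
    """Axis-aligned overlap test for (x, y, w, h) bounding boxes:
    True when the two foreground objects occlude one another."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah

def merge_boxes(a, b):
    """Bounding box of the merged (third) foreground object."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    x0, y0 = min(ax, bx), min(ay, by)
    x1, y1 = max(ax + aw, bx + bw), max(ay + ah, by + bh)
    return (x0, y0, x1 - x0, y1 - y0)
```

When `boxes_overlap` fires, the merged box would be treated as one object and given a stereoscopic shift no greater than the background's, as the abstract requires.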
20120081606 | VIDEO SYNCHRONIZATION - A method of synchronizing the phase of a local image synchronization signal generator of a local video data processor, in communication with an asynchronous switched packet network, to the phase of a reference image synchronization signal generator of a reference video data processor also coupled to the network. The local and reference processors have respective clocks, and the reference and local image synchronization signal generators generate periodic image synchronization signals in synchronism with the reference and local clocks respectively. The method includes: frequency synchronizing the local and reference clocks; sending an image timing packet providing reference image synchronization data indicating the difference in timing, measured with respect to the reference processor's clock, between the time at which the image timing packet is launched onto the network and the time of production of a reference image synchronization signal; and controlling the timing of the production of the local image synchronization signal. | 04-05-2012 |
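The timing-packet arithmetic above reduces to recovering the reference sync instant in local time and comparing it to the local sync. A sketch, assuming the clocks are already frequency locked and the one-way network delay is known or estimated (both assumptions of this illustration, as is the function name):

```python
def phase_error(packet_offset, arrival_time, local_sync_time, network_delay):
    """Phase difference between the local and reference image
    synchronization signal generators.

    packet_offset is the value carried by the image timing packet:
    launch time minus the reference sync time, measured on the
    reference clock. Working backwards from the packet's arrival
    gives the reference sync instant expressed in local time."""
    reference_sync_in_local_time = arrival_time - network_delay - packet_offset
    return local_sync_time - reference_sync_in_local_time
```

A controller would then advance or retard the local generator by this error to pull the two signals into phase.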
20120121090 | CONTENT PROTECTION METHOD AND APPARATUS - There is disclosed a content protection method and apparatus which improves on related schemes by facilitating spatial as well as temporal management of content. This is achieved by storing encrypted content and a corresponding decryption key, and destroying the decryption key when appropriate. To further facilitate content protection, the decryption key may be received periodically, which allows a large number of people to connect to the network at different times. | 05-17-2012 |
20130258051 | APPARATUS AND METHOD FOR PROCESSING 3D VIDEO DATA - A method is described comprising: obtaining a disparity map for one of a left eye image and a right eye image, the disparity map indicating the horizontal displacement between corresponding pixels in the left eye image and the right eye image; generating a luminance representation of the disparity map, wherein a disparity unit, which defines the resolution of the disparity map, is mapped to a corresponding luminance value; and companding the luminance value for transmission over a luminance channel. | 10-03-2013 |
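The disparity-to-luminance mapping with companding can be sketched as below. The mu-law style curve and all parameter values are assumptions purely for illustration; the abstract only says the luminance value is companded. The effect is that small disparities keep finer quantisation than large ones:

```python
import math

def compand_disparity(disparity, max_disparity=128.0, mu=255.0, levels=255):
    """Map a signed disparity (in disparity units) to a luminance code
    via mu-law style companding: normalise to [-1, 1], compress with a
    logarithmic curve, then quantise into the luminance channel's range."""
    x = max(-1.0, min(1.0, disparity / max_disparity))
    y = math.copysign(math.log1p(mu * abs(x)) / math.log1p(mu), x)
    return round((y + 1.0) / 2.0 * levels)
```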
20140300644 | METHOD AND APPARATUS FOR GENERATING AN IMAGE CUT-OUT - A method of generating a cut-out from an image of a scene which has been captured by a camera is described. The method comprises: defining the position of a virtual camera, the image plane of the virtual camera being the cut-out, with respect to the position of the camera capturing the scene; | 10-09-2014 |
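The abstract above is truncated, but its core operation, treating a segment of the captured image as the virtual camera's image plane, can be sketched in its simplest form as a rectangular crop (this ignores any rotation or lens model the full method may apply; names and conventions are assumptions):

```python
def cut_out(image, centre, size):
    """Extract a rectangular cut-out (the virtual camera's image plane)
    from a captured image stored as a list of pixel rows, given the
    cut-out's centre and size in pixels."""
    cx, cy = centre
    w, h = size
    x0, y0 = cx - w // 2, cy - h // 2
    return [row[x0:x0 + w] for row in image[y0:y0 + h]]
```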
20140300645 | METHOD AND APPARATUS FOR CONTROLLING A VIRTUAL CAMERA - A method of controlling the movement of a virtual camera whose image plane provides a cut-out from a captured image of a scene is disclosed. The method comprises: defining a set of first pixel positions forming the boundary of the cut-out of the captured image; defining a set of second pixel positions for the boundary of the captured image; calculating a virtual camera rotation matrix to be applied to the first pixel positions, the virtual camera rotation matrix representative of the difference in at least one of the yaw and pitch of the image plane of the virtual camera and the image plane of the captured image of the scene, wherein the virtual camera rotation matrix is limited such that when one of the set of first pixel positions is transformed using the virtual camera rotation matrix, the transformed first pixel position is located within the boundary of the captured image. | 10-09-2014 |
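The limit on the virtual camera rotation above, every transformed cut-out corner must stay inside the captured image, can be sketched as follows. The yaw-then-pitch order, pinhole projection, and parameter values are assumptions for illustration; the abstract fixes none of them:

```python
import numpy as np

def rotation_matrix(yaw, pitch):
    """Virtual camera rotation: yaw about the y axis, then pitch about
    the x axis (one common convention)."""
    cy, sy = np.cos(yaw), np.sin(yaw)
    cp, sp = np.cos(pitch), np.sin(pitch)
    r_yaw = np.array([[cy, 0.0, sy], [0.0, 1.0, 0.0], [-sy, 0.0, cy]])
    r_pitch = np.array([[1.0, 0.0, 0.0], [0.0, cp, -sp], [0.0, sp, cp]])
    return r_pitch @ r_yaw

def within_captured_image(rotation, corners, width, height, focal):
    """Check that every transformed cut-out corner (first pixel
    positions) projects back inside the captured image boundary
    (second pixel positions); this is the limit placed on the rotation."""
    for x, y in corners:
        v = rotation @ np.array([x, y, focal])
        px, py = focal * v[0] / v[2], focal * v[1] / v[2]
        if abs(px) > width / 2 or abs(py) > height / 2:
            return False
    return True
```

A controller would clamp the requested yaw/pitch to the largest values for which `within_captured_image` still holds.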
20140300687 | METHOD AND APPARATUS FOR APPLYING A BORDER TO AN IMAGE - Disclosed is a method for generating an image, comprising: obtaining a first image comprising a first area, being a plurality of images having different fields of view of a real-life scene, captured from a location above the real-life scene and stitched together to form a panoramic view of the real-life scene, and a second area which does not include the plurality of images; generating a second image which is a segment of the first image; determining whether the second image includes only the first area; and, when the second image includes both the first area and at least part of the second area, applying a border to the second image that extends along an upper boundary and along a lower boundary of the second image, the border being applied above the upper boundary and below the lower boundary. | 10-09-2014 |
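The determining step above is a containment check: does the segment stay inside the stitched panoramic area? A minimal sketch, assuming image-style coordinates where row indices grow downwards (the function name and interface are illustrative):

```python
def border_needed(segment_top, segment_bottom, area_top, area_bottom):
    """True when the segment (second image) extends beyond the stitched
    panoramic area (first area), in which case a border is applied
    above the segment's upper boundary and below its lower boundary.
    Row coordinates grow downwards, as in image indexing."""
    return segment_top < area_top or segment_bottom > area_bottom
```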