# Patent application title: System And Method For Determining The Current Focal Length Of A Zoomable Camera

## Inventors:

Howard J. Kennedy (Hamilton Square, NJ, US)
Smadar Gefen (Yardley, PA, US)

IPC8 Class: AH04N1700FI

USPC Class:
348/187

Class name: Television; monitoring, testing, or measuring; testing of camera

Publication date: 2013-09-12

Patent application number: 20130235213


## Abstract:

An accurate camera pose is determined by pairing a first camera with a
second camera in proximity to one another, and by developing a known
spatial relationship between them. An image from the first camera and an
image from the second camera are analyzed to determine corresponding
features in both images, and a relative homography is calculated from
these corresponding features. A relative parameter, such as a focal
length or an extrinsic parameter, is used to calculate a first camera's
parameter based on a second camera's parameter and the relative
homography.

## Claims:

**1.**A method, comprising: receiving a first image from a first camera; receiving a second image from a second camera, wherein the second camera is positioned with a relative orientation to the first camera; receiving a camera parameter of the second camera; determining a first set of corresponding feature pairs common to the first image and the second image; determining, based on the first set of corresponding feature pairs, a relative homography between the first image and the second image; and calculating a camera parameter of the first camera based on the relative homography and the camera parameter of the second camera.

**2.**The method of claim 1, wherein the camera parameter of the second camera is a focal length and calculating the camera parameter of the first camera includes calculating the focal length of the first camera.

**3.**The method of claim 1, wherein the camera parameter of the second camera is an extrinsic parameter and calculating the camera parameter of the first camera includes calculating an extrinsic parameter of the first camera.

**4.**The method of claim 1, wherein the first set of corresponding feature pairs includes one of a set of common points, a set of common lines, a set of common conics, and a combination involving two or more of a set of common points, a set of common lines, and a set of common conics.

**5.**The method of claim 1, wherein the camera parameter of the second camera is computed based on: determining a second set of corresponding feature pairs common to the second image and a scene model; determining, based on the second set of corresponding feature pairs, an absolute homography between the second image and the scene model; and calculating the camera parameter of the second camera based on the absolute homography between the second image and the scene model.

**6.**The method of claim 5, wherein the second set of corresponding feature pairs includes one of a set of common points, a set of common lines, a set of common conics, and a combination involving two or more of a set of common points, a set of common lines, and a set of common conics.

**7.**A system, comprising: a geometric feature determining module for receiving a first image from a first camera and for receiving a second image from a second camera, wherein the second camera is positioned with a relative orientation to the first camera, the geometric feature determining module receiving a camera parameter of the second camera and determining a first set of corresponding feature pairs common to the first image and the second image; a homography determining module for determining, based on the first set of corresponding feature pairs, a relative homography between the first image and the second image; and a calculation module for calculating a camera parameter of the first camera based on the relative homography and the camera parameter of the second camera.

**8.**The system of claim 7, wherein the camera parameter of the second camera is a focal length and the calculating of the camera parameter of the first camera by the calculation module includes calculating the focal length of the first camera.

**9.**The system of claim 7, wherein the camera parameter of the second camera is an extrinsic parameter and the calculating of the camera parameter of the first camera by the calculation module includes calculating an extrinsic parameter of the first camera.

**10.**The system of claim 7, wherein the first set of corresponding feature pairs includes one of a set of common points, a set of common lines, a set of common conics, and a combination involving two or more of a set of common points, a set of common lines, and a set of common conics.

**11.**The system of claim 7, wherein the camera parameter of the second camera is computed based on: determining a second set of corresponding feature pairs common to the second image and a scene model; determining, based on the second set of corresponding feature pairs, an absolute homography between the second image and the scene model; and calculating the camera parameter of the second camera based on the absolute homography between the second image and the scene model.

**12.**The system of claim 11, wherein the second set of corresponding feature pairs includes one of a set of common points, a set of common lines, a set of common conics, and a combination involving two or more of a set of common points, a set of common lines, and a set of common conics.

**13.**A computer-readable medium containing instructions that, when executed on a computing device, result in performance of the following: receiving a first image from a first camera; receiving a second image from a second camera, wherein the second camera is positioned with a relative orientation to the first camera; receiving a camera parameter of the second camera; determining a first set of corresponding feature pairs common to the first image and the second image; determining, based on the first set of corresponding feature pairs, a relative homography between the first image and the second image; and calculating a camera parameter of the first camera based on the relative homography and the camera parameter of the second camera.

**14.**The computer-readable medium of claim 13, wherein the camera parameter of the second camera is a focal length and calculating the camera parameter of the first camera includes calculating the focal length of the first camera.

**15.**The computer-readable medium of claim 13, wherein the camera parameter of the second camera is an extrinsic parameter and calculating the camera parameter of the first camera includes calculating an extrinsic parameter of the first camera.

**16.**The computer-readable medium of claim 13, wherein the first set of corresponding feature pairs includes one of a set of common points, a set of common lines, a set of common conics, and a combination involving two or more of a set of common points, a set of common lines, and a set of common conics.

**17.**The computer-readable medium of claim 13, wherein the camera parameter of the second camera is computed based on: determining a second set of corresponding feature pairs common to the second image and a scene model; determining, based on the second set of corresponding feature pairs, an absolute homography between the second image and the scene model; and calculating the camera parameter of the second camera based on the absolute homography between the second image and the scene model.

**18.**The computer-readable medium of claim 17, wherein the second set of corresponding feature pairs includes one of a set of common points, a set of common lines, a set of common conics, and a combination involving two or more of a set of common points, a set of common lines, and a set of common conics.

## Description:

**FIELD OF THE INVENTION**

**[0001]**The exemplary embodiments relate to systems and methods for determining the focal length of a first camera based on the focal length of a second camera positioned in proximity to the first camera.

**BACKGROUND INFORMATION**

**[0002]**An accurate camera pose is essential information in many systems, for example, in camera systems intended to broadcast sporting events at stadiums. Some elements of a camera pose (e.g., pan, tilt, roll, position) are sometimes known, fixed, or obtainable with inexpensive sensors. Pan and tilt are rotations about the vertical and horizontal axes, respectively, while roll is a rotation about the optical axis. The current or instantaneous focal length of a zoomable camera is less frequently available, or is of insufficient precision for many applications. Even in cameras that make their current focal length externally available, the resolution and/or absolute accuracy of the data may be too low for the application that requires it. Thus, relying on a zoomable camera to report its own focal length has proven unreliable and problematic.

**[0003]**In some cases, some or all of the elements of a first camera's pose can be determined by comparing the current view of the first camera with one or more static images or models of a scene, possibly derived beforehand from cameras. In other cases, some or all of the elements of a first camera's pose can be determined by comparing the current view of the first camera with concurrent images from one or more additional cameras, some of whose parameters are known. (The extrinsic parameters of a camera include pan, roll, tilt, camera position, etc.) This technique can be more useful than basing the determination on predetermined static images or scene models, since it can adapt to changes in lighting or background.

**[0004]**Many systems rely upon visual recognition of pre-determined scenes to solve for focal length (and other camera parameters). However, when the current scene is not a pre-determined, expected scene, a camera pose is not calculable. This may also occur when the camera is pointed away from a pre-determined scene (for instance, pointing at the audience), or when the camera is zoomed so far in or out that either expected fiducials are too few in number, or so small that they are unusable, or are occluded by foreground objects.

**[0005]**Typically, positions of landmarks in the scene are represented by a 3D model. At intermediate zoom levels, the landmark position points of the model may be matched with their corresponding feature points from the current video image. Based on these pairs of corresponding points, a homography (a projective mapping between planar points in 3D space and their projections in the image space) is calculated. Then, camera parameters are estimated based on the calculated homography. Pre-determined landmarks imply the need for a scene model, which is often inconvenient, or impossible, to obtain. Nevertheless, a relative homography may be established between zoom-invariant, but ad hoc, feature points in simultaneous views of the same scene from two cameras.

**BRIEF DESCRIPTION OF THE DRAWINGS**

**[0006]**FIG. 1 is a schematic diagram of a system for determining the focal length of a zoomable camera.

**[0007]**FIG. 2 shows a functional block diagram illustrating a focal length calculating arrangement for calculating the focal length of a first camera.

**[0008]**FIG. 3 is a flow diagram illustrating the method for determining the absolute focal length of the first camera 105.

**DETAILED DESCRIPTION**

**[0009]**The exemplary embodiments may be further understood with reference to the following description of the exemplary embodiments and the related appended drawings, wherein like elements are provided with the same reference numerals. The exemplary embodiments are related to systems and methods for detecting objects in a video image sequence. The exemplary embodiments are described in relation to the detection of players in a sporting event performing on a playing surface, but the present invention also encompasses systems and methods where determination of a camera focal length is required for accurate and visually desirable imaging of a static or dynamic scene. The exemplary embodiments may be advantageously implemented using one or more computer programs executing on a computer system having a processor or central processing unit, such as, for example, a computer using an Intel-based CPU, such as a Pentium or Celeron, running an operating system such as the WINDOWS or LINUX operating systems, having a memory, such as, for example, a hard drive, RAM, ROM, a compact disc, magneto-optical storage device, and/or fixed or removable media, and having one or more user interface devices, such as, for example, computer terminals, personal computers, laptop computers, and/or handheld devices, with an input means, such as, for example, a keyboard, mouse, pointing device, and/or microphone.

**[0010]**An exemplary embodiment proposes to use a pair of cameras positioned in proximity to each other and with a known spatial mapping between their optical axes. This spatial mapping may be derived from the relative homography, as will be explained in detail below. The relative homography calculation is based on corresponding feature pairs, where correspondence is between a first camera's image projection and a second camera's image projection of the same landmark from the scene. An absolute homography computation is based on corresponding feature pairs, where correspondence is between a landmark in the scene model (e.g., the landmark's real-world position) and its projection in the camera image. Although the embodiment is described in connection with one wide field-of-view (WFOV) camera (the second camera), more than one WFOV camera may be associated with the zoomable camera. Although the preferred embodiment is discussed within the context of these cameras being used for coverage of a sporting event, it is to be understood that the exemplary embodiments are applicable to other contexts, such as, for example, virtual-world applications like augmented reality, virtual studio applications, etc. In the sports coverage example, the first camera tends to zoom in and out frequently, so that there may be large discrepancies in focal length between the first and second cameras. Matching corresponding features from two images with large differences in scale may be challenging. In addition to the discrepancy in scale, the problem may be complicated by the number and types of features that can be reliably extracted and matched at a given camera view. In the preferred embodiment, the first camera is a zoomable camera, while the second camera has a fixed focal length and is set to capture a wide field-of-view of the scene. The first and second cameras share approximate pan, tilt, roll, and position pose elements, but not focal length. The focal length of the first camera may be calculated from 1) the known focal length of the second camera and 2) the relative homography between the first and second cameras generated from corresponding features extracted from images taken by these two cameras.

**[0011]**The features that may be used in the homography calculation include, for example, key-points, lines, and conics. These geometrical features are invariant under projective mapping (for example, a conic maps projectively to a corresponding conic). Note that in order to establish correspondence between two geometrical features, various image analysis methods may be employed, such as computing metrics based on the texture or color statistics of local pixels. In this disclosure, a combination of extractable key-points, lines, and conics is utilized to solve for the homography via a linear equation system, as explained in further detail below.

**[0012]**FIG. 1 is a schematic diagram of a system 100 for determining the focal length of a zoomable camera. In FIG. 1, first camera 105 is zoomable, and second camera 110 is not, having instead a fixed focal length and set to a field of view wide enough to capture enough fiducials to enable calculation of a homography based on common geometric features between the images of the cameras 105, 110 (described below). Cameras 105, 110 may be attached to one another, through a mechanical connection 130, such that translations and rotations experienced by the first camera 105 are duplicated simultaneously in the second camera 110, and aligned such that the optical axes 115, 120 of cameras 105, 110 are parallel (as illustrated in FIG. 1), or are arranged according to some other known relative orientation to one another. Alternatively, cameras 105, 110 need not be physically attached to one another, so long as the spatial relation (or rigid transformation) that maps one camera to the other is extractable. In this alternative, the cameras 105, 110 are approximately co-located, but otherwise may be oriented differently as long as cameras 105, 110 cover enough corresponding features to calculate the homography. Note that, in the case where the first camera and second camera are mechanically attached, due to vibrations or other practical reasons there may still be a differential relative orientation that should be accounted for (meaning that, to obtain an accurate result when calculating the first camera's focal length, one should also model possible discrepancies in relative panning, tilting, or rolling between the two cameras).

**[0013]**FIG. 2 shows a functional block diagram illustrating a focal length calculating arrangement 200 for calculating the focal length of the first camera 105. In terms of a hardware implementation, the various modules illustrated collectively in FIG. 2 may be embodied as a single processing device, such as a suitably programmed microprocessor, or as a system on a chip, ASIC, or any other programmable arrangement. Arrangement 200 may be housed inside either of cameras 105 or 110, or it may be located remotely in a media truck at the sporting venue or a broadcast studio. In the case of a remotely located arrangement 200, the various information on camera pose elements may be transmitted to arrangement 200 either wirelessly through any suitable transmission medium (RF, IR, ultrasonic), through a direct wired connection, through a network like an ETHERNET network, or through the Internet.

**[0014]**The image from first camera 105 and the image from second camera 110 are supplied to a module 205 for determining the corresponding geometric features of the two images. These corresponding features may include, for example, points, lines, and conics from both images. The extraction and matching of these corresponding features may be accomplished according to any suitable method known in the art. For example, a method known in the art such as SIFT (Scale-Invariant Feature Transform) or SURF (Speeded-Up Robust Features) may be used.

**[0015]**The extracted corresponding features are supplied to a linear homography calculating module 210. The particular technique for calculating the homography depends on the particular category of common feature extracted from the images. For instance, known methods linearly estimate the homography matrix based on a corresponding set of points and/or lines. Some methods estimate the homography based on a corresponding set of conics. In practice, as the camera 105 steers and zooms in order to cover the action of an event, the available projected features (key-points, lines, or conics) are at times sparse. Therefore, a robust (linear) method that estimates the homography out of any currently available combination of features is advantageous; one such method is described in detail below.

**[0016]**The homography calculating module 210 receives from module 205 a combination of corresponding pairs of points, lines, and/or conics from two planar images from cameras 105, 110. As shown in the table below, a point, a line, and a conic extracted from a first image I, when undergoing projective mapping $H_{3\times3}$ onto image I', preserve their geometric properties. That is, a point, a line, and a conic are invariant under the projective mapping (a point maps into a point, a line maps into a line, and a conic maps into a conic).

| | A Point | A Line | A Conic |
|---|---|---|---|
| Mapping | $X' = HX$ | $l' = H^{-T}l$ | $C' = H^{-T}CH^{-1}$ |
| where | $X = (x, y, 1)^T;\ X \in I$ | $l = (a, b, c)^T;\ l \in I$ | $C = \begin{bmatrix} a & b & d \\ b & c & e \\ d & e & f \end{bmatrix};\ C \in I$ |
| and | $X' = (x', y', 1)^T;\ X' \in I'$ | $l' = (a', b', c')^T;\ l' \in I'$ | $C' = \begin{bmatrix} a' & b' & d' \\ b' & c' & e' \\ d' & e' & f' \end{bmatrix};\ C' \in I'$ |

An estimate for the homography is determined by solving a homogeneous equation system Mh = 0, where h is a concatenation of H's rows:

$$
h^T \equiv [h_{11}, h_{12}, h_{13}, h_{21}, h_{22}, h_{23}, h_{31}, h_{32}, h_{33}]
$$

and M is derived from the given corresponding features in I and I', as explained below for the cases of a corresponding 1) pair of points, 2) pair of lines, and 3) two pairs of conics.

**[0017]**With regard to homogeneous equations from a pair of corresponding points X and X': under projective mapping, X' = HX. Taking the cross product with X' on both sides of the equation yields X' × X' = X' × HX. Since by definition X' × X' = 0, it follows that X' × HX = 0. The form X' × HX = 0 can be written as Mh = 0, where M may be derived as follows:

$$
M \equiv \begin{bmatrix}
0 & 0 & 0 & -X'_3 X_1 & -X'_3 X_2 & -X'_3 X_3 & X'_2 X_1 & X'_2 X_2 & X'_2 X_3 \\
X'_3 X_1 & X'_3 X_2 & X'_3 X_3 & 0 & 0 & 0 & -X'_1 X_1 & -X'_1 X_2 & -X'_1 X_3 \\
-X'_2 X_1 & -X'_2 X_2 & -X'_2 X_3 & X'_1 X_1 & X'_1 X_2 & X'_1 X_3 & 0 & 0 & 0
\end{bmatrix}
$$

**Note that only two equations are linearly independent**.
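As a hedged illustration of this construction (my own NumPy sketch, not code from the application), the two independent rows contributed by each point pair can be stacked and the null vector of M recovered with an SVD:

```python
import numpy as np

def point_rows(X, Xp):
    """Two independent rows of M for one corresponding point pair,
    from X' x (H X) = 0 (X, X' are homogeneous 3-vectors)."""
    x, y, w = X
    xp, yp, wp = Xp
    return np.array([
        [0, 0, 0, -wp * x, -wp * y, -wp * w, yp * x, yp * y, yp * w],
        [wp * x, wp * y, wp * w, 0, 0, 0, -xp * x, -xp * y, -xp * w],
    ])

def solve_h(M):
    """Null vector of M (smallest right singular vector) reshaped to H."""
    _, _, Vt = np.linalg.svd(M)
    return Vt[-1].reshape(3, 3)
```

With four or more point pairs in general position, the stacked M has rank eight and the solution is unique up to scale.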

**[0018]**With regard to homogeneous equations derived from a pair of corresponding lines l and l': under projective mapping, l' = H<sup>-T</sup>l. Taking the cross product with l' on both sides of the equation yields l' × l' = l' × (H<sup>-T</sup>l). Since by definition l' × l' = 0, it follows that l' × (H<sup>-T</sup>l) = 0. This form can be written as Mh = 0, where M may be derived as follows:

$$
M \equiv \begin{bmatrix}
0 & -l_3 l'_1 & l_2 l'_1 & 0 & -l_3 l'_2 & l_2 l'_2 & 0 & -l_3 l'_3 & l_2 l'_3 \\
l_3 l'_1 & 0 & -l_1 l'_1 & l_3 l'_2 & 0 & -l_1 l'_2 & l_3 l'_3 & 0 & -l_1 l'_3 \\
-l_2 l'_1 & l_1 l'_1 & 0 & -l_2 l'_2 & l_1 l'_2 & 0 & -l_2 l'_3 & l_1 l'_3 & 0
\end{bmatrix}
$$

**Note that only two equations are linearly independent**.

**[0019]**With regard to homogeneous equations derived from two pairs of corresponding conics C₁, C₂ and C'₁, C'₂, and assuming non-degenerate conics (i.e., determinant det(C) ≠ 0), corresponding conics are related as follows:

$$
s_1 C_1 = H^T C'_1 H
$$

$$
s_2 C_2 = H^T C'_2 H
$$

Computing the determinant of both sides, $s_i^3 \det(C_i) = \det(C'_i)\det(H)^2$, and then setting $\det(H)^2 = 1$ results in $s_i = (\det(C'_i)/\det(C_i))^{1/3}$. Hence, $C_i$ and $C'_i$ are normalized so that $\det(C_i) = \det(C'_i)$. The normalized conics satisfy:

$$
C_1 = H^T C'_1 H
$$

$$
C_2 = H^T C'_2 H
$$

Multiplying the inverse of the first equation with the second equation results in:

$$
C_1^{-1} C_2 = H^{-1} C'^{-1}_1 C'_2 H
$$

And then multiplying both sides by H yields a linear system with respect to the elements of H:

$$
C'^{-1}_1 C'_2 H - H C_1^{-1} C_2 = 0
$$

or, writing $A \equiv C'^{-1}_1 C'_2$ and $B \equiv C_1^{-1} C_2$,

$$
AH - HB \equiv Mh = 0
$$

where M may be derived as follows:

$$
M \equiv \begin{bmatrix}
M_{11} & M_{12} & M_{13} & A_{12} & A_{12} & A_{12} & A_{13} & A_{13} & A_{13} \\
A_{21} & A_{21} & A_{21} & M_{24} & M_{25} & M_{26} & A_{23} & A_{23} & A_{23} \\
A_{31} & A_{31} & A_{31} & A_{32} & A_{32} & A_{32} & M_{37} & M_{38} & M_{39}
\end{bmatrix}
$$

**[0020]**and where $M_{11} = A_{11} - B_{11} - B_{12} - B_{13}$, $M_{12} = A_{11} - B_{21} - B_{22} - B_{23}$, $M_{13} = A_{11} - B_{31} - B_{32} - B_{33}$, $M_{24} = A_{22} - B_{11} - B_{12} - B_{13}$, $M_{25} = A_{22} - B_{21} - B_{22} - B_{23}$, $M_{26} = A_{22} - B_{31} - B_{32} - B_{33}$, $M_{37} = A_{33} - B_{11} - B_{12} - B_{13}$, $M_{38} = A_{33} - B_{21} - B_{22} - B_{23}$, and $M_{39} = A_{33} - B_{31} - B_{32} - B_{33}$. In this case, all three equations are linearly independent.
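The conic normalization step can be checked numerically. The following NumPy fragment (my own illustration under the stated assumption det(H)² = 1, not code from the application) computes the scale $s = (\det(C')/\det(C))^{1/3}$ that makes both sides of $sC = H^T C' H$ agree:

```python
import numpy as np

def conic_scale(C, Cp):
    """Scale s = (det(C') / det(C))**(1/3), valid when det(H)^2 = 1,
    so that s*C equals H^T C' H for the underlying homography H."""
    return np.cbrt(np.linalg.det(Cp) / np.linalg.det(C))
```

Rescaling C by s enforces the determinant equality det(sC) = det(C') used before stacking the conic equations.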

**[0021]**The following discusses the minimum features needed for a unique solution (up to scale) of the system Mh=0. In order to determine a unique solution, the rank of M should be eight, meaning the number of independent equations should be at least 8. Since a pair of corresponding points results in two independent equations, a pair of corresponding lines results in two independent equations, and two pairs of corresponding conics result in three independent equations, any combination of corresponding lines, points, and conics that results in at least 8 linear equations will be sufficient for the computation of h (except for the combination of two lines and two points). Therefore, many possible combinations of corresponding features from the pair of images can satisfy the eight linear equations. For instance, one pair of points (one from each image), together with three pairs of corresponding lines (or vice versa), would produce eight independent equations. Similarly, two pairs of corresponding points, one pair of corresponding lines, and two pairs of corresponding conics (yielding three equations) would produce nine independent equations. Such a combination of at least eight equations is determined for each frame of the images from cameras 105 and 110. Thus, for a first pair of frames from the cameras, the eight equations may consist of equations from a pair of corresponding points in the first pair of images and equations from three pairs of corresponding lines in the first pair of images. For the next pair of frames, the eight equations may be determined from a different combination of corresponding features. In the case where the accurate relative spatial orientation between the two cameras is given (or, for example, a subset of the parameters relative pan, relative tilt, and relative roll is given), fewer corresponding feature pairs (i.e., fewer independent equations) are needed to solve for the first camera's focal length. For example, when the relative orientation is fully known, two pairs of corresponding features are sufficient to calculate the relative focal length.
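One such mixed combination (one point pair plus three line pairs, giving eight equations) can be sketched in NumPy. The row formulas follow the M matrices of paragraphs [0017] and [0018]; the function names are my own illustration, not the application's:

```python
import numpy as np

def point_rows(X, Xp):
    # two independent equations from X' x (H X) = 0
    x, y, w = X
    xp, yp, wp = Xp
    return np.array([
        [0, 0, 0, -wp*x, -wp*y, -wp*w, yp*x, yp*y, yp*w],
        [wp*x, wp*y, wp*w, 0, 0, 0, -xp*x, -xp*y, -xp*w],
    ])

def line_rows(l, lp):
    # two independent equations from l' x (H^{-T} l) = 0
    l1, l2, l3 = l
    p1, p2, p3 = lp
    return np.array([
        [0, -l3*p1, l2*p1, 0, -l3*p2, l2*p2, 0, -l3*p3, l2*p3],
        [l3*p1, 0, -l1*p1, l3*p2, 0, -l1*p2, l3*p3, 0, -l1*p3],
    ])

def homography_from_mixed(points, lines):
    """points: list of (X, X') pairs; lines: list of (l, l') pairs.
    Requires at least 8 independent equations in total."""
    M = np.vstack([point_rows(X, Xp) for X, Xp in points] +
                  [line_rows(l, lp) for l, lp in lines])
    _, _, Vt = np.linalg.svd(M)
    return Vt[-1].reshape(3, 3)   # null vector of M, up to scale
```

The same stacking accepts any per-frame mix of available features, which is the robustness property the paragraph above describes.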

**[0022]**Once a unique solution is determined for the relative homography of a concurrent pair of frames from the two cameras, the absolute focal length of the first camera 105 can be determined. Specifically, the relative homography for the current frames from cameras 105, 110 contains the relative focal length between the two cameras. That is, the relative focal length taken from the relative homography is the ratio of the apparent focal length of the first camera 105 to the known focal length of the second camera 110. Once this ratio is known, the absolute focal length of the first camera 105 is determined by multiplying the ratio by the absolute focal length of the second camera 110.
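In the idealized case where the two optical axes are aligned and the principal points coincide, the relative homography reduces to approximately diag(f₁/f₂, f₁/f₂, 1) once its projective scale is fixed, and the final step is a single multiplication. A minimal sketch under those simplifying assumptions (assumptions that are mine; the application covers more general relative orientations):

```python
import numpy as np

def absolute_focal_length(H_rel, f2):
    """Recover f1 from the relative homography and the known f2,
    assuming aligned axes and coincident principal points so that
    H_rel ~ diag(f1/f2, f1/f2, 1) up to projective scale."""
    H = H_rel / H_rel[2, 2]             # fix the projective scale
    ratio = (H[0, 0] + H[1, 1]) / 2.0   # relative focal length f1/f2
    return ratio * f2
```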

**[0023]**FIG. 3 is a flow diagram illustrating the method for determining the absolute focal length of the first camera 105. Step 300 involves receiving a current frame of a first image from camera 105 and a current frame of a second image from second camera 110. In step 305, corresponding geometrical features from the current frames are determined. As explained above, combinations of corresponding points, lines, and conics are determined continually to generate, for the general case, at least eight independent equations. Once the equations are determined, they are used in step 310 to compute a unique solution for the relative homography H. The relative homography provides the ratio of focal lengths between the cameras 105, 110, that is, the relative focal length. Once the relative focal length is known, it is multiplied by the known focal length of camera 110 to determine the absolute focal length of camera 105. The method has been described as a single instance of focal length determination, but it can be repeated as often as needed. For instance, the focal length calculation can be performed for every frame of the images produced by cameras 105, 110, every other frame, or as often or as infrequently as the particular application requires. In an application requiring frequent changes of the focal length of camera 105, the exemplary embodiments can keep track of the focal length changes by calculating the focal length on a periodic basis, with the period between calculations determined according to the necessities of the particular application.

**[0024]**The exemplary embodiments can work with any camera/lens without modification. Moreover, the exemplary embodiments achieve the purpose of determining a focal length for a first camera that is attached to a second camera even if the observed scene does not match a known model. This situation may result if the observed scene contains too few, or no, known fiducials. In some applications, some or all of pan, tilt, roll, and position may be fixed, known, or available from sensors. The addition of the current focal length according to the exemplary embodiments can then provide a full camera pose, without recourse to visual model-matching. The exemplary embodiments can also enable a model-matching system to determine a full first camera pose, even if the first camera cannot view enough fiducials to form a model. If the second camera's field of view is set wide enough to capture enough fiducials to form a model, the second camera's model will determine the translation and rotation, which are shared by the first camera (since the cameras may be attached). Then, the first camera's calculated focal length is substituted for the second camera's focal length to complete the first camera's pose. With regard to the linear computation of the homography from any available combination of corresponding key-points, lines, and/or conics (e.g., common features in a game arena), once the homography is known, the camera model may be estimated based on methods known in the prior art.

**[0025]**Typically, the second camera is set to capture a wide view of the scene and, relative to the first camera, may be steered slowly, without rapid changes in orientation and zoom level. Thus, the second camera's parameters may be derived based on a known model of the scene, for example, the points (e.g., line intersections), lines, and circles that constitute a hockey rink. Due to the camera's wide view, a combination of points, lines, and circles from the rink is likely to be captured concurrently within a video frame, forming corresponding pairs (e.g., a line from the scene model of the rink corresponds to its projection in the video frame). Out of these corresponding pairs, an absolute homography may be calculated following the method described above for the relative homography. Out of this absolute homography, the second camera's parameters may be calculated following methods known in the art.
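One method known in the art for going from such an absolute (scene-plane-to-image) homography to a focal length is the plane-based calibration constraint: for H = K[r₁ r₂ t] with K = diag(f, f, 1) (square pixels, principal point at the image origin, both of which are simplifying assumptions of this sketch and not requirements of the application), orthogonality of r₁ and r₂ gives f² = −(h₁₁h₁₂ + h₂₁h₂₂)/(h₃₁h₃₂):

```python
import numpy as np

def focal_from_plane_homography(H):
    """Focal length from an absolute (scene-plane-to-image) homography,
    assuming square pixels and principal point at the image origin.
    Uses r1 . r2 = 0 for H = K [r1 r2 t], K = diag(f, f, 1)."""
    h1, h2 = H[:, 0], H[:, 1]
    f_sq = -(h1[0] * h2[0] + h1[1] * h2[1]) / (h1[2] * h2[2])
    return np.sqrt(f_sq)
```

The result is invariant to the overall projective scale of H, since numerator and denominator scale identically.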
