Open Access Paper
17 October 2022 Multiple linear detector off-line calibration
Sasha Gasquet, Laurent Desbat, Pierre-Yves Solane
Author Affiliations +
Proceedings Volume 12304, 7th International Conference on Image Formation in X-Ray Computed Tomography; 1230431 (2022) https://doi.org/10.1117/12.2646441
Event: Seventh International Conference on Image Formation in X-Ray Computed Tomography (ICIFXCT 2022), 2022, Baltimore, United States
Abstract
Imaging systems require to be calibrated. The geometric calibration consists in estimating several parameters describing the projection geometry. Just like in computer vision for cameras, intrinsic parameters characterize the internal parameters of X-ray projection system. The extrinsic parameters define the orientation and position of the acquisition system. In X-ray computed tomography (CT), the acquisition systems are generally composed of a detector and a X-ray source. The object to be reconstructed lies in-between. A perfect knowledge of the calibration parameters is needed for the reconstruction algorithm to reduce artefacts. In this paper, we focus on off-line calibration methods for 1D linear X-ray detector systems. We first introduce a calibration method for systems composed of a single linear detector. This method solves the problem of calibration in two steps using calibration objects based on four co-planar lines. Moreover, we generalize the single linear detector geometric calibration method to a multi-linear detector system. We compare four different numerical models and methods. Three are based on non-linear equation systems. Finally, we propose an adaptative calibration object.

1.

INTRODUCTION

A high accuracy geometric calibration of the acquisition system is required to perform the 3D reconstruction of an object from its projections. The algorithms rely on the perfect knowledge of the intrinsic and extrinsic parameters of the system. In x-ray cone-beam CT (CBCT), these parameters describe the relation between a 3D point and its projection point on the detector image plane. Therefore, inaccurate estimations will lead to a poor reconstruction.

Off-line calibration methods of systems using 2D detector are well-known in the computer vision literature.1 Many methods are based on the data acquired using a perfectly known geometrical object called the calibration object. In computer vision, a well-known calibration object is the Tsai grid.2 In x-ray CBCT, calibration methods are adapted from computer vision. Calibration objects, well suited to the circular CB geometry, composed of several balls of high density material, have been designed. The projections of the balls form an ellipse from which the calibration parameters can be estimated.35

Many computer vision methods are based on a pinhole camera model. The model relates a 3D point (X, Y,Z) lying in the scene to a 2D point (u, v) on the detector. The intrinsic parameters αu = kuf, αv = kvf, u0 and v0 are contained in the calibration matrix K2D ∈ ℝ3×4 where f is the focal distance, ku and kv are the densities of pixels along the image axes u and v, respectively, and (u0, v0) is the principal point on the detector in the image coordinates.

00110_PSISDG12304_1230431_page_1_1.jpg

We define the rotation matrix R ∈ ℝ3×3 and the translation vector t ∈ ℝ3×1 representing the orientation and position of the camera. Therefore, the pinhole camera model is defined in the Eq. (2). The parameter s is a scale factor.

00110_PSISDG12304_1230431_page_2_1.jpg

Geometric calibration is the identification of K2D, R and t. In computer vision, K2D, R and t are often estimated from sufficient projections (u, v) of 3D world points (X, Y,Z) using (2). However, these model and methods must be adapted to calibrate a linear detector. A calibration object composed of lines is more suitable to linear cameras or detectors.6 Horaud et al. proposed a two-step calibration method based on four co-planar lines calibration objects. The adaptation of this method to a x-ray system with a linear detector is straight forward as both systems can be described with the same geometric pinhole model.

In this paper, we adapt and generalize the Horaud et al. computer vision calibration method for linear camera to a multiple linear detector x-ray system. We propose a calibration object with a minimal number of opaque lines. A total of four different methods are proposed. In addition, we present a calibration object which can be adapted to different configurations of detectors. Finally, we show the performances of the proposed methods and calibration object in numerical simulations.

2.

THEORY

2.1

Geometry

The system is composed of nD linear detectors, denoted Dl, l = 1, …, nD, and an unique x-ray source denoted S. We denote (O, x, y, z) the world coordinate system centred at the origin O. An illustration of such a system is given in the Fig. 1. Furthermore, we introduce the detector coordinate system associated to the lth detector (Ol, ul, vl,wl) where vl is the image axis, wl is the axis perpendicular to vl pointing towards the source and ul = vl ×wl. The origin of the system is the point Ol which is the orthogonal projection of the source S on the detector Dl. The real vl is the coordinate of Ol along the linear detector, along vl, relative to the pixel 0 of Dl.

Figure 1:

A two linear detector system.

00110_PSISDG12304_1230431_page_2_3.jpg

2.2

Single detector calibration

We start by introducing the single linear detector system calibration method derived from Horaud et al.6

Pinhole linear camera model Let’s consider a point (X, Y,Z) expressed in the world coordinates system and its projection v on the linear detector D. To be seen, the point has to belong to the viewing plane Π which is defined in the Eq. (3) using three real parameters p, q and r.

00110_PSISDG12304_1230431_page_2_2.jpg

Moreover, we adjust the pinhole camera model presented in the Eq. (2) to the system using a 1D detector. The rotation matrix R ∈ ℝ3×3 and the translation vector t ∈ ℝ3×1 are representing the rigid transformation from the world to the source coordinates. The calibration matrix K2D ∈ ℝ3×4 defined in the Eq. (1) becomes K1D ∈ ℝ2×4.

00110_PSISDG12304_1230431_page_3_1.jpg

The pinhole linear camera model is given by:

00110_PSISDG12304_1230431_page_3_2.jpg

Within the plane Π, i.e. using Eq. (3), we can rewrite the Eq. (5) such that the calibration problem is reduced to the estimation of five real parameters n1, n2, n3, n4, n5 from :

00110_PSISDG12304_1230431_page_3_3.jpg

and the three parameters p, q, r from Eq. (3).

Calibration object The calibration object is made of four co-planar lines. Three of them are parallel and the last one is oblique. For example, we can define these lines within the plane Z = 0 by the following equations.

00110_PSISDG12304_1230431_page_3_4.jpg

The intersections of these lines with the plane Π and their projections on D are used to solve the calibration problem. A key idea introduced by Horaud et al. is to use the intersections of (L1), (L2) and (L3) with the plane Π and their projections on D to estimate the intersection point of (L4) and Π using a projective invariant : the cross-ratio.1 Therefore, by translating this object several times along the y and/or z axes, we can acquire enough data to solve the calibration problem.

Calibration problem We denote 00110_PSISDG12304_1230431_page_3_5.jpg and 00110_PSISDG12304_1230431_page_3_6.jpg the sets of known data related to the parallel and oblique lines, respectively, with i = 1, …, nP and j = 1, …, nO, where nP ≥ 5 and nO ≥ 3 are the numbers of parallel and oblique lines, respectively. In Fig. 2 we show a calibration object containing the minimal number of eight calibration lines. Using (6) and the parallels lines set of data, we can estimate the parameters n1, n2, n3, n4, n5 by solving a system of nP equations in the least square sense. Likewise, we estimate the viewing plane parameters p, q, r by solving a system of nO equations based on the Eq. (3).

Figure 2:

The minimal calibration object.

00110_PSISDG12304_1230431_page_5_4.jpg

Calibration parameters estimation The intrinsic and extrinsic parameters can easily be extracted from n1, n2, n3, n4 and n5 using p, q and r (see6). Then, the source position (XS, YS,ZS) can be estimated by solving the linear system (8) where v* and v are two different detector pixels. Indeed, the source belongs to all the backprojection lines and the viewing plane Π.

00110_PSISDG12304_1230431_page_3_7.jpg

2.3

Multi-detector calibration

The first obvious idea to calibrate a multiple linear detector system would be to use the previous method on each detector. However, all subsystems share the same source. In this section, we present four different methods exploiting this property.

In the following, as in section 2.1, the index l refers to the detector Dl, l = 1, …, nD.

Method 1 (M1) We use the previous method on each subsystem to calibrate the system. We then agregate the equations (8) associated to each detector Dl, l = 1, …, nD for estimating an unique source position. The agregated system of equations (9) is composed of 3nD equations and can be easily solved by least squares.

00110_PSISDG12304_1230431_page_4_1.jpg

Method 2 (M2) From the combination of the equations (3) and (9), we get a system of (nO+3)nD non-linear equations (10) in the parameters pl, ql, rl, XS, YS and ZS, l = 1, …, nD.

00110_PSISDG12304_1230431_page_4_2.jpg

This system can be solved using the Gauss-Newton algorithm. We suggest to initialize the algorithm with the results of M1.

Method 3 (M3) Similarly to M2, by combining the equations (6) and (9), we build a system of (nP + 3)nD non-linear equations (11) in the parameters nl,1, nl,2, nl,3, nl,4, nl,5, XS, YS and ZS, l = 1, …, nD.

00110_PSISDG12304_1230431_page_4_3.jpg

We use the same numerical method as for M2 for solving Eq. (11).

Method 4 (M4) The last method combines the equations of M2 and M3. Consequently, the system of (nP + nO + 3)nD equations (12) to solve is non-linear in the parameters pl, ql, rl, nl,1, nl,2, nl,3, nl,4, nl,5, XS, YS and ZS, l = 1, …, nD.

00110_PSISDG12304_1230431_page_4_4.jpg

The resolution can be done using the Gauss-Newton algorithm as for M2 and M3.

2.4

Calibration object

The use of sufficient four co-planar lines calibration objects presented in the section 2.2 could be enough to generate the data required to solve the calibration problem. Nevertheless, we present in this section several improvements.

Minimal calibration object The first improvement is to construct a minimal calibration object composed of 5 parallel lines and 3 oblique lines. The parallel lines are shared among the oblique lines such that 3 groups of 3 parallel plus one oblique line, as in section 2.2, can be provided. The 5 parallel lines are defined by the equations (13).

00110_PSISDG12304_1230431_page_5_1.jpg

The three oblique lines are defined by the equations (14).

00110_PSISDG12304_1230431_page_5_2.jpg

The minimal calibration object is illustrated in the Fig. 2. One can notice that the group of lines in the plane Y = 2ε doesn’t result from a translation as suggested in the section 2.2. The cross-ratio can be adapted in this plane. Besides, we remark that the lines are positioned such that their projections can be spanned all over the detectors using an adequate value of ε.

Adapted object First, we observed that adding to the minimal calibration object the parallel line (L9) defined in the Eq. (15) improved significantly the accuracy of the calibration.

00110_PSISDG12304_1230431_page_5_3.jpg

Then, another improvement is the positioning of the oblique lines relatively to each detector Dl, l = 1, …, nD. The lines are placed such that the intersections of the oblique lines and the viewing planes are at equal distance of the two closest co-planar parallel lines. We denote these lines (L6,l), (L7,l) and (L8,l), l = 1, …, nD. The lines (L6,l) and (L7,l) are positioned in the plane Z = 0 between the lines (L1) and (L2), and the lines (L3) and (L9), respectively. They are inclined at a fixed angle λ. The lines (L8,l) are positioned in the plane Y = 2ε, between the lines (L3) and (L4). The inclinations of these lines are specific to each detector. Nevertheless, the position of the lines (L8,l) doesn’t impact the results as much as the two others obliques lines. Thus, every way of positioning the lines can be used providing that the intersection of the lines and the viewing planes are at 00110_PSISDG12304_1230431_page_6_1.jpg. An example of such a calibration object is given in the Fig. 3.

Figure 3:

(a) Vertical plane and (b) horizontal plane of the object used for the 8-detectors system calibration.

00110_PSISDG12304_1230431_page_5_5.jpg

3.

SIMULATION

We consider a 8-linear-detector system. The distance from the source to the plane containing the detectors is 480mm. The detectors are spaced at a constant angle from the source. The calibration object is positioned halfway between the source and the detectors line. The detectors have 1920 pixels of height 0.4mm. The parameters of the calibration object are set to W = 600mm, ε = 125mm and η = 50mm. The two planes containing line segments of the calibration object are illustrated in the Fig. 3a and 3b, respectively.

We begin with the comparison of the methods presented in the section 2.3. The methods are compared on the average projection errors observed on 100 simulations where we add Gaussian noise N(0, σ2) to the data. The value of σ is set as a fraction of the pixels size.

The average projection errors are calculated by computing the absolute value of the difference between the theoretical projections of 2 oblique lines and the projections obtained using successively the Eq. (3) to compute the intersections of the oblique lines and Πl, and the Eq. (6) to compute the projections of the intersections points. Two projections are calculated using the estimated values of either the parameters nl,1, nl,2, nl,3, nl,4, nl,5 or the parameters pl, ql, rl. The results are presented in the Fig. 4.

Figure 4:

Methods comparison. The graphics show the average projection errors of 2 oblique lines on the 8 detectors using the estimated values of (a) the parameters nl,1, nl,2, nl,3, nl,4, nl,5 (b) the parameters pl, ql, rl, with σ = 0.24px, l = 1, …, nD.

00110_PSISDG12304_1230431_page_6_2.jpg

It can be observed in the Fig. 4a that M3 and M4 improve slightly the estimation of the parameters nl,1, nl,2, nl,3, nl,4, nl,5. We can see in the Fig. 4b that M2 and M4 fail to improve the estimation of the viewing planes parameters whereas M1 and M3 have the lowest errors.

Finally, we compare the results of the proposed calibration object with those obtained with the object presented in the section 2.2. We consider here only one wide object for all the detectors. We set ξ1 = ξ2 = 100mm, α = 0.25 and β = 75. The object is initially positioned in-between the source and the detectors line. Then, it is shifted twice on the y and z axes. Exactly, we shift the object successively by Yshift = 200mm along the y axis and by Zshift = −150mm along the z axis. We use M3 to solve the problem. The results are presented in the Fig. 5. We can see in the Fig. 5a that the objects achieves comparable results on the estimation of the parameters nl,1, nl,2, nl,3, nl,4 and nl,5. However, we can observe in the Fig. 5b that the estimation of the viewing planes parameters are much better with the calibration object proposed in 2.4.

Figure 5:

Calibration objects comparison. The graphics show the average projection errors of 2 oblique lines on the 8 detectors using the estimated values of (a) the parameters nl,1, nl,2, nl,3, nl,4, nl,5 (b) the parameters pl, ql, rl, with σ = 0.24px, l = 1, …, nD.

00110_PSISDG12304_1230431_page_6_3.jpg

4.

CONCLUSION

We have extended the linear camera geometric calibration method introduced by Horaud et al.6 (see section 2.2) to a multiple linear detector system. We have proposed a minimal calibration phantom of 8 opaque lines. But we have observed that adding one opaque line improves highly the accuracy and the stability of the calibration parameter estimation. This calibration object has been adapted to a multiple linear detector system in order to preserve a sufficient oblicity for the oblique lines. We have proposed and evaluated 4 numerical methods exploiting the fact that all subsystems share the same x-ray source.

Our numerical simulations have shown that estimating the geometric parameters of multi linear detector systems taking into account that they share the same x-ray source, improves the geometric calibration.

REFERENCES

1. 

R. Hartley and A. Zisserman, Multiple View Geometry in Computer Vision, secondCambridge University Press,2004). https://doi.org/10.1017/CBO9780511811685 Google Scholar

2. 

R. Tsai, “A versatile camera calibration technique for high-accuray 3d machine vision metrology off-the-shelf tv cameras and lenses,” IEEE Journal of Robotics and Automation, RA-3 (4), 323 –344 (1987). https://doi.org/10.1109/JRA.1987.1087109 Google Scholar

3. 

F. Noo, R. Clackdoyle, C. Mennessier, T. White, and T. Roney, “Analytic method based on identification of ellipse parameters for scanner calibration in cone-beam tomography,” Physics in Medicine and Biology, 45 (11), 3489 –3508 (2000). https://doi.org/10.1088/0031-9155/45/11/327 Google Scholar

4. 

Y. Cho, D. Moseley, J. Siewerdsen, and D. Jaffray, “Accurate technique for complete geometric calibration of cone-beam computed tomography systems,” Medical Physics, 32 (4), 968 –983 (2005). https://doi.org/10.1118/1.1869652 Google Scholar

5. 

M. J. Daly, J. H. Siewerdsen, Y. B. Cho, D. A. Jaffray, and J. C. Irish, “Geometric calibration of a mobile c-arm for intraoperative cone-beam ct,” Medical Physics, 35 (5), 2124 –2136 (2008). https://doi.org/10.1118/1.2907563 Google Scholar

6. 

R. Horaud, R. Mohr, and B. Lorecki, “On single-scanline camera calibration,” IEEE Transactions on Robotics and Automation, 1 (9), 71 –75 (1993). https://doi.org/10.1109/70.210796 Google Scholar
© (2022) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Sasha Gasquet, Laurent Desbat, and Pierre-Yves Solane "Multiple linear detector off-line calibration", Proc. SPIE 12304, 7th International Conference on Image Formation in X-Ray Computed Tomography, 1230431 (17 October 2022); https://doi.org/10.1117/12.2646441
Advertisement
Advertisement
RIGHTS & PERMISSIONS
Get copyright permission  Get copyright permission on Copyright Marketplace
KEYWORDS
Calibration

Machine vision

3D modeling

X-rays

Visual process modeling

X-ray detectors

X-ray sources

Back to Top