The fisheye lens, with a field of view (FOV) reaching 180 degrees, is highly effective for visual inspection and measurement of large scenes, provided it is accurately modeled and precisely calibrated. Compared with perspective projection lenses with narrower FOVs, fisheye lenses exhibit stronger nonlinearity, larger distortions, and more pronounced aberrations. Moreover, the calibration accuracy of current state-of-the-art imaging models and methods is generally no better than 1/10 of a pixel, which fails to meet the high-precision requirements of certain measurement scenarios. In this paper, we introduce a new imaging model and calibration method for fisheye cameras. Supplementing traditional projection and distortion models, we propose image-variant intrinsic orientation parameters to account for the influence of the camera's attitude on the intrinsic parameters, and we develop a corresponding bundle adjustment algorithm for this model. Because traditional calibration objects are too small to meet the high-precision needs of fisheye lenses, we establish a large planar calibration field with numerous control points and capture a series of images from various orientations for bundle adjustment calibration of the imaging model. To address the strong influence of initial parameter values on bundle adjustment convergence, we present a new calibration method that enables automatic processing and matching of the calibration image data, ensuring robust and reliable results. Calibration experiments using a NIKON D810 camera and a Nikkor 16mm fisheye lens demonstrate that our method achieves a calibration precision of 1/15 of a pixel, surpassing other models and methods reported in the literature. Furthermore, the proposed method is distinguished by its simplicity of operation and automated data processing.
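The projection core that fisheye models of this kind extend can be sketched as below: a minimal numpy illustration of the standard equidistant fisheye projection with polynomial radial distortion, not the paper's full model (the image-variant intrinsic orientation parameters are omitted, and the focal length, principal point, and distortion coefficients are invented values).

```python
import numpy as np

def fisheye_project(X, f=800.0, cx=2000.0, cy=1500.0, k=(0.0, 0.0)):
    """Project a 3D camera-frame point with an equidistant fisheye model.

    theta is the angle between the ray and the optical axis; the ideal
    image radius is r = f * theta, optionally perturbed by a polynomial
    radial distortion theta_d = theta * (1 + k1*theta^2 + k2*theta^4).
    """
    x, y, z = X
    theta = np.arctan2(np.hypot(x, y), z)   # angle from the optical axis
    k1, k2 = k
    theta_d = theta * (1 + k1 * theta**2 + k2 * theta**4)
    phi = np.arctan2(y, x)                  # azimuth around the axis
    r = f * theta_d
    return np.array([cx + r * np.cos(phi), cy + r * np.sin(phi)])

# A point 90 degrees off-axis (z = 0) still maps to a finite radius,
# unlike the perspective model, where tan(theta) diverges at 90 degrees.
u, v = fisheye_project(np.array([1.0, 0.0, 0.0]))
```

The nonlinearity mentioned in the abstract is visible here: image radius grows with the ray angle theta itself rather than with tan(theta), which is what lets the model cover a 180-degree FOV.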
To improve the positioning accuracy of humanoid football robots on the playing field [1], this paper proposes a field-assisted positioning method based on monocular vision. A robot is fixed at a known position, and its monocular camera captures the key features of the other robots and the football on the field; the recent object detection algorithm YOLOv8 is then used to identify these targets accurately. Next, a distance measurement model established from a camera pose algorithm, together with the geometric relationships within the field coordinate system, is used to compute the relative coordinates of each robot on the football field. Experimental verification shows that, among the five algorithms tested, the ranging and auxiliary positioning model based on the EPnP algorithm performs best: its ranging and positioning error within 3 meters is below 10 centimeters, which fully confirms the effectiveness and accuracy of the method.
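One way the geometric relationship between camera pose and ground-plane distance can work is sketched below. This is a minimal flat-ground ranging illustration, not the paper's EPnP pipeline: the camera height, downward pitch, and intrinsics are invented values, and the target is assumed to stand on the planar pitch.

```python
import numpy as np

def ground_distance(v_px, f=1000.0, cy=540.0, cam_height=0.6, pitch_deg=15.0):
    """Estimate ground distance to a target's foot point from its image row.

    Assumes a pinhole camera at known height, tilted down by pitch_deg,
    observing a target standing on a flat field. The ray through pixel
    row v_px makes an extra angle beta below the optical axis, so it
    intersects the ground at distance d = h / tan(pitch + beta).
    """
    beta = np.arctan2(v_px - cy, f)   # ray angle below the optical axis
    return cam_height / np.tan(np.radians(pitch_deg) + beta)

# Foot point observed 160 px below the principal point.
d = ground_distance(700.0)
```

Once such a distance (and the corresponding bearing from the pixel column) is known, the target's relative coordinates on the field follow by simple polar-to-Cartesian conversion in the fixed robot's frame.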
Since feature matching of image pairs imposes a heavy computational burden, Structure-from-Motion faces great efficiency challenges, especially for unordered large-scale image collections. To address this, we propose a hierarchical image matching method. Our approach starts with an iterative image retrieval scheme, which efficiently finds potentially overlapping image pairs as candidates and avoids unnecessary computation. Then, feature extraction, feature matching, and geometric verification are performed on these candidates to obtain the verified image pairs and inlier feature correspondences. Experiments on benchmark datasets and large-scale unordered datasets demonstrate that our method achieves competitive efficiency, without degrading accuracy, compared with state-of-the-art systems.
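The candidate-selection idea can be sketched with a bag-of-visual-words retrieval step: rank images by descriptor-histogram similarity and keep only each image's nearest neighbours for full feature matching, rather than testing all n(n-1)/2 pairs. This is a generic numpy illustration under that assumption, not the paper's iterative retrieval scheme, and the histograms are toy data.

```python
import numpy as np

def candidate_pairs(bow, top_k=2):
    """Select likely-overlapping image pairs by BoW cosine similarity.

    bow: (n_images, n_visual_words) tf-idf histogram matrix. Each image
    keeps only its top_k most similar neighbours as candidates, so the
    expensive feature matching runs on O(n * top_k) pairs instead of
    all O(n^2) pairs.
    """
    unit = bow / np.linalg.norm(bow, axis=1, keepdims=True)
    sim = unit @ unit.T                 # pairwise cosine similarities
    np.fill_diagonal(sim, -1.0)         # exclude self-matches
    pairs = set()
    for i in range(sim.shape[0]):
        for j in np.argsort(sim[i])[::-1][:top_k]:
            j = int(j)
            pairs.add((min(i, j), max(i, j)))
    return sorted(pairs)

# Four images forming two clusters of similar word histograms.
bow = np.array([[9., 1., 0.], [8., 2., 0.], [0., 1., 9.], [0., 2., 8.]])
pairs = candidate_pairs(bow, top_k=1)   # only within-cluster pairs survive
```

Geometric verification (e.g. RANSAC on a fundamental matrix) would then run only on the surviving pairs, which is where the efficiency gain comes from.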
Vision measurement, a fast and high-precision measurement technology, has great application potential in aerospace tasks such as on-orbit assembly and maintenance. The On-orbit Multi-view Photogrammetry System (OMPS) lacks sufficient known spatial reference information to assist camera orientation. To solve this problem, this paper proposes a method that calibrates all Camera External Parameters (CEP) of the OMPS using stars and scale bars. The method first establishes an imaging model of the stars and scale bars, in which a relative position model is proposed to resolve the unconstrained position of the reference camera. Then, based on the error equations constructed for the star imaging points, scale bar target imaging points, and scale bar lengths, a multi-data fusion bundle adjustment algorithm is proposed to achieve high-precision calibration of the CEP. Practical experiments show that the image plane errors of the stars and scale bar targets are 1/7 pixel (1σ) and 1/16 pixel (1σ), respectively, and the scale bar length error is 0.045 mm (1σ). Taking the measurements of the V-star System (VS) as the true values, the OMPS measurement errors of spatial targets in the X, Y, and Z directions are 0.45 mm (3σ), 0.12 mm (3σ), and 0.15 mm (3σ), respectively. This method provides an algorithmic and data reference for the CEP calibration problem in on-orbit photogrammetry applications.
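The multi-data fusion step can be illustrated in miniature: observations of different types and accuracies (star image points, scale bar target points, scale bar lengths) are stacked into one weighted least-squares problem, each block scaled by its measurement sigma. This is a linearized toy sketch of that fusion principle, not the paper's bundle adjustment; the observation matrices and sigmas are made-up values.

```python
import numpy as np

def fuse_observations(A_blocks, b_blocks, sigmas):
    """Solve a stacked weighted least-squares problem  A x ~= b.

    Each (A_i, b_i) block is one observation type; dividing each block
    by its sigma_i weights it by 1/sigma_i, so measurements with very
    different units and accuracies can be fused in a single solve.
    """
    A = np.vstack([A_i / s for A_i, s in zip(A_blocks, sigmas)])
    b = np.concatenate([b_i / s for b_i, s in zip(b_blocks, sigmas)])
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x

# Toy fusion: two observation types jointly constraining the same 2-vector.
A1, b1 = np.eye(2), np.array([1.0, 2.0])            # "image point" obs, sigma 0.5
A2, b2 = np.array([[1.0, 1.0]]), np.array([3.0])    # "length" obs, sigma 0.1
x = fuse_observations([A1, A2], [b1, b2], sigmas=[0.5, 0.1])
```

In a real bundle adjustment the A blocks would be Jacobians of the star and scale bar imaging models with respect to the CEP, re-linearized at each iteration; the weighting logic, however, is the same.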