Error Analysis and Calibration Method of a Multiple Field-of-View Navigation System
Abstract
1. Introduction
2. Mathematical Model
2.1. Imaging Model
- Zc is the coordinate of point P along the camera optical axis,
- dx is the ratio coefficient in the x direction,
- dy is the ratio coefficient in the y direction,
- s and γ are the non-orthogonality (skew) factors of the image coordinate axes,
- (u0, v0) is the pixel coordinate of the camera principal point,
- f is the principal distance of the camera,
- R is the 3 × 3 rotation matrix,
- T is the 3D translation vector,
- αx = f/dx and αy = f/dy are the respective scale factors of the u-axis and v-axis of the image coordinate,
- M1 and M2 are the intrinsic parameter matrix and extrinsic parameter matrix respectively, and
- M = M1·M2 is the perspective projection transform matrix (a minimal numerical sketch of this projection follows the list).
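To make the roles of M1, M2, and M concrete, the following minimal Python/NumPy sketch builds the intrinsic and extrinsic matrices from the parameters defined above and projects a homogeneous world point to pixel coordinates. The numerical values (scale factors, principal point, rotation, translation, and test point) are illustrative assumptions, not results from this paper.

```python
import numpy as np

# Illustrative intrinsic parameters (assumed values).
alpha_x, alpha_y = 1600.0, 1600.0   # scale factors f/dx, f/dy (pixel)
u0, v0 = 640.0, 512.0               # principal point (pixel)
gamma = 0.0                         # non-orthogonality (skew) factor

# Intrinsic parameter matrix M1 (3x4 pinhole model).
M1 = np.array([[alpha_x, gamma,   u0,  0.0],
               [0.0,     alpha_y, v0,  0.0],
               [0.0,     0.0,     1.0, 0.0]])

# Extrinsic parameter matrix M2 (4x4): rotation R and translation T
# from the world coordinate system to the camera coordinate system.
R = np.eye(3)                        # assumed identity rotation
T = np.array([0.0, 0.0, 1000.0])     # assumed 1 m along the optical axis
M2 = np.vstack([np.hstack([R, T.reshape(3, 1)]), [0.0, 0.0, 0.0, 1.0]])

# Perspective projection transform matrix M = M1 * M2.
M = M1 @ M2

# Project a world point P = (Xw, Yw, Zw) to pixel coordinates (u, v).
Pw = np.array([100.0, 50.0, 0.0, 1.0])   # homogeneous world point (assumed)
uvw = M @ Pw
Zc = uvw[2]                              # optical-axis coordinate of P
u, v = uvw[0] / Zc, uvw[1] / Zc
print(f"Zc = {Zc:.1f}, pixel = ({u:.2f}, {v:.2f})")
```

With these assumed values the point projects to (800.00, 592.00) at a depth of 1000 mm, which shows how the scale factors and principal point map camera-frame coordinates to pixels.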
2.2. MFNS Model
3. Error Analysis
4. System Calibration Based on Geometrical Constraints in Object Space
4.1. CPC Method for MFNS
- MFNS body coordinate ObXbYbZb: fixed on the system frame and defined for ease of use.
- Turntable coordinate OtXtYtZt: Ot is the center of the turntable, Zt points along the turntable main axis, and Xt points along the turntable auxiliary axis.
- World coordinate OwXwYwZw: defined by the checkerboard according to the single-camera calibration method, where Ow is a corner of the checkerboard, and Xw and Yw are parallel to the edges of the checkerboard grid.
- Coordinates of cameras X, Y, and Z: defined based on the imaging model in Section 2.1 and recorded respectively as OxXxYxZx, OyXyYyZy, and OzXzYzZz.
- Intrinsic parameter matrix Ak and distortion coefficients kc1k–kc5k;
- Rotation matrix between the camera k coordinate and the MFNS body coordinate; and
- Vectors ri from the camera principal points Ocx, Ocy, and Ocz to the origin Ob of the MFNS coordinate system, where k = x, y, and z stand for cameras X, Y, and Z, respectively (a sketch of how these extrinsic parameters are applied follows this list).
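As an illustration of how the calibrated extrinsic parameters are used, the sketch below transforms a point from a camera coordinate system into the MFNS body coordinate system using the camera-to-body rotation matrix and the vector r. The function name, the sign convention, and all numerical values are assumptions made for this example, not identifiers or results from the paper.

```python
import numpy as np

def camera_to_body(p_cam, R_bk, r_k):
    """Transform a point from camera-k coordinates to MFNS body coordinates.

    p_cam : (3,) point expressed in the camera k coordinate system
    R_bk  : (3, 3) rotation matrix from camera k coordinates to body coordinates
    r_k   : (3,) vector from the camera principal point Ock to the body origin Ob,
            expressed in body coordinates (sign convention assumed for illustration)
    """
    # Camera origin sits at -r_k in the body frame under this convention.
    return R_bk @ p_cam - r_k

# Illustrative values (assumed, not calibration results from this paper).
R_bx = np.eye(3)                         # camera X assumed aligned with the body frame
r_x = np.array([50.0, -20.0, 100.0])     # mm, assumed lever arm

p_in_camera_x = np.array([0.0, 0.0, 500.0])   # a point 500 mm along the camera axis Zx
print(camera_to_body(p_in_camera_x, R_bx, r_x))
```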
- The checkerboard should be fixed at an appropriate location where it can be observed by the cameras while the MFNS rotates with the turntable.
- Several pictures of the checkerboard that meet the requirements of Zhang’s method are taken by cameras X, Y, and Z (a per-camera calibration sketch follows this list).
- The coordinates should subsequently be aligned so that the rotation matrix from the turntable coordinate to the world coordinate is approximately an identity matrix.
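Since the CPC method starts from per-camera results obtained with Zhang’s method, the sketch below shows one common way to produce such results with OpenCV’s chessboard-based calibration. The board geometry, square size, and image file pattern are assumptions for illustration; the paper’s own processing pipeline may differ.

```python
import glob
import cv2
import numpy as np

# Assumed checkerboard geometry: 9x6 inner corners, 30 mm squares.
pattern_size = (9, 6)
square_mm = 30.0

# World (checkerboard) coordinates of the corners: Zw = 0 on the board plane.
obj_corners = np.zeros((pattern_size[0] * pattern_size[1], 3), np.float32)
obj_corners[:, :2] = np.mgrid[0:pattern_size[0],
                              0:pattern_size[1]].T.reshape(-1, 2) * square_mm

obj_points, img_points = [], []
for path in glob.glob("camera_x/*.png"):      # assumed image location
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, pattern_size)
    if found:
        corners = cv2.cornerSubPix(
            gray, corners, (11, 11), (-1, -1),
            (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
        obj_points.append(obj_corners)
        img_points.append(corners)

# Zhang-style calibration: intrinsic matrix A and distortion coefficients kc1..kc5.
rms, A, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)
print("reprojection RMS (pixel):", rms)
print("intrinsic matrix A:\n", A)
print("distortion kc1..kc5:", dist.ravel())
```

The same procedure would be repeated for cameras Y and Z before the system-level extrinsic calibration.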
4.2. Calibration Results
5. Navigation Experiment
- As the origin of the MFNS coordinate, Ob is defined on the mounting surface of the turntable, and the position of the MFNS remains constant at (0, 0, 0) while the turntable rotates around its Zt axis. Similarly, φ and θ remain 0 and ψ varies linearly while the turntable rotates uniformly under ideal conditions. The experiment results show that the MFNS worked properly during validation; the standard deviations of the three-axis position error are 1.60, 1.61, and 1.83 mm, respectively; and the standard deviations of the three-axis attitude error are 0.15°, 0.15°, and 0.17°, respectively.
- The calibration method proposed for multiple-camera systems builds on the results of the single-camera calibration method. In Section 3, we mainly use the least-squares (LSQ) method for the formula derivation and for finding the optimal solution. This approach makes the main idea and the process of our method easier to understand, and the experiment results show that the performance of the MFNS after calibration is acceptable. However, the LSQ method may not be the most accurate way to solve the problem, because any error propagates through the process: the quantities calculated from Equations (A11) and (A12) are based on the optimal solution of Equation (A10). More in-depth work may focus on better ways to obtain optimal solutions for the system extrinsic parameters.
- Given that the imaging model of the MFNS was built in Section 2, research on the navigation algorithm of the MFNS should be carried out next. The navigation experiment is conducted only to verify the accuracy of the system calibration result, because we simply use the Newton iteration method to find a numerical solution for the navigation parameters (a simplified solver sketch follows this list). The problems of beacon pattern recognition and of multiple solutions are avoided by manually matching the beacons and choosing the initial value of the iteration. Furthermore, a solution is difficult to obtain in a pose where fewer than three beacons are observed by the MFNS. Therefore, a complete navigation algorithm is our goal for future work, including the solution strategy, error propagation analysis, beacon distribution, and optimization methods.
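The sketch below illustrates the Newton-type iteration idea with a Gauss-Newton refinement of the camera position from observed beacon pixels, assuming a single ideal pinhole camera with known attitude and a numerical Jacobian. It is a simplified stand-in written for illustration, not the MFNS navigation algorithm itself; all numerical values are assumed.

```python
import numpy as np

# Assumed intrinsic parameters of a single ideal pinhole camera (illustrative).
ax, ay, u0, v0 = 1600.0, 1600.0, 640.0, 512.0

def project(beacons_w, t):
    """Project world beacons for a camera at position t with identity attitude."""
    p = beacons_w - t                      # beacon coordinates in the camera frame
    return np.column_stack((ax * p[:, 0] / p[:, 2] + u0,
                            ay * p[:, 1] / p[:, 2] + v0))

def solve_position(beacons_w, pixels, t0, iterations=10):
    """Gauss-Newton refinement of the camera position from >= 3 observed beacons."""
    t = np.asarray(t0, dtype=float)
    for _ in range(iterations):
        r = (project(beacons_w, t) - pixels).ravel()     # residual vector
        # Numerical Jacobian of the residuals with respect to t (finite differences).
        J = np.empty((r.size, 3))
        for k in range(3):
            dt = np.zeros(3)
            dt[k] = 1e-3
            J[:, k] = ((project(beacons_w, t + dt) - pixels).ravel() - r) / 1e-3
        t -= np.linalg.lstsq(J, r, rcond=None)[0]        # LSQ step
        if np.linalg.norm(r) < 1e-9:
            break
    return t

# Illustrative beacons and a synthetic observation from an assumed true position.
beacons = np.array([[500., -700., 2200.], [-50., -740., 2100.],
                    [-370., -710., 2150.], [180., 280., 2050.]])
t_true = np.array([10.0, -20.0, 30.0])
pixels = project(beacons, t_true)
print(solve_position(beacons, pixels, t0=[0.0, 0.0, 0.0]))   # converges near t_true
```

As the list above notes, such an iteration needs a reasonable initial value and at least three observed beacons to be well posed.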
6. Conclusions
Acknowledgments
Author Contributions
Conflicts of Interest
Appendix A. CPC Method for MFNS Calibration
Appendix A.1. Calibration of Rotation Matrix
Appendix A.2. Calibration of Vector ri
Appendix A.3. Operational Method for Coordinates’ Alignment
Step 1: Level adjustment of the checkerboard
Device required
Principle
Step 2: Orientation adjustment of the world coordinate
Device required
Principle
References
- Santos, C.A.; Costa, C.O.; Batista, J. A vision-based system for measuring the displacements of large structures: Simultaneous adaptive calibration and full motion estimation. Mech. Syst. Signal Process. 2016, 72, 678–694. [Google Scholar] [CrossRef]
- Vilaça, J.L.; Fonseca, J.C.; Pinho, A.M. Calibration procedure for 3D measurement systems using two cameras and a laser line. Opt. Laser Technol. 2009, 41, 112–119. [Google Scholar] [CrossRef] [Green Version]
- Zyda, M. From visual simulation to virtual reality to games. Computer 2005, 38, 25–32. [Google Scholar] [CrossRef]
- Mirota, D.J.; Ishii, M.; Hager, G.D. Vision-based navigation in image-guided interventions. Annu. Rev. Biomed. Eng. 2011, 13, 297–319. [Google Scholar] [CrossRef] [PubMed]
- Abdel-Aziz, Y.I. Direct linear transformation from comparator coordinates in close-range photogrammetry. In Proceedings of the ASP Symposium on Close-Range Photogrammetry, Urbana, IL, USA, 26–29 January 1971.
- Faugeras, O. Three-Dimensional Computer Vision: A Geometric Viewpoint; MIT Press: Cambridge, MA, USA, 1993. [Google Scholar]
- Jones, G.A.; Renno, J.R.; Remagnino, P. Auto-calibration in multiple-camera surveillance environments. In Proceedings of the Third IEEE International Workshop on Performance Evaluation of Tracking and Surveillance, Copenhagen, Denmark, 1 June 2002.
- Triggs, B. Camera pose and calibration from 4 or 5 known 3D points. In Proceedings of the Seventh IEEE International Conference on Computer Vision, Kerkyra, Greece, 20–27 September 1999; Volume 1, pp. 278–284.
- Tsai, R.Y. An efficient and accurate camera calibration technique for 3D machine vision. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Miami, FL, USA, 22–26 June 1986.
- Sturm, P.F.; Maybank, S.J. On plane-based camera calibration: A general algorithm, singularities, applications. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Fort Collins, CO, USA, 23–25 June 1999.
- Heikkila, J. Geometric camera calibration using circular control points. IEEE Trans. Pattern Anal. Mach. Intell. 2000, 22, 1066–1077. [Google Scholar] [CrossRef]
- Zhang, Z. A flexible new technique for camera calibration. IEEE Trans. Pattern Anal. Mach. Intell. 2000, 22, 1330–1334. [Google Scholar] [CrossRef]
- Beardsley, P.; Murray, D. Camera Calibration Using Vanishing Points; Springer: London, UK, 1992; pp. 416–425. [Google Scholar]
- Cipolla, R.; Drummond, T.; Robertson, D.P. Camera Calibration from Vanishing Points in Images of Architectural Scenes. BMVC 1999, 99, 382–391. [Google Scholar]
- Wong, K.Y.K.; Mendonca, P.R.S.; Cipolla, R. Camera calibration from surfaces of revolution. IEEE Trans. Pattern Anal. Mach. Intell. 2003, 25, 147–161. [Google Scholar] [CrossRef] [Green Version]
- Scaramuzza, D.; Martinelli, A.; Siegwart, R. A flexible technique for accurate omnidirectional camera calibration and structure from motion. In Proceedings of the Fourth IEEE International Conference on Computer Vision Systems (ICVS’06), New York, NY, USA, 4–7 January 2006.
- Liu, T.; Burner, A.W.; Jones, T.W.; Barrows, D.A. Photogrammetric techniques for aerospace applications. Prog. Aerosp. Sci. 2012, 54, 1–58. [Google Scholar] [CrossRef]
- Hughes, C.; Glavin, M.; Jones, E.; Denny, P. Wide-angle camera technology for automotive applications: A review. IET Intell. Transp. Syst. 2009, 3, 19–31. [Google Scholar] [CrossRef]
- Sun, T.; Xing, F.; You, Z. Optical system error analysis and calibration method of high-accuracy star trackers. Sensors 2013, 13, 4598–4623. [Google Scholar] [CrossRef] [PubMed]
- Schwartz, C.; Sarlette, R.; Weinmann, M.; Rump, M.; Klein, R. Design and implementation of practical bidirectional texture function measurement devices focusing on the developments at the University of Bonn. Sensors 2014, 14, 7753–7819. [Google Scholar] [CrossRef] [PubMed]
- Wang, X. Intelligent multi-camera video surveillance: A review. Pattern Recognit. Lett. 2013, 34, 3–19. [Google Scholar] [CrossRef]
- Kumar, R.K.; Ilie, A.; Frahm, J.M.; Pollefeys, M. Simple calibration of non-overlapping cameras with a mirror. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2008), Anchorage, AK, USA, 23–28 June 2008.
- Rodrigues, R.; Barreto, J.P.; Nunes, U. Camera pose estimation using images of planar mirror reflections. In Proceedings of the European Conference on Computer Vision, Crete, Greece, 5–11 September 2010.
- Caspi, Y.; Irani, M. Aligning non-overlapping sequences. Int. J. Comput. Vis. 2002, 48, 39–51. [Google Scholar] [CrossRef]
- Dai, Y.; Trumpf, J.; Li, H.; Barnes, N.; Hartley, R. Rotation averaging with application to camera-rig calibration. In Proceedings of the Asian Conference on Computer Vision, Xi’an, China, 23–27 September 2009.
- Hesch, J.A.; Mourikis, A.I.; Roumeliotis, S.I. Determining the camera to robot-body transformation from planar mirror reflections. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2008), Nice, France, 22–26 September 2008; pp. 3865–3871.
- Mariottini, G.L.; Scheggi, S.; Morbidi, F.; Prattichizzo, D. Planar catadioptric stereo: Single and multi-view geometry for calibration and localization. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA’09), Kobe, Japan, 12–17 May 2009; pp. 1510–1515.
- Hartley, R.; Zisserman, A. Multiple View Geometry in Computer Vision; Cambridge University Press: Cambridge, UK, 2003. [Google Scholar]
- Pless, R. Using many cameras as one. In Proceedings of the 2003 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Madison, WI, USA, 16–22 June 2003.
- Grossberg, M.D.; Nayar, S.K. A general imaging model and a method for finding its parameters. In Proceedings of the Eighth IEEE International Conference on Computer Vision (ICCV 2001), Vancouver, BC, Canada, 7–14 July 2001.
- Stewénius, H.; Oskarsson, M.; Åström, K.; Nistér, D. Solutions to Minimal Generalized Relative Pose Problems. Available online: http://www.vis.uky.edu/~stewe/publications/stewenius_05_omnivis_sm26gen.pdf (accessed on 22 March 2017).
- Li, H.; Hartley, R.; Kim, J. A linear approach to motion estimation using generalized camera models. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2008), Anchorage, AK, USA, 23–28 June 2008; pp. 1–8.
- Lepetit, V.; Moreno-Noguer, F.; Fua, P. EPnP: An accurate O(n) solution to the PnP problem. Int. J. Comput. Vis. 2009, 81, 155. [Google Scholar] [CrossRef] [Green Version]
- Gao, X.S.; Hou, X.R.; Tang, J.; Cheng, H.F. Complete solution classification for the perspective-three-point problem. IEEE Trans. Pattern Anal. Mach. Intell. 2003, 25, 930–943. [Google Scholar]
- Groves, P.D. Principles of GNSS, Inertial, and Multisensor Integrated Navigation Systems; Artech House: Norwood, MA, USA, 2013. [Google Scholar]
Camera Parameters

αx (pixel) | αy (pixel) | u0 (pixel) | v0 (pixel)
---|---|---|---
1600 | 1600 | 640 | 512

Extrinsic Parameters of MFNS

φ (°) | θ (°) | ψ (°) | Tx (mm) | Ty (mm) | Tz (mm)
---|---|---|---|---|---
0 | 0 | 0 | 0 | 0 | 0

Beacon Position

Xw (mm) | Yw (mm) | Zw (mm)
---|---|---
0–1000 | - | 0–1000

Number of MC Simulation Trials for Each Group of Parameters: 10,000
Parameter | Camera X Result | Camera X Error | Camera Y Result | Camera Y Error | Camera Z Result | Camera Z Error
---|---|---|---|---|---|---
αx | 1599.26136 | 1.78966 | 1605.35286 | 1.66249 | 1611.21596 | 2.14183 |
αy | 1599.93302 | 1.64101 | 1603.54359 | 1.73608 | 1610.79607 | 2.14219 |
u0 | 632.61591 | 1.62023 | 619.71227 | 1.80177 | 649.51210 | 1.39920 |
v0 | 522.17870 | 1.61823 | 505.99361 | 1.93821 | 531.29472 | 1.34749 |
α | 0.00000 | 0.00000 | 0.00000 | 0.00000 | 0.00000 | 0.00000 |
kc1 | −0.13013 | 0.00336 | −0.11276 | 0.00301 | −0.10481 | 0.00338 |
kc2 | 0.28701 | 0.02243 | 0.01776 | 0.01776 | 0.15881 | 0.02341 |
kc3 | −0.00040 | 0.00029 | −0.00031 | 0.00034 | −0.00137 | 0.00021 |
kc4 | −0.00004 | 0.00031 | 0.00057 | 0.00030 | 0.00184 | 0.00022 |
kc5 | 0.00000 | 0.00000 | 0.00000 | 0.00000 | 0.00000 | 0.00000 |
Pixel error (x, y) | (0.11711, 0.10363) | | (0.11597, 0.12652) | | (0.11593, 0.11232) |
Euler Angle | Calibration Result (°) | Error (°) | Position Vector Calibration Result (mm) | Error (mm)
---|---|---|---|---
α | −0.0110 | 0.0485 | 33.8407 | 0.2932
β | 0.1263 | 0.0657 | −128.9401 | 0.2932
γ | 0.0861 | 0.0443 | 576.9100 (measured) | 0.2456
φx | −0.5788 | 0.0413 | 34.0949 | 0.1496
θx | −90.3523 | 0.0990 | −128.9281 | 0.1496
ψx | 0.0395 | 0.0153 | −684.7134 | 0.1496
φy | 90.1636 | 0.0301 | 76.1319 | 0.2529
θy | −0.5501 | 0.0920 | −36.8373 | 0.4498
ψy | 0.1925 | 0.0157 | 78.8949 | 0.4959
φz | −0.1076 | 0.0485 | −32.5004 | 0.5358
θz | −0.7211 | 0.0657 | 75.7149 | 0.0825
ψz | 1.1690 | 0.0443 | 76.9967 | 0.1963
 | | | 34.1604 | 0.3097
 | | | 30.8251 | 0.3097
 | | | 119.7096 | 0.2932
Beacon Number | x (mm) | y (mm) | z (mm)
---|---|---|---
0 | 487.4734 | −720.0265 | 240.8359 |
1 | −51.2618 | −737.6155 | 164.7393 |
2 | −366.1525 | −711.8317 | 131.2681 |
3 | −458.8302 | −215.2964 | −69.2627 |
4 | −454.6380 | 410.3959 | −60.6549 |
5 | −117.4392 | 296.0537 | 83.0754 |
6 | 182.4478 | 279.1964 | 143.2510 |
7 | 481.8346 | 252.9609 | 217.5775 |
8 | 647.3834 | −127.6167 | 178.7407 |
9 | −122.5798 | −44.9064 | 562.5394 |
Estimated position accuracy: 0.3963 mm
Parameter | Mean Error | Standard Deviation |
---|---|---|
Roll (φ/°) | −0.0030 | 0.1541 |
Pitch (θ/°) | −0.0192 | 0.1497 |
Yaw (ψ/°) | 0.0066 | 0.1729 |
x position (mm) | −0.0553 | 1.6043 |
y position (mm) | 0.0275 | 1.6108 |
z position (mm) | −0.0669 | 1.8292 |
© 2017 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).