


Efficient camera self-calibration method for remote sensing photogrammetry

JIN LI1,* AND ZILONG LIU2
1Electrical Engineering Division, Department of Engineering, University of Cambridge, Cambridge, UK
2Optic Division, National Institute of Metrology, Beijing 100029, China
*[email protected]

Abstract: Internal parameter calibration of remote sensing cameras (RSCs) is a necessary step in remote sensing photogrammetry. On-orbit camera calibration widely adopts external ground control points (GCPs) to measure the internal parameters. However, accessible GCPs are difficult to obtain when a camera operates on a satellite platform. In this paper, we propose an efficient camera self-calibration method using a micro-transceiver in conjunction with deep learning. A supervised learning set is produced by the micro-transceiver, in which multiple two-dimensional diffraction grids are generated and transformed into multiple auto-collimating sub-beams equivalent to target points at infinity, which serve as the training examples. A deep learning network is used to invert the learnable internal parameters. The micro-transceiver can be easily integrated into the internal structure of RSCs, allowing them to be calibrated independently of external ground control targets. © 2018 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

OCIS codes: (040.1490) Cameras; (280.4788) Optical sensing and sensors; (150.1488) Calibration.

References and links
1. K. Torlegård, “Sensors for photogrammetric mapping: Review and prospects,” P&RS 47(4), 241–262 (1992).
2. K. Novak, “Application of digital cameras and GPS for aerial photogrammetric mapping,” Int. Arch. Photogramm. Remote Sens. 29, 5 (1993).
3. Z. Lin, “UAV for mapping-low altitude photogrammetric survey,” in International Archives of Photogrammetry and Remote Sensing (2008), pp. 1183–1186.
4. F. Agüera-Vega, F. Carvajal-Ramírez, and P. Martínez-Carricondo, “Assessment of photogrammetric mapping accuracy based on variation ground control points number using unmanned aerial vehicle,” Measurement 98, 221–227 (2017).

5. C. V. Tao, Y. Hu, and W. Jiang, “Photogrammetric exploitation of IKONOS imagery for mapping applications,” Int. J. Remote Sens. 25(14), 2833–2853 (2004).

6. J. Li, F. Xing, and Z. You, “Space high-accuracy intelligence payload system with integrated attitude and position determination,” Instrument 2, 3–16 (2015).

7. P. Grattoni, G. Pettiti, F. Pollastri, A. Cumani, and A. Guiducci, “Geometric camera calibration: a comparison of methods,” in Proceedings of the International Conference on Advanced Robotics: Robots in Unstructured Environments (ICAR ’91) (IEEE, 1991), pp. 1775–1779.

8. J. Heikkila and S. Olli, “A four-step camera calibration procedure with implicit image correction,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 1997), pp.1106–1112.

9. C. Ricolfe-Viala and A. J. Sanchez-Salmeron, “Camera calibration under optimal conditions,” Opt. Express 19(11), 10769–10775 (2011).

10. B. Ergun, T. Kavzoglu, I. Colkesen, and C. Sahin, “Data filtering with support vector machines in geometric camera calibration,” Opt. Express 18(3), 1927–1936 (2010).

11. Y. L. Xiao, X. Su, and W. Chen, “Flexible geometrical calibration for fringe-reflection 3D measurement,” Opt. Lett. 37(4), 620–622 (2012).

12. S. Thibault, A. Arfaoui, and P. Desaulniers, “Cross-diffractive optical elements for wide angle geometric camera calibration,” Opt. Lett. 36(24), 4770–4772 (2011).

13. J. Li and S. F. Tian, “An efficient method for measuring the internal parameters of optical cameras based on optical fibres,” Sci. Rep. 7(1), 12479 (2017).

14. M. Sirianni, M. J. Jee, N. Benítez, J. P. Blakeslee, A. R. Martel, G. Meurer, M. Clampin, G. De Marchi, H. C. Ford, R. Gilliland, G. F. Hartig, G. D. Illingworth, J. Mack, and W. J. McCann, “The photometric performance and calibration of the Hubble Space Telescope Advanced Camera for Surveys,” PASP 117(836), 1049–1112 (2005).

15. M. Wang, B. Yang, F. Hu, and X. Zang, “On-orbit geometric calibration model and its applications for high-resolution optical satellite imagery,” Remote Sens. 6(5), 4391–4408 (2014).


#327547 https://doi.org/10.1364/OE.26.014213 Journal © 2018 Received 3 Apr 2018; revised 12 May 2018; accepted 17 May 2018; published 21 May 2018


16. J. Li, F. Liu, S. Liu, and Z. Wang, “Optical Remote Sensor Calibration Using Micromachined Multiplexing Optical Focal Planes,” IEEE Sens. J. 17(6), 1663–1672 (2017).

17. J. Heikkila, “Geometric camera calibration using circular control points,” IEEE Trans. Pattern Anal. Mach. Intell. 22(10), 1066–1077 (2000).

18. V. M. Jovanovic, M. A. Bull, M. M. Smyth, and J. Zong, “MISR in-flight camera geometric model calibration and georectification performance,” IEEE Trans. Geosci. Remote Sens. 40(7), 1512–1519 (2002).

19. Z. Zhang, “A flexible new technique for camera calibration,” IEEE Trans. Pattern Anal. Mach. Intell. 22(11), 1330–1334 (2000).

20. F. Pedersini, A. Sarti, and S. Tubaro, “Accurate and simple geometric calibration of multi-camera systems,” Signal Process. 77(3), 309–334 (1999).

21. M. Kröpfl, E. Kruck, and M. Gruber, “Geometric calibration of the digital large format aerial camera UltraCamD,” Int. Arch. Photogramm. Remote Sens. 35(1), 42–44 (2004).

22. W. Zeitler, C. Doerstel, and K. Jacobsen, “Geometric calibration of the DMC: Method and Results,” Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci 34(1), 324–332 (2002).

23. J. Li and Y. Zhang, “Micro Coded-Aperture Lead-In of Light for Calibrating Remote Sensing Cameras,” IEEE Photonics Technol. Lett. 29(22), 1939–1942 (2017).

24. B. Liu, J. Q. Jia, and Y. L. Ding, “Geometric calibration with angle measure for CCD aerial photogrammetric camera in laboratory,” Laser & Infrared 3, 019 (2010).

25. Q. T. Luong and O. D. Faugeras, “Self-calibration of a moving camera from point correspondences and fundamental matrices,” Int. J. Comput. Vis. 22(3), 261–289 (1997).

26. Y. Furukawa and J. Ponce, “Accurate camera calibration from multi-view stereo and bundle adjustment,” Int. J. Comput. Vis. 84(3), 257–268 (2009).

27. E. Honkavaara, E. Ahokas, J. Hyyppä, J. Jaakkola, H. Kaartinen, R. Kuittinen, L. Markelin, and K. Nurminen, “Geometric test field calibration of digital photogrammetric sensors,” P&RS 60(6), 387–399 (2006).

28. J. Li and Z. Liu, “Optical focal plane based on MEMS light lead-in for geometric camera calibration,” Microsystems Nanoengineering 3, 201758 (2017).

29. D. D. Lichti, C. Kim, and S. Jamtsho, “An integrated bundle adjustment approach to range camera geometric self-calibration,” P&RS 65(4), 360–368 (2010).

30. G. D. Wu, B. Han, and X. He, “Calibration of geometric parameters of line-array CCD camera based on exact measuring angle in lab,” Opt. Precision Eng. 10, 029 (2007).

31. Y. Zhang, M. Zheng, J. Xiong, Y. Lu, and X. Xiong, “On-orbit geometric calibration of ZY-3 three-line array imagery with multistrip data sets,” IEEE Trans. Geosci. Remote Sens. 52(1), 224–234 (2014).

32. D. Mulawa, “On-orbit geometric calibration of the OrbView-3 high resolution imaging satellite,” Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 35, 1–6 (2004).

33. L. Lucchese, “Geometric calibration of digital cameras through multi-view rectification,” Image Vis. Comput. 23(5), 517–539 (2005).

34. M. Bauer, D. Griessbach, A. Hermerschmidt, S. Krüger, M. Scheele, and A. Schischmanow, “Geometrical camera calibration with diffractive optical elements,” Opt. Express 16(25), 20241–20248 (2008).

35. F. Yuan, W. J. Qi, and A. P. Fang, “Laboratory geometric calibration of areal digital aerial camera,” in IOP Conference Series: Earth and Environmental Science (2014), p. 012196.

36. T. Chen, R. Shibasaki, and Z. Lin, “A rigorous laboratory calibration method for interior orientation of an airborne linear push-broom camera,” Photogramm. Eng. Remote Sensing 73(4), 369–374 (2007).

37. F. Yuan, W. Qi, A. Fang, P. Ding, and Y. U. Xiujuan, “Laboratory geometric calibration of non-metric digital camera,” in MIPPR 2013: Remote Sensing Image Processing, Geographic Information Systems, and Other Applications (2013), Vol. 8921, p. 89210A.

38. Z. Zhang, “Camera calibration with one-dimensional objects,” IEEE Trans. Pattern Anal. Mach. Intell. 26(7), 892–899 (2004).

39. D. S. Lee, J. C. Storey, M. J. Choate, and R. W. Hayes, “Four years of Landsat-7 on-orbit geometric calibration and performance,” IEEE Trans. Geosci. Remote Sens. 42(12), 2786–2795 (2004).

40. J. Takaku and T. Tadono, “PRISM on-orbit geometric calibration and DSM performance,” IEEE Trans. Geosci. Remote Sens. 47(12), 4060–4073 (2009).

41. A. F. Habib, M. Morgan, and Y. R. Lee, “Bundle adjustment with self–calibration using straight lines,” Photogramm. Rec. 17(100), 100635 (2002).

42. D. D. Lichti and C. Kim, “A comparison of three geometric self-calibration methods for range cameras,” Remote Sens. 3(5), 1014–1028 (2011).

43. F. De Lussy, D. Greslou, C. Dechoz, V. Amberg, J. M. Delvit, L. Lebegue, and S. Fourest, “Pleiades HR in flight geometrical calibration: Location and mapping of the focal plane,” Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci 39(B1), 519–523 (2012).

44. S. Fourest, P. Kubik, L. Lebègue, C. Déchoz, S. Lacherade, and G. Blanchet, “Star-based methods for Pleiades HR commissioning,” Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci 39, 513–518 (2012).

45. V. Martin, G. Blanchet, P. Kubik, S. Lacherade, C. Latry, L. Lebegue, and F. Porez-Nadal, “PLEIADES-HR 1A&1B image quality commissioning: innovative radiometric calibration methods and results,” Proc. SPIE 8866, 886611 (2013).


46. J. M. Delvit, D. Greslou, V. Amberg, C. Dechoz, F. de Lussy, L. Lebegue, and L. Bernard, “Attitude assessment using Pleiades-HR capabilities,” in Proceedings of the International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences (2012).

47. J. Li, F. Xing, D. Chu, and Z. Liu, “High-Accuracy Self-Calibration for Smart, Optical Orbiting Payloads Integrated with Attitude and Position Determination,” Sensors (Basel) 16(8), 1176 (2016).

48. J. Li, Y. Zhang, S. Liu, and Z. Wang, “Self-calibration method based on surface micromaching of light transceiver focal plane for optical camera,” Remote Sens. 8(11), 893 (2016).

49. J. Li, Z. Liu, and F. Liu, “Compressive sampling based on frequency saliency for remote sensing imaging,” Sci. Rep. 7(1), 6539 (2017).

50. J. Li, Z. Liu, and F. Liu, “Using sub-resolution features for self-compensation of the modulation transfer function in remote sensing,” Opt. Express 25(4), 4018–4037 (2017).

51. J. Li and Z. Liu, “Image quality enhancement method for on-orbit remote sensing cameras using invariable modulation transfer function,” Opt. Express 25(15), 17134–17149 (2017).

52. S. Li, Y. T. Chun, S. Zhao, H. Ahn, D. Ahn, J. I. Sohn, Y. Xu, P. Shrestha, M. Pivnenko, and D. Chu, “High-resolution patterning of solution-processable materials via externally engineered pinning of capillary bridges,” Nat. Commun. 9(1), 393 (2018).

53. P. Shrestha, Y. Chun, and D. Chu, “A high-resolution optically addressed spatial light modulator based on ZnO nanoparticles,” Light Sci. Appl. 4(3), e259 (2015).

54. M. Wang, Y. Cheng, B. Yang, S. Jin, and H. Su, “On-orbit calibration approach for optical navigation camera in deep space exploration,” Opt. Express 24(5), 5536–5554 (2016).

55. J. Skaloud, M. Cramer, and K. P. Schwarz, “Exterior orientation by direct measurement of camera position and attitude,” Int. Arch. Photogramm. Remote Sens. 31(B3), 125–130 (1996).

56. F. Yılmaztürk, “Full-automatic self-calibration of color digital cameras using color targets,” Opt. Express 19(19), 18164–18174 (2011).

57. P. D. Lin and C. K. Sung, “Comparing two new camera calibration methods with traditional pinhole calibrations,” Opt. Express 15(6), 3012–3022 (2007).

58. K. S. Choi, E. Y. Lam, and K. K. Wong, “Automatic source camera identification using the intrinsic lens radial distortion,” Opt. Express 14(24), 11551–11565 (2006).

59. Z. Wang, “Removal of noise and radial lens distortion during calibration of computer vision systems,” Opt. Express 23(9), 11341–11356 (2015).

60. S. S. Haykin, Neural Networks and Learning Machines, 3rd ed. (Pearson, 2009).
61. T. Chen, R. Shibasaki, and Z. Lin, “A rigorous laboratory calibration method for interior orientation of an airborne linear push-broom camera,” Photogramm. Eng. Remote Sensing 73(4), 369–374 (2007).

1. Introduction

Remote sensing photogrammetry takes measurements from images captured by remote sensing cameras (RSCs) to determine the exact positions and shapes of ground objects [1–3]. It is widely used in image position determination, topographic mapping, and geology. In the photogrammetric process, the ground coordinates of a specific object point can be recovered from its image coordinates in a set of overlapping imagery when the internal and external parameters of the RSCs are known [4–6]. The internal parameters of RSCs are determined through a calibration process; thus, RSC calibration is a necessary step in remote sensing photogrammetric applications.

Camera geometric calibration methods have been studied for different applications under different conditions [7–33]. Existing RSC calibration techniques can be classified into two categories. The first involves angle measurement in the laboratory [34–36], where single-pixel illumination by collimated light combined with a turntable is used as the calibration target. Recently, multiple-pixel illumination was used in the laboratory to replace the single-pixel version, avoiding the influence of turntable vibration on RSC calibration [37]. In addition, other well-established camera calibration techniques, such as the one-dimensional object method [38], can also be used to calibrate RSCs. The common feature of these methods is that the calibration of RSCs is performed in a laboratory. However, laboratory calibration alone is not sufficient for RSCs because the internal parameters calibrated in the laboratory may drift to a significant degree owing to vibrations during the complicated launch and on-orbit operation. The second category, called on-orbit calibration [39,40], is performed in orbit after the launch of an RSC and can compensate for the shortcomings of the ground methods. Therefore, on-orbit calibration methods are crucial for on-orbit operation.


For on-orbit RSC calibration, existing methods can be classified into four categories: self-calibration bundle adjustment (SCBA) [41,42], ground control points (GCPs) [43,44], 180-degree satellite maneuvers (180° SM) [45,46], and point-source focal plane (PSFP) self-calibration [47,48]. SCBA suffers from long computation times because complex constraint equations are used to invert the internal parameters of RSCs. The GCP and 180° SM methods are limited in space and time because they require external reference targets. Recently, the PSFP self-calibration method has eliminated the need for external reference targets by modifying the focal plane of an RSC. The modified focal plane can produce multiple diffraction point-sources (DPSs) as calibration targets. These DPSs pass through the optical system of the RSC and become collimated light. A dichroic filter reflects the collimated light, which then returns through the optical system and converges onto the modified focal plane. Based on the relationship between the DPS positions and the image points, the internal parameters can be calculated. This method is well suited to on-orbit camera calibration without the aid of external reference targets and also enables real-time monitoring of variations in the internal parameters. However, several factors limit the accuracy and functionality of this technique, one of the most important being the PSFP fabrication. Modifying the focal plane of an RSC to establish a PSFP is very difficult, which makes the development of the RSC costly. Consequently, this calibration method is limited to specifically customised cameras.

In this paper, we propose an efficient camera self-calibration method based on deep learning. A micro-transceiver produces multiple two-dimensional diffraction grids as the learning input set, which is then transformed into multiple auto-collimating sub-beams (equivalent to target points at infinity). The auto-collimating sub-beams pass through the camera to form image grids that serve as the supervised training set. Finally, a learning network is used to obtain the optimised internal parameters of the camera. In the proposed method, the micro-transceiver can be easily integrated into the internal structure of RSCs, allowing calibration to be performed independently of external ground control targets. The proposed method has potential use in on-orbit high-resolution cameras [49–51]. In the future, advanced fabrication technologies, such as high-resolution patterning [52], and optical devices, such as optically addressed spatial light modulators [53], will be used to obtain new micro-transceivers for RSCs.

2. Positioning model of photogrammetry

Image position determination (positioning) is the reconstruction of the three-dimensional coordinates of object points from corresponding image coordinates extracted from an overlapping remote sensing image set. Based on the optical geometric imaging relationship, positioning uses the collinearity of the object point, the camera projection centre, and the image point. The positioning process employs three coordinate systems: the ground coordinate system, the image coordinate system, and the camera coordinate system. Let the coordinates of an object point be $(X_I, Y_I, Z_I)$ in the ground coordinate system. In the image coordinate system, the coordinates of the corresponding image point are $(x_i, y_i)$. In the ground coordinate system, the coordinates of the camera projection centre are $(X_O, Y_O, Z_O)$. $(x_p, y_p, f)$ are the internal parameters of the camera, $R$ is the coordinate transform matrix between the image coordinate system and the ground coordinate system, and $(dist_x, dist_y)$ represents the deviations from the collinearity condition of the positioning model. The positioning mathematical model can be expressed by the above parameters based on the collinearity property. Here, a vector form is used to express the projection model, and Fig. 1 shows the imaging vector relationship between an image point and an object point.


First, the following terminology is used in Fig. 1: $R_c^G(\omega, \varphi, \kappa)$ denotes the coordinate rotation transform matrix, which transforms a vector in the camera coordinate system (C) into a vector in the ground coordinate system (G); $(\omega, \varphi, \kappa)$ are the attitude parameters of the camera, which are determined by attitude measurement devices such as star trackers and gyroscopes; $r_I^G$ is the vector that expresses the coordinates of an object point (I) in the ground coordinate system (G); $r_c^G$ is the vector that expresses the coordinates of the camera's projection centre (C) in the ground coordinate system; $r_i^c$ is the vector that represents the coordinates of the image point (i) in the camera coordinate system, i.e., the camera-scaled vector connecting the image point (i) and the camera's projection centre; $S_i$ is the scale factor, i.e., the ratio between the magnitude of the vector connecting the camera's projection centre and the object point and the magnitude of the vector connecting the camera's projection centre and the image point. The position of the object point (I) is the sum of the camera projection centre coordinates (in the ground coordinate system) and the camera-scaled vector $r_i^c$ rotated by the coordinate transform matrix $R_c^G$ and scaled by the factor $S_i$. The vector form of the positioning model [54,55] is expressed by

$$r_I^G = r_c^G + S_i\, R_c^G(\omega, \varphi, \kappa)\, r_i^c \qquad (1)$$

$$\begin{bmatrix} X_I \\ Y_I \\ Z_I \end{bmatrix} = \begin{bmatrix} X_O \\ Y_O \\ Z_O \end{bmatrix} + S_i\, R_c^G(\omega, \varphi, \kappa) \begin{bmatrix} x_i - x_p - dist_x \\ y_i - y_p - dist_y \\ -f \end{bmatrix} \qquad (2)$$

Equation (2) can also be expressed by the following equation:

$$\begin{bmatrix} x_i - x_p - dist_x \\ y_i - y_p - dist_y \\ -f \end{bmatrix} = \frac{1}{S_i}\left[R_c^G(\omega, \varphi, \kappa)\right]^{-1} \begin{bmatrix} X_I - X_O \\ Y_I - Y_O \\ Z_I - Z_O \end{bmatrix} \qquad (3)$$

Based on Eq. (3), the ground coordinates of an object point can be determined using the image coordinates, internal parameters, and external parameters of the RSC at the exposure time. The external parameters are usually determined using a georeferencing method, while the internal parameters are determined using a calibration procedure. Therefore, internal parameter calibration is a prerequisite for the positioning of RSCs. The laboratory calibration procedure has low efficiency and is easily influenced by the vibration of the angle-rotation equipment because it uses single-pixel illumination in combination with angle-rotation equipment (such as a turntable). Traditional on-orbit calibration uses a prearranged calibration test field with bundle-adjustment self-calibration, where the internal parameters, external parameters, and ground coordinates are determined using GCPs in the test field. Unfortunately, accessible GCPs are not easily obtained when cameras operate on a satellite platform. To overcome these drawbacks, we report in this paper an efficient camera self-calibration method using a micro-transceiver, which can be easily integrated into the internal structure of RSCs and produces equivalent GCPs, enabling convenient calibration of RSCs (on the ground or in orbit) without the limitation of external ground control targets.
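To make the use of Eq. (1) concrete, a minimal numerical sketch follows. It is an illustration only: the rotation-matrix convention, the helper function name, and all numeric values are our assumptions, not parameters taken from the paper.

```python
import numpy as np

def rotation_matrix(omega, phi, kappa):
    # Camera-to-ground rotation R_c^G(omega, phi, kappa), built here as Rz @ Ry @ Rx
    # (one common convention; the paper does not fix a specific one).
    Rx = np.array([[1, 0, 0],
                   [0, np.cos(omega), -np.sin(omega)],
                   [0, np.sin(omega),  np.cos(omega)]])
    Ry = np.array([[ np.cos(phi), 0, np.sin(phi)],
                   [0, 1, 0],
                   [-np.sin(phi), 0, np.cos(phi)]])
    Rz = np.array([[np.cos(kappa), -np.sin(kappa), 0],
                   [np.sin(kappa),  np.cos(kappa), 0],
                   [0, 0, 1]])
    return Rz @ Ry @ Rx

# Illustrative inputs (all values hypothetical)
f = 2.0                                   # principal distance [m]
x_p, y_p = 5e-4, 5e-4                     # principal point [m]
dist_x, dist_y = 1e-6, -2e-6              # collinearity deviations [m]
x_i, y_i = 1.23e-2, -4.5e-3               # measured image coordinates [m]
r_c_G = np.array([4.5e5, 1.2e5, 7.0e5])   # projection centre in the ground frame [m]
S_i = 3.5e5                               # scale factor
R = rotation_matrix(1e-3, -2e-3, 0.5)     # attitude angles (omega, phi, kappa) [rad]

# Eq. (1): r_I^G = r_c^G + S_i * R_c^G * r_i^c, with r_i^c built from the image coordinates
r_i_c = np.array([x_i - x_p - dist_x, y_i - y_p - dist_y, -f])
r_I_G = r_c_G + S_i * (R @ r_i_c)
print("object point in ground frame [m]:", r_I_G)
```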


Fig. 1. Geometric collinearity relationships of remote sensing imaging

3. Proposed self-calibration methods

In this paper, we use optimised encoded apertures, a beam splitter, a CMOS sensor, and a point source integrated into a micro-transceiver of light (MTOL), which can generate a 2D virtual calibration grid and receive multiple sub-beams carrying optical distortion and internal-parameter variation information. Figure 2 sketches the structure of the MTOL. The MTOL is composed of a single light-emitting diode (LED), a collimating-expanding beam lens (CEBL), an encoded aperture mask (EAM), a beam splitter (BS), a CMOS sensor, and electronics. The LED can be considered a point light source (PLS). The mask generates multiple PLSs. The BS produces the equivalent effect of a common plane for the incoming and outgoing grids. The CMOS sensor receives the beams that have traversed the optical system. The image points, in combination with the encoded aperture position information, are used to calculate the internal parameters. The MTOL is installed on the focal plane (FP) of a remote sensing camera and is then referred to as the FPMTOL.


Fig. 2. Sketch of the structure of the light micro-transceiver (a). It is composed of an LED, two lenses, an encoded aperture mask, a CMOS sensor, and a beam splitter. The available integration positions in the camera for the light micro-transceiver (b), where the light micro-transceiver can be installed on the truss of the secondary mirror or in the staggered area on the focal plane of the camera. The focal plane assembly of the camera (c), where the CCDs are arranged in a staggered manner.

Fig. 3. Calibration model using a light micro-transceiver (a) and the imaging position of a virtual point (b).

Figure 3 shows the calibration model using a light micro-transceiver. In the camera coordinate system, the origin is the camera projection centre, and the Z axis is the principal axis. The image plane is located on the focal plane; thus, the principal distance is the focal length, which is expressed by f. The intersection point between the principal axis and the image plane is the origin of the image coordinate system. A light micro-transceiver is placed in front of the camera. In the light micro-transceiver, the array apertures are located on its focal plane; thus, the outgoing light is collimated and then enters the camera. The collimated beams from the target points of the light micro-transceiver are equivalent to light from points at infinity. In this case, the virtual points are placed in front of the camera (see Fig. 3(a)). Let a virtual point be P with coordinates (X, Y, Z) in the camera coordinate system. The coordinates of the corresponding image point (Pc) are (u, v). The collinearity equation of the geometric imaging relationship can be expressed as:


$$\frac{f}{Z} = \frac{u}{X} = \frac{v}{Y} \qquad (4)$$

Based on Eq. (4), the imaging model [56,57] can be expressed as:

$$Z \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} f & 0 & 0 \\ 0 & f & 0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} X \\ Y \\ Z \end{bmatrix} \qquad (5)$$

In an actual system, the origin of the image coordinate system is not located on the intersection point between the principal axis and the image plane. Let the coordinate of the origin (the principal point) of the image coordinate system be (u0, v0). Equation (5) can be rewritten as:

$$Z \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} f & 0 & u_0 \\ 0 & f & v_0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} X \\ Y \\ Z \end{bmatrix} \qquad (6)$$

In Eq. (6), $(X/Z, Y/Z)$ is the incoming light direction of the virtual point, which is determined by the position of the corresponding encoded aperture of the light micro-transceiver. Let the position of the corresponding encoded aperture and the focal length of the optical system of the light micro-transceiver be $(x_{LMT}, y_{LMT})$ and $f_{LMT}$, respectively; thus, $\alpha = X/Z = x_{LMT}/f_{LMT}$ and $\beta = Y/Z = y_{LMT}/f_{LMT}$. Here, the matrix $K = [f\ 0\ u_0;\ 0\ f\ v_0;\ 0\ 0\ 1]$ is the transfer matrix of the camera. In actual cameras, the optical system has distortions. The real coordinates of the image point are expressed as:

$$\begin{bmatrix} \hat{u} \\ \hat{v} \end{bmatrix} = \begin{bmatrix} u + \Delta u \\ v + \Delta v \end{bmatrix}, \qquad \begin{bmatrix} \Delta u \\ \Delta v \end{bmatrix} = \Delta(\alpha, \beta), \qquad \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = K \begin{bmatrix} \alpha \\ \beta \\ 1 \end{bmatrix} \qquad (7)$$

The most common distortion model of the camera is the radial distortion [58,59]. Such camera distortion can be expressed as:

$$\Delta(\alpha, \beta) = \begin{bmatrix} \alpha & \beta \end{bmatrix}^{T} \left( k_1 r^2 + k_2 r^4 \right) \qquad (8)$$

where $r^2 = \alpha^2 + \beta^2$. Based on Eqs. (6)–(8), the imaging equation can be expressed as:

$$\begin{bmatrix} \hat{u} \\ \hat{v} \end{bmatrix} = f \left( 1 + k_1 r^2 + k_2 r^4 \right) \begin{bmatrix} \alpha \\ \beta \end{bmatrix} + \begin{bmatrix} u_0 \\ v_0 \end{bmatrix} \qquad (9)$$

In the captured image, the measured positions of all image points conform to Eq. (9). Based on distortion theory, the camera has minimal distortion at the principal point and at the principal-distance position. In other words, for a set of measured points $D = \{[u_i, v_i]\}$, $i \in [1, N]$, where $N$ is the number of objective points of the light micro-transceiver, the theoretical positions of all measured image points given by Eq. (9) with the camera internal parameters $(f, u_0, v_0)$ are closest to the measured positions. Solving for the internal parameters $(f, u_0, v_0)$ is therefore an optimization problem.

When Eq. (9) is viewed as a predictor, inverting the internal parameters $(f, u_0, v_0)$ is a deep-learning process: deep learning is an input-output mapping that finds a predictor of an output given a high-dimensional input. Therefore, we use supervised deep learning to obtain the internal parameters $(f, u_0, v_0)$.
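As an illustration of the predictor role of Eq. (9), the short sketch below evaluates the forward imaging model, mapping incoming directions (α, β) to distorted image coordinates; the function name and all parameter values are hypothetical, chosen only to show the mapping the learning network must invert.

```python
import numpy as np

def imaging_model(alpha, beta, f, u0, v0, k1, k2):
    # Eq. (9): [u_hat, v_hat] = f (1 + k1 r^2 + k2 r^4) [alpha, beta] + [u0, v0]
    r2 = alpha**2 + beta**2
    scale = f * (1.0 + k1 * r2 + k2 * r2**2)
    return scale * alpha + u0, scale * beta + v0

# Hypothetical internal parameters (in pixels)
f, u0, v0, k1, k2 = 1.0e3, 10.0, -5.0, -0.2, 0.05

# Directions alpha = x_LMT / f_LMT, beta = y_LMT / f_LMT from a 5 x 5 aperture grid
alphas, betas = np.meshgrid(np.linspace(-0.1, 0.1, 5), np.linspace(-0.1, 0.1, 5))
u_hat, v_hat = imaging_model(alphas, betas, f, u0, v0, k1, k2)
print(np.round(u_hat, 2))
```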

The input training set is a multi-view feature-learning setting, where paired observations from two views (view 1 and view 2) are accessed. The set is denoted by $P = [(\alpha_1,\beta_1),(\alpha_2,\beta_2),\ldots,(\alpha_N,\beta_N)]$, with $\alpha_n \in \mathbb{R}^{D_\alpha}$ and $\beta_n \in \mathbb{R}^{D_\beta}$ for $n = 1,\ldots,N$, where $N$ is the training-set size and $n$ is the time index expressing the instant at which the stimulus is applied to the learning network. The corresponding output training set is denoted by $Q = [(u_1,v_1),(u_2,v_2),\ldots,(u_N,v_N)]$. The input matrices from view 1 and view 2 can also be denoted by $\Psi = [\alpha_1,\ldots,\alpha_N]$ and $\Omega = [\beta_1,\ldots,\beta_N]$, respectively. The network takes as input a paired observation $(\alpha_n,\beta_n)$, from which three features are formed for each view, $(\alpha_n^{(1)},\beta_n^{(1)})$, $(\alpha_n^{(2)},\beta_n^{(2)})$, $(\alpha_n^{(3)},\beta_n^{(3)})$, where $\alpha_n^{(1)} = \alpha_n$, $\alpha_n^{(2)} = \alpha_n r_n^2$, $\alpha_n^{(3)} = \alpha_n r_n^4$, $\beta_n^{(1)} = \beta_n$, $\beta_n^{(2)} = \beta_n r_n^2$, $\beta_n^{(3)} = \beta_n r_n^4$, and $r_n^2 = \alpha_n^2 + \beta_n^2$. At stimulus time $n$, the two input vectors consisting of the three elements from view $\Psi$ and from view $\Omega$ are denoted by $x^{(1)}(n) = [\alpha_n^{(1)},\alpha_n^{(2)},\alpha_n^{(3)}]$ and $x^{(2)}(n) = [\beta_n^{(1)},\beta_n^{(2)},\beta_n^{(3)}]$, and the corresponding output vectors from $Q$ are $d^{(1)}(n) = u_n$ and $d^{(2)}(n) = v_n$. The corresponding weights of the three features are defined as $w_1$, $w_2$, and $w_3$, where $w_1 = f$, $w_2 = f k_1$, and $w_3 = f k_2$. The learnable parameter set is the internal-parameter matrix, denoted by $w^{(1)} = (w_0^{(1)}, w_1^{(1)}, w_2^{(1)}, w_3^{(1)})$ and $w^{(2)} = (w_0^{(2)}, w_1^{(2)}, w_2^{(2)}, w_3^{(2)})$, where $w_0^{(1)} = u_0$, $w_0^{(2)} = v_0$, $w_1^{(1)} = w_1^{(2)} = w_1$, $w_2^{(1)} = w_2^{(2)} = w_2$, and $w_3^{(1)} = w_3^{(2)} = w_3$. Deep learning finds a predictor of the output $(d^{(1)}(n), d^{(2)}(n))$ given the multi-dimensional input $(x^{(1)}(n), x^{(2)}(n))$. The mapping implemented by the learning network is denoted by $v$ and is expressed as:

$$v^{(1)}(n) = w_0^{(1)} + w_1^{(1)} x_1^{(1)}(n) + w_2^{(1)} x_2^{(1)}(n) + w_3^{(1)} x_3^{(1)}(n) \qquad (10)$$

$$v^{(2)}(n) = w_0^{(2)} + w_1^{(2)} x_1^{(2)}(n) + w_2^{(2)} x_2^{(2)}(n) + w_3^{(2)} x_3^{(2)}(n) \qquad (11)$$
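To make the feature construction and the linear mapping of Eqs. (10)–(11) concrete, a small sketch follows; the helper names and weight values are placeholders of our own, not quantities reported in the paper.

```python
import numpy as np

def features(alpha, beta):
    # Three features per view: (a, a r^2, a r^4) and (b, b r^2, b r^4), with r^2 = a^2 + b^2
    r2 = alpha**2 + beta**2
    x1 = np.array([alpha, alpha * r2, alpha * r2**2])   # view 1 (Psi)
    x2 = np.array([beta,  beta * r2,  beta * r2**2])    # view 2 (Omega)
    return x1, x2

def predict(x1, x2, w1, w2):
    # Eqs. (10)-(11): v = w0 + w1*x1 + w2*x2 + w3*x3 for each view
    return w1[0] + np.dot(w1[1:], x1), w2[0] + np.dot(w2[1:], x2)

# Placeholder weights: w0 = principal point, w1 = f, w2 = f*k1, w3 = f*k2
w1 = np.array([10.0, 1.0e3, -200.0, 50.0])
w2 = np.array([-5.0, 1.0e3, -200.0, 50.0])
x1, x2 = features(0.05, -0.03)
print(predict(x1, x2, w1, w2))
```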

We adopt supervised on-line learning with multilayer perceptrons trained by the back-propagation algorithm, which is shown in Fig. 4. We design a multilayer perceptron with an input layer of source nodes, multiple hidden layers, and an output layer of neurons, as shown in Fig. 5(a). $\mathcal{I} = \{(x^{(1)}(n), x^{(2)}(n)), (d^{(1)}(n), d^{(2)}(n))\}_{n=1}^{N}$ denotes the training sample set used to train the network in a supervised manner. The synaptic weights of the multilayer perceptron are adjusted on an example-by-example basis. The cost function to be minimized is the total instantaneous error energy. Consider an epoch of $N$ training examples arranged in the order $\{(x^{(1)}(1), x^{(2)}(1)), (d^{(1)}(1), d^{(2)}(1))\}, \ldots, \{(x^{(1)}(N), x^{(2)}(N)), (d^{(1)}(N), d^{(2)}(N))\}$. The first example pair $\{(x^{(1)}(1), x^{(2)}(1)), (d^{(1)}(1), d^{(2)}(1))\}$ in the epoch is presented to the network, and the weight adjustments are performed using the method of gradient descent. Then, the second example $\{(x^{(1)}(2), x^{(2)}(2)), (d^{(1)}(2), d^{(2)}(2))\}$ is presented to the network, which leads to further adjustments of the weights. This procedure continues until the last example is accounted for. Let $(y_j^{(1)}(n), y_j^{(2)}(n))$ denote the function signals produced at the output of neuron $j$ in the output layer by the stimulus $(x^{(1)}(n), x^{(2)}(n))$ applied to the input layer. In the output layer, the total error signal of the whole network is defined by


$$e^{(1)}(n) = \sum_{j \in C} \left( d^{(1)}(n) - y_j^{(1)}(n) \right) \qquad (12)$$

$$e^{(2)}(n) = \sum_{j \in C} \left( d^{(2)}(n) - y_j^{(2)}(n) \right) \qquad (13)$$

The instantaneous error energy of the whole network is

$$E(n) = \frac{1}{2}\left(e^{(1)}(n)\right)^2 + \frac{1}{2}\left(e^{(2)}(n)\right)^2 \qquad (14)$$

Fig. 4. Flow chart of the supervised deep learning with back-propagation. After a training example is presented to the deep neural network, the forward and backward computations are iterated. In the forward pass, the synaptic weights remain unaltered throughout the network, and the function signals of the network are computed on a neuron-by-neuron basis. The backward pass starts at the output layer by passing the error signals leftward through the network, layer by layer, recursively computing the local gradient for each neuron and propagating the corresponding changes to all synaptic weights in the network.


where the set C includes all the neurons in the output layer. Figure 5(b) shows a neuron $j$ being fed by a set of function signals produced by a layer of neurons to its left. The induced local field $(\nu_j^{(1)}(n), \nu_j^{(2)}(n))$ produced at the input of the activation function associated with neuron $j$ is

$$\left( \nu_j^{(1)}, \nu_j^{(2)} \right) = \left( \sum_{i=0}^{m} w_{ji}^{(1)} y_i^{(1)},\ \sum_{i=0}^{m} w_{ji}^{(2)} y_i^{(2)} \right) \qquad (15)$$

where $m$ is the total number of input signals applied to neuron $j$. The synaptic weight $(w_{j0}^{(1)}(n), w_{j0}^{(2)}(n))$ is the bias applied to neuron $j$, and the corresponding input is fixed at $(y_0^{(1)}, y_0^{(2)}) = (+1, +1)$. The function signal $(y_j^{(1)}(n), y_j^{(2)}(n))$ appearing at the output of neuron $j$ at iteration $n$ is

$$\left( y_j^{(1)}, y_j^{(2)} \right) = \left( \varphi_j\!\left( \nu_j^{(1)} \right), \varphi_j\!\left( \nu_j^{(2)} \right) \right) \qquad (16)$$

In the learning network, the synaptic weights $(w_{ji}^{(1)}(n), w_{ji}^{(2)}(n))$ are updated by the corrections $(\Delta w_{ji}^{(1)}(n), \Delta w_{ji}^{(2)}(n))$, which are expressed as

$$\left( \Delta w_{ji}^{(1)}, \Delta w_{ji}^{(2)} \right) = \left( \eta\, \delta_j^{(1)}(n)\, y_i^{(1)}(n),\ \eta\, \delta_j^{(2)}(n)\, y_i^{(2)}(n) \right) \qquad (17)$$

$$\left( \delta_j^{(1)}(n), \delta_j^{(2)}(n) \right) = \left( e^{(1)}(n)\, \varphi_j'\!\left( \nu_j^{(1)}(n) \right),\ e^{(2)}(n)\, \varphi_j'\!\left( \nu_j^{(2)}(n) \right) \right) \qquad (18)$$

$$\varphi_j'\!\left( \nu_j^{(1)}(n) \right) = \frac{\partial y_j^{(1)}(n)}{\partial \nu_j^{(1)}(n)}, \qquad \varphi_j'\!\left( \nu_j^{(2)}(n) \right) = \frac{\partial y_j^{(2)}(n)}{\partial \nu_j^{(2)}(n)} \qquad (19)$$

where $\eta$ is the learning rate, $(\delta_j^{(1)}, \delta_j^{(2)})$ is called the local gradient, and $(y_i^{(1)}, y_i^{(2)})$ is the input signal of neuron $j$. Equation (18) holds when neuron $j$ is located in the output layer of the learning network. When neuron $j$ is located in a hidden layer, the desired response does not exist. To solve this problem, the back-propagation algorithm [60] determines the error signal for a hidden neuron recursively, working backwards in terms of the error signals of all the neurons to which that hidden neuron is directly connected. Therefore, when neuron $j$ is a hidden node of the network, the local gradient is the product of the derivative of the activation function $\varphi(\cdot)$ and a weighted sum, which is expressed as:

$$\delta_j^{(T)}(n) = \varphi_j'\!\left( \nu_j^{(T)}(n) \right) \sum_{k} \delta_k^{(T)}(n)\, w_{kj}^{(T)}(n) \qquad (20)$$

where $T = 1, 2$; $\delta_k^{(T)}$ is the local gradient of neuron $k$ in the next hidden or output layer adjacent to neuron $j$, and $w_{kj}^{(T)}$ are the connecting synaptic weights. The algorithm used in the deep-learning network is shown in Algorithm 1.


Algorithm 1. Calculating the internal parameters with deep learning

1: input training examples: $\{(x^{(1)}(n), x^{(2)}(n)), (d^{(1)}(n), d^{(2)}(n))\}_{n=1}^{N}$
2: calculate three features for each example: $x^{(1)}(n) = \{x_1^{(1)}(n), x_2^{(1)}(n), x_3^{(1)}(n)\}$ and $x^{(2)}(n) = \{x_1^{(2)}(n), x_2^{(2)}(n), x_3^{(2)}(n)\}$
3: user-selected learning rate: $\eta$
4: initialize $w^{(1)} = (w_0^{(1)}, w_1^{(1)}, w_2^{(1)}, w_3^{(1)})$ and $w^{(2)} = (w_0^{(2)}, w_1^{(2)}, w_2^{(2)}, w_3^{(2)})$
5: repeat
6: for $n = 1, 2, \ldots, N-1$ do
7: Forward computation
   a) compute the induced local field $(\nu_{j,l}^{(1)}(n), \nu_{j,l}^{(2)}(n))$ for neuron $j$ in layer $l$:
   $$\left( \nu_{j,l}^{(1)}(n), \nu_{j,l}^{(2)}(n) \right) = \left( \sum_{i} w_{ji,l}^{(1)}(n)\, y_{i,l-1}^{(1)}(n),\ \sum_{i} w_{ji,l}^{(2)}(n)\, y_{i,l-1}^{(2)}(n) \right)$$
   where $y_{i,l-1}^{(1)}(n)$, $y_{i,l-1}^{(2)}(n)$ are the output signals of neuron $i$ in the previous layer $l-1$, and $w_{ji,l}^{(1)}$, $w_{ji,l}^{(2)}$ are the synaptic weights of neuron $j$ in layer $l$ fed from neuron $i$ in layer $l-1$.
   b) compute the output signal of neuron $j$ in layer $l$: $y_{j,l}^{(1)} = \varphi_j(\nu_{j,l}^{(1)}(n))$, $y_{j,l}^{(2)} = \varphi_j(\nu_{j,l}^{(2)}(n))$.
8: Backward computation
   a) compute the local gradients of the network:
   $$\delta_{j,l}^{(T)}(n) = \begin{cases} e^{(T)}(n)\, \varphi_j'\!\left( \nu_{j,L}^{(T)}(n) \right), & \text{neuron } j \text{ in the output layer } L \\ \varphi_j'\!\left( \nu_{j,l}^{(T)}(n) \right) \sum_{k} \delta_{k,l+1}^{(T)}(n)\, w_{kj,l+1}^{(T)}(n), & \text{neuron } j \text{ in hidden layer } l \end{cases} \qquad T = 1, 2$$
   b) update synaptic weights: $w_{ji,l}^{(T)}(n+1) = w_{ji,l}^{(T)}(n) + \eta\, \delta_{j,l}^{(T)}(n)\, y_{i,l-1}^{(T)}(n)$
9: end for
10: until converged
11: return weights
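A runnable sketch of the inversion is given below. For numerical simplicity it fits the same linear model of Eqs. (10)–(11) in closed form (ordinary least squares) instead of the iterative gradient-descent updates of Algorithm 1, and the synthetic "measurements" and all numeric values are our own wide-angle illustration rather than the paper's data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate image points from assumed "true" internal parameters (pixels);
# a wide angular range is used so the fit is well conditioned.
f_t, u0_t, v0_t, k1_t, k2_t = 1.0e3, 10.0, -5.0, -0.2, 0.05
alpha = rng.uniform(-0.3, 0.3, 400)          # incoming directions from the aperture grid
beta  = rng.uniform(-0.3, 0.3, 400)
r2 = alpha**2 + beta**2
scale = f_t * (1 + k1_t * r2 + k2_t * r2**2)
u = scale * alpha + u0_t + rng.normal(0, 0.005, alpha.size)   # measured image coordinates
v = scale * beta  + v0_t + rng.normal(0, 0.005, beta.size)

# Fit the linear model of Eqs. (10)-(11): [1, a, a r^2, a r^4] @ [u0, f, f k1, f k2]
A_u = np.column_stack([np.ones_like(alpha), alpha, alpha * r2, alpha * r2**2])
A_v = np.column_stack([np.ones_like(beta),  beta,  beta * r2,  beta * r2**2])
w_u, *_ = np.linalg.lstsq(A_u, u, rcond=None)
w_v, *_ = np.linalg.lstsq(A_v, v, rcond=None)

u0, f_est = w_u[0], w_u[1]
v0 = w_v[0]
k1, k2 = w_u[2] / f_est, w_u[3] / f_est
print(f"f = {f_est:.1f} px, u0 = {u0:.2f} px, v0 = {v0:.2f} px, k1 = {k1:.3f}, k2 = {k2:.3f}")
```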


Fig. 5. (a) Supervised training network for calculating internal parameters with two hidden layers, (b) single node structure of output neuron k connected to hidden neuron j.

In the light micro-transceiver, we use a surface micro-machining process to fabricate the EAM. Figure 6 shows the EAM structure and the diffraction model on the outgoing plane of the BS based on Fresnel diffraction theory. The whole structure is stacked: from the light incident surface to the exit surface, there are an antireflection film, a neutral-tinted glass substrate, the mask layers, and a secondary anti-reflection layer. The neutral-tinted glass is used as the coating substrate, and optical anti-radiation quartz glass is used as the glass substrate. The mask layers are composed of a gold film and a tantalum film, and a chromium film is used as the secondary anti-reflection layer. The outgoing rays from the CEBL are collimated. Based on the sub-wave coherent superposition theory, for parallel light at any incidence angle on the EAM plane, each wave-front point within the internal aperture area can be considered the centre of a secondary disturbance; the resulting sub-waves superpose coherently at the outgoing plane of the BS, forming a diffraction pattern. The complex amplitude of the parallel rays can be expressed as

$$\vec{E} = A \exp\left( i\,\vec{k} \cdot \vec{r} \right) \qquad (21)$$

where $\vec{k}$ is the wave vector in the light propagation direction, $\vec{r}$ is the radius vector of any point in the light propagation direction, and $A$ is the amplitude of the light electric field. In Fig. 6, the origin of the coordinate system $X_0AY_0$ is the centre of the aperture, and the side directions are the coordinate axis directions. Let the direction cosines of the incident light wave vector be $(\cos a, \cos b, \cos c)$. The complex amplitude of the plane light wave at an arbitrary point $Q(x_0, y_0, 0)$ within the scope of the aperture can be expressed as:

$$E = A \exp\left\{ ik \left( x_0 \cos a + y_0 \cos b \right) \right\} \qquad (22)$$


The light passing through the aperture travels to the outgoing plane of the BS. The integral complex amplitude of the light-plane wave at arbitrary point P can be expressed as:

$$E(P) = C \iint K(\theta)\, E(Q)\, \frac{\exp(ikr)}{r}\, \mathrm{d}\sigma \qquad (23)$$

where $r$ is the distance between P and Q, $C$ is a constant, and $K(\theta)$ is the inclination factor. Based on the Fresnel-Kirchhoff formula, Eq. (23) can be expressed as

$$E(P) = \frac{A}{i\lambda} \iint_{\text{aperture}} \frac{e^{ik\left[ x_0 \cos(a) + y_0 \cos(b) + r \right]}}{r}\, \mathrm{d}\sigma \qquad (24)$$

where $\lambda$ is the wavelength, $A$ is a constant, $k$ is the magnitude of the wave vector, $r$ is the distance between the area element $\mathrm{d}\sigma$ on the encoded mask plane and the point P on the outgoing plane, and $a$ and $b$ are the two incident direction cosines.
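To show how Eq. (24) can be evaluated in practice, the sketch below discretizes a square aperture and sums the Fresnel-Kirchhoff integrand numerically along one line in the observation plane; the wavelength, aperture size, and propagation distance are hypothetical values, not the design parameters of the MTOL.

```python
import numpy as np

lam = 550e-9                  # wavelength [m] (illustrative)
k = 2 * np.pi / lam           # wavenumber
cos_a, cos_b = 0.0, 0.0       # incident direction cosines (normal incidence)
half = 40e-6                  # half-width of the square aperture [m]
z = 0.05                      # aperture-to-observation-plane distance [m]

# Discretize the aperture (points Q) and one line of observation points P
q = np.linspace(-half, half, 200)
x0, y0 = np.meshgrid(q, q)
d_sigma = (q[1] - q[0])**2

xp = np.linspace(-300e-6, 300e-6, 201)
intensity = np.empty_like(xp)
for idx, x in enumerate(xp):
    r = np.sqrt((x - x0)**2 + y0**2 + z**2)        # distance Q -> P
    phase = k * (x0 * cos_a + y0 * cos_b + r)      # exponent of Eq. (24)
    E = np.sum(np.exp(1j * phase) / r) * d_sigma / (1j * lam)
    intensity[idx] = np.abs(E)**2

above_half = xp[intensity >= intensity.max() / 2]
print(f"central-lobe FWHM ~ {(above_half.max() - above_half.min()) * 1e6:.0f} um")
```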

Fig. 6. (a) Schematic overview of the EAM and (b) diffraction model of the EAM with a square aperture on the mask plane.

Based on Eq. (24), the relationship between the beam and the aperture sizes can be determined. Figure 7 shows the beam profiles depending on the aperture size. Figure 7(a) indicates that the maximum profile of the grid is produced at the minimum aperture size. On the other hand, the profile decreases with increasing aperture size as the diffraction effect is gradually reduced. In these profiles, the first order of the diffraction grid is constituted by bright spots, while the second order is distributed along the side direction (X-axis and Y-axis) of the square aperture. Moreover, the second order tends to converge to the centre of the first order with an increase of the aperture size. We found that the profile size starts to increase with an increase of the aperture size when the aperture size is larger than a fixed value because the diffraction effect is further decreased and the rectilinear light propagation arises. Figure 7(b) indicates that the maximum intensity of the diffraction grid gradually increases with the increase of the aperture size because more light passes through the aperture. Then, the maximum intensity decreases with further increase of the aperture size. Meanwhile, the side-lobes start to increase with further increase of the aperture size.


Fig. 7. (a) Observed profiles at different aperture sizes (the unit is pixel), and (b) observed intensity profiles at different aperture sizes.

As shown in Fig. 7, the aperture size influences the image spot size and intensity. A larger image spot has a more dispersed energy distribution. A large spot is unfavourable for extracting the centre position with a centroid extraction algorithm, which affects the calibration accuracy. The aperture size should therefore be optimized to minimize the diffraction effect and achieve a concentrated calibration grid. For this purpose, we optimize the aperture size to achieve a converged spot. Under various aperture sizes, the image spot is extracted with an intensity threshold of 0.4 after normalization of the spot image. In Fig. 8, the red curve shows the relationship between the spot size and the aperture size, and the blue curve shows the relationship between the number of pixels covered by the spot and the aperture size. As the aperture size increases, the extracted spot diameter first decreases and then increases. When the aperture size is in the range of 14–17 pixels (each pixel is 5.3 μm), the diameter of the extracted spot is at its minimum. The number of covered pixels also first decreases and then increases over the same range, reaching its minimum at an aperture size of 15 pixels. Therefore, the optimum design value is an aperture size of 15 × 15 pixels (79.5 μm × 79.5 μm). Based on the optimized aperture, the minimum distance between two apertures is set to 80 μm, which ensures that the image spots of two adjacent apertures do not interfere with each other.
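A minimal sketch of the spot-extraction step described above (normalize the spot image, threshold at 0.4, then take the intensity-weighted centroid and count the covered pixels) is shown below; the synthetic Gaussian spot and its position are purely illustrative.

```python
import numpy as np

def extract_spot(img, threshold=0.4):
    # Normalize, keep pixels above the threshold, and return the
    # intensity-weighted centroid (row, col) and the number of covered pixels.
    norm = img / img.max()
    mask = norm >= threshold
    rows, cols = np.nonzero(mask)
    w = norm[mask]
    centroid = (np.sum(rows * w) / w.sum(), np.sum(cols * w) / w.sum())
    return centroid, int(mask.sum())

# Illustrative synthetic spot: a Gaussian centred at (21.3, 18.7) on a 40 x 40 patch
y, x = np.mgrid[0:40, 0:40]
spot = np.exp(-((y - 21.3)**2 + (x - 18.7)**2) / (2 * 2.5**2))
centre, n_pix = extract_spot(spot)
print(f"centroid = ({centre[0]:.2f}, {centre[1]:.2f}), covered pixels = {n_pix}")
```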


Fig. 8. Relationships among the diameter of the grid spots, the number of covered pixels, and the aperture size, where the lowest value corresponds to the optimized encoded aperture size.

4. Results

To investigate the performance of this new approach, we experimentally demonstrated the calibration of RSCs with a light micro-transceiver. A sketch of the optical system of the experimental setup with the light micro-transceiver is shown in Fig. 9(a). Light emitted from a 9-W LED is collimated and expanded by two lenses (L1 and L2). The collimated beam is then diffracted by the EAM (M1), which is placed in front of L2. The diffracted sub-beams change their propagation direction at a beam splitter (BS) and enter a remote sensing camera (C1). The beams that have passed through C1 are reflected by a dichroic filter (F1) and enter C1 again. The beams, now carrying internal-parameter information, then pass through the BS and converge onto the CMOS sensor. We use the beam splitter (BS) to ensure that the outgoing sub-beams are equivalent to sources on the focal plane of C1.

Fig. 9. Schematic overview of calibrating the RSC optical system with the light micro-transceiver (a), the single-pixel illumination method (b), and the setup of the two methods (c), where L1 and L2 are lenses; M1 is the EAM; C1 is the RSC optical system; F1 is the dichroic filter; BS is the beam splitter; CMOS is the image sensor; turntable α implements the horizontal rotation to calibrate the horizontal coordinate of the principal point, and turntable β implements the vertical rotation to calibrate the vertical coordinate of the principal point.


We also use the single-pixel illumination method [61] as the reference to calibrate the cameras. Figure 9(b) shows the setup of the reference method, where a single object is used as the calibration target and multiple points at different fields of view are obtained using two turntables (α and β). This method first calibrates the horizontal direction using turntable α and then calibrates the vertical direction using turntable β. In the horizontal direction, based on the imaging geometric relationship, the internal parameters are calculated as:

$$f = \frac{n \sum_i \left( \tan\omega_i \cdot y_i \right) - \sum_i \tan\omega_i \sum_i y_i}{n \sum_i \tan^2\omega_i - \left( \sum_i \tan\omega_i \right)^2} \qquad (25)$$

$$y_0 = \frac{\sum_i y_i - f \sum_i \tan\omega_i}{n} \qquad (26)$$

where $n$ is the number of measurements made with the horizontal turntable α, $\omega_i$ is the rotation angle, and $y_i$ is the image coordinate in the horizontal direction. We use three RSCs composed of Cassegrain optical systems; the camera parameters are shown in Table 1.

Table 1. Camera parameters used in the calibration experiments.

Parameters          Camera 1#             Camera 2#             Camera 3#
Focal length        2 m                   1.026 m               8 m
Image resolution    1024 × 1280 pixels    1024 × 1280 pixels    1024 × 1280 pixels
Pixel size          5.3 µm                5.3 µm                8.75 µm
F/ratio             10                    10                    13.3
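For reference, the angle-measurement formulas of Eqs. (25)–(26) amount to a linear least-squares fit of the image coordinate against tan(ω). A short sketch with made-up turntable angles follows; the function name and numbers are our own illustration.

```python
import numpy as np

def angle_calibration(omega, y):
    # Eqs. (25)-(26): estimate the focal length f and the principal-point
    # coordinate y0 from turntable angles omega [rad] and image coordinates y.
    t = np.tan(omega)
    n = len(y)
    f = (n * np.sum(t * y) - np.sum(t) * np.sum(y)) / (n * np.sum(t**2) - np.sum(t)**2)
    y0 = (np.sum(y) - f * np.sum(t)) / n
    return f, y0

# Made-up example: f = 2000 mm, y0 = 0.5 mm, angles of a few mrad, 1 um measurement noise
rng = np.random.default_rng(1)
omega = np.linspace(-2e-3, 2e-3, 21)
y = 2000.0 * np.tan(omega) + 0.5 + rng.normal(0, 1e-3, omega.size)
f_est, y0_est = angle_calibration(omega, y)
print(f"f = {f_est:.2f} mm, y0 = {y0_est:.4f} mm")
```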

Fig. 10. (a) Observed light micro-transceiver calibration grids, and (b) magnified grid and extraction results using a centroid algorithm.

After installing the light micro-transceiver on the focal plane of the RSC, an image of the virtual diffractive objective points was taken by the CMOS. After adjusting the integration time, the CMOS recorded all image points. A centroid algorithm was used to extract the centre of each point in grey-scale mode with 0.1-pixel accuracy, as shown in Fig. 10. The sub-pixel extraction accuracy of each image point had a negligible effect on the calibration parameters because the error was 0.53 μm for the CMOS pixel size of 5.3 μm. From the captured image, the image coordinates of the multiple image points in the image coordinate system formed a learning training set.


In the learning training set, the internal parameters (principal distance and principal point) were calculated using the deep learning network.

In the single-pixel illumination method, an image of one point is captured at each exposure. The two turntables (α and β) generate the equivalent of a 2D array of grid points. We use the same position-extraction algorithm to obtain the position set of the image points at the different fields of view. From the position set, we calculate the principal distance and principal point using the deep learning network. The calibrated internal parameters of the three cameras are shown in Table 2.

Table 2. Calibrated internal parameters obtained with two different methods.

Method               Parameter   Camera 1#       Camera 2#       Camera 3#
Angle measurement    f           1999.2889 mm    1026.3182 mm    8001.8321 mm
                     u0          0.4468 mm       0.3329 mm       0.9029 mm
                     v0          0.5292 mm       0.3977 mm       0.8057 mm
Micro-transceiver    f           1999.2673 mm    1026.3007 mm    8001.7982 mm
                     u0          0.4502 mm       0.3301 mm       0.8999 mm
                     v0          0.5254 mm       0.3933 mm       0.8012 mm

We also evaluate the calibration accuracy of the two methods using the standard deviation. Figure 11 shows the difference between each calibrated value and the mean value; we then take the standard deviation as the calibration accuracy. The angle-measurement method has an accuracy of 0.0234 mm, while our method achieves 0.0118 mm. The focal length obtained in both calibration cases is nearly the same, with less than a 50-µm difference. Using the multiple points produced by the light micro-transceiver, we can calculate the internal parameters without turntable platform vibration. We use the same approach to evaluate the accuracy of the principal point: the standard deviation of the principal point is about 0.14 pixels (0.74 µm) with the proposed method and 0.37 pixels (1.96 µm) with the angle-measurement method.
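As a small illustration of the accuracy metric described above, the snippet below computes the deviation of repeated calibration results from their mean and the standard deviation used as the accuracy figure; the numbers are invented, not the measured values behind Fig. 11.

```python
import numpy as np

# Invented repeated estimates of the principal distance [mm] from several calibrations
f_runs = np.array([1999.271, 1999.284, 1999.259, 1999.266, 1999.278])

deviation = f_runs - f_runs.mean()    # difference from the mean value (as plotted in Fig. 11)
accuracy = f_runs.std(ddof=1)         # standard deviation taken as the calibration accuracy
print(deviation.round(4), f"accuracy = {accuracy:.4f} mm")
```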

Fig. 11. The measurement error of the principal distance of the RSC.


5. Conclusions

The self-calibration method proposed in this paper utilises a light micro-transceiver in conjunction with deep learning to efficiently calibrate the internal parameters of RSCs. The micro-transceiver produced multiple two-dimensional diffraction grids as a learning training set, which were transformed into multiple auto-collimating sub-beams equivalent to target points at infinity, and a deep learning network for the calibration model was established. The aperture size of the encoded aperture mask of the light micro-transceiver was optimized to minimize the diffraction effect and to achieve a concentrated calibration grid, which provided an efficient calibration target. The micro-transceiver can be easily integrated into the internal structure of RSCs, allowing them to be calibrated independently of external ground control targets. Experimental results on RSCs showed that the proposed method is effective and easy to implement, and that it meets the accuracy requirements of a high-accuracy positioning camera. We believe that the proposed concept can be used in on-orbit calibration applications.

Funding

This research was supported by the National Natural Science Foundation of China (NSFC) (61505093, 61505190) and the National Key Research and Development Plan (2016YFC0103600).

Acknowledgments

The authors would like to express their sincere thanks to Professor Fei Xing and Dr. Yuan Zhang for their suggestions on the experiments.
